Machine learning in trading: theory, models, practice and algo-trading - page 4

NS did very well.
Random Forest could not handle a problem where the outcome depends on the interaction of a set of variables, while the individual significance of each predictor was deliberately zero.
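The kind of problem described here can be reproduced with a small synthetic dataset. The sketch below is my own reconstruction (none of the names or numbers come from the thread), assuming a parity-style target: each predictor is individually uncorrelated with the target, yet their interaction determines it completely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reconstruction of the task described above:
# 6 "relevant" binary predictors whose parity (XOR) defines the target,
# plus 6 pure-noise predictors. Individually, every column carries no
# signal; only the interaction of the relevant columns does.
n = 1000
relevant = rng.integers(0, 2, size=(n, 6))
noise = rng.integers(0, 2, size=(n, 6))
X = np.hstack([relevant, noise])
y = relevant.sum(axis=1) % 2  # parity of the relevant columns

# Each single predictor is useless on its own: sample correlation ~ 0
for j in range(X.shape[1]):
    assert abs(np.corrcoef(X[:, j], y)[0, 1]) < 0.15
```

A model that only looks at per-predictor importance (as a Random Forest split criterion effectively does, one split at a time) sees nothing here, which matches the behaviour the poster reports.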
I don't see any evidence that the NS has handled anything.
Overfitting is a universal evil in science, and in model building in particular.
That is why the error has to be checked on three sets:
The last two sets are taken without shuffling, since that is how the data arrives in the terminal: bar after bar.
The error should be about the same on all three sets. At the same time, you have to fix the set of predictors that you use when training the model.
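The three-set check described above can be sketched as follows. This is an illustrative Python outline, not code from the thread; `sequential_split` and `error_rate` are hypothetical names, and the 60/20/20 proportions are an arbitrary choice.

```python
import numpy as np

def sequential_split(X, y, train_frac=0.6, test_frac=0.2):
    """Split chronologically, without shuffling: train on the oldest
    block and check on the two most recent blocks, bar after bar,
    the same way the data would arrive in the terminal."""
    n = len(y)
    i = int(n * train_frac)
    j = int(n * (train_frac + test_frac))
    return (X[:i], y[:i]), (X[i:j], y[i:j]), (X[j:], y[j:])

def error_rate(model, X, y):
    """Fraction of misclassified samples for any model with .predict()."""
    return float(np.mean(model.predict(X) != y))

# Usage idea: train once on the first block with a FIXED predictor set,
# then require error_rate(...) to be roughly equal on all three blocks.
# A much lower error on the training block is a sign of overfitting.
```

The point of keeping the predictor set fixed is that re-selecting predictors after seeing the later blocks would leak information from them into the model.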
Your idea of accounting for the interaction between predictors is a revolution in statistics. Until now I thought that interaction between predictors was an evil. Not only are the predictors themselves usually non-stationary, we are also trying to account for relationships between these non-stationary random processes.
In machine learning it is considered mandatory to get rid of interacting variables. Moreover, there are extremely efficient algorithms, such as principal component analysis, that remove the interaction and convert an interacting set of predictors into a set of independent predictors.
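The decorrelation step mentioned above can be illustrated with a small principal-component sketch (my own NumPy illustration, not the poster's code). Strictly speaking, PCA makes the components linearly uncorrelated; whether that addresses nonlinear interactions of the XOR kind discussed earlier is a separate question.

```python
import numpy as np

def pca_decorrelate(X, k=None):
    """Project predictors onto principal components; the resulting
    columns have zero pairwise sample covariance."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)        # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]        # largest variance first
    vecs = vecs[:, order]
    if k is not None:
        vecs = vecs[:, :k]                # optionally keep top-k components
    return Xc @ vecs

# Two strongly correlated predictors become uncorrelated components.
rng = np.random.default_rng(1)
A = rng.normal(size=(500, 1))
X = np.hstack([A, A + 0.1 * rng.normal(size=(500, 1))])
Z = pca_decorrelate(X)
assert abs(np.cov(Z, rowvar=False)[0, 1]) < 1e-8  # uncorrelated
```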
Let's put it this way: even though this was not part of the assignment, I am posting a validation sample on which the trained model can be run and the prediction accuracy measured.
But again, this is not necessary. Note that I generated the validation file from the same embedded pattern.
The pattern embedded in the data:
The neural network solved this problem; the log with the code from Rattle is attached. A couple of changes in the code when calling the network: I increased the maximum number of iterations and removed the connections that go from the inputs directly to the outputs, bypassing the hidden layer (skip=TRUE), because these two limitations spoil everything.
I ran validation on the new file; in both cases the error is almost 0% (there is a single error when validating on the second file).
But since the NS is a black box, there is no way to know the logic of the solution. You can look at the weights, compute the mean absolute weight attached to each input, and draw a diagram, and find that inputs 1, 3, 5, 7, 9, 11 are more important than the rest. Yet the other inputs are also used for some reason; there are no zero weights anywhere. So it works the other way around: first we train, and only then can we determine the important inputs.
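The weight-inspection idea above can be sketched as follows. This is a minimal illustration assuming a single hidden layer; `W_ih`, its shape, and the scaling factor are my own hypothetical values, not taken from the attached Rattle log.

```python
import numpy as np

def input_importance(W_ih):
    """Mean absolute outgoing weight per input, for an input-to-hidden
    weight matrix of shape (n_inputs, n_hidden); larger = more used."""
    return np.abs(W_ih).mean(axis=1)

# Fake a trained network's weights: shrink the weights of half the
# inputs, imitating inputs whose data the training suppressed.
# (0-based rows 0, 2, 4, ... play the role of the thread's 1-based
# inputs 1, 3, 5, ...)
rng = np.random.default_rng(2)
W_ih = rng.normal(size=(12, 8))
W_ih[1::2] *= 0.05

imp = input_importance(W_ih)
ranking = np.argsort(imp)[::-1]  # most important inputs first
```

This only measures how strongly each input feeds the hidden layer; as the poster notes, it is computed after training, so it identifies important inputs rather than selecting them in advance.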
Does the obvious need to be proven? During training, the weights of the inputs carrying contradictory data decreased, i.e. we can say that the contradictory incoming data was suppressed.
There is no overfitting problem in this case, because the trained network is not used for any other purpose.
How reasonable it is to use such a method, that is the question. Isn't this rather heavy artillery?
You can try another way. But it seems to me that the remedy fits the problem.
It does the job, and it does it well. But I always wonder whether there is something more effective and simpler.