Machine learning in trading: theory, models, practice and algo-trading - page 486

I don't know how yours works; mine counts the percentage of non-guessed variants.
For example, on this variant there were 1000 entries, of which 600 closed in the plus (guessed) and 400 closed in the minus (not guessed). So the error is the number of non-guessed entries relative to all entries; in this example error = 400/1000 = 0.4.
Regards.
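A minimal sketch of that arithmetic as an MQL5 script (the numbers are just the ones from the example above):

void OnStart()
  {
   int total=1000;                    // all entries
   int guessed=600;                   // closed in the plus ("guessed")
   int missed=total-guessed;          // 400 closed in the minus
   double error=(double)missed/total; // share of non-guessed entries
   Print("error = ",error);           // prints 0.4
  }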
Here, as I understand it, the final error is for some reason divided by the number of examples multiplied by the number of inputs; if you remove it:
return(result/(npoints*df.m_nclasses));
If you multiply it back, the result becomes quite clear, for example 0.5578064232767638 :)
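If I read that normalisation right, the raw sum can be recovered by multiplying back by the same divisor; a tiny sketch (avg_error, npoints and nclasses are just placeholder names for the values from the library call):

double TotalError(const double avg_error,const int npoints,const int nclasses)
  {
   // the library returns result/(npoints*df.m_nclasses),
   // so multiplying by the same divisor gives back the raw accumulated error
   return(avg_error*npoints*nclasses);
  }

E.g. with 1000 examples and one output, TotalError(x,1000,1) simply scales the reported average by 1000.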
Most likely _Point (points) means something like points guessed out of ... or vice versa.
Regards.
No, here npoints means the length of the input vector (the size of the training sample) :)
And nclasses is the number of outputs, how about that.
I.e. the final error should be multiplied by the length of the training sample, multiplied by the number of outputs (if there is 1, the outputs can be omitted).
May be useful to someone.
I hope I didn't mix anything up and did it right :) at least the error values became clear.
Regards.
In this case you need to look at what result is, since the divisor is made up of the input parameters.
Regards.
In short, it's just the average error over all examples, and you don't need it... result returns just the total error, which is then divided by the number of examples in the sample (this division can be removed).
So it needs to be brought back to normal, which you did by multiplying by the divisor.
Regards.
That feeling when you tweak something and then rejoice like a child :)
Regards.
In theory, the error in random forests should be small, because when they are built all variables are used in the decision trees, and there is no memory-style restriction like the number of neurons in neural networks. The only way to "blur" the result there is through separate operations such as depth limiting, tree pruning or bagging. I don't know whether there is pruning in the MQ implementation of ALGLIB, but there is bagging.
If you make this variable smaller than 1, the error should increase.
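For what it's worth, a sketch of that experiment, assuming the variable in question is the r parameter of the ALGLIB forest builder (the share of the training sample each tree is trained on) and assuming the CAlglib facade of the MQL5 ALGLIB port; the method names and signatures should be checked against your library version:

#include <Math\Alglib\alglib.mqh>

// build two forests that differ only in r and compare the reported error;
// everything here follows the MQL5 ALGLIB port as I know it - verify before use
void CompareR(CMatrixDouble &xy,const int npoints,const int nvars,const int nclasses)
  {
   int info1=0,info2=0;
   CDecisionForestShell df1,df2;
   CDFReportShell rep1,rep2;
   // r=1.0: every tree sees the whole training sample
   CAlglib::DFBuildRandomDecisionForest(xy,npoints,nvars,nclasses,100,1.0,info1,df1,rep1);
   // r=0.3: every tree sees a random 30% of it - the error should grow
   CAlglib::DFBuildRandomDecisionForest(xy,npoints,nvars,nclasses,100,0.3,info2,df2,rep2);
   PrintFormat("avg error: r=1.0 -> %f, r=0.3 -> %f",
               CAlglib::DFAvgError(df1,xy,npoints),
               CAlglib::DFAvgError(df2,xy,npoints));
  }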