Machine learning in trading: theory, models, practice and algo-trading - page 75

That's why Dr.Trader couldn't run the full-fledged libVMR rewritten in R - too many calculations, and it eats up memory.
I had an error in my code in the large kernel conversion function. Attached is the same old version 3.01, but with the fix. Memory is fine now, and so is the big kernel machine. But it is slower than the Java version.
The most annoying thing is that the speed is rock-bottom.
Also, libVMR is a binary classifier, which is not good. A ternary classifier can make the results look much better:
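To make the binary vs. ternary difference concrete, here is a minimal sketch in Java (jPrediction is a Java program, so Java seems the natural choice). It only assumes that the third class is produced by a dead zone around the decision threshold, where the classifier answers with a dash instead of forcing a buy/sell; the class, thresholds and scores are invented for illustration and are not jPrediction's actual code.

// Minimal sketch of the binary-vs-ternary idea discussed above.
// Assumption (not jPrediction's actual API): the third class is produced
// by a "dead zone" around the decision threshold, where the model refuses
// to trade instead of guessing.
public class TernaryClassifierSketch {

    /** Binary: every example is forced into BUY or SELL. */
    static int binaryDecision(double score) {
        return score >= 0.0 ? +1 : -1;          // +1 = buy, -1 = sell
    }

    /** Ternary: scores close to the boundary are marked with a dash (0 = uncertain). */
    static int ternaryDecision(double score, double deadZone) {
        if (score >= deadZone)  return +1;       // confident buy
        if (score <= -deadZone) return -1;       // confident sell
        return 0;                                // "-" : stay out of the market
    }

    public static void main(String[] args) {
        double[] scores = { 0.82, -0.05, 0.10, -0.61 };
        for (double s : scores) {
            System.out.printf("score=%5.2f  binary=%+d  ternary=%+d%n",
                              s, binaryDecision(s), ternaryDecision(s, 0.2));
        }
    }
}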
In the predictor itself the generalization level is 90%, but in the exported model it is only 47%. Not clear... Well, I haven't managed to run it in MQL yet...
I have slowly pushed the model's generalization up to 100%; let's see how it works in the future :-)
100% generalizability is not the limit. We can improve further by selecting predictors by bias. If two ternary classifiers both have 100% generalizability but different biases, the one with the lower bias is better, because it has more significant predictors.
The lower the bias, the fewer examples in the test sample are marked with a dash (uncertainty).
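The selection rule above fits in a few lines. In the sketch below, "bias" is taken to mean the share of test examples a model marks with a dash, exactly as described, and the model with the smaller share wins; the Model interface and the numbers are hypothetical stand-ins, not jPrediction's API.

import java.util.List;

// Sketch of the selection rule described above: among models with equal
// (100%) generalization on the examples they do classify, prefer the one
// that marks fewer test examples with a dash.
public class BiasSelectionSketch {

    interface Model {
        int predict(double[] features);          // +1, -1, or 0 (dash)
    }

    /** Fraction of test examples the model refuses to classify. */
    static double bias(Model m, List<double[]> testSet) {
        long dashes = testSet.stream().filter(x -> m.predict(x) == 0).count();
        return (double) dashes / testSet.size();
    }

    static Model selectByBias(Model a, Model b, List<double[]> testSet) {
        return bias(a, testSet) <= bias(b, testSet) ? a : b;
    }

    public static void main(String[] args) {
        List<double[]> testSet = List.of(
            new double[]{ 0.9 }, new double[]{ 0.1 },
            new double[]{ -0.7 }, new double[]{ 0.05 });

        // Two hypothetical models with different dead zones, i.e. different biases.
        Model narrow = f -> f[0] >  0.2 ? +1 : (f[0] < -0.2 ? -1 : 0);
        Model wide   = f -> f[0] >  0.5 ? +1 : (f[0] < -0.5 ? -1 : 0);

        System.out.printf("bias(narrow)=%.2f  bias(wide)=%.2f%n",
                          bias(narrow, testSet), bias(wide, testSet));
        System.out.println("preferred: " +
                (selectByBias(narrow, wide, testSet) == narrow ? "narrow" : "wide"));
    }
}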
For a long time I have been curious about, I would even say tormented by, one question: what does the "Indicator by Reshetov" parameter mean?
The point is that it's a good indicator of learning ability, but it says nothing about generalization ability. That's why I'll remove it in the next versions of jPrediction, so it won't be a nuisance.
Yuri, a question: can the predictor output probabilities instead of classes?
No. Probabilities were calculated in the earliest versions of libVMR, but there was a big problem: for the probability values to be correct, all predictors must be strictly independent of each other. Meeting such a condition in many application areas is not realistic at all. For example, almost all indicators and oscillators in trading correlate with each other, i.e. they are not independent. In addition, assuming independence in the algorithm when it is absent in the data has a negative impact on generalization ability. So we had to abandon that dead-end direction.
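The correlation point is easy to check on any price series. The sketch below builds two simple moving averages from the same (synthetic, random-walk) prices and measures their Pearson correlation; on real quotes the figure tends to be very close to 1, which is exactly why the independence assumption breaks down. The data here is generated, not real quotes.

import java.util.Random;

// A quick way to see that typical indicators are far from independent:
// compute the correlation between two moving averages of the same price
// series. The random-walk prices below stand in for real quotes; only the
// correlation arithmetic matters here.
public class IndicatorCorrelationSketch {

    static double[] sma(double[] price, int period) {
        double[] out = new double[price.length];
        double sum = 0;
        for (int i = 0; i < price.length; i++) {
            sum += price[i];
            if (i >= period) sum -= price[i - period];
            out[i] = sum / Math.min(i + 1, period);
        }
        return out;
    }

    static double pearson(double[] a, double[] b) {
        double ma = 0, mb = 0;
        for (int i = 0; i < a.length; i++) { ma += a[i]; mb += b[i]; }
        ma /= a.length; mb /= b.length;
        double cov = 0, va = 0, vb = 0;
        for (int i = 0; i < a.length; i++) {
            cov += (a[i] - ma) * (b[i] - mb);
            va  += (a[i] - ma) * (a[i] - ma);
            vb  += (b[i] - mb) * (b[i] - mb);
        }
        return cov / Math.sqrt(va * vb);
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        double[] price = new double[1000];
        price[0] = 100;
        for (int i = 1; i < price.length; i++) price[i] = price[i - 1] + rnd.nextGaussian();

        // Two "different" indicators built from the same prices.
        System.out.printf("corr(SMA10, SMA20) = %.3f%n",
                          pearson(sma(price, 10), sma(price, 20)));
    }
}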
Now jPrediction pays no attention to the independence of the predictors, only to the value of generalizability. This is because several predictors can complement each other: some examples are handled well by some predictors, others by other predictors, and still others only by their combinations. Calculating probabilities under such conditions would carry a very large and highly questionable error.