Machine learning in trading: theory, models, practice and algo-trading - page 75

 
Yury Reshetov:

That's why Dr.Trader couldn't run the full-fledged libVMR rewritten in R - too many calculations, and it wastes memory.

I had an error in my code, in the large-kernel conversion function. The attachment has the same old version 3.01, but with the fix. Memory is fine now, and so is the big kernel machine. But it will be slower than the Java version.

Files:
libVMR.txt  12 kb
 
Dr.Trader:

I had an error in my code, in the large-kernel conversion function. The attachment has the same old version 3.01, but with the fix. Memory is fine now, and so is the big kernel machine. But it will be slower than the Java version.

The most disgusting thing is that the speed is abysmal.

Also, libVMR is a binary classifier, which is not good. A ternary classifier can make candy even out of crap:

Mihail Marchukajtes:
In the predictor itself the generalization level of the data is 90%, but in the exported model it is only 47%. Not clear... Well, I haven't managed to run it in MQL yet...
That is, the binary classifier generalizes only 47% of the examples, which is even worse than random - 50%. The ternary one filters out the trash and achieves 90% generalization ability on the remaining examples.
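To make the mechanism concrete, here is a minimal Java sketch of the idea as described above (my assumption of the scheme, not libVMR's actual code): the ternary classifier abstains near zero (the "dash"), and generalization is counted only over the examples it actually classifies.

    // Sketch of the ternary idea: abstain when the raw score is too close
    // to zero, and measure generalization only on the decided examples.
    public class TernarySketch {
        static final double DEAD_ZONE = 0.3; // abstention threshold (illustrative)

        // stand-in for the real model's raw score in [-1, 1]
        static double score(double[] x) {
            double s = 0.0;
            for (double v : x) s += v;
            return Math.max(-1.0, Math.min(1.0, s));
        }

        // ternary decision: +1, -1, or 0 (the "dash" = uncertainty)
        static int classify(double[] x) {
            double s = score(x);
            if (s > DEAD_ZONE)  return +1;
            if (s < -DEAD_ZONE) return -1;
            return 0; // filtered out as trash
        }

        // generalization measured only on the examples actually classified
        static double generalization(double[][] xs, int[] labels) {
            int decided = 0, correct = 0;
            for (int i = 0; i < xs.length; i++) {
                int c = classify(xs[i]);
                if (c == 0) continue; // dash: excluded from the rate
                decided++;
                if (c == labels[i]) correct++;
            }
            return decided == 0 ? 0.0 : (double) correct / decided;
        }
    }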
 
I've slowly pushed the model's generalization level up to 100%; let's see how it works in the future :-)
 
Mihail Marchukajtes:
I've slowly pushed the model's generalization level up to 100%; let's see how it works in the future :-)

100% generalizability is not the limit. It can be improved further by selecting predictors by bias. If two ternary classifiers both have 100% generalization ability but different biases, the one with the lower bias is the better one - it has more significant predictors.

The lower the bias, the fewer examples in the test sample are marked with a dash (uncertainty).
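A short sketch of this selection rule (my reading, names are hypothetical; a fragment meant to sit inside a class like the sketch above): among classifiers that reach 100% generalization, prefer the smallest bias, since a smaller dead zone [-bias, +bias] leaves fewer dash-marked examples.

    // Pick the preferred classifier per the rule above: require 100%
    // generalization first, then break ties by the smaller bias.
    static int pickBest(double[] bias, double[] generalization) {
        int best = -1;
        for (int i = 0; i < bias.length; i++) {
            if (generalization[i] < 1.0) continue;        // require 100% first
            if (best < 0 || bias[i] < bias[best]) best = i;
        }
        return best; // index of the best classifier, or -1 if none qualify
    }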

 
Yury Reshetov:

100% generalizability is not the limit. It can be improved further by selecting predictors by bias. If two ternary classifiers both have 100% generalization ability but different biases, the one with the lower bias is the better one - it has more significant predictors.

The lower the bias, the fewer examples in the test sample are marked with a dash (uncertainty).

For a long time I've been interested in, and I can even say tormented by, one question: what does the "Indicator by Reshetov" parameter mean, and what is it for? Also, Bias equals zero in my training at 100% generalization...
 
Mihail Marchukajtes:
For a long time I've been interested in, and I can even say tormented by, one question: what does the "Indicator by Reshetov" parameter mean, and what is it for?

The point is that it's a good indicator of learning ability, but it says nothing about generalization ability. That's why I'll remove it in the next versions of jPrediction, so that it doesn't get in the way.

 
Yury Reshetov:

The point is that it's a good indicator of learning ability, but it says nothing about generalization ability. That's why I'll remove it in the next versions of jPrediction, so that it doesn't get in the way.

Yury, here's a question: can the predictor output probabilities instead of classes?
 
I wonder if this helps us in any way https://news.mail.ru/society/26600207/?frommail=10
 
Alexey Burnakov:
Yury, here's a question: can the predictor output probabilities instead of classes?
If by probability you mean the degree to which the signal is expressed, then yes, it can. Only not in committee mode, because that outputs only 0, 1, or -1, but in binary mode. Build a model on the whole market and you will see how the model jumps above zero; the further above zero the model's value, the more probable the class. But as a percentage... mmm... well, unless you take the maximum value as 100% and calculate from that. Suppose I have a buy signal and the model is above zero with a value of, say, 0.1, while the maximum value was 1; then this buy signal has a truth level of 10%, as it were... if that's what I think it is...
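A small sketch of the percentage reading Mihail describes (an assumption on my part, not an actual jPrediction feature): scale a positive raw score by the maximum score seen, so 0.1 against a maximum of 1.0 reads as a 10% "truth level" for a buy.

    // Hypothetical pseudo-probability for a buy signal: normalize the raw
    // score by the maximum observed score.
    static double buyConfidence(double score, double maxScore) {
        if (score <= 0.0 || maxScore <= 0.0) return 0.0; // no buy signal
        return Math.min(1.0, score / maxScore);          // 0.1 / 1.0 = 0.10
    }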
 
Alexey Burnakov:
Yury, here's a question: can the predictor output probabilities instead of classes?

No. Probabilities were calculated in the earliest versions of libVMR, but there was a big problem: for the probability values to be calculated correctly, all predictors must be strictly independent of each other, and meeting that condition in many application areas is not realistic at all. For example, almost all indicators and oscillators in trading correlate with each other, i.e. they are not independent. Moreover, assuming independence in the algorithm when it is absent in the data has a negative impact on generalization ability. So that dead-end direction had to be abandoned.

Now jPrediction pays no attention to the independence of the predictors, only to the value of the generalization ability. The reason is that predictors can complement each other: some examples are handled well by some predictors, other examples by others, and still others by combinations of them. Calculating probabilities under such conditions would carry a very large and highly questionable error.
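For illustration, here is a sketch (mine, not libVMR's code) of why the independence condition matters: a naive-Bayes-style combination multiplies per-predictor odds, which is only valid when predictors are independent. Feed it the same 0.8 predictor twice and it returns about 0.94, counting one piece of evidence as two; correlated indicators inflate the "probability" in exactly this way.

    // Combine per-predictor class probabilities by multiplying odds.
    // This is only correct when the predictors are independent.
    static double naiveProbability(double[] classProbPerPredictor) {
        double odds = 1.0;
        for (double p : classProbPerPredictor) {
            odds *= p / (1.0 - p); // multiplying odds assumes independence
        }
        return odds / (1.0 + odds);
    }
    // Example: {0.8, 0.8} from two copies of the same predictor gives
    // 16/17 ~ 0.94 instead of 0.8, overstating the evidence.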
