Machine learning in trading: theory, models, practice and algo-trading - page 1396

 
Elibrarius:
I've gone over to forests. If I come back to NS, it won't be any time soon; I've already spent a year on them.

ok )

 
elibrarius:

I'm not after absolute accuracy in a prediction. For me, every trade that makes a profit is a correct one.
Examples:

1) predicted -10, got -8: an excellent profit, not an error at all
2) predicted -4.8, got -13: the profit is much larger than predicted, so this is all the more not an error
3) predicted -3.5, got +5: this produces a loss, and that is an error. The same goes for all points to the left of and above 0; only those produce a loss, and it is a mistake to trade on them.
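Elibrarius's profit-based notion of correctness (a prediction counts as "correct" whenever the predicted and realized moves share a sign, whatever the numeric error) can be sketched in a few lines of Python; the function name is illustrative, and the example values are the three cases from the list above:

```python
# Classify predictions by trade outcome rather than by numeric error:
# a short signal (negative prediction) is "correct" whenever the
# realized move is also negative, regardless of its magnitude.

def trade_is_correct(predicted: float, realized: float) -> bool:
    """True if trading in the predicted direction would have made a profit."""
    return predicted * realized > 0

examples = [(-10.0, -8.0),   # predicted -10, got -8  -> profit, correct
            (-4.8, -13.0),   # got more than predicted -> still correct
            (-3.5, +5.0)]    # wrong direction         -> loss, an error

results = [trade_is_correct(p, r) for p, r in examples]
print(results)  # [True, True, False]
```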

You've got it exactly right. All we need is to skew the probability distribution of a successful trade in our favor. Besides, nobody has cancelled stops, trailing stops, and other ways of managing a position, so the real result will be better than mindlessly closing the trade after 5 minutes. That is the case shown in the picture.

The robustness Maxim keeps talking about is, on this market, an unfounded fantasy. If there are real deviations from the regression (whether in training or in testing), no model is capable of reducing those deviations to zero; they will remain as they were. A regression always tends to the center of the distribution, nothing more.

Maxim, unfortunately, has retreated into forests and RL, and perceives any outside information as an attack on his worldview and, perhaps, on the significance of his results). Everything is wrong, all nonsense, kindergarten, and so on.

As for switching from NS to forests/trees: they are roughly equivalent models, and you get nothing new, or at best the same thing from a slightly different angle. Still, having a look at what kind of beast it is certainly doesn't hurt.

Switching to ALGLIB in MQL may give you something in the short term. In the long term it is nothing but lost momentum, a dead end. Even Maxim is already crawling over to Python, while you are campaigning for ML in ALGLIB/MQL to take over. It's funny.

 
Yuriy Asaulenko:

You've got it exactly right. All we need is to skew the probability distribution of a successful trade in our favor. Besides, nobody has cancelled stops, trailing stops, and other ways of managing a position, so the real result will be better than mindlessly closing the trade after 5 minutes. That is the case shown in the picture.

The robustness Maxim keeps talking about is, on this market, an unfounded fantasy. If there are real deviations from the regression (whether in training or in testing), no model is capable of reducing those deviations to zero; they will remain as they were. A regression always tends to the center of the distribution, nothing more.

Maxim, unfortunately, has retreated into forests and RL, and perceives any outside information as an attack on his worldview and, perhaps, on the significance of his results). Everything is wrong, all nonsense, kindergarten, and so on.

As for switching from NS to forests/trees: they are roughly equivalent models, and you get nothing new, or at best the same thing from a slightly different angle. Still, having a look at what kind of beast it is certainly doesn't hurt.

Switching to ALGLIB in MQL may give you something in the short term. In the long term it is nothing but lost momentum, a dead end. Even Maxim is already crawling over to Python, while you are campaigning for ML in ALGLIB/MQL to take over. It's funny.

A forest has fewer parameters of the model itself. An NS has more of them, and it is hard to pick the optimal combination; on top of that you need normalization and scaling. And the normalization and scaling drift over time, so the very same retraining will no longer be correct. Forests, on the other hand, digest everything in absolute values.
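The drift elibrarius describes is easy to illustrate: normalization parameters fitted on one period misplace later data once the price level shifts. A minimal sketch (all the numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training window with prices around 1.10;
# a later "live" window where the level has drifted to 1.30.
train = rng.normal(1.10, 0.01, 1000)
live = rng.normal(1.30, 0.01, 1000)

# Scaling parameters fitted on the training window...
mu, sigma = train.mean(), train.std()
scaled_live = (live - mu) / sigma

# ...map the live data far outside the range the net was trained on
# (roughly 20 standard deviations away in this toy setup).
print(scaled_live.mean())
```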

 

At least learn to interpret a scatter plot correctly, that would already be something.

Then you will begin to understand that errors tend to accumulate rather than cancel each other out, and that what you take for a favorably skewed probability distribution at a single point is a momentary thing that adds up into large errors.

 
elibrarius:

A forest has fewer parameters of the model itself. An NS has more of them, and it is hard to pick the optimal combination; on top of that you need normalization and scaling. And the normalization and scaling drift over time, so the very same retraining will no longer be correct. Forests, on the other hand, digest everything in absolute values.

I can't speak for all of them, but the forest models I have seen also need scaling and other preprocessing. By the way, I do not understand how any of this can work without preprocessing the input signals. Just imagine two roughly identical signals, one with a starting price 10% higher than the other. The volatility of the higher (otherwise identical) signal will automatically be 10% greater than that of the first. How should an NS or a forest handle that? The result ought to be identical.
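For what it's worth, one common preprocessing answer to this example (not something an NS or a forest does on its own) is to feed relative returns instead of raw prices: the 10% level offset then cancels out exactly. A sketch with made-up prices:

```python
import numpy as np

base = np.array([100.0, 101.0, 99.5, 102.0])
shifted = base * 1.10          # the same signal, starting price 10% higher

# Raw prices (and their absolute volatility) differ by 10%...
print(shifted.std() / base.std())          # exactly 1.10 here

# ...but percentage returns are identical, so a model fed returns
# sees the two signals as the same input.
ret_base = np.diff(base) / base[:-1]
ret_shifted = np.diff(shifted) / shifted[:-1]
print(np.allclose(ret_base, ret_shifted))  # True
```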

 
Maxim Dmitrievsky:

At least learn to interpret a scatter plot correctly, that would already be something.

Then you will begin to understand that errors tend to accumulate rather than cancel each other out, and that what you take for a favorably skewed probability distribution at a single point is a momentary thing that adds up into large errors.

Maxim, learn to read charts, and not only your own. If my charts tell you nothing, I can't help that. If you don't want to deal with them, don't; nobody is forcing you. Honestly, I'm tired of you. I don't want to say you are talking nonsense, but I have to.

 
Yuriy Asaulenko:

Maxim, learn to read charts, not just your own. Honestly, I'm tired of you. I don't want to say that you are talking nonsense, but I have to.

And how should your charts be read? Are there special ways of reading them that contradict the generally accepted ones? Take Elibrarius: even he couldn't read them properly.

I simply wrote what I see on those charts... and no conclusions can be drawn from them.

Specifically: the error is a little better than random, probably around 40%.

 
Yuriy Asaulenko:

I can't speak for all of them, but the forest models I have seen also need scaling and other preprocessing. By the way, I do not understand how any of this can work without preprocessing the input signals. Just imagine two roughly identical signals, one with a starting price 10% higher than the other. The volatility of the higher (otherwise identical) signal will automatically be 10% greater than that of the first. How should an NS or a forest handle that? The result ought to be identical.

The forest will digest inputs whose values and volatility differ by orders of magnitude.

It will split one of them at nodes at, say, 0.00014, and the other at 41548.3.

For an NS, on the other hand, all inputs have to be brought to the same scale.
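Elibrarius's point can be checked directly: an axis-aligned tree finds split thresholds on raw feature values, so features differing by orders of magnitude need no rescaling. A toy sketch using scikit-learn (the feature ranges echo the numbers above and are purely illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 500
# Two features on wildly different scales, as in the example above.
f1 = rng.uniform(0.0001, 0.0002, n)   # threshold of interest near 0.00014
f2 = rng.uniform(40000, 43000, n)     # threshold of interest near 41548.3
X = np.column_stack([f1, f2])
y = ((f1 > 0.00014) & (f2 > 41548.3)).astype(int)

# A depth-2 tree recovers both thresholds on the raw values,
# with no normalization of the inputs at all.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(tree.score(X, y))
```

A neural network fed the same raw matrix would struggle, since gradient-based training is sensitive to such scale differences, which is why inputs to an NS are usually standardized first.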

 
Maxim Dmitrievsky:

And how should your charts be read? Are there special ways of reading them that contradict the generally accepted ones? Take Elibrarius: even he couldn't read them properly.

I simply wrote what I see on those charts... and no conclusions can be drawn from them.

Specifically: the error is a little better than random, probably around 40%.

So what? What's wrong with that? On a 10-minute chart it would be unrecognizable, and on an hourly one it's hard even to imagine).

 
elibrarius:

The forest will digest inputs whose values and volatility differ by orders of magnitude.

It will split one of them at nodes at, say, 0.00014, and the other at 41548.3.

For an NS, on the other hand, all inputs have to be brought to the same scale.

I wasn't aware of that. I was speaking only about the forests I have seen; I can't speak for all of them.
