Machine learning in trading: theory, models, practice and algo-trading - page 447

 

How does it do it? :) I keep retraining it every week; this is the third week of testing on a real account. I caught one stop this week but got it back later, and the current trade is at +100 with the stop at breakeven. That recoups part of last week's loss: about 45% in three incomplete weeks, with a 7% loss along the way. Funny. The most interesting thing is that I don't understand why it opens trades back and forth. With all the previous systems it was always clear what was where and how; this one is unusual.


 
Maxim Dmitrievsky:

How does it do it? :) I keep retraining it every week; this is the third week of testing on a real account. I caught one stop this week but got it back later, and the current trade is at +100 with the stop at breakeven. That recoups part of last week's loss: about 45% in three incomplete weeks, with a 7% loss along the way. Funny. The most interesting thing is that I don't understand why it opens trades back and forth. With all the previous systems it was always clear what was where and how; this one is unusual.



You could post a signal, it would be interesting to follow it.

 
Evgeny Belyaev:

You could post a signal, it would be interesting to follow it.

The accounts are cluttered, a lot of systems are being traded and tested on them. Later, then.
 
Maxim Dmitrievsky:

How does it do it? :) I keep retraining it every week; this is the third week of testing on a real account. I caught one stop this week but got it back later, and the current trade is at +100 with the stop at breakeven. That recoups part of last week's loss: about 45% in three incomplete weeks, with a 7% loss along the way. Funny. The most interesting thing is that I don't understand why it opens trades back and forth. With all the previous systems it was always clear what was where and how; this one is unusual.


Can I get another picture? I wonder what happened this week...
 
mytarmailS:
Can I get another picture? I wonder what happened this week...

Last week the whole report did not fit on the screen :) It could have been better, but I still closed the week in the plus. I also added DAX, but it was not very profitable at first. I'm improving it little by little, experimenting with predictors and different ways of opening positions.


 
Maxim Dmitrievsky:

How does it do it? :) I keep retraining it every week; this is the third week of testing on a real account. I caught one stop this week but got it back later, and the current trade is at +100 with the stop at breakeven. That recoups part of last week's loss: about 45% in three incomplete weeks, with a 7% loss along the way. Funny. The most interesting thing is that I don't understand why it opens trades back and forth. With all the previous systems it was always clear what was where and how; this one is unusual.

I'm jealous). I am immersed in studying neural network theory and reading books on NS - there is no end in sight). I have prepared some training sequences, but I haven't got down to the specifics - I haven't touched the computer for a week. Yesterday I came home, and on Wednesday I have to leave again. Maybe I will have time to do something.

If you are still working with the Reshetov neuron: it turned out that it is not an NS but some kind of adaptive filter (AF) implementation - also a very interesting thing. I haven't tried it and won't get around to it soon, but judging by the theory it is well suited for building predictors.
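(For readers unfamiliar with the term: an adaptive filter adjusts its coefficients online to reduce its own prediction error. Below is a minimal sketch of a generic least-mean-squares (LMS) filter in Python, assuming a synthetic series and arbitrary tap count and step size; it is not the Reshetov implementation discussed here, only an illustration of how such a filter's outputs or errors could serve as predictors.)

import numpy as np

def lms_filter(series, n_taps=8, mu=0.01):
    """Generic LMS adaptive filter (illustrative sketch, not Reshetov's code).

    Predicts series[t] from the previous n_taps values, then nudges the
    weights to reduce the prediction error. The running predictions and
    errors could be used as candidate predictors for a classifier.
    """
    w = np.zeros(n_taps)                    # filter weights, adapted online
    preds = np.full(len(series), np.nan)
    errs = np.full(len(series), np.nan)
    for t in range(n_taps, len(series)):
        x = series[t - n_taps:t][::-1]      # most recent sample first
        y_hat = w @ x                       # filter output = one-step prediction
        e = series[t] - y_hat               # prediction error
        w += 2.0 * mu * e * x               # LMS weight update
        preds[t], errs[t] = y_hat, e
    return preds, errs

# Toy usage on a noisy sine wave (a stand-in for a normalized price series)
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
preds, errs = lms_filter(series)
print("mean |error| over the last 100 steps:", np.nanmean(np.abs(errs[-100:])))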

 
Yuriy Asaulenko:

I'm jealous). I am immersed in studying neural network theory and reading books on NS - there is no end in sight). I have prepared some training sequences, but I haven't got down to the specifics - I haven't touched the computer for a week. Yesterday I came home, and on Wednesday I have to leave again. Maybe I will have time to do something.

If you are still working with the Reshetov neuron: it turned out that it is not an NS but some kind of adaptive filter (AF) implementation - also a very interesting thing. I haven't tried it and won't get around to it soon, but judging by the theory it is well suited for building predictors.

Yes, I'm still using it. It's not an NS, it's an expert system trained on the principle of an NS. The most important thing is the predictors; soon I will adapt them to a random forest. In general, an NS has no advantages over RF: it takes too long to train and the error is larger... If you want to train quickly, then it's definitely RF + optimizer.
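(A rough illustration of the RF idea, not the actual system described here: the sketch below, in Python with scikit-learn, builds hypothetical lagged-return predictors on a synthetic price series and trains a random forest with walk-forward splits. The feature set, the up/down target and all parameters are assumptions made for the example.)

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

# Synthetic price path; in practice this would be real quotes.
rng = np.random.default_rng(1)
prices = 100.0 * np.exp(np.cumsum(0.001 * rng.standard_normal(2000)))
returns = np.diff(prices) / prices[:-1]

# Hypothetical predictors: the 10 most recent returns before each bar.
n_lags = 10
X = np.column_stack([returns[i:len(returns) - n_lags + i] for i in range(n_lags)])
y = (returns[n_lags:] > 0).astype(int)      # target: 1 if the next return is positive

# A random forest trains quickly and needs little tuning.
clf = RandomForestClassifier(n_estimators=200, min_samples_leaf=20,
                             n_jobs=-1, random_state=0)
scores = cross_val_score(clf, X, y, cv=TimeSeriesSplit(n_splits=5))
print("walk-forward accuracy per fold:", np.round(scores, 3))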
 
Maxim Dmitrievsky:
Yes, I'm still using it. It's not an NS, it's an expert system trained on the principle of an NS. The most important thing is the predictors; soon I will adapt them to a random forest. In general, an NS has no advantages over RF: it takes too long to train and the error is larger... If you want to train quickly, then it's definitely RF + optimizer.

Again, judging from the books, NS and RF are completely different and for the most part not interchangeable designs. Therefore, it is probably not necessary to say unambiguously what is better and what is worse. For certain classes of tasks this or that design might be better.

For my design, the NS is probably better, because in my case, it should not replace the TS, but only complement it. Due to this combination, according to the architect's plan)), both the NS and the TS itself should become much simpler.

 
Yuriy Asaulenko:

Again, judging from the books, NS and RF are completely different and for the most part not interchangeable designs. Therefore, it is probably not necessary to say unambiguously what is better and what is worse. For certain classes of tasks this or that design might be better.

For my design, the NS is probably better, because in my case it should not replace the TS, but only complement it. Due to this combination, according to the architect's plan)), both the NS and the TS itself should become much simpler.

A regular MLP has no advantages; sometimes even linear regression, SVM or a polynomial model gives better results, no matter how many layers you add :) and it takes considerably longer to train. In general I came to the conclusion that the MLP is a clumsy, dull and unpromising thing for trading, especially since it copies the mechanism of real neurons in a very primitive way, not the way it actually works in the brain :) The only normal and promising NS is the convolutional network for pattern recognition, but those are not meant for prediction; and for that, an ensemble of simple and fast classifiers is enough.

A Bayesian classifier is better than an MLP, but still worse than RF.
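(The claim is easy to check on one's own data. Here is a hedged side-by-side sketch in Python/scikit-learn that times and scores an MLP and a random forest on the same feature matrix with walk-forward splits; the synthetic data, layer sizes and tree counts are illustrative assumptions, so on real predictors the comparison may come out differently.)

import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic feature matrix standing in for real predictors (illustration only).
rng = np.random.default_rng(2)
X = rng.standard_normal((2000, 10))
y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(2000) > 0).astype(int)

models = {
    "RF": RandomForestClassifier(n_estimators=200, min_samples_leaf=20,
                                 n_jobs=-1, random_state=0),
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(32, 16),
                                       max_iter=500, random_state=0)),
}

cv = TimeSeriesSplit(n_splits=5)
for name, model in models.items():
    start = time.perf_counter()
    scores = cross_val_score(model, X, y, cv=cv)
    elapsed = time.perf_counter() - start
    print(f"{name}: mean accuracy {scores.mean():.3f}, fit+score time {elapsed:.1f}s")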

 
Maxim Dmitrievsky:

Last week the whole report did not fit on the screen :) It could have been better, but I still closed the week in the plus. I also added DAX, but it was not very profitable at first. I'm improving it little by little, experimenting with predictors and different ways of opening positions.

What is the target function in your classifier?