Machine learning in trading: theory, models, practice and algo-trading - page 2053

 
Maxim Dmitrievsky:

And this is how CatBoost trained on the same data (in 5 seconds):

52: learn: 0.7964708 test: 0.7848837 best: 0.7860866 (27) total: 604ms remaining: 5.09s

Source dataset:

Trained model (the second half of the trade is the test sample):


Not always, of course; it depends on the sampling (and it's random, i.e. it needs oversampling). Sometimes it's like this:

34: learn: 0.5985972 test: 0.5915832 best: 0.5927856 (9) total: 437ms remaining: 5.81s



Maxim, I have a question: what are the values on the axes of your diagrams, and did you plot a convergence graph for the net?

 
Alexander Alexeyevich:

Maxim, I have a question: what are the values on the axes of your charts?

On x, the number of trades; on y, profit in pips.

All that's left is to save the model in MetaTrader and check it in its tester.

 
Maxim Dmitrievsky:

On x, the number of trades; on y, profit in pips.

If you remember, last time we talked about the perceptron; I set it training that same day, and it's still learning.

 
Alexander Alexeyevich:

Remember last time we talked about the perceptron? I set it training that same day, and it's still learning. Seems like a long time.

Is the network written in MetaTrader? ) I have already commented on this subject.

 
Maxim Dmitrievsky:

Is the network written in MetaTrader? ) I have already commented on this topic.

In MetaTrader) - but that's a plus)))) it doesn't overfit). Plot your network's errors, I'd like to see them)

 
Maxim Dmitrievsky:

And this is how CatBoost trained on the same data (in 5 seconds):

52: learn: 0.7964708 test: 0.7848837 best: 0.7860866 (27) total: 604ms remaining: 5.09s

Source dataset:

Trained model (the second half of the trade is the test sample):


Not always, of course; it depends on the sampling (and it's random, i.e. it needs oversampling). Sometimes it's like this:

34: learn: 0.5985972 test: 0.5915832 best: 0.5927856 (9) total: 437ms remaining: 5.81s

the correct result is 0.59


you can't just sample a time series like that, it's not Fisher's irises))))

you're peeking into the future... sampling is allowed only on the training part: first split, then sample,

not the other way around, as you did
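The "first split, then sample" point above can be sketched in plain Python. This is an illustrative toy, not code from the thread; the function name and the simple duplicate-the-minority oversampling scheme are my own assumptions. The key detail is that oversampling happens only inside the chronologically earlier training window, so no future rows leak into training:

```python
import random

def split_then_oversample(features, labels, train_frac=0.7, seed=0):
    """Chronological split FIRST, then oversample only the training part.

    Oversampling before the split would let duplicated rows from the
    future end up in the training set (look-ahead bias).
    """
    cut = int(len(features) * train_frac)
    X_train, y_train = features[:cut], labels[:cut]
    X_test, y_test = features[cut:], labels[cut:]

    # Oversample the minority class inside the training window only,
    # by duplicating randomly chosen minority rows up to majority size.
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(X_train, y_train):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    X_bal, y_bal = [], []
    for y, xs in by_class.items():
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        X_bal.extend(xs + extra)
        y_bal.extend([y] * (len(xs) + len(extra)))
    return X_bal, y_bal, X_test, y_test
```

For example, with ten rows indexed 0..9 and a 70/30 split, every training row (including duplicates) comes from indices 0..6 and the test set is exactly rows 7..9.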

 
Alexander Alexeyevich:

In MetaTrader, but that's a plus)))) it doesn't overfit)). Plot your network's errors, I'd like to see them)

you can't use algorithms that take so long to learn... you'll go gray.


 
mytarmailS:

the correct result is 0.59


you can't just sample a time series like that, it's not Fisher's irises)))

you're peeking into the future... sampling is allowed only on the training part: first split, then sample,

not the other way around, as you did

What do you mean, the correct result? These are errors on different datasets.

It's not the time series that gets sampled, but the labels. Watch the videos.
 
Maxim Dmitrievsky:

You can't use algorithms that take so long to learn... you'll go gray.


Accuracy... I guess that's prediction accuracy? And what about logloss? There shouldn't be any learning on the test set, and the error should stay the same regardless of the number of passes, or roughly the same at least; it shouldn't keep decreasing.

 
Maxim Dmitrievsky:

What do you mean, the correct result? These are errors on different datasets.

It's not the time series that gets sampled, but the labels. Watch the videos.

Do I understand correctly that you're training the network to predict the time series?
