Machine learning in trading: theory, models, practice and algo-trading - page 2019

 
Aleksey Vyazmikin:

You can do it in private messages.

I can't get into my private messages ), the site is glitchy. When it starts working, I'll write to you and Maxim.

 
Maxim Dmitrievsky:

It's not complicated, you just have to figure it out.

You don't need much computing power at all. My LSTM trains on a laptop in a few minutes without any graphics cards. The idea that you need a lot of power is a myth.

)), well... a questionable statement.

... Training finished...
 ============ Training took (milliseconds): 26832. Epochs: 2693 ===========
 eta: 0.0100, alpha: 0.0050
 max_long_result: 0.9986, max_short_result: 0.9996
 min_long_result: 0.9979, min_short_result: 0.9950

The same operation in MQL takes 10 minutes or more. The speed could be increased if the host machine had more processor cores, or more processors )))".
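For context on the training-speed claim, here is a minimal sketch of the kind of small LSTM being discussed. PyTorch is an assumption (the thread doesn't name a library), and the data shapes, layer sizes, and epoch count are illustrative; a model this size should finish in seconds to a couple of minutes on an ordinary CPU.

```python
import time
import torch
import torch.nn as nn

# Illustrative stand-in data: random "feature windows", not real quotes.
X = torch.randn(2000, 50, 8)              # 2000 samples, 50 bars, 8 features
y = torch.randint(0, 2, (2000,)).float()  # toy long/short labels

class SmallLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        out, _ = self.lstm(x)             # out: (batch, bars, hidden)
        return self.head(out[:, -1])      # classify from the last bar's state

model = SmallLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

start = time.time()
for epoch in range(200):                  # full-batch training, CPU only
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    opt.step()
print(f"200 epochs in {time.time() - start:.1f} s, final loss {loss.item():.4f}")
```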
 
Farkhat Guzairov:

)), well... a questionable statement.

The same operation in MQL takes 10 minutes or more. The speed could be increased if the host machine had more processor cores, or more processors )))".

2700 epochs in 30 seconds is too fast

 
Dr.mr.mom:

I can't get into my private messages ), the site is glitchy. When it starts working, I'll write to you and Maxim.

I wrote to you, did the message get through?

 
Maxim Dmitrievsky:

2700 epochs in 30 seconds is too fast

The data is small, the array is only about 400 deep, but if you load it with deep history, even C++ with threads starts to choke )))). The saddest part is that in the end I can't get a well-trained system: I have a cap of 30000 epochs, training stops there, and you can see the result is still undertrained. Why... I think it's because of collisions: it seems to me there are sets of data that in one case say the model should go short, while in another case the very same data say it should go long. If that's so, then of course it's my fault, but I no longer have the energy to dig into it ((.
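The "collision" hypothesis (identical feature sets labeled long in one place and short in another) is straightforward to test directly. A minimal sketch, assuming the training set is a pandas DataFrame with feature columns plus a label column; the column names here are hypothetical:

```python
import pandas as pd

def find_label_collisions(df: pd.DataFrame, label_col: str = "label") -> pd.DataFrame:
    """Return every row whose feature values also occur with a different label."""
    feature_cols = [c for c in df.columns if c != label_col]
    labels_per_combo = df.groupby(feature_cols)[label_col].nunique()
    conflicting = labels_per_combo[labels_per_combo > 1].reset_index()[feature_cols]
    return df.merge(conflicting, on=feature_cols, how="inner")

# Hypothetical usage: 'label' is +1 for long, -1 for short.
# df = pd.read_csv("train.csv")
# collisions = find_label_collisions(df)
# print(f"{len(collisions)} of {len(df)} rows have contradictory labels")
```

If this returns a large fraction of the rows, no amount of extra epochs will fix the fit; the contradictory examples have to be relabeled or dropped.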

 
Maxim Dmitrievsky:

Man... it's not complicated in the sense that you can figure it out.

Usually a couple of layers are enough; you don't need much depth in forex.

It's just that architecturally there are more advanced networks for time series, cooler than LSTM. There may be profit in them, I haven't checked yet. The "classics" like boostings and perceptrons are not suited to time series.

Whether more layers are needed or not can only be judged by the results achieved, I think...

What other networks are out there? Can you name some? I'm not well versed in the variety of networks at all.

Can I send you a sample to try in one of these fashionable networks?

 
Farkhat Guzairov:

The data is small, the array is only about 400 deep, but if you load it with deep history, even C++ with threads starts to choke )))). The saddest part is that in the end I can't get a well-trained system: I have a cap of 30000 epochs, training stops there, and you can see the result is still undertrained. Why... I think it's because of collisions: it seems to me there are sets of data that in one case say the model should go short, while in another case the very same data say it should go long. If that's so, then of course it's my fault, but I no longer have the energy to dig into it ((.

Why such a huge number of epochs... usually up to 1000 or even 100 is enough, with a dynamic learning rate.
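A "dynamic learning rate" usually means a scheduler that cuts the step size when the loss stops improving, so training converges in far fewer epochs than a fixed rate. A minimal sketch using PyTorch's built-in ReduceLROnPlateau, reusing the toy model, data, and loss from the sketch earlier in this page (all assumptions, not the posters' actual code):

```python
import torch

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
# Halve the learning rate whenever the loss has not improved for 20 epochs.
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=20)

for epoch in range(1000):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    opt.step()
    sched.step(loss.item())               # the scheduler watches the loss
    if opt.param_groups[0]["lr"] < 1e-5:  # stop once the rate has collapsed
        print(f"stopped after {epoch + 1} epochs")
        break
```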

 
Aleksey Vyazmikin:

Whether more layers are needed or not can only be judged by the results achieved, I think...

What other networks are out there? Can you name some? I'm not well versed in the variety of networks at all.

Can I send you a sample to try in one of these fashionable networks?

I'm only just studying neural networks myself. I already wrote about it here: the newer convolutional networks and transformers and so on, mostly used for language and audio processing.

Datasets for them are prepared in a special way; ordinary datasets won't work.
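On datasets being "prepared in a special way": sequence models such as convolutional networks and transformers expect windows of consecutive bars, not one flat feature row per example. A minimal sketch of that windowing in NumPy; the window length, labeling rule, and synthetic prices are illustrative assumptions:

```python
import numpy as np

def make_windows(prices: np.ndarray, window: int = 64):
    """Slice a 1-D price series into overlapping (window, 1) inputs,
    each labeled with the direction of the next bar."""
    returns = np.diff(np.log(prices))    # log returns, length N-1
    X, y = [], []
    for i in range(len(returns) - window):
        X.append(returns[i:i + window])
        y.append(1.0 if returns[i + window] > 0 else 0.0)
    return np.asarray(X)[..., None], np.asarray(y)

# Hypothetical usage with a random walk standing in for real quotes:
prices = np.cumprod(1 + 0.001 * np.random.randn(5000))
X, y = make_windows(prices)
print(X.shape, y.shape)                  # (4935, 64, 1) (4935,)
```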
 
Maxim Dmitrievsky:

I'm only just studying neural networks myself. I already wrote about it here: the newer convolutional networks and transformers and so on, mostly used for language and audio processing.

Datasets for them are prepared in a special way; ordinary datasets won't work.

If you know how to prepare them, then it can be done...

 
Maxim Dmitrievsky:

Why such a huge number of epochs... usually up to 1000 or even 100 is enough, with a dynamic learning rate.

Since the initial weights are set randomly, sometimes fewer than 1000 epochs are needed.
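The effect of random initial weights is easy to see directly: different seeds give different starting points, so the number of epochs needed to reach the same loss varies from run to run. A minimal sketch reusing the toy SmallLSTM, data, and loss from the earlier sketches (all illustrative assumptions):

```python
import torch

for seed in (0, 1, 2):
    torch.manual_seed(seed)       # different seed -> different initial weights
    model = SmallLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for epoch in range(1000):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(1), y)
        loss.backward()
        opt.step()
        if loss.item() < 0.2:     # same target loss for every run
            break
    print(f"seed {seed}: {epoch + 1} epochs to loss {loss.item():.3f}")
```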
