Machine learning in trading: theory, models, practice and algo-trading - page 2014

 
elibrarius:

This is a question for everyone:
I also get batches of unidirectional signals from the network, similar to this one.

Sometimes I get 100 or even 200 losses in a row. The only way out is to trade with microscopic lots, like 0.5% of the deposit.
Does anyone have ideas on how to avoid trading hundreds of signals in a row?
Trade the first one and then not trade until the open one closes? I don't think that's the best option.

What are the options?
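For scale, a quick sketch (my own numbers, not from the post) of what fixed-fraction sizing does over a streak like that: risking 0.5% of current equity per trade, 200 straight losses still leave about 37% of the deposit, while 2% per trade would leave under 2%.

```python
# Hypothetical illustration: equity left after N consecutive losing trades
# under fixed-fraction risk (numbers are not from the thread).
def equity_after_losses(risk_per_trade: float, losses: int) -> float:
    """Fraction of the deposit remaining after `losses` straight losers."""
    return (1.0 - risk_per_trade) ** losses

for risk in (0.005, 0.01, 0.02):
    print(f"risk {risk:.1%} per trade: {equity_after_losses(risk, 200):.1%} of deposit left")
```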

You are training on very noisy data (features/target), such as price increments, so the signal is very noisy. You can try to smooth the features and the target, or the final signal itself, to reduce the noise in the signal, but that either won't help much or will take away the profit entirely.
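A minimal sketch of that kind of smoothing, assuming the network emits a probability-like output per bar (the window and threshold are made-up illustration values):

```python
import numpy as np

def smooth_signal(raw: np.ndarray, window: int = 5, threshold: float = 0.6) -> np.ndarray:
    """Moving-average smoothing of a noisy model output in [0, 1];
    act only on bars where the smoothed value clears the threshold."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(raw, kernel, mode="same")
    return (smoothed > threshold).astype(int)  # 1 = take the signal, 0 = skip

raw = np.random.rand(100)      # stand-in for the noisy network output
print(smooth_signal(raw).sum(), "of 100 bars pass the filter")
```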

 
elibrarius:

This is a question for everyone:
I also get batches of unidirectional signals from the network, similar to this one.

Sometimes I get 100 or even 200 losses in a row. The only way out is to trade with microscopic lots, like 0.5% of the deposit.
Does anyone have ideas on how to avoid trading hundreds of signals in a row?
Trade the first one and then not trade until the open one closes? I don't think that's the best option.

What are the options?

Opening is one logic; holding and closing is another. That approach is closer to me. Stops are insurance. But when opening with stops, it's better to limit the number of open orders.
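A minimal sketch of that order cap, assuming we track open positions ourselves (all names here are hypothetical):

```python
MAX_OPEN = 3  # hypothetical cap on simultaneous same-direction positions

open_positions: list[dict] = []

def try_open(direction: str, lot: float) -> bool:
    """Open a new position only while the cap allows; otherwise skip the signal."""
    same_dir = [p for p in open_positions if p["dir"] == direction]
    if len(same_dir) >= MAX_OPEN:
        return False          # ignore further unidirectional signals
    open_positions.append({"dir": direction, "lot": lot})
    return True

print(all(try_open("buy", 0.01) for _ in range(3)))  # True: first three accepted
print(try_open("buy", 0.01))                         # False: cap reached
```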

 
elibrarius:

This is a question for everyone:
I also get batches of unidirectional signals from the network, similar to this one.

Sometimes I get 100 or even 200 losses in a row. The only way out is to trade with microscopic lots, like 0.5% of the deposit.
Does anyone have ideas on how to avoid trading hundreds of signals in a row?
Trade the first one and then not trade until the open one closes? I don't think that's the best option.

What are the options?

Why trade them at all? It's like Russian roulette with a small-caliber gun :)

 
Please take a look at this indicator) it only works in 2020.
 
Maxim Dmitrievsky:

What's up, any luck with the test?

 
mytarmailS:

What's up, any luck with the test?

Still the same... at the beginning of the week it works fine after the 'pre-training', then it starts bleeding money. I've reworked it again; tomorrow I'll put it to the test :D

I have already tried that, and it seems the reward for the trades is not calculated correctly.

I am also working on recurrent nets in torch.

[Chart: yellow marks the beginning of each week, the first 1-3 days]

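For reference, a minimal sketch of a recurrent net in torch of the kind mentioned above (layer sizes and window length are arbitrary illustration values):

```python
import torch
import torch.nn as nn

class SignalRNN(nn.Module):
    """Tiny GRU mapping a window of bars (features per bar) to one signal."""
    def __init__(self, n_features: int = 8, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, h = self.gru(x)                       # h: (1, batch, hidden)
        return torch.sigmoid(self.head(h[-1]))   # probability-like output

model = SignalRNN()
window = torch.randn(16, 50, 8)   # batch of 16 windows, 50 bars, 8 features each
print(model(window).shape)        # -> torch.Size([16, 1])
```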
Has anyone tried using replicative (recurrent?) networks to reduce dimensionality?

Article.

Video.


Лекция 6 | Нейросетевые технологии (Lecture 6 | Neural Network Technologies)
2018.10.14 • www.youtube.com
Date: 08.10.2018. Lecturer: Evgeny Aleksandrovich Dorofeev. Lectures in PDF: https://goo.gl/Xwzg4a
 
Aleksey Vyazmikin:

Has anyone tried using replicative (recurrent?) networks to reduce dimensionality?

Article.

Video.


Recurrent autoencoders are used for this. I haven't tried them.

 
Maxim Dmitrievsky:

Recurrent autoencoders are used for this. I haven't tried them.

I can't understand how model quality is evaluated across all the output neurons at once, when their number equals the number of input neurons...

If you come across something similar that's suitable for use, please let me know.

I have a sample of almost 3000 predictors, and I suspect they can be compressed substantially, since they describe a similar domain.

 
Aleksey Vyazmikin:

I can't understand how model quality is evaluated across all the output neurons at once, when their number equals the number of input neurons...

If you come across something similar that's suitable for use, please let me know.

I have a sample of almost 3000 predictors, and I suspect they can be compressed substantially, since they describe a similar domain.

The input and the output are all the features; the hidden layer has fewer neurons. It simply compresses the information by minimizing the reconstruction error at the output: the output should equal the input (ideally). After training, the second half of the NN is discarded, and the hidden layer gives you compressed features, equal in number to its neurons.
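A minimal sketch of that scheme in torch (sizes are illustrative, e.g. ~3000 features squeezed to 64):

```python
import torch
import torch.nn as nn

N_FEATURES, BOTTLENECK = 3000, 64  # illustrative sizes

encoder = nn.Sequential(nn.Linear(N_FEATURES, 512), nn.ReLU(),
                        nn.Linear(512, BOTTLENECK))
decoder = nn.Sequential(nn.Linear(BOTTLENECK, 512), nn.ReLU(),
                        nn.Linear(512, N_FEATURES))

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
x = torch.randn(256, N_FEATURES)   # stand-in for a batch of predictor rows

for _ in range(100):               # minimize reconstruction error: output ≈ input
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    loss.backward()
    opt.step()

compressed = encoder(x)            # after training, the decoder is discarded
print(compressed.shape)            # -> torch.Size([256, 64])
```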

You can add recurrent layers, etc.

Google "autoencoder" and its variants.
