Machine learning in trading: theory, models, practice and algo-trading - page 1977

 

Sad but true: increasing the TF worsens the result, range bars work better.

I added an MA and thinning (decimation). Without the MA, thinning works like a change of TF and makes the distribution normal; the RMS grows as the square root of the thinning step. If the MA period is twice the thinning step, we get proper, anti-aliased downsampling: prediction works with high accuracy, but a tester is needed to calculate the correct expected payoff. The zigzag is ready, but I don't know what form it should take: arrays of indices for the minima and maxima, one array of indices, or a price array right away.

I can use any other filter instead of the MA; I only need to know its impulse response. In the code the MA is written as [1/per]*per, which for per=4 expands into [0.25, 0.25, 0.25, 0.25].
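For illustration, a minimal numpy sketch of this MA-plus-thinning idea; the per and step values here are just example assumptions, not the actual settings:

import numpy as np

def decimate_with_ma(prices, per=4, step=2):
    # impulse response of the simple MA: [1/per]*per, e.g. [0.25, 0.25, 0.25, 0.25] for per=4
    kernel = np.full(per, 1.0 / per)
    smoothed = np.convolve(prices, kernel, mode="valid")
    # thinning: keep every step-th sample; per about 2*step acts as a crude anti-aliasing filter
    return smoothed[::step]

# usage on a toy random walk
prices = 100 + np.cumsum(np.random.randn(1000))
downsampled = decimate_with_ma(prices, per=4, step=2)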

 
Rorschach:

Forest: 55.89% correct answers, 2.36 expected payoff

Forest on cumulative increments: 55.89% correct answers, 2.36 expected payoff, identical results

Still, there is a difference: the increments come out better.
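For context, a hedged sklearn sketch of this kind of comparison: the same forest trained on raw increments versus their cumulative sums. The data is synthetic and the window size is an assumption, not the actual setup:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=5000))
increments = np.diff(prices)

window = 20
# features: last `window` increments, target: direction of the next increment
X_inc = np.stack([increments[i:i + window] for i in range(len(increments) - window)])
X_cum = np.cumsum(X_inc, axis=1)                     # cumulative increments
y = (increments[window:] > 0).astype(int)

for name, X in [("increments", X_inc), ("cumulative increments", X_cum)]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, shuffle=False)
    acc = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr).score(Xte, yte)
    print(name, round(acc, 4))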

Problems with the zigzag: it is not clear how to limit the minimum change, so it keeps micro-switching.
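One common way to suppress the micro-switching is to confirm a pivot only after price has moved a minimum relative amount away from the last extreme. A rough sketch; the threshold value and the output format (a list of pivot indices) are assumptions:

import numpy as np

def zigzag_pivots(prices, min_change=0.005):
    # indices of alternating swing highs/lows; a reversal needs a relative move >= min_change
    pivots = []
    trend = 0                                   # +1 up leg, -1 down leg, 0 undecided
    hi_i = lo_i = 0
    hi = lo = prices[0]
    for i in range(1, len(prices)):
        p = prices[i]
        if p > hi:
            hi, hi_i = p, i                     # running high
        if p < lo:
            lo, lo_i = p, i                     # running low
        if trend >= 0 and p <= hi * (1 - min_change):
            pivots.append(hi_i)                 # swing high confirmed
            trend = -1
            lo, lo_i = p, i
        elif trend <= 0 and p >= lo * (1 + min_change):
            pivots.append(lo_i)                 # swing low confirmed
            trend = +1
            hi, hi_i = p, i
    return pivots

prices = 100 + np.cumsum(np.random.randn(1000))
print(zigzag_pivots(prices, min_change=0.005))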


 

This is probably nonsense, but I'll ask anyway.


Can a neural network predict a series like this?


For example, the probability of the next symbol occurring. And is there any dependence of series A on series B?

 
Evgeniy Chumakov:

This is probably nonsense, but I'll ask anyway.


Can a neural network predict a series like this?


For example, the probability of the next symbol occurring. And is there any dependence of series A on series B?

That is exactly what they are made for.
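A toy Keras sketch of what this could look like: a small network outputs the probability of each possible next symbol of series A, given a window of past symbols of A and B. Everything here (symbol count, window, and the fact that B is built to lead A by one step so that the dependence is real) is an illustrative assumption, not the poster's data:

import numpy as np
import tensorflow as tf

n_symbols, window = 5, 10
rng = np.random.default_rng(0)
A = rng.integers(0, n_symbols, size=10000)
B = np.roll(A, -1)                      # B[i] = A[i+1], i.e. B deliberately "leads" A

X = np.stack([np.column_stack([A[i:i + window], B[i:i + window]])
              for i in range(len(A) - window)]).astype("float32")
y = A[window:]                          # next symbol of A to predict

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_symbols, activation="softmax"),   # P(next symbol)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, validation_split=0.2, verbose=0)
probs = model.predict(X[:1])            # probability of each symbol at the next step

If removing B from the inputs noticeably drops the accuracy, that is evidence of the dependence of A on B.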

 

After I switched to TensorFlow 2.3 I started getting a warning:

"WARNING:tensorflow:11 out of the last 11 calls to triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to creating @tf.function repeatedly in a loop."

I.e., some tf.function complains that it is being created in a loop. I don't have any tf.function anywhere, but the loop does poll the model: prediction = model.predict(data).
tf.function is described here.
It is clearly some entity I know nothing about; does anyone understand what it is?

UPD
Anyway, it seems to be an important thing that can't be avoided, so I'll have to look into it. It is what lets ordinary Python code run as TensorFlow graph code.
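The usual advice for this warning, when the loop only feeds small batches, is to skip predict() and call the model directly, or to trace the forward pass once with a single tf.function created outside the loop. A hedged sketch with a dummy model (shapes are assumptions):

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(1),
])

# Option 1: for small per-bar batches, call the model directly instead of predict()
for _ in range(100):
    data = np.random.randn(1, 8).astype("float32")
    prediction = model(data, training=False).numpy()

# Option 2: trace the forward pass once, outside the loop, and reuse it
@tf.function
def fast_predict(x):
    return model(x, training=False)

for _ in range(100):
    data = tf.constant(np.random.randn(1, 8), dtype=tf.float32)
    prediction = fast_predict(data).numpy()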

 

I think I've finished the trading agent. I put it on a demo account to monitor it for alpha testing. The logic is non-trivial, so there may be bugs. Let's test it.

Now I'm interested in trying LSTM and transformers (although figuring them out may break your brain).
 
Maxim Dmitrievsky:

I think I've finished the trading agent. I put it on a demo account to monitor it for alpha testing. The logic is non-trivial, so there may be bugs. Let's test it.

Now I'm interested in trying LSTM in different variants and maybe transformers (although figuring them out may break your brain).

The logic is more branched now. On the one hand that's good; on the other hand, there may be bugs in unfamiliar places. What is a transformer?

 
Valeriy Yastremskiy:

The logic is more branched now. On the one hand that's good; on the other hand, there may be bugs in unfamiliar places. What is a transformer?

A new type of network for working with temporal sequences; they say it is better than LSTM. They are used in text recognition, machine translation and so on to capture the context of a sentence, i.e. the way a word is related to other (previous) words through some context.

Self-attention transformers. The mechanism is an analogue of human attention.
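A bare-bones sketch of the scaled dot-product self-attention at the core of a transformer; the random weight matrices here stand in for projections that would be learned in a real model:

import numpy as np

def self_attention(X):
    # X has shape (T, d): T time steps, d features per step.
    # Each position builds a query, compares it with the keys of every position,
    # and returns a weighted mix of the values; the weights say which other
    # elements of the sequence matter for the current one.
    d = X.shape[-1]
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)                              # (T, T) similarity matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)             # softmax over positions
    return weights @ V                                          # context-mixed representation

seq = np.random.randn(16, 8)        # 16 time steps, 8 features
out = self_attention(seq)           # same shape, each step now "attends" to the others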

 
Maxim Dmitrievsky:

A new type of network for working with temporal sequences; they say it is better than LSTM. They are used in text recognition, machine translation and so on to capture the context of a sentence, i.e. the way a word is related to other (previous) words through some context.

Self-attention transformers. The mechanism is an analogue of human attention.

Well, those are serious complications. First the memory is long and short, and now there is also a semblance of attention in the model. My brain won't handle it right away.) But it should work better.

 
Valeriy Yastremskiy:

Well, those are serious complications. First the memory is long and short, and now there is also a semblance of attention in the model. My brain won't handle it right away.)) But it should work better.

It does work better, but it's a hell of a lot harder), especially if you attach RL.

In general, conventional backpropagation networks like an MLP are not suited to time series at all. At a minimum you need an RNN.
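For contrast with an MLP, a minimal Keras LSTM sketch for a next-bar direction target on synthetic data; the window size, layer width and target definition are only assumptions:

import numpy as np
import tensorflow as tf

window = 32
prices = np.cumsum(np.random.randn(5000)).astype("float32")
returns = np.diff(prices)

# each sample is a window of past returns; the LSTM keeps state across the window
X = np.stack([returns[i:i + window] for i in range(len(returns) - window)])[..., None]
y = (returns[window:] > 0).astype("float32")          # next-bar direction

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, validation_split=0.2, verbose=0)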