Machine learning in trading: theory, models, practice and algo-trading - page 520

 

The whole problem with theorists is that they draw up theories that do not exist in reality. But reality sometimes resembles some theory, and it only remains to determine how similar they are. That is what the theorists do. Frankly speaking, a trader is like a tester: he tests a strategy or a theory on a deposit with his broker (the most fearless trader simply does without any theories :))... Well, and the cowardly ones (like me)... And how does it really happen? We can't know for sure; we can only know what can affect it. I think we need to understand exactly what affects it. Cryptocurrency growth, for example? A drop in the price of a barrel of oil? A sharp fluctuation in the yen, and so on. Where do we get this information from? Honestly, I don't know. I looked at the experience of those who came before... the resource they took their data from simply disappeared, and... So fundamental analysis is for the fast and fearless traders. Okay, back to reality... all we can do is catch the trend and the recurrence of that trend in history. We can just bluntly...

1. catch the same thing at a certain time (downtrend, uptrend, swing, stop, breakout);

2. try some parameters that accompany the trend (resistance lines, support lines);

3. catch price reactions to significant or known events.

That's it. I don't know what else could serve as a signal. We can use the parameters of how the price is drawn on the chart... which makes the trader look like a shaman dancing with a tambourine! :) People actually like it. Well, the shaman knows what's what :) Of course, it would be nice to get information from a reliable source, but reliability keeps getting lower and lower. Therefore, we just bluntly... copy what has already happened :) How do you do that? Well... we define some parameters of the copying. What could those be?

 
Yuriy Asaulenko:

I don't understand what this is about?

Everything correlates with everything on the instrument. And this is inevitable, since everything is obtained by transformations from the same time series, from the same data.

By the way, you can try PCA to get rid of multicollinearity and reduce the dimensionality of the inputs, or singular value decomposition.

In principle I have no problem at all with fitting a NN to a piece of history; my problem now is with adaptive modes and self-learning. I haven't found any articles or other information that cover the topic well yet... I have some small developments of my own, but they don't work very well so far. This is why my article on RF and an adaptive system has stalled for the time being, since I wanted to cover the subject of adaptivity as fully as possible.

Now I see the solution in adaptive predictors that will give more or less correct signals on different charts, while the base NN will in fact be trained only once or not very often. That is, we need to get rid of the problem of when to retrain the NN, and of retraining itself: when the market changes, the predictors stop working.
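[Editor's note: a minimal sketch of the PCA idea mentioned above, assuming Python with scikit-learn; the moving-average predictors and all names here are made up for illustration and are not from the post.]

```python
# Hypothetical illustration: decorrelating collinear indicator inputs
# with PCA before feeding them to a model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Fake "predictors" derived from one price series, hence highly correlated
price = rng.normal(0, 1, 1000).cumsum()
X = np.column_stack([
    np.convolve(price, np.ones(n) / n, mode="same")  # moving averages
    for n in (5, 10, 20, 50)
])

# Standardize, then keep enough components to explain 95% of the variance
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)

print(X.shape, "->", X_reduced.shape)   # dimensionality is reduced
print(pca.explained_variance_ratio_)    # most variance in 1-2 components
```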

 
Maxim Dmitrievsky:

By the way, you can try PCA to get rid of multicollinearity and reduce the dimensionality of the inputs, or singular value decomposition.

In principle I have no problem at all with fitting a NN to a piece of history; my problem now is with adaptive modes and self-learning. I haven't found any articles or other information that cover the topic well yet... I have some small developments of my own, but they don't work very well so far. This is why my article on RF and an adaptive system has stalled for the time being, since I wanted to cover the subject of adaptivity as fully as possible.

I got the following result on the model a week ago.

x is the trade number, y is the total profit in points. Trading with a fixed lot.

I must say I was very surprised. I've been working on my real system for about a week now.

 
Yuriy Asaulenko:

I got the following result on the model a week ago.

x is the trade number, y is the total profit in points. Trading with a fixed lot.

I must say I was very surprised. I've been working on my real system for about a week now.

If you use real spreads and commissions, will it still be this cool? I'd like to see the results. + slippage, for example, on short trades.

And a forward test :)

 
Maxim Dmitrievsky:
If you use real spreads and commissions, will it still be this cool? I'd like to see the results. + slippage, for example, on short trades.

The model seems to take all this into account. It is assumed that 30 points are lost on every trade, successful or not (these are futures), and that's a lot for them. The test was on data not used during debugging: debugging on the 6.17 futures contract, the test on the later 9.17 contract. Everything that can be is taken into account).

However, there will be nuances, of course. I suppose that, as usual), the real result will be a little worse.

Now I have reached the point of opening deals. I have watched about a dozen entries online. I think they are correct.
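[Editor's note: a minimal Python sketch of the flat per-trade cost assumption described above; only the 30-point charge comes from the post, the trade results are made up.]

```python
# Hypothetical illustration of the flat-cost assumption: charge 30 points
# on every trade, win or lose, then cumulate into an equity curve.
import numpy as np

COST_PER_TRADE = 30.0  # points, as stated in the post

gross = np.array([120.0, -45.0, 80.0, -30.0, 200.0])  # made-up trade results
net = gross - COST_PER_TRADE  # proxy for spread + commission + slippage
equity = net.cumsum()         # y-axis of the profit chart

print(net)     # [ 90. -75.  50. -60. 170.]
print(equity)  # [ 90.  15.  65.   5. 175.]
```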

 

There is something wrong with the SOFTMAX version of the NN in ALGLIB. All the answers are skewed towards the first output (for me that's BUY).
On the same data, a NN doing regression with 3 outputs (with linear activation) gives more plausible results:

Buy          Sell         NA
 0.10302356   0.01091621   0.88606040
 0.09705416   0.01083526   0.89211080
 0.08283979   0.12548789   0.79167247
 1.02522414  -0.00119697  -0.02403573
 0.09498582   0.01529507   0.88971917
 1.01878489  -0.00111341  -0.01767998
 0.07906346   0.05960769   0.86132762
 0.00201949   0.00497863   0.99300189

Thresholding at >0.5 gives plausible answers.

The forest (random forest), by the way, is also skewed towards the 1st output.
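[Editor's note: for illustration, a minimal Python sketch of the >0.5 thresholding rule on the 3-output regression; this is not ALGLIB code, and the rows are copied from the table above.]

```python
# Apply the >0.5 rule to 3-output regression rows: take the strongest
# output, but only act when it actually clears the threshold.
import numpy as np

CLASSES = ["Buy", "Sell", "NA"]

rows = np.array([
    [0.10302356,  0.01091621,  0.88606040],
    [1.02522414, -0.00119697, -0.02403573],
    [0.00201949,  0.00497863,  0.99300189],
])

for out in rows:
    k = int(np.argmax(out))
    label = CLASSES[k] if out[k] > 0.5 else "no signal"
    print(out, "->", label)
```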

 

What is the activation function on your output neurons? I see negative values; there shouldn't be any. You should use softmax activation on the output neurons. Its values lie in the 0-1 range.

 
Grigoriy Chaunin:

What is the activation function on your output neurons? I see negative values; there shouldn't be any. You should use softmax activation on the output neurons. Its values lie in the 0-1 range.

The example above is from the regression with linear outputs (as a working variant). During training I feed target values from 0 to 1.

With softmax it was either exactly 1 or a few hundredths less, but always on the 1st output; the other 2 outputs were always 0. I.e. there is something wrong with softmax in ALGLIB...

 

Read up on neuron activation functions. You can feed whatever values you want to the output, but the wrong activation function will produce negative values; that is usually the hyperbolic tangent. The softmax loss function will not work correctly with it. Much depends on the library and the implementation of the loss function, though. For example, in TensorFlow the output neurons should have no activation function when training with the softmax loss function, and for correct use of the trained network you then have to add the softmax activation yourself. I haven't worked with ALGLIB, maybe they did something wrong there. In any case, a trained network with softmax should not give negative values.
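[Editor's note: a minimal numpy sketch, assuming Python (neither ALGLIB nor TensorFlow code), of the point above: softmax maps raw outputs into the 0-1 range, while tanh can go negative.]

```python
# Softmax maps raw logits to the 0-1 range (summing to 1),
# whereas a tanh output can go negative, as described above.
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5])  # raw outputs, no activation

print(np.tanh(logits))   # [ 0.964 -0.762  0.462] -- can be negative
print(softmax(logits))   # [ 0.785  0.039  0.175] -- in [0,1], sums to 1
```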

 

This happens if you have a much larger number of training examples of one class compared to the other classes. For example, 2000 training examples for buy and only 1000 for sell: the network can always output "Buy" and it will be right in about 66% of cases. It is better to make the number of training examples of each class equal.
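[Editor's note: a minimal Python sketch of the balancing suggested above, with made-up labels: downsample every class to the size of the smallest one.]

```python
# Balance classes by downsampling to the smallest class, as suggested above.
import numpy as np

rng = np.random.default_rng(42)
y = np.array(["buy"] * 2000 + ["sell"] * 1000)  # imbalanced labels
X = rng.normal(size=(len(y), 5))                # fake features

# Always predicting the majority class is right 2000/3000 ≈ 66% of the time
print((y == "buy").mean())

n_min = min(np.sum(y == c) for c in np.unique(y))
idx = np.concatenate([
    rng.choice(np.where(y == c)[0], size=n_min, replace=False)
    for c in np.unique(y)
])
X_bal, y_bal = X[idx], y[idx]
print({c: int(np.sum(y_bal == c)) for c in np.unique(y_bal)})  # equal counts
```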
