Machine learning in trading: theory, models, practice and algo-trading - page 593

 
Maxim Dmitrievsky:

Focused Time-Lagged Feedforward Networks

In structural pattern recognition it is common to use static neural networks. Temporal pattern recognition, by contrast, requires processing patterns that change over time, and generating a response at a particular point in time that depends not only on the current value but also on several previous ones.

Do such architectures exist? :) In theory, this is exactly the type of architecture that should work in Forex... but you have to experiment. It's easy to do: just add a couple of "interesting" neurons to an MLP, or connect two models together.

Where do you get the model?
 
Yuriy Asaulenko:
Where do you get the model?

I'll build it myself ) I haven't finished reading yet; as it turns out, there are a lot of interesting things in books.

The idea of "memory" for the markets should be a good one... not super-slow recurrence, though, but something simpler and more specialized.
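One simple, specialized form of "memory" is exactly the tapped delay line from the excerpt above: feed a static MLP a sliding window of past values. A minimal sketch (mine, not code from the thread; scikit-learn and a synthetic series are assumed):

```python
# Focused time-delay network sketched as a plain MLP over a tapped
# delay line (a sliding window of past inputs). Illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=2000))   # synthetic "price" series

delay = 10  # depth of the delay line
# Row t holds x[t] ... x[t+delay-1]; the target is the next value.
X = np.column_stack([x[i:len(x) - delay + i] for i in range(delay)])
y = x[delay:]

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(X[:-200], y[:-200])            # train on the earlier part
print(net.score(X[-200:], y[-200:]))   # test on the most recent part
```

The response at each step depends on the current value and on `delay - 1` previous ones, which is the "several previous values" from the quote.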

 

The simplest example:

By an external "shifter" we can mean some function of, for example, previous trades, volatility, or some other hyperparameters of the system.

But it would be better if the shifter were embedded into one or more neurons; then it would become non-linear, in a sense.
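One possible reading of "embedding the shifter into a neuron" (my interpretation; the function and names are hypothetical) is an external signal, say recent volatility, that shifts the neuron's activation threshold, so the same inputs produce a different response depending on market state:

```python
# Toy neuron whose activation threshold is shifted by an external
# "shifter" signal (e.g. recent volatility). Hypothetical sketch.
import numpy as np

def shifted_neuron(inputs, weights, shifter, k=1.0):
    """A tanh neuron whose bias is modulated by the external signal."""
    return np.tanh(np.dot(weights, inputs) - k * shifter)

x = np.array([0.2, -0.1, 0.4])   # current inputs (e.g. recent returns)
w = np.array([0.5, 0.3, -0.2])

print(shifted_neuron(x, w, shifter=0.1))  # calm regime
print(shifted_neuron(x, w, shifter=0.9))  # volatile regime: same inputs,
                                          # different response
```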

 

Don't hold out hope that some new kind of neural network or Python package will appear that solves all problems - one that won't overfit the model, won't be afraid of non-stationarity, and so on.

No matter how fancy the model is, it rests on a simple principle: it takes training data prepared by a human, and the model merely builds a simplified description of how to calculate the result from the input data. None of this is far removed from nearest-neighbor prediction, but conventional models predict orders of magnitude faster (although they do take a long time to train), so they are better liked.

The key phrase is "training data prepared by a human". If the expert prepares the data correctly, the model will train on it and you will trade in the black. Look, for example, at the training and testing tables that SanSanych, Vizard and Michael shared.
I'm amazed when I look at them. And no amazing RNN with non-linear input filters and dozens of layers will do that for you.
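The nearest-neighbour comparison is easy to see on synthetic data (a sketch of mine, not Dr. Trader's code): k-NN "trains" instantly but pays at prediction time by searching the stored examples, while a fitted network trains slowly and then predicts with a couple of matrix multiplications:

```python
# Sketch of the trade-off above: k-NN vs a fitted model on the same
# prepared features. Synthetic data; timings are machine-dependent.
import time
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(50_000, 10))        # "human-prepared" features
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2])  # hidden relationship

knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)  # instant "training"
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=50,
                   random_state=1).fit(X, y)        # slow training

Xq = rng.normal(size=(1_000, 10))
for name, model in (("kNN", knn), ("MLP", net)):
    t0 = time.perf_counter()
    model.predict(Xq)
    print(name, "predict:", round(time.perf_counter() - t0, 4), "s")
```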

 
Dr. Trader:

Don't hold out hope that some new kind of neural network or Python package will appear that solves all problems - one that won't overfit the model, won't be afraid of non-stationarity, and so on.

No matter how fancy the model is, it rests on a simple principle: it takes training data prepared by a human, and the model merely builds a simplified description of how to calculate the result from the input data. None of this is far removed from nearest-neighbor prediction, but conventional models predict orders of magnitude faster (although they do take a long time to train), so they are better liked.

The key phrase is "training data prepared by a human". If the expert prepares the data correctly, the model will train on it and you will trade in the black. Look, for example, at the training and testing tables that SanSanych, Vizard and Michael shared.
I'm amazed when I look at them. And no amazing RNN with non-linear input filters and dozens of layers will do that for you.


Unfortunately (maybe only for me), most of what is discussed here is neurostatics. It really does require very well prepared data whose statistical properties do not change over time, which is why there are so many different ways to select and filter predictors. I find this approach very difficult because of the preprocessing.

But one can also look toward neurodynamics, with various kinds of "memory" and adaptive mechanisms... it seems to me a simpler and somehow more natural approach, though I'm not sure about its effectiveness; that depends on how it is done.

It depends on the point of view from which you consider the market: as a set of patterns, or as a system evolving according to certain laws.

 
Maxim Dmitrievsky:

But one can also look toward neurodynamics, with various kinds of "memory" and adaptive mechanisms... it seems to me a simpler and somehow more natural approach, though I'm not sure about its effectiveness; that depends on how it is done.
...
a system evolving according to certain laws.
...

It needs to be studied, undoubtedly. But mankind has not yet invented a suitable tool (at least not in the public domain).

LSTM neurons are quite interesting for this thread: they can describe a time series much more accurately than ordinary neurons, and with fewer of them. But their problem is overfitting.
With a regular neural network you can set aside some of the data for cross-validation and fight overfitting that way. But for an LSTM network the order in which the data arrive matters: each new prediction uses the network's internal state and changes it. As a result, the whole time series is predicted in strict order; each prediction depends on the past and affects the future. If some examples are randomly removed to be used later for cross-validation, the sequence is broken, which is bad and calls the whole training into question. If instead the data are split sequentially into a training part and a testing part, we get overfitting again, because that does not help in Forex.
All you can do is train the LSTM to maximum accuracy and hope. But Forex does not forgive that kind of irresponsibility.
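To make the order-preservation point concrete, a small sketch (mine; scikit-learn's TimeSeriesSplit is assumed): walk-forward splits keep every test fold strictly after its training fold, which is the only kind of hold-out that does not scramble a stateful model's sequence:

```python
# Walk-forward (order-preserving) splits: each test fold lies strictly
# after its training fold, unlike random cross-validation, which would
# scramble the sequence a stateful network depends on.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

t = np.arange(20)  # stands in for a price series in time order
for train_idx, test_idx in TimeSeriesSplit(n_splits=4).split(t):
    print("train", train_idx[0], "..", train_idx[-1],
          "| test", test_idx[0], "..", test_idx[-1])
```

Even this only gives sequential train/test splits, which, as noted above, does not by itself solve the overfitting problem.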

Years of academic work on the subject of LSTM overfitting are needed; when this problem is solved, it will be possible to start creating the grail.

 

MQL by itself is neither bad nor good. Its syntax is close to C++; overall it is a standard language. The problem is the availability of the libraries it needs: they are either missing or of poor quality. That's why we need to integrate Python. I have already given a link to its integration with MQL, and I will give a few more. The library is quite usable now. Download.
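Not the library linked above, but for the general shape of such a bridge, here is a minimal hypothetical sketch of one common pattern: a local TCP service in Python (standard library only) that a terminal-side client such as an Expert Advisor could query with feature values and read a prediction back:

```python
# Generic illustration of a Python<->terminal bridge: a local TCP
# service that receives comma-separated feature values and replies
# with a number. Not the library linked above; the averaging below
# is a stand-in for a real model.
import socketserver

class PredictHandler(socketserver.StreamRequestHandler):
    def handle(self):
        line = self.rfile.readline().decode().strip()
        features = [float(v) for v in line.split(",")]
        result = sum(features) / len(features)  # stand-in for model.predict
        self.wfile.write(f"{result}\n".encode())

if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 9090), PredictHandler) as srv:
        srv.serve_forever()
```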

 
Dr. Trader:

This needs to be studied, undoubtedly. But mankind has not yet invented a suitable tool (at least not in the public domain).

LSTM neurons are quite interesting for this thread: they can describe a time series much more accurately than ordinary neurons, and with fewer of them. But their problem is overfitting.
With a regular neural network you can set aside some of the data for cross-validation and fight overfitting that way. But for an LSTM network the order in which the data arrive matters: each new prediction uses the network's internal state and changes it. As a result, the whole time series is predicted in strict order; each prediction depends on the past and affects the future. If some examples are randomly removed to be used later for cross-validation, the sequence is broken, which is bad and calls the whole training into question. If instead the data are split sequentially into a training part and a testing part, we get overfitting again, because that does not help in Forex.
All you can do is train the LSTM to maximum accuracy and hope. But Forex does not forgive that kind of irresponsibility.

Years of academic work on the subject of LSTM overfitting are needed; when this problem is solved, it will be possible to start creating the grail.


You need an NN that plays against itself in Forex :) and that's not LSTM. An LSTM doesn't use BP as an external agent that smacks it on the forehead when it makes a mistake.

Yuri already wrote about this; I'm just summarizing.
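If "plays against itself" means reinforcement learning (my assumption, not Maxim's wording), the distinguishing feature is that the only feedback is the reward for an action, the "smack on the forehead", rather than a supervised target. A toy tabular sketch on synthetic increments:

```python
# Toy reinforcement-learning loop: the agent's only feedback is the
# profit or loss of its action, not a supervised label. Tabular
# one-step Q-learning on synthetic data; purely illustrative.
import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(0, 1, 10_000)
states = (returns > 0).astype(int)   # crude state: was the last move up?
Q = np.zeros((2, 2))                 # Q[state, action]; 0 = flat, 1 = long
eps, alpha = 0.1, 0.05

for t in range(len(returns) - 1):
    s = states[t]
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
    reward = returns[t + 1] if a == 1 else 0.0  # P/L of holding long
    Q[s, a] += alpha * (reward - Q[s, a])       # learn from the "smack"
print(Q)  # on pure noise the learned values should hover near zero
```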

 
Grigoriy Chaunin:

MQL by itself is neither bad nor good. Its syntax is close to C++; overall it is a standard language. The problem is the availability of the libraries it needs: they are either missing or of poor quality. That's why we need to integrate Python. I have already given a link to its integration with MQL, and I will give a few more. The library is quite usable now. Download.


Thank you for your work!

 

Can't sleep - did a little reading on the Internet. I liked this:

"The fact that increments are used is not really so bad against the general background, most of the logarithmic price is fed to the input, the increments is a step forward, although both and so the fit.

I know people who pulled a grail out of a NN, but those guys are so closed off to communication, even to hints about what they do, and I'm a beginner, so I'm sure I have no chance. All I know is that everything there is complicated: it's not Wealth-Lab, not MetaTrader and not even S#, but C++ and MATLAB with some tricks that decode and interpret data coming from colliders. It turned out to be the same methodology; when I heard that, I got scared. They have a guy working for them who used to grind through terabytes a day at CERN, looking for new particles in quantum chaos."

That's funny. I stand by my opinion: the input to the NN should be as pure as a teardrop - price increments. The increments are the key to everything; they form the basis of the solution to this problem. In fact, in Forex we are following a pseudo-stationary process: the movement of a wave packet (the probability density function) of these increments. And nothing more. (I have written this paragraph before :)))
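For reference, the quantities being argued about, sketched on a synthetic series (my illustration): the log price and its increments r[t] = ln(p[t]) - ln(p[t-1]):

```python
# Log price vs. its increments (log returns): the input the post argues
# for. Synthetic prices; parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
price = 1.10 * np.exp(np.cumsum(rng.normal(0, 0.001, 5000)))  # toy series

log_price = np.log(price)        # what "most people feed to the input"
increments = np.diff(log_price)  # r[t] = ln(p[t]) - ln(p[t-1])

print(increments.mean(), increments.std())  # roughly 0 and 0.001
```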
