Machine learning in trading: theory, models, practice and algo-trading - page 336

 
Yuriy Asaulenko:


In general, they say the Internet is full of C++ neural network libraries. But I haven't looked for them myself.


http://www.opennn.net/

https://www.neuraldesigner.com/

OpenNN | Open Neural Networks Library
  • www.opennn.net
The main advantage of OpenNN is its high performance. It is developed in C++ for better memory management and higher processing speed, and implements CPU parallelization by means of OpenMP and GPU acceleration with CUDA.
 
And yet, as I understand it, there are no satisfactory trading results, not even on a demo!
 
elibrarius:

I'm starting to study neural networks.

I am looking at options that can be implemented directly in MT5.

I am interested in the ALGLIB variant (https://www.mql5.com/ru/articles/2279), but from the description it follows that it is a feedforward network without feedback connections. Another drawback is that it can only be trained on a single processor thread (the one running the Expert Advisor with the network).

I think it would not be too difficult to add 2 hidden layers to the network from the article https://www.mql5.com/ru/articles/497 and then train it either by exhaustive search or by the genetic optimizer in the tester. In that case you could use many more computational threads (your CPU cores, the local network and the cloud). Do I understand that correctly?
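The idea of training network weights with the tester's genetic optimizer can be sketched as a toy genetic algorithm. This is an illustrative Python sketch, not MQL5, and the `fitness` objective is a hypothetical stand-in for the tester's profit metric:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):
    # Stand-in for the tester's profit metric: here, just how close the
    # weights are to a fixed target vector (hypothetical toy objective).
    target = np.array([0.5, -0.2, 0.8])
    return -np.sum((w - target) ** 2)

def genetic_optimize(n_weights=3, pop_size=40, n_gen=60, sigma=0.1):
    """Toy genetic optimizer for network weights: keep the best half of
    the population, refill with mutated copies, repeat. Candidates are
    evaluated independently, so each generation could in principle be
    spread across cores or tester agents."""
    pop = rng.normal(0.0, 1.0, (pop_size, n_weights))
    for _ in range(n_gen):
        scores = np.array([fitness(w) for w in pop])
        order = np.argsort(scores)[::-1]          # best first
        elite = pop[order[: pop_size // 2]]       # survivors
        children = elite + rng.normal(0.0, sigma, elite.shape)  # mutation
        pop = np.vstack([elite, children])
    scores = np.array([fitness(w) for w in pop])
    return pop[int(np.argmax(scores))]
```

Because the elite survives unmutated, the best score never gets worse from one generation to the next, which is also why the tester's genetic mode converges steadily.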

How do I supply the network with labeled correct answers (buy and sell points) during training?

Maybe there is already a library for a multilayer feedforward network somewhere?

Also, I don't quite understand the usefulness of hidden layers for forex/exchange trading. Does it make sense to add them? Why?


An MLP is not suited to market forecasting, see the videos above; you need an RNN, i.e. a network with memory.

https://nplus1.ru/material/2016/11/04/recurrent-networks

The ABC of AI: "Recurrent neural networks"
  • 2016.11.04
  • Taras Molotilin
  • nplus1.ru
N+1, together with MIPT, continues to acquaint readers with the most striking aspects of modern research in artificial intelligence. Last time we wrote about the general principles of machine learning, and specifically about the backpropagation method for training neural networks. Today we talk to Valentin Malykh, junior research...
 
Renat Akhtyamov:
And yet, as I understand it, there are no satisfactory trading results, not even on a demo!

It's a matter of optimization; there's no point testing all the versions yet. I'll test once I've implemented them all completely.
 
Yuriy Asaulenko:

The experiment to train a neural network (NN) to recognize the crossing of two MAs failed. Training was done to recognize upward crossings only.

For the experiment, a 3-3-3-1 NN was selected and tested on training and recognition of artificially created patterns. However, after training on the MAs, not a single crossover was recognized. The reason: the NN needs higher-contrast images and ignores input differences on the order of 0.01-0.1.

For this NN structure, reliable recognition is quite achievable when the signal difference is at least 0.2-0.3.
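One common remedy for the contrast problem described above is to rescale the inputs so that raw differences of 0.01-0.1 are stretched into the 0.2-0.3+ range the network can separate. A minimal sketch of such preprocessing (a hypothetical step, not from the original post), in Python for illustration:

```python
import numpy as np

def stretch_contrast(x, lo=-1.0, hi=1.0):
    """Min-max rescale inputs into [lo, hi], so small raw differences
    between inputs become large enough for the network to distinguish."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    if span == 0:
        return np.zeros_like(x)  # constant input: no contrast to stretch
    return lo + (x - x.min()) * (hi - lo) / span
```

For example, inputs `[1.00, 1.01, 1.05, 1.10]` differ by at most 0.1 in raw form, but after rescaling they span the full [-1, 1] range, with the smallest gap stretched from 0.01 to 0.2.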


An MLP will keep giving you a mess like that; I have already experimented with them. You are forever tuning the architecture: too many layers and it overfits, too few and it underfits, and then there is the neuron count on top of that.
 
Renat Akhtyamov:
I can see no satisfactory results even on demo trades!

No one is going to spend months developing a strategy and then go bragging about it on a demo account. This kind of stuff is traded on a real account, and the transaction history is hidden from everyone. I even read on the forum how people purposely trade at two brokers, taking turns losing on one and compensating losses on the other, so that even the broker does not know which deals were made by the strategy and which ones were fake.

There are results. Sometimes a good combination of predictors and model brings a profit for a couple of months, more often for less. Then it has to be replaced by another.


My personal opinion: neural networks, forests, regressions are all too weak for forex. The reason is that price behavior changes constantly; rules that are profitable today may have been unprofitable a week ago. The standard approach, taking indicators and price for a couple of months and training a network on them, assumes the network can find rules of price behavior that held over the whole two months. No such rules exist, nobody knows what it will actually find, and 99% of the time it will be wrong. Sometimes you get lucky and the model falls into that 1%, but that is far from a grail; such Expert Advisors usually trade well until the first stop loss and can then be thrown out.

I'm studying pattern-recognition models that look at price behavior after similar patterns in history and trade on those statistics.
In R I haven't seen a single package that does everything I need; my model is assembled piecemeal from others, plus my own home-grown code. The closest thing to a description of the model I've seen is in another thread; I'd advise starting to build your grail from it (quoted below). New problems will come up along the way, and you will have to think them through and experiment.

Forum on trading, automated trading systems and testing trading strategies

This is the first step to analyzing the most important STATISTICAL characteristics of the pattern and selecting a trading method using this pattern.

Vladimir, 2017.04.06 06:20

Look for my nearest neighbor indicator in the codebase. The method is quite simple. You set the length of the current pattern, find similar patterns from history (e.g. use correlation as distance between patterns), predict future price behavior from past patterns by weighting their individual predictions. This is essentially the same as clustering, or RBF, or SVM, or GRNN. It all depends on how you measure the distance from the current pattern to similar past patterns. Read about GRNN and Bayes. It describes prediction theory in terms of statistical distributions. There is a lot written about GRNN and the aforementioned prediction methods, and it all boils down to one simple formula:


prediction: y = SUM y[k]*exp(-d[k]/(2*s^2)) / SUM exp(-d[k]/(2*s^2))


where y[k] is the k-th past pattern, d[k] is the distance from k-th pattern to the current pattern. If distances have Gaussian distribution, then d[k] = (x - x[k])^2. For an arbitrary (super Gaussian) distribution, d[k] = |x - x[k]|^p, where you choose p depending on whether you want to give more weight to the closest neighbors (big p), or give all neighbors almost the same weight (small p) as in socialism. With p=0, we have complete socialism.
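The weighted-average formula above, including the choice of the exponent p, can be sketched in a few lines. This is an illustrative Python version (the original discussion is about MT5/R), with hypothetical argument names:

```python
import numpy as np

def grnn_predict(x_hist, y_hist, x_now, p=2.0, s=1.0):
    """GRNN-style kernel-weighted nearest-neighbour prediction.

    x_hist: (n, m) past patterns; y_hist: (n,) their outcomes;
    x_now: (m,) current pattern. Distance d[k] = sum |x_now - x_hist[k]|^p,
    weight w[k] = exp(-d[k] / (2 s^2)); prediction is the weighted mean.
    Large p favours the closest neighbours; p = 0 weights all neighbours
    equally ("complete socialism" in the quote above)."""
    d = np.sum(np.abs(x_hist - x_now) ** p, axis=1)
    w = np.exp(-d / (2.0 * s ** 2))
    return float(np.sum(w * y_hist) / np.sum(w))
```

With a small bandwidth s the prediction collapses onto the single nearest neighbour's outcome; with p = 0 it becomes a plain average over all past patterns.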

Once you get acquainted with nearest neighbors and GRNN, the next obvious question arises: how do you measure the distance between the current pattern and past patterns while allowing for distortions along the time axis (i.e. past patterns may look like the current one but stretched or compressed in time)? That is the crux of the problem.
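One standard answer to the time-distortion question posed above is dynamic time warping (DTW), which the quoted post does not name but which measures distance while allowing stretching and compression along the time axis. A minimal Python sketch (a hypothetical illustration, not the author's method):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences that may be
    stretched or compressed in time. D[i, j] holds the cheapest cost of
    aligning the first i points of a with the first j points of b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # each point may match one point, or repeat (stretch/compress)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

For example, `[0, 1, 2]` and its time-stretched copy `[0, 0, 1, 1, 2, 2]` have DTW distance 0, whereas a point-by-point comparison would treat them as different patterns.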


 

Has anyone tried genetic programming as an ML method?

Like Chaos Hunter?

 
Maxim Dmitrievsky:


An MLP is not suited to market forecasting, see the videos above; you need an RNN, i.e. a network with memory.

https://nplus1.ru/material/2016/11/04/recurrent-networks

If I am not mistaken, an RNN would be extremely difficult to implement in MT5, and good results would require either a purchased solution or an in-house development with huge labor costs.

If, in addition to the price and indicator information for the current bar, you feed the MLP the same data for the previous 10-30 bars, that becomes a kind of memory. Some neurons will process the current state, and others will process how the situation developed in the recent past.

When trading manually we also look at the last few bars, not only at the current state of the indicators.
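Feeding the MLP the previous bars as extra inputs is just a sliding-window transformation of the series. A minimal sketch, in Python for illustration (the thread itself is about MQL5), with hypothetical names:

```python
import numpy as np

def make_windows(series, n_lags):
    """Build MLP input rows from the current bar plus n_lags previous
    bars, giving the network 'a kind of memory'. Returns an array of
    shape (len(series) - n_lags, n_lags + 1); row t holds
    series[t - n_lags .. t], oldest value first."""
    X = np.array([series[t - n_lags: t + 1]
                  for t in range(n_lags, len(series))])
    return X
```

With 10-30 lags per input (price, each indicator), the input layer grows by the same factor, which is exactly why the computational load increases as noted below.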

It is clear that the amount of computation will grow, which is why I was interested in moving the calculations from one core to all processor cores, to the local network or to the cloud. How can this be done? At least for an MLP.

 
nowi:

Has anyone tried genetic programming as an ML method?

Like Chaos Hunter?

What do you mean, like Chaos Hunter? Give a specific link.
 
elibrarius:

It is clear that the amount of computation will grow, so I was interested in moving the calculations from one core to all processor cores, the local network or the cloud. How could this be done? At least for an MLP.


With OpenCL, if you're not lazy ))