Machine learning in trading: theory, models, practice and algo-trading - page 681

 
I'm starting a topic on using Python for trading on the forum where I hang out. It will cover everything that may be useful to a trader developing an MTS. There will be no ready-made strategies; the task is to find an "algorithm" for creating a profitable MTS. It doesn't matter which methods are used; the main thing is that they are useful. The link will be in my news feed.
 
Grigoriy Chaunin:
Working with neural networks in MT is simple. Microsoft has a library called CNTK, implemented for Python, C# and C++. All the network design and training is done in Python, while C++ is used to write a DLL that loads the trained network and runs calculations with it. In my opinion this is the best option. The second option is connecting Python to MT; I have written a simple library for this. Library. Connect it and you can use everything that is available in Python, and there is a lot available. I'm wondering whether I should start writing about machine learning in my blog.
Could you explain one thing: after initialization, does the script stay resident in memory? That is, the modules loaded by the Python script should also stay in memory, right?
 
Yes. The MQL code is supposed to call the Python script functions on every tick.
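For reference, a minimal sketch of the CNTK workflow described above: the network is defined and trained in Python and saved to a file, and a C++ DLL can then load that file through CNTK's C++ evaluation API and be called from MQL on each tick. The layer sizes and the random training data below are placeholders of mine, not anything from the library discussed.

import cntk as C
import numpy as np

# Hypothetical dimensions: 10 input features, 2 output classes
x = C.input_variable(10)
y = C.input_variable(2)

# A small feed-forward net; the sizes are illustrative only
with C.layers.default_options(activation=C.relu):
    model = C.layers.Sequential([
        C.layers.Dense(32),
        C.layers.Dense(2, activation=None)
    ])(x)

loss = C.cross_entropy_with_softmax(model, y)
error = C.classification_error(model, y)
learner = C.sgd(model.parameters, lr=C.learning_parameter_schedule(0.01))
trainer = C.Trainer(model, (loss, error), [learner])

# Dummy data standing in for real features/labels
features = np.random.randn(100, 10).astype(np.float32)
labels = np.eye(2, dtype=np.float32)[np.random.randint(0, 2, 100)]
for _ in range(10):
    trainer.train_minibatch({x: features, y: labels})

# The saved file is what the C++ DLL would load for evaluation
model.save("net.cntk")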
 
Maxim Dmitrievsky:

It would be interesting to read about strategies and personal thoughts/experience... for me personally,

because most of what's here is just trash about hunting for libraries and what's better to write in... it's an epidemic, and it all goes nowhere.

Although the basic idea was voiced by fxsaber long ago: with such an approach the topic might as well be closed, because it is wrong from the very beginning.

I agree.

The topic has turned into complete garbage, with no concrete examples or theory.

Let's fix the situation.

To start, I advise everyone to read up on how to determine the sample size needed for forecasting (see attached file).

 
Alexander_K2:

I agree.

The topic has turned into complete garbage, with no concrete examples or theory.

Let's fix the situation.

To start, I advise everyone to read up on how to determine the sample size needed for forecasting (see attached file).

Alexander, thank you. Maybe something useful can be extracted from it; I'll read it.

 
Alexander_K2:

I agree.

The topic has turned into complete garbage, with no concrete examples or theory.

Let's fix the situation.

To start, I advise everyone to read up on how to determine the sample size needed for forecasting (see attached file).

The article is quite good in its own right. Results like those set out in it are sorely lacking here; it is the absence of such work that lets the "garbage in, garbage out" approach flourish in this thread.

But the article is written in the conventional tradition, a la the Soviet school: beautiful theory with no practical use, because there is no code.

 

At the moment I have a stable model. It still needs work, but I'm busy with other tasks.

Input = price + ema(13, 26);

Preprocessing output = sigmoid(max, min, close), delta(open - close), derivative(max, min, close), logarithm of the derivative(max, min, close), detrend(close - ema13, close - ema26), ema(13, 26), derivative of ema(13, 26). The dataset is then converted into a time-series dataset (current candle + 5 previous candles). In total: 16 x 6 = 96 parameters.
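For illustration, a rough pandas sketch of that preprocessing, under assumptions of mine: "derivative" is taken as a first difference, "sigmoid" is the logistic function applied to a centred price, the log of the derivative is guarded with an absolute value, and the column names are hypothetical.

import numpy as np
import pandas as pd

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def preprocess(df):
    # df has hypothetical columns: open, high, low, close
    f = pd.DataFrame(index=df.index)
    ema13 = df['close'].ewm(span=13, adjust=False).mean()
    ema26 = df['close'].ewm(span=26, adjust=False).mean()
    for c in ('high', 'low', 'close'):
        f['sig_' + c] = sigmoid((df[c] - ema26) / df[c].std())  # assumption: sigmoid of a centred price
        f['der_' + c] = df[c].diff()                            # assumption: derivative = first difference
        f['logder_' + c] = np.log(df[c].diff().abs() + 1e-9)    # log of |derivative|, guarded against log(0)
    f['delta_oc'] = df['open'] - df['close']
    f['detrend13'] = df['close'] - ema13
    f['detrend26'] = df['close'] - ema26
    f['ema13'], f['ema26'] = ema13, ema26
    f['der_ema13'] = ema13.diff()
    f['der_ema26'] = ema26.diff()
    return f.dropna()                                           # 16 features per candle

def to_sequences(f, window=6):
    # Stack the current candle with the 5 previous ones: shape (n, 6, 16)
    a = f.to_numpy()
    return np.stack([a[i - window + 1:i + 1] for i in range(window - 1, len(a))])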

Architecture:
BatchNormalization(96);
GRU(96, L2, Dropout=0.5, 'elu');
GRU(64, L2, Dropout=0.5, 'elu');
BatchNormalization(64, Dropout=0.3);
Dense(32, Dropout=0.3, 'elu');
Dense(16, 'elu');
Dense(8, 'elu');
Dense(3, 'softmax');
Optimizer=Nadam;
Loss=categorical_crossentropy;
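That listing maps naturally onto Keras; here is a sketch under a few assumptions of mine: the 96 inputs are fed to the GRUs as 6 timesteps of 16 features, "L2" uses an arbitrary coefficient of 1e-4, the dropout named inside the GRU lines is the layer's own dropout argument, and the remaining dropouts become separate Dropout layers.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import BatchNormalization, GRU, Dense, Dropout
from tensorflow.keras.regularizers import l2

model = Sequential([
    BatchNormalization(input_shape=(6, 16)),     # 6 candles x 16 features = 96 inputs
    GRU(96, activation='elu', kernel_regularizer=l2(1e-4),
        dropout=0.5, return_sequences=True),
    GRU(64, activation='elu', kernel_regularizer=l2(1e-4), dropout=0.5),
    BatchNormalization(),
    Dropout(0.3),
    Dense(32, activation='elu'),
    Dropout(0.3),
    Dense(16, activation='elu'),
    Dense(8, activation='elu'),
    Dense(3, activation='softmax'),
])
model.compile(optimizer='nadam', loss='categorical_crossentropy',
              metrics=['accuracy'])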

Output: BouncedMA signals one candle ahead (described earlier in the thread);

Metrics: loss ~0.7-0.8; accuracy ~0.55.
But these metrics do not reflect the quality of the signals. They come out lower because the training signals are 1.0, 0.95, 0.0, -0.95, -1.0, while the predicted buy/sell class fluctuates around ~abs(0.45-0.7).
Also, out of a dataset of 5000 rows, training runs on 0.8 of it, which means the model never even sees the latest quote data (~1000 rows). The prediction is made on the last 100 candles.
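The split described, with no shuffling so the hold-out is the newest data, would look roughly like this (X and y are the arrays from the preprocessing sketch above):

# Chronological split: the model never sees the newest ~1000 rows
split = int(len(X) * 0.8)
X_train, y_train = X[:split], y[:split]
X_test,  y_test  = X[split:], y[split:]

pred = model.predict(X[-100:])   # prediction on the last 100 candles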

[Images: prognosis, model summary, loss/accuracy training curves]

As you can see, training can be stopped at ~45 epochs.
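If the validation curves flatten around epoch 45, Keras can stop training automatically; a sketch, with the patience value being my own guess:

from tensorflow.keras.callbacks import EarlyStopping

stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
history = model.fit(X_train, y_train, validation_split=0.2,
                    epochs=100, batch_size=64, callbacks=[stop])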

[Image: prediction]

Attachments: Code, Indicator.

 
Vizard_:

)))

Reinforcement learning = learning without a teacher. A very greedy algorithm.

A stoned Maximka walks into a casino. In front of him is a row of one-armed-bandit machines. He starts playing each one and recording the results.

It turns out that one of the machines is better, and a joyful Maxim starts playing only that one. But his happiness does not last long...

http://www.machinelearning.ru/wiki/index.php?title=Reinforcement_learning


Yes, I've already been through the bandits :) You can make it epsilon-greedy and periodically switch from one machine to another.
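A minimal epsilon-greedy sketch of that idea: with probability epsilon pick a random machine, otherwise the best average so far. The payout distribution is made up.

import random

n_arms = 10
counts = [0] * n_arms
values = [0.0] * n_arms              # running mean reward per machine
epsilon = 0.1

def pull(arm):
    # Hypothetical payout: each machine has its own hidden bias
    return random.gauss(arm * 0.01, 1.0)

for step in range(10000):
    if random.random() < epsilon:
        arm = random.randrange(n_arms)                          # explore
    else:
        arm = max(range(n_arms), key=lambda a: values[a])       # exploit
    reward = pull(arm)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]         # incremental mean

print('best machine:', max(range(n_arms), key=lambda a: values[a]))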

 
 

That's a very good post. Thank you, I never thought of that.

It goes like this.

I ) We take a neural net and make it trade live, while simultaneously tuning its configuration. It drives the dealer crazy with its profits; the dealer starts moving the price against it, but the net is somehow ready even for that and keeps trading profitably; the dealer copies its trades to the interbank to recoup; exchange trading robots go crazy and turn stupid, and the global market goes to hell. The agent interacts with the environment. That is trading with reinforcement. I think that this past Monday some Google was testing its new reinforcement trading robots; it fits in perfectly.

II ) We take a neural net and make it trade on history. We find the ideal set of weights and network configuration that lets it trade perfectly on history. There is nothing to reinforce it with; this is learning without a teacher. And most likely it will overfit and fail.


In short, the plan: a hundred live cent accounts, with a neural net trading on each of them for a couple of days. A genetic algorithm collects the nets' results and selects a new configuration for them, generating new robots every couple of days and then waiting for the results to create new ones again.
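A sketch of that selection loop, where everything is hypothetical: the config encoding, the mutation scale, and evaluate() standing in for a couple of days of live trading on a cent account.

import random

def evaluate(config):
    # Stand-in for a few days of live trading on a cent account;
    # in reality this would be the account's realised profit.
    return -sum((g - 0.5) ** 2 for g in config) + random.gauss(0, 0.1)

def mutate(config, scale=0.05):
    return [g + random.gauss(0, scale) for g in config]

# 100 "accounts", each running a net with an 8-number configuration
population = [[random.random() for _ in range(8)] for _ in range(100)]

for generation in range(10):
    ranked = sorted(population, key=evaluate, reverse=True)
    parents = ranked[:20]                                   # keep the most profitable nets
    population = parents + [mutate(random.choice(parents)) for _ in range(80)]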
