Machine learning in trading: theory, models, practice and algo-trading - page 285

 

Mihail Marchukajtes:
About deep learning. A lot of people are talking about it now, and I want to know more. And another question: has anyone implemented a recurrent network without a teacher (unsupervised) on MT4?

=====================================

Read here, here.

As for the second question, I don't even know what to say.

Good luck

 
Vladimir Perervenko:
Well, now I see. In other words, first we train, say, a network without a teacher. Then we copy the obtained weights into the classifier's weights, and then we train the classifier with a teacher. Suppose we have received the pre-trained weights; during fine-tuning, do the weights continue to be optimized? In other words, by pre-training without a teacher we set initial weights for the classifier that bring it closer to the global minimum. Is that how it works?
 

I use the target in training only as a base that the model strives toward during training but never reaches. And to evaluate the model I do not use prediction accuracy; instead I build a trade-balance chart from the predictions and evaluate that chart through, for example, the Sharpe Ratio or the Recovery Factor or something else. A "color of the next bar" target works very well. You can go further and refine this target with a genetic algorithm so that trades on small movements are simply sat out rather than taken, and so that you are not bothered when the size of the next candle is less than the spread. In general, the genetically selected decision (buy/sell) on each bar should yield the largest profit.
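The evaluation described here can be made concrete with a short sketch: build the balance curve from next-bar predictions, then score it with a Sharpe-like ratio and the recovery factor. This is a minimal stdlib Python illustration under my own naming, not the poster's actual code:

```python
import statistics

def balance_curve(predictions, returns):
    """Cumulative trade balance: go long when the model predicts an up bar
    (prediction == 1), short otherwise."""
    balance, curve = 0.0, []
    for p, r in zip(predictions, returns):
        balance += r if p == 1 else -r
        curve.append(balance)
    return curve

def sharpe_ratio(curve):
    """Mean per-bar gain divided by its standard deviation (no annualisation)."""
    gains = [b - a for a, b in zip([0.0] + curve[:-1], curve)]
    sd = statistics.pstdev(gains)
    return statistics.mean(gains) / sd if sd > 0 else 0.0

def recovery_factor(curve):
    """Net profit divided by the maximal drawdown of the balance curve."""
    peak, max_dd = float("-inf"), 0.0
    for b in curve:
        peak = max(peak, b)
        max_dd = max(max_dd, peak - b)
    return curve[-1] / max_dd if max_dd > 0 else float("inf")
```

Two models with equal hit rate can then be ranked by `recovery_factor(balance_curve(...))` rather than by accuracy.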

I take certain indicators and other inputs, train the model, then make predictions on the same training data, build a balance chart from those predictions and evaluate it. Then I carefully choose the model parameters, the indicators and their combinations in order to get the balance chart with the best evaluation. As a result I obtain an accuracy of no more than 60%, but the model rides the trends and does not trade on every bar.
This is just the tip of the iceberg; under the water there are many more details, such as assessing the usefulness of predictors and making sure the model gives less weight to bad ones; selecting a suitable model from the hundreds available in R; cross-validation; choosing the length of history the model is trained on; and much more.
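One of the "underwater" details listed here, choosing the length of history to train on, can be sketched as a walk-forward loop. `train_and_score` is a hypothetical user-supplied routine (fit a model on `train`, return a score such as the balance-curve Sharpe on `test`); the structure of the search, not the model, is the point:

```python
def walk_forward(series, window_lengths, train_and_score):
    """Pick the training-window length with the best average
    out-of-sample score over a rolling walk-forward split."""
    best_len, best_score = None, float("-inf")
    for n in window_lengths:
        scores = []
        # roll through the series: train on n bars, test on the next n
        for start in range(0, len(series) - 2 * n + 1, n):
            train = series[start:start + n]
            test = series[start + n:start + 2 * n]
            scores.append(train_and_score(train, test))
        if scores and sum(scores) / len(scores) > best_score:
            best_len, best_score = n, sum(scores) / len(scores)
    return best_len, best_score
```

Scoring only on held-out windows is what keeps the chosen history length from simply memorising the training set.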

At first I only managed to do this on D1, but later I was able to switch to H1. For me, all smaller timeframes are already unpredictable noise.

 
I don't know:


I recommend https://www.accern.com/ to try it, I use it, I'm very satisfied.

It looks cool, but is a bit pricey.

I would like to practice on something free and see how it works in real time without a delay, but the demo has a giant lag.

Can you describe in a nutshell how this signal is used in trading and in ML? If it's no secret: when important news is released, do you actually have time to trade, or do you only have a second or half a second before others hit the market and take the move?

 
Dr.Trader:

And to evaluate the model I do not use prediction accuracy; instead I build a trade-balance chart from the predictions and evaluate that chart through, for example, the Sharpe Ratio or the Recovery Factor or something else.

That's exactly what I've always done: the predictions are always bad, but the trading itself is not always... In general, the ideal target is not a vector of "0 0 1 1 0" values but a search for a global optimum. For example, let's simply ask the network to trade so that the recovery factor does not fall below 4, and let it pick its own weights until it finds a solution; it does not matter how it does it. This method kills all the disadvantages of the usual vector target and has a lot of advantages; it is also absolutely objective with respect to trading, while a vector target is absolutely subjective (everyone sees it differently).
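The idea of replacing a vector target with a direct search over weights against a trading metric can be illustrated with a toy (1+1) evolution strategy. Everything here (the one-neuron linear "network", the fitness) is a simplified stand-in of my own construction, not the poster's method:

```python
import random

def trade_score(weights, bars):
    """Fitness of a toy one-neuron 'network': each bar is (features,
    next_return); the sign of the linear output decides long/short,
    and the score is the final trade balance."""
    balance = 0.0
    for features, next_return in bars:
        signal = sum(w * f for w, f in zip(weights, features))
        balance += next_return if signal >= 0 else -next_return
    return balance

def evolve_weights(bars, n_weights, steps=500, sigma=0.3, seed=1):
    """(1+1) evolution strategy: mutate the weights and keep the mutant
    only when the trading metric improves.  No label vector is used."""
    rng = random.Random(seed)
    best = [0.0] * n_weights          # start flat: signal defaults to long
    best_score = trade_score(best, bars)
    for _ in range(steps):
        cand = [w + rng.gauss(0, sigma) for w in best]
        score = trade_score(cand, bars)
        if score > best_score:        # keep mutations that trade better
            best, best_score = cand, score
    return best, best_score
```

A real version would maximise the recovery factor, or reject candidates whose recovery factor falls below 4 as suggested above, instead of raw balance; the selection loop stays the same.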

It's too simplistic ("pop") in this sense: everything there runs on "rails", everything follows a template, which is good and bad at the same time. If someone is able to do it, I'd study the code with great interest, and if it were done in R, it would be a fairy tale...

I take certain indicators and other inputs, train the model, make predictions on the same training data, build a balance chart and evaluate it. Then I carefully choose the model parameters, indicators and their combinations in order to get the balance chart with the best evaluation. As a result the prediction accuracy is no more than 60%, but the model rides the trends and does not trade on every bar.

I can tell you how to significantly improve the network's trading (if you are interested) by giving it some critical thinking; I almost always get better results with it than without it.

 

mytarmailS:

Implementing such a target in R is not realistic; it is too simplistic ("pop") in this sense, everything there runs on "rails", everything follows a template, which is good and bad at the same time. So you have to write the network yourself; I'm not able to, but if someone is, I would study the code with great interest, and if it were done in R, it would be a fairy tale...


You've got the wrong idea. Put it simply: "I don't know the language well enough and I don't know how to do what I want to do." You shouldn't evaluate something you don't know deeply. The R language gives you the power to realize any idea you want, as long as you know it well enough. If something isn't in R, use Python.

Of course, strictly speaking, training any model with a teacher is not really "learning". Deep learning comes closer, but the most promising direction is pure learning without a teacher and its practical embodiment, reinforcement learning. In R there is so far only one package that implements this method: RNeat. But this level can be reached only with an excellent understanding of all the previous ones. This area is well developed in Python, which integrates very well with R.
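For a flavour of what reinforcement learning involves (independent of RNeat, whose API is not reproduced here), a minimal tabular Q-learning loop looks like this; `step` is a hypothetical environment callback returning (reward, next_state, done):

```python
import random

def q_learning(n_states, n_actions, step, episodes=200, alpha=0.5,
               gamma=0.9, eps=0.1, seed=0):
    """Minimal tabular Q-learning: learn action values from reward
    feedback alone, with no labeled target vector anywhere."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            r, s2, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

The agent discovers the rewarding action purely from trial and error, which is the sense in which the post calls this "pure learning without a teacher".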

Good luck

 
Vladimir Perervenko:


And still, please clarify this point about deep learning. At the first stage we get the weights of the neurons. We write these weights into a hidden layer of the classifier and start training the classifier on the output variable. If we fix the weights at hard values, what is then optimized? What does the fine-tuning look like? Can you explain?
 
Vladimir Perervenko:

You don't formulate it correctly. Keep it simple: "I don't know the language well enough to do what I want to do." You shouldn't evaluate something you don't know deeply. The R language gives you the possibility to realize any idea you have, as long as you have enough knowledge. If something isn't in R, use Python.

Of course, strictly speaking, training any model with a teacher is not really "learning". Deep learning comes closer, but the most promising direction is pure learning without a teacher and its practical embodiment, reinforcement learning. In R there is so far only one package that implements this method: RNeat. But this level can be reached only with an excellent understanding of all the previous ones. This area is well developed in Python, which integrates very well with R.

Good luck

Nevertheless, if I understood correctly, you don't have an answer to the question of how to implement this target in R.
 
Mihail Marchukajtes:
I see. In other words, we first train a network without a teacher. Then we copy the obtained weights into the classifier's weights, and then we train the classifier with a teacher. Suppose we have received the pre-trained weights; during fine-tuning, do the weights continue to be optimized? In other words, by pre-training without a teacher we set initial weights for the classifier that bring it closer to the global minimum. Is that how it works?

You grasp quickly, but not quite correctly. In R there are two packages implementing deep neural networks: deepnet and darch v.0.12. The first is quite simplified, without many settings and features, but it lets you try the approach and assess it. The second has many possibilities for designing and configuring a neural network; it can be difficult for an untrained user, but it offers the right deep learning capabilities:

1. Pre-train an auto-associative network (SAE/SRBM) on the maximum possible amount of unlabeled input data.

2. Transfer the weights into the hidden layers of the neural network and train only the top layer on labeled data. The package allows you to specify which specific layers you want to train.

3. Fine-tune the whole network with a small number of epochs (2-3) and a low learning rate on a small amount of labeled data. This is where you need to apply examples around peaks.

A very important feature of this package's implementation is the possibility of pre-training the neural network.

Of course, it is also possible to train the network without pre-training.

The network trains very fast, but it requires experience and knowledge.
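The three stages above can be sketched end to end. This is a stdlib Python illustration of the procedure only (stages 1 and 2; stage 3 would rerun stage 2 with the encoder weights unfrozen and a low learning rate), with a tiny tied-weight linear autoencoder standing in for the SAE/SRBM, not the darch API:

```python
import math
import random

def encode(W, x):
    """Hidden activations of a one-layer linear encoder."""
    return [sum(wj[i] * x[i] for i in range(len(x))) for wj in W]

def pretrain_autoencoder(data, n_hidden, epochs=100, lr=0.01, seed=0):
    """Stage 1: fit encoder weights W on unlabeled data by minimising
    reconstruction error, with a tied-weight decoder x_hat = W^T h."""
    rng = random.Random(seed)
    n_in = len(data[0])
    W = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_hidden)]
    for _ in range(epochs):
        for x in data:
            h = encode(W, x)
            x_hat = [sum(W[j][i] * h[j] for j in range(n_hidden))
                     for i in range(n_in)]
            err = [x_hat[i] - x[i] for i in range(n_in)]
            for j in range(n_hidden):
                ew = sum(err[i] * W[j][i] for i in range(n_in))
                for i in range(n_in):
                    # gradient of 0.5*|err|^2 w.r.t. tied weight W[j][i]
                    W[j][i] -= lr * (err[i] * h[j] + ew * x[i])
    return W

def train_top_layer(W, labeled, epochs=200, lr=0.1):
    """Stage 2: freeze the pre-trained encoder and fit only a logistic
    output unit on the (small) labeled set."""
    v = [0.0] * (len(W) + 1)                 # top-layer weights + bias
    for _ in range(epochs):
        for x, y in labeled:
            h = encode(W, x)
            z = v[-1] + sum(vi * hi for vi, hi in zip(v, h))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                        # logistic-loss gradient
            for j in range(len(h)):
                v[j] -= lr * g * h[j]
            v[-1] -= lr * g
    return v
```

The pre-trained weights give the classifier a useful starting representation, which is exactly the role the quoted question asks about.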

Good luck

 

Mihail Marchukajtes:
And so, please clarify this point about deep learning. At the first stage we get the neuron weights. We write these weights into a hidden layer of the classifier and start training the classifier on the output variable. If we fix the weights at hard values, what is then optimized? What does the fine-tuning look like? Can you explain?

======================================

More and more often I ask myself: "Why do I write articles in which I try, to the best of my ability, to chew through the basic concepts of the topic?"

Have you read my articles on deep learning? There, it seems to me, I explained everything in great detail. I don't have time to repeat what I have written, but if you have a question that isn't covered in them, I am ready to answer.

Good luck
