Machine learning in trading: theory, models, practice and algo-trading - page 338
I don't understand your idea (
It's not my idea, it's the standard principle of training a neural network with a teacher (supervised learning).
I agree, you can train more complex networks that way. But in this example the training is driven by the results of trading in the tester, without any instructions of your own about where to trade. That is, it is not training but optimization for maximum profit; not really a neural network, but rather an Expert Advisor with weighted indicator values.
Coming back to training for this particular example: there is one output in the code - if it is > 0.5 we buy, if it is < 0.5 we sell. Where do I attach the teacher's answer (0/1), and what do I do with it?
Plug it in right there at the moment of training; after training, the output will be the prediction.
Oh, I see - it's just a single neuron that outputs a sigmoid result.
Then there's no way.
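For what it's worth, "plug the teacher's answer in at the moment of training" can be sketched like this - a toy single-neuron example in Python (the data, the learning rate, and the teacher rule are all hypothetical, not from this thread). The 0/1 answer enters only the error term during training; after training, the sigmoid output alone is the prediction.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical training set: each sample is (indicator values, teacher answer),
# where the teacher's answer is 1 for "buy" and 0 for "sell". Here the toy
# teacher says "buy" when the sum of three indicators is positive.
random.seed(42)
samples = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(200)]
data = [(x, 1 if sum(x) > 0 else 0) for x in samples]

weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.5                                   # learning rate (illustrative)

# Online gradient descent on the log-loss. The 0/1 teacher answer is used
# only here, inside the error term (out - target), during training.
for _epoch in range(50):
    for x, target in data:
        out = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = out - target
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

# After training, the output alone is the prediction: > 0.5 buy, < 0.5 sell.
def predict(x):
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

correct = sum((predict(x) > 0.5) == (t == 1) for x, t in data)
print(correct / len(data))
```

Note that no tester runs are involved: the target labels replace the profit-based fitness entirely.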
Sadly (
And other neural networks would then be computed on that same single core, which would take many times longer.
In that example, for 10 inputs we get about 1.6*10^13 passes. Only genetics will save time; I can't even imagine how long a full run on one core would take. And if you scale the inputs up to 100, it would probably be impossible to compute at all.
How long did it take you to train your network, and with how many inputs/neurons?
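As an aside, a figure like 1.6*10^13 is consistent with a full enumeration of 10 weights over 21 discrete values each - the 21-values-per-weight grid below (-1.0 to 1.0 in steps of 0.1) is an assumption about the setup, not stated in the thread, but the arithmetic checks out:

```python
# Full enumeration of every weight combination: values^weights passes.
# The grid (-1.0 to 1.0 in steps of 0.1 -> 21 values) is an assumption.
values_per_weight = 21
n_weights = 10
full_search = values_per_weight ** n_weights
print(full_search)            # 16679880978201, i.e. ~1.7 * 10^13

# A genetic optimizer evaluates only a tiny fraction of these passes,
# which is why it is the only practical option on one core.
```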
It mostly depends on the amount of history (training examples) - anywhere from a couple of minutes to infinity ) Computing a complex net on one core is not an option, I agree.
But a head-on brute-force net of this kind can only be computed on a GPU.
What about Chaos Hunter? Give me a specific link
here is the link
Interesting - I have never seen a free library with a similar implementation of genetic programming... everywhere it's networks, networks, nothing else...
I don't know what genetic programming is, but genetic optimization algorithms are available everywhere - from MT5 to SciLab and SciPy.
Sure, genetic algorithms are everywhere... but it's not the same thing, although the principle is similar...
In a genetic algorithm the program itself stays unchanged, while its parameters undergo evolution: crossover, mutation, selection, and so on.
In genetic programming there is also evolution, but of the algorithms themselves: the programs are grown from the available data using arbitrary mathematical operators (+ - / * cos sin etc.) to fit a given target function...
If you feed it, say, a series of candlestick closes over a period n, stochastic values, and a regression slope, the method will randomly multiply, divide, and add these inputs in every possible combination, gradually growing a mathematical formula that fits the target function...
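A toy sketch of that idea (purely illustrative - the operator set, fitness function, and mutation scheme are my assumptions, and crossover is left out for brevity): formulas are expression trees that are grown from the inputs, scored against a target function, and mutated.

```python
import operator
import random

# Minimal genetic-programming sketch (not any specific library): formulas are
# expression trees over two inputs and the operators + - *, evolved by
# truncation selection and subtree mutation.

OPS = [(operator.add, "+"), (operator.sub, "-"), (operator.mul, "*")]

def rand_tree(depth):
    """Grow a random expression tree; leaves are input indices 0 or 1."""
    if depth == 0 or random.random() < 0.3:
        return random.randrange(2)
    return (random.choice(OPS), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, inputs):
    if isinstance(tree, int):
        return inputs[tree]
    (fn, _sym), left, right = tree
    return fn(evaluate(left, inputs), evaluate(right, inputs))

def fitness(tree, cases):
    """Mean squared error against the target samples (lower is better)."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in cases) / len(cases)

def mutate(tree):
    if random.random() < 0.2:                 # replace a whole subtree
        return rand_tree(2)
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return (op, mutate(left), mutate(right))

def evolve(cases, pop_size=200, generations=40):
    pop = [rand_tree(3) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, cases))
        survivors = pop[: pop_size // 4]      # keep the best quarter
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda t: fitness(t, cases))

# Target formula to rediscover: y = x0 * x1 + x0
random.seed(1)
cases = [((a, b), a * b + a) for a in range(-3, 4) for b in range(-3, 4)]
best = evolve(cases)
print(fitness(best, cases))
```

The same machinery works if the leaves are candle closes and indicator values instead of abstract inputs; only the fitness function changes.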
So, we're done with the nets - now we start saving up for this:
http://www.nvidia.ru/object/ai-accelerated-analytics-ru.html
Why is a sigmoid used to compute a neuron? Wouldn't a linear mapping (from zero to the number of inputs) be better? After all, "the function has a smooth form only on the interval [-5,5]".
That's fine when there are only 5 inputs, but what if there are a hundred? Then practically all the values will fall outside this interval. The article https://www.mql5.com/ru/articles/497 applies an additional scaling coefficient to account for 10 inputs, so the coefficient has to be recalculated for every network.
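The saturation problem is easy to demonstrate (the interval [-5, 5] and the rescaling coefficient are taken from the question above; the concrete numbers are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# With ~5 inputs in [-1, 1] the weighted sum stays inside [-5, 5],
# where the sigmoid still has a usable slope:
print(sigmoid(-5), sigmoid(5))        # ~0.0067 and ~0.9933

# With 100 inputs the raw sum can reach +/-100, and the sigmoid saturates
# completely (the output is 1.0 to double precision, gradient ~ 0):
print(sigmoid(100))

# The workaround from the question: rescale the sum before the sigmoid so
# the working range maps back into [-5, 5]. The coefficient depends on the
# number of inputs, which is why it must be recalculated for every network.
n_inputs = 100
raw_sum = 80.0                        # a large weighted sum (illustrative)
scaled = raw_sum * (5.0 / n_inputs)   # maps [-n_inputs, n_inputs] -> [-5, 5]
print(sigmoid(scaled))
```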