Machine learning in trading: theory, models, practice and algo-trading - page 338

 
elibrarius:
I don't understand your idea (

it's not my idea, it's the principle of supervised training of a neural network (training "with a teacher")
 
Maxim Dmitrievsky:

It's not my idea, it's the principle of supervised training of a neural network (training "with a teacher")

I agree, you can train more complex networks this way. But in this example the training is driven by trading results in the tester, without your own labels for where to trade. So it is not training but optimization for maximum profit; it is not exactly a neural network, but rather an Expert Advisor with weighted indicator values.

If we return to training in this particular example: there is one output in the code; if it is > 0.5 we buy, if it is < 0.5 we sell. Where do I attach the teacher's answer (0/1)? And what do I do with it?
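For concreteness, here is a minimal sketch of how a teacher's 0/1 answer could be attached to a single sigmoid output at training time. This is my own toy illustration (plain logistic-regression-style gradient descent on made-up data), not the code from the example being discussed:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(samples, labels, epochs=2000, lr=0.5):
    """Fit one sigmoid neuron by gradient descent on cross-entropy loss.
    The teacher's 0/1 answers (labels) are the training targets."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = y - t  # gradient of cross-entropy w.r.t. the pre-activation
            for i in range(n):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

# Toy "indicator" vectors; label 1 = buy, 0 = sell (the teacher's answers).
X = [[1.0, 0.2], [0.9, 0.1], [-0.8, -0.3], [-1.0, -0.2]]
T = [1, 1, 0, 0]
w, b = train_neuron(X, T)

# After training, the same output is read as a prediction: > 0.5 means buy.
pred = sigmoid(sum(wi * xi for wi, xi in zip(w, X[0])) + b)
```

The point is that the label enters only through the loss during training; at inference the > 0.5 / < 0.5 rule stays exactly as in the original code.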

 
elibrarius:

I agree, you can train more complex networks this way. But in this example the training is driven by trading results in the tester, without your own labels for where to trade. So it is not training but optimization for maximum profit; it is not exactly a neural network, but rather an Expert Advisor with weighted indicator values.

If we return to training in this particular example: there is one output in the code; if it is > 0.5 we buy, if it is < 0.5 we sell. Where do I attach the teacher's answer (0/1)? And what do I do with it?


you plug it in right there, at training time; after training, the output will be a prediction

Oh, I see, it's just a single neuron ) which outputs a sigmoid result

then there's no way

 
Maxim Dmitrievsky:


you plug it in right there, at training time; after training, the output will be a prediction

Oh, I see, it's just a single neuron ) which outputs a sigmoid result

no way

That's too bad (

And other neural networks would then be computed on that same single core, which will take many times longer.
In that example, with 10 inputs we get 1.6×10^13 runs. Only genetics will save time. How long a full enumeration would take on a single core I can't even imagine. And if you increase the inputs to 100, it would probably be impossible to compute at all.

How long did it take you to train your network, and with how many inputs/neurons?
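For scale, a brute-force sweep grows exponentially with the number of inputs. A sketch with illustrative numbers (the exact count quoted above depends on the optimizer's step settings, which are not given here):

```python
# Exhaustive search over a discretized weight grid explodes exponentially,
# which is why genetic optimization is used instead of full enumeration.
def full_search_runs(n_inputs, levels_per_weight):
    """Number of combinations when each weight is tried at a fixed set of levels."""
    return levels_per_weight ** n_inputs

# 10 weights at ~20 levels each: 20**10 = 10,240,000,000,000 combinations,
# on the order of the 10^13 figure quoted above.
ten_inputs = full_search_runs(10, 20)

# With 100 inputs the count is astronomically larger (20**100).
hundred_inputs = full_search_runs(100, 20)
```

Genetic optimization samples this space instead of enumerating it, which is the only way the search stays tractable.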

 
elibrarius:

That's too bad (

And other neural networks would then be computed on that same single core, which will take many times longer.
In that example, with 10 inputs we get 1.6×10^13 runs. Only genetics will save time. How long a full enumeration would take on a single core I can't even imagine. And if you increase the inputs to 100, it would probably be impossible to compute at all.

How long did it take you to train your network, and with how many inputs/neurons?


it depends mostly on the amount of history (training examples): anywhere from a couple of minutes to infinity ) I agree, training a complex net on a single core is not an option

a straightforward net of this kind should really only be trained on a GPU

 
elibrarius:
What about Chaos Hunter? Give me a specific link


here is the link

Interestingly, I have never seen a free library with a comparable implementation of genetic programming... everything is just networks, networks...

ChaosHunter formula optimization software
  • www.chaoshunter.com
 
nowi:


here is the link

Interestingly, I have never seen a free library with a comparable implementation of genetic programming... everything is just networks, networks...

I don't know what genetic programming is, but genetic optimization algorithms are everywhere, from MT5 to SciLab to SciPy. I think R has them too, but it is better to ask SanSanych; he is an expert in R.
 
Yuriy Asaulenko:
I don't know what genetic programming is, but genetic optimization algorithms are everywhere, from MT5 to SciLab to SciPy.


It is clear that genetic algorithms are everywhere... but it is not the same thing, although the principle is similar...

In genetic algorithms, the program itself remains unchanged while its parameters evolve: crossover, mutation, selection, and so on.

In genetic programming there is also evolution, but it is the algorithms themselves, the programs, that are grown: assembled from the available data and mathematical symbols (+ - / * cos sin etc.) to fit a given target function...

If you feed it, say, a series of close prices over a period n, plus stochastic readings and a regression slope, the method will randomly multiply, divide and add these inputs in every possible combination, gradually forming a mathematical formula that matches the target function...
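The idea described above can be sketched in a few dozen lines. This is a deliberately tiny toy (random expression trees over one variable, mutation-only evolution, squared-error fitness), not ChaosHunter's actual algorithm:

```python
import random
import operator

random.seed(2)  # make the toy run repeatable

OPS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]

def random_tree(depth):
    """Grow a random expression tree over one variable x and constants."""
    if depth == 0 or random.random() < 0.3:
        return ('x',) if random.random() < 0.7 else ('const', random.uniform(-2, 2))
    return ('op', random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Recursively evaluate an expression tree at point x."""
    if tree[0] == 'x':
        return x
    if tree[0] == 'const':
        return tree[1]
    _, (fn, _), left, right = tree
    return fn(evaluate(left, x), evaluate(right, x))

def fitness(tree, target, xs):
    """Squared error against the target function (lower is better)."""
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in xs)

def mutate(tree, depth=3):
    """Occasionally replace a subtree with a fresh random one."""
    if random.random() < 0.2:
        return random_tree(depth)
    if tree[0] == 'op':
        _, op, left, right = tree
        return ('op', op, mutate(left, depth - 1), mutate(right, depth - 1))
    return tree

def evolve(target, xs, pop_size=60, generations=60):
    """Selection + mutation loop: grow a formula that fits the target."""
    pop = [random_tree(3) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, target, xs))
        survivors = pop[:pop_size // 3]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda t: fitness(t, target, xs))

# Try to rediscover a simple hidden formula, f(x) = x*x + x.
xs = [i / 4 for i in range(-8, 9)]
best = evolve(lambda x: x * x + x, xs)
err = fitness(best, lambda x: x * x + x, xs)
```

A real genetic-programming system adds crossover between trees, more operators (division, cos, sin), and bloat control, but the evolve-the-formula-itself principle is exactly this.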

 

So we're done with the nets, now let's spend some money:

http://www.nvidia.ru/object/ai-accelerated-analytics-ru.html

The NVIDIA DGX-1 is available in select countries for $129,000
Artificial intelligence and NVIDIA solutions accelerate data analysis for digital business
  • www.nvidia.ru
 

Why is a sigmoid used to compute a neuron's output? Wouldn't a linear mapping (from zero to the number of inputs) be better? After all, "the function has a smooth shape on the interval [-5, 5]".

That is fine with only 5 inputs, but what if there are a hundred? Then practically all of the weighted sums will fall outside this segment. The article https://www.mql5.com/ru/articles/497 applies an additional coefficient to account for 10 inputs, so this coefficient has to be recalculated for each network.
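A quick illustration of the saturation problem, and of one common normalization. Dividing the sum by the number of inputs is a generic sketch of the kind of scaling coefficient mentioned above, not the cited article's exact code:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Outside roughly [-5, 5] the sigmoid is saturated: the output is pinned
# near 0 or 1 and the gradient is practically zero.
near_edge = sigmoid(5)     # ~0.9933, already close to 1
saturated = sigmoid(100)   # ~1.0, a 100-input sum easily lands here

def scaled_activation(inputs, weights):
    """Keep the sigmoid argument in a sane range by normalizing the sum
    by the number of inputs (a simple stand-in for a tuned coefficient)."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(s / len(inputs))

# 100 inputs of 1.0 with unit weights: raw sum is 100 (saturated),
# but the scaled argument is 1.0, well inside the working interval.
scaled = scaled_activation([1.0] * 100, [1.0] * 100)
```

This is why the coefficient depends on the input count: the normalizer must shrink the typical weighted sum back into the sigmoid's working interval.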

Neural Networks: From Theory to Practice
  • 2012.10.06
  • Dmitriy Parfenovich
  • www.mql5.com