Machine learning in trading: theory, models, practice and algo-trading - page 630

 
Yuriy Asaulenko:
I won't insist, but it seems to me these are illusions, just from general considerations.
Why, Vladimir Perervenko has some information on this in his articles: they train very fast on hundreds of inputs.
 
Maxim Dmitrievsky:
Why, Vladimir Perervenko has some information on this in his articles: they train very fast on hundreds of inputs.

I haven't read the articles and won't argue. I've only seen the pictures).

An MLP, say, can be trained perfectly well in 10-15 minutes, and it will work perfectly well. But only if the data is well classified and the sets are separable.

If, as you say, there are no separable sets in the market (or in your training samples), then you can train anything you like forever and there will be no results.
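
(You can see this effect for yourself in a couple of minutes. A minimal Python/scikit-learn sketch with synthetic data standing in for the market: the same MLP gets high test accuracy on separable classes and stays at a coin flip once the labels are shuffled, however long you train it. All names and parameters below are illustrative, not from anyone's articles.)

```python
# Minimal sketch: the same MLP learns quickly when the classes are
# separable and never generalizes when they are not (labels shuffled).
# All data and parameters are illustrative; assumes scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=100,
                           n_informative=20, class_sep=2.0, random_state=0)
y_shuffled = np.random.default_rng(0).permutation(y)  # destroys any input-label link

for name, labels in [("separable", y), ("shuffled", y_shuffled)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=300, random_state=0)
    mlp.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {mlp.score(X_te, y_te):.3f}")
```

(The shuffled run may memorize the training set, but its test accuracy stays near 0.5, which is exactly the "no results no matter how long you train" situation.)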

 
Maxim Dmitrievsky:
Why, Vladimir Perervenko has some information on this in his articles: they train very fast on hundreds of inputs.

It all depends on the architecture and the amount of data.
Networks for pattern recognition take a week to train on a GPU, and those have dozens of layers with three-dimensional tensors.

 
Aleksey Terentev:

It all depends on the architecture and the amount of data.
Networks for pattern recognition take a week to train on a GPU, and those have dozens of layers with three-dimensional tensors.

There he described simpler ones: a Boltzmann machine + MLP, for example.

https://www.mql5.com/ru/articles/1103#2_2_2

Third Generation Neural Networks: "Deep Neural Networks"
  • 2014.11.27
  • Vladimir Perervenko
  • www.mql5.com
Second-generation neural networks. Deep learning. Practical experiments. Software implementation (an indicator and an Expert Advisor). Introduction: the article covers the basic concepts of "Deep Learning" and "Deep Networks" without complex mathematics, explained, as they say, in layman's terms. Experiments will be carried out with...
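
(Very roughly, the "Boltzmann machine + MLP" idea in Python; the article itself uses R, so this is only my loose analogue with illustrative data: an RBM learns features without labels, and an MLP classifier is trained on top of them. It is not the article's exact pretraining scheme.)

```python
# Loose Python analogue of a "Boltzmann machine + MLP" setup: an RBM
# learns features unsupervised, then an MLP is trained on those features.
# Data, layer sizes, and iteration counts are illustrative.
from sklearn.datasets import make_classification
from sklearn.neural_network import BernoulliRBM, MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=2000, n_features=100, random_state=0)

model = Pipeline([
    ("scale", MinMaxScaler()),          # BernoulliRBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=50, n_iter=20, random_state=0)),
    ("mlp", MLPClassifier(hidden_layer_sizes=(25,), max_iter=300, random_state=0)),
])
model.fit(X, y)
print("train accuracy:", round(model.score(X, y), 3))
```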
 
Yuriy Asaulenko:

I haven't read the articles and won't argue. I've only seen the pictures).

An MLP, say, can be trained perfectly well in 10-15 minutes, and it will work perfectly well. But only if the data is well classified and the sets are separable.

If, as you say, there are no separable sets in the market (or in your training samples), then you can train anything you like forever and there will be no results.

Let's simply run an experiment for the sake of "scientific knowledge".
Let's choose the data, the dimensions, the MLP architecture, and the output data.
And everyone will run their own tests with their own tools.

There will be less flaming.
By the way, we could make it a tradition and test every new architecture with the whole community. =)
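
(Purely as a hypothetical illustration of what such a shared spec could look like, so everyone trains against the same definition with their own tools; every value below is a placeholder to be agreed on, nothing here comes from the thread.)

```python
# Hypothetical shared-experiment spec: participants train with their own
# tools against one agreed definition and report the same numbers.
# Every value is a placeholder to be agreed on.
EXPERIMENT = {
    "data":         "EURUSD M15, last 12 months, one agreed export",
    "inputs":       100,                                   # feature-vector size
    "target":       "direction of the next bar (binary)",
    "architecture": {"type": "MLP", "hidden": (50, 20), "activation": "tanh"},
    "split":        "first 10 months train / last 2 months test, no shuffling",
    "report":       ["test accuracy", "deals per month", "training time"],
}
```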

 
Aleksey Terentev:

Let's simply run an experiment for the sake of "scientific knowledge".
Let's choose the data, the dimensions, the MLP architecture, and the output data.
And everyone will run their own tests with their own tools.

There will be less flaming.
By the way, we could make it a tradition and test every new architecture with the whole community. =)

(I'm afraid that's beyond me.) I see no point in solving abstract problems. And the exchange of opinions is not flaming at all; I got a lot out of this "flaming" and went off in a different direction. Without the flaming, I might have kept going the old way.
 

I am sharing the first results of my NS. The architecture is as described in the article; I did not change anything.

The plateau is quite flat: the NS had already learned well by 1000 passes, and the results did not improve much after that.

Trained on the last month on the 15-minute timeframe. I spent ~$0.65 on training. My number of deals per month is ~300.

My results over the last 2 months are not bad, but not that great either.

I will try to add one more hidden layer and look for errors again :) and then I will try to train for a longer period.

 

Maxim Dmitrievsky:
Why, Vladimir Perervenko has some information on this in his articles: they train very fast on hundreds of inputs.


All the articles include the data sets and scripts, which you can reproduce to get real numbers for the training time on your specific hardware. The training time of a DNN with two hidden layers is under 1 minute.

Good luck
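
(For anyone who wants to sanity-check that timing on their own machine without the R setup from the articles, here is a rough Python stand-in; the sample count, feature count, and layer widths are my own illustrative choices, not the articles' data sets.)

```python
# Rough timing check for a DNN with two hidden layers (Python/scikit-learn,
# not the R scripts from the articles; sizes are illustrative stand-ins).
import time
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=10000, n_features=100, random_state=0)

start = time.perf_counter()
dnn = MLPClassifier(hidden_layer_sizes=(100, 50), max_iter=200, random_state=0)
dnn.fit(X, y)
print(f"training time: {time.perf_counter() - start:.1f} s")
```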

 
Aleksey Terentev:

Let's simply run an experiment for the sake of "scientific knowledge".
Let's choose the data, the dimensions, the MLP architecture, and the output data.
And everyone will run their own tests with their own tools.

There will be less flaming.
By the way, we could make it a tradition and test every new architecture with the whole community. =)

Give me an example. Start with
 
Maxim Dmitrievsky:

I am sharing the first results of my NS. The architecture is as described in the article; I did not change anything.

The plateau is quite flat: the NS had already learned well by 1000 passes, and the results did not improve much after that.

Trained on the last month on the 15-minute timeframe. I spent ~$0.65 on training. My number of deals per month is ~300.

My results over the last 2 months are not bad, but not that great either.

I will try to add one more hidden layer and look for errors again :) and then I will try to train for a longer period.

Do you have three neurons feeding the second layer, passed through a sigmoid? And how do you select the weights of the second layer: from what range are they taken, from -1 to 1 with a step of 0.1, for example?

In my network the number of deals dropped after the second layer was added, and the result did not improve much. That is in contrast to fitting a perceptron with 9 inputs and one output neuron, then taking another independent perceptron and fitting it with the saved settings of the first one, and so on.
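
(To make the question concrete, this is the kind of brute-force selection I mean; a hypothetical sketch of my own. The first-layer sigmoid outputs H and the targets y below are random stand-ins for whatever the tester would actually feed in and optimize.)

```python
# Hypothetical sketch of "selecting second-layer weights from -1 to 1 with
# step 0.1" as a brute-force search over all 21**3 = 9261 combinations.
# H and y are random stand-ins for real first-layer outputs and targets.
import itertools
import numpy as np

rng = np.random.default_rng(0)
H = rng.uniform(0.0, 1.0, size=(500, 3))                 # 3 sigmoid outputs
y = (H @ np.array([0.7, -0.4, 0.2]) > 0.2).astype(int)   # stand-in targets

steps = np.round(np.arange(-1.0, 1.01, 0.1), 1)          # -1.0, -0.9, ..., 1.0

def accuracy(w):
    out = 1.0 / (1.0 + np.exp(-(H @ w)))                 # sigmoid output neuron
    return np.mean((out > 0.5) == y)

best = max(itertools.product(steps, repeat=3),
           key=lambda w: accuracy(np.array(w)))
print("best weights:", best, "accuracy:", round(accuracy(np.array(best)), 3))
```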
