Machine learning in trading: theory, models, practice and algo-trading - page 629

 
Yuriy Asaulenko:

Maxim, really, don't train the network in the MT optimizer. A network trainer and an optimizer are completely different algorithms, with completely different optimality criteria.

If you are still using the NN structure you drew earlier, it is too simple, too weak for the market. I have already written that I only succeeded once I got to a 15-20-15-10-5-1 structure, and that was for only one type of deal. I did everything by the methods described in Haykin, i.e. nothing new, no tricks.

Simpler structures trained poorly.

Have you tried increasing not the number of layers but the number of neurons in a layer? For example, 15-200-1 or 15-200-20-1?

Yuriy Asaulenko:

In fact, you need not just a lot of training data but a huge amount. On a small sample an NN will not learn anything useful.

And how much data do you take? I take 86,000 rows for training.
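For concreteness, here is a minimal sketch of the two kinds of architecture being compared: the deep, narrow 15-20-15-10-5-1 structure from the quote above and the wide 15-200-20-1 variant suggested in the question. PyTorch, the `mlp` helper, and tanh activations are my assumptions; the posts specify only the layer sizes.

```python
import torch.nn as nn

def mlp(*sizes):
    """Plain MLP: tanh between layers, linear output (assumed activations)."""
    layers = []
    for a, b in zip(sizes, sizes[1:]):
        layers += [nn.Linear(a, b), nn.Tanh()]
    return nn.Sequential(*layers[:-1])  # drop the trailing Tanh on the output

deep_narrow  = mlp(15, 20, 15, 10, 5, 1)  # structure from the quote above
shallow_wide = mlp(15, 200, 20, 1)        # wide variant from the question
```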

 
elibrarius:

1) Have you tried increasing not the number of layers but the number of neurons in a layer? For example, 15-200-1 or 15-200-20-1?

2) And how much data do you take? I take 86,000 rows for training.

1. I haven't tried that. 20 neurons in the first layer is quite enough. I went by increasing both neurons and layers.

2. I had about 12,000 rows in training, with intermediate shuffling every N epochs. After a number of epochs the training data was replaced with other data that had not been involved in training before.
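A rough reconstruction of that regime (not the author's code; PyTorch, the batch size, N, and the `data_chunks` placeholder are my assumptions): each chunk of roughly 12,000 rows is reshuffled every epoch, and after N epochs it is swapped for data the network has not seen.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Linear(15, 20), nn.Tanh(), nn.Linear(20, 1))
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.MSELoss()
N = 10  # epochs per chunk; the post does not give the actual value

# dummy stand-in data: three chunks of ~12,000 rows each
data_chunks = [(torch.randn(12000, 15), torch.randn(12000, 1))
               for _ in range(3)]

for chunk_x, chunk_y in data_chunks:                  # fresh, unseen chunk
    loader = DataLoader(TensorDataset(chunk_x, chunk_y),
                        batch_size=64, shuffle=True)  # reshuffled every epoch
    for epoch in range(N):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
```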

 
Aleksey Terentev:

Of course, I apologize for the jab, but you should reread your post: it reads quite ambiguously.
And in general you are right, but only about the first layer of the neural network. If the feedback goes to the second and subsequent layers, or to parallel network layers in general, then your statement loses force.
In that case Maxim should think about deepening the network and feeding the feedback into the hidden layers.

And what about:

Same thing: MLPs haven't been relevant for a long time; deep learning has been on the rise for a long time. And a single network is quite capable of processing heterogeneous data.

I agree, but how do you combine depth with all those tricks people have come up with? :) In the optimizer it will train for a long time... but to very, very high quality, because trading happens right there during training.

I think that more than 30 weights is not an option for the optimizer.

+ Many people forget that there is the cloud, which is great for working on all these things, but you have to be very good at optimizing the code.
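For scale, a quick parameter count (my arithmetic, not from the post) shows why an optimizer limited to a few dozen weights cannot handle even the "small" network discussed earlier:

```python
# layer sizes of the 15-20-15-10-5-1 network mentioned above
layers = [15, 20, 15, 10, 5, 1]
weights = sum(a * b for a, b in zip(layers, layers[1:]))  # 805
biases = sum(layers[1:])                                  # 51
print(weights + biases)                                   # 856 parameters
```

That is well over an order of magnitude beyond the ~30 weights mentioned.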

 
Maxim Dmitrievsky:

I agree, but how do you combine depth with all those tricks people have come up with? :) In the optimizer it will train for a long time... but to very, very high quality, because trading happens right there during training.

I think that more than 30 weights is not an option for the optimizer.

+ Many people forget that there is the cloud, which is great for working on all these things, but you have to be very good at optimizing the code.

Try duplicating the input layer.
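One plausible reading of "duplicate the input layer" (the post does not elaborate, so this is an assumption): re-inject a copy of the raw inputs into a deeper layer, in the spirit of a skip connection. A PyTorch sketch:

```python
import torch
import torch.nn as nn

class InputSkipMLP(nn.Module):
    """The second layer sees the hidden activations plus a duplicated
    copy of the raw input (a skip-style connection)."""
    def __init__(self, n_in=15, n_hidden=20):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.fc2 = nn.Linear(n_hidden + n_in, n_hidden)  # hidden + input copy
        self.out = nn.Linear(n_hidden, 1)

    def forward(self, x):
        h = torch.tanh(self.fc1(x))
        h = torch.tanh(self.fc2(torch.cat([h, x], dim=1)))
        return self.out(h)
```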

 
Aleksey Terentev:


Same thing: MLPs haven't been relevant for a long time; deep learning has been on the rise for a long time. And a single network is quite capable of processing heterogeneous data.
If an MLP can solve the problem, what difference does it make whether it's relevant or not? All the more so with MLPs you don't have to strain: everything for them is available almost everywhere.
 
Aleksey Terentev:

Try duplicating the input layer.

Good idea, and shuffle the weights :)
 
Yuriy Asaulenko:
If an MLP can solve the problem, what difference does it make whether it's relevant or not? All the more so with MLPs you don't have to strain: everything for them is available almost everywhere.

I'm not pushing you toward anything. Deep learning begins precisely with MLPs.
It's just that when it comes to the representation of data inside the network, its movement and transformations, questions naturally arise about activations, recurrent layers, regularization, combining layers, and so on. And that is deep learning.

Besides, for deep learning there is also everything, and everywhere. =)
 
Yuriy Asaulenko:
If an MLP can solve the problem, what difference does it make whether it's relevant or not? All the more so with MLPs you don't have to strain: everything for them is available almost everywhere.
It's just that a deep network can learn much faster, all other things being equal... not 10 hours but 5 minutes, for example :)
 
Aleksey Terentev:

I'm not pushing you toward anything. Deep learning begins precisely with MLPs.
It's just that when it comes to the representation of data inside the network, its movement and transformations, questions naturally arise about activations, recurrent layers, regularization, combining layers, and so on. And that is deep learning.

I understand that, but I'm talking about something else. You don't need higher mathematics for the schoolbook problem about two pipes; arithmetic is enough. Whether arithmetic is relevant or not is another matter.

That is, first you need to define the problem, and then choose the methods for solving it.

As for large and complex DM and DL tasks, MLP is certainly a long-passed stage there.

 
Maxim Dmitrievsky:
It's just that a deep network can learn much faster, all other things being equal... not 10 hours but 5 minutes, for example :)
I can't say for sure, but it seems to me that these are illusions. Just from general considerations.