Machine learning in trading: theory, models, practice and algo-trading - page 605

 
elibrarius:
I want to automatically determine the number of neurons in the network. What are the formulas to calculate it?

I read somewhere that the hidden layer should be half the size of the input layer, and that two layers is the maximum; there is no point in more.

This is for an MLP.

 
If you're serious about investing in education, go to Gerchik or go straight to Perepelkin.


Perepelkin no longer teaches; he recruited enough suckers and opened a brokerage house.

 
Maxim Dmitrievsky:

I read somewhere that the hidden layer should be half the size of the input layer, and that two layers is the maximum; there is no point in more.

This is for an MLP.

Absolute nonsense. There may be fewer or more, both neurons and layers; it depends on the problem. Haykin (I think you have it) describes how and why.
 
Yuriy Asaulenko:
Absolute nonsense. There may be fewer or more, both neurons and layers; it depends on the problem. Haykin (I think you have it) describes how and why.

I haven't seen such information from him; maybe I didn't read it carefully enough.

 
Maxim Dmitrievsky:

I haven't seen such information from him; maybe I didn't read it carefully enough.

The second layer is usually larger than the input layer, because it starts extracting features, and there can be a lot of such features, even if you only have a yes/no classification.

Haykin definitely has this, and it's written much better than I could put it.)

 
Yuriy Asaulenko:
Absolute nonsense. There may be fewer or more, both neurons and layers; it depends on the problem. Haykin (I think you have it) describes how and why.

I've seen the variant with the number of inputs / 2, and others.
How do you automatically calculate the optimal one?

 
elibrarius:

I've seen the variant with the number of inputs / 2, and others.
How do you automatically calculate the optimal one?

I could be wrong, but in my opinion there is no way. The size is chosen from general considerations, and then, based on the training results, neurons are added to or removed from the layers, or even whole layers are added/removed.
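
That trial-and-error loop is easy to automate. A minimal sketch, assuming Python with scikit-learn and a synthetic dataset standing in for real trading features (the candidate sizes, dataset and scoring below are illustrative assumptions, not anything from this thread):

```python
# Hypothetical sketch: pick a hidden-layer size by validation score.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a feature matrix and yes/no labels.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

best_size, best_score = None, -1.0
for hidden in (5, 10, 20, 40, 80):              # candidate neuron counts
    mlp = MLPClassifier(hidden_layer_sizes=(hidden,),
                        max_iter=500, random_state=0)
    mlp.fit(X_train, y_train)
    score = mlp.score(X_val, y_val)             # validation accuracy
    if score > best_score:
        best_size, best_score = hidden, score

print(f"best hidden size: {best_size}, validation accuracy: {best_score:.3f}")
```

Cross-validation instead of a single split, and repeating the loop with layers added or removed, would be the fuller version of the same idea.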
 
Yuriy Asaulenko:

The second layer is usually larger than the input layer, because it starts extracting features, and there can be a lot of such features, even if you only have a yes/no classification.

Haykin definitely has this, and it's written much better than I could put it.)


The rule of thumb is that the size of this [hidden] layer is somewhere between the size of the input layer ... and the size of the output layer ....

To calculate the number of hidden nodes, we use the general rule: (Number of inputs + outputs) x 2/3

This is the most common recommendation... but there are also more formal methods for determining it; you have to search on Google, and it gets complicated there.

The NN does not extract any features; the features are fed to the input. It either compresses them or rote-memorizes all the combinations (as the number of neurons increases).
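
For what it's worth, the rules of thumb quoted above are trivial to compute; a tiny Python illustration (the input/output counts are made-up example numbers):

```python
# Rules of thumb from the thread, computed for made-up example sizes.
n_inputs, n_outputs = 20, 2          # illustrative values only

half_inputs = n_inputs // 2                          # "half the input layer"
two_thirds  = round((n_inputs + n_outputs) * 2 / 3)  # (inputs + outputs) * 2/3
bounds      = (n_outputs, n_inputs)                  # "somewhere between output and input size"

print(half_inputs, two_thirds, bounds)               # 10 15 (2, 20)
```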

 

A quote about choosing the number of layers:

A network with three layers (numLayers=3: one input, one hidden, and one output) is usually sufficient in the vast majority of cases. According to Cybenko's theorem, a network with one hidden layer is capable of approximating any continuous multidimensional function to any desired degree of accuracy. A network with two hidden layers is capable of approximating any discontinuous multivariate function.
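
For reference, the approximating form in Cybenko's theorem is a finite sum of sigmoidal units, i.e. exactly a one-hidden-layer network:

$$
F(x) = \sum_{i=1}^{N} \alpha_i \,\sigma\!\left(w_i^{\top} x + b_i\right),
\qquad
\sup_{x \in [0,1]^{n}} \left| F(x) - f(x) \right| < \varepsilon
$$

for any continuous f on the unit cube and any ε > 0.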

I wonder whether bar analysis falls under a continuous or a discontinuous function.

 
Maxim Dmitrievsky:
The NN does not extract any features; the features are fed to the input. It either compresses them or rote-memorizes all the combinations (as the number of neurons increases).
That's why overestimating the number of neurons is also harmful: the network will not generalize, it will memorize the data together with the noise.
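
A quick way to see that effect is to compare a modest and a clearly oversized hidden layer on noisy synthetic labels; a sketch assuming scikit-learn (all numbers are illustrative), where the oversized net typically shows a much larger gap between training and test accuracy:

```python
# Hypothetical demo: an oversized hidden layer memorizes noise instead of generalizing.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))
# Labels = sign of one feature plus noise, so perfect accuracy is impossible.
y = (X[:, 0] + 0.3 * rng.normal(size=600) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for hidden in (4, 400):                      # modest vs. clearly oversized layer
    mlp = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
    print(hidden,
          "train:", round(mlp.score(X_tr, y_tr), 3),
          "test:",  round(mlp.score(X_te, y_te), 3))
```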