Market etiquette or good manners in a minefield - page 51

 

Zeroing in, of course.

The question as such is no longer there. I was accumulating the squared correction incorrectly, so I had to put the net through a lot of extra training. I only noticed it when I started writing the code for the two-layer network.

 
Neutron >> :


Yes, I only predict one step ahead and then retrain the grid. I predict the direction of the expected movement, not its magnitude or its duration in time.

>> Please clarify, are you predicting the "colour" of the future +1 bar or are you estimating the movement in a more complex way, taking into account the history relative to the current bar?
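The one-step-ahead, retrain-every-step scheme described here can be sketched as follows. This is my own minimal illustration with a linear stand-in model on a synthetic sine series (the author uses a small neural net on real quotes); `walk_forward`, `window` and `depth` are names I made up for the sketch.

```python
import numpy as np

# Hedged sketch of the scheme: predict only the DIRECTION of the next
# increment, one step ahead, then retrain on a window that slides forward
# to include the new bar. The least-squares model is a stand-in.
def walk_forward(series, window=50, depth=4):
    dx = np.diff(series)
    hits = total = 0
    for t in range(window, len(dx) - 1):
        # training set: sliding windows of increments strictly before t
        X = np.array([dx[i:i + depth] for i in range(t - window, t - depth)])
        y = dx[t - window + depth:t]
        w, *_ = np.linalg.lstsq(X, y, rcond=None)  # "retrain" every step
        pred = np.sign(dx[t - depth:t] @ w)        # direction only
        hits += pred == np.sign(dx[t])
        total += 1
    return hits / total

rate = walk_forward(np.sin(np.arange(300) * 0.25))
print(rate)
```

On a pure sine the increments satisfy a linear recurrence, so the hit rate is essentially perfect; real quotes, of course, are another matter.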

 
I want to ask: is it normal that the hidden-layer weights are on average an order of magnitude greater than the output-layer weights? That is, my hidden-layer weights are ones and tens, while my output-layer weights stay within one.
 
paralocus wrote >>
I want to ask: is it normal that the hidden-layer weights are on average an order of magnitude greater than the output-layer weights? That is, my hidden-layer weights are ones and tens, while my output-layer weights stay within one.

In principle, yes. The point is that the input spread for your hidden layer is in the range +/-2...5, while for the output layer it is +/-1 (because of the activation function at the hidden-layer output). That explains the effect you noticed.
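A quick numeric illustration of this point (my own sketch, not code from the thread): whatever spread the raw inputs have, anything that has passed through the tanh activation of the hidden layer reaches the output layer bounded to +/-1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Raw inputs with a spread of roughly +/-5 (hypothetical figures,
# matching the +/-2...5 range mentioned above).
x = rng.uniform(-5.0, 5.0, size=(1000, 4))
W_hidden = rng.normal(0.0, 1.0, size=(4, 3))   # illustrative hidden weights

hidden_out = np.tanh(x @ W_hidden)             # squashed by the activation

print(np.abs(x).max())           # close to 5
print(np.abs(hidden_out).max())  # strictly below 1
```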

grasn wrote >>

Could you please specify, are you predicting the "colour" of the future +1 bar or assessing the movement in a more complex way, taking into account the history relative to the current bar?

Only the "colour", not the bar, and not the history.

 

The two-layer network on a sine wave:

By the way, can you tell me the optimal number of neurons in the hidden layer? The training takes a long time to compute.

 

Each neuron in the hidden layer is responsible for one face of the multidimensional "cube" that bounds the region the NS carves out in feature space. Naturally, the more faces (neurons), the more accurately we can localise the region of interest... but the longer the optimal training sample becomes, and there is no guarantee that over that length the market's interests will not have changed. Thus the optimal number of neurons in the hidden layer is determined by the characteristic lifetime (measured in Market events) of the System's quasi-stationary states. Unfortunately, there is no exact answer to this question; only experiment on a particular instrument can tell. But the minimum number of neurons is known exactly: 2.

P.S. Show me the length of the error vector.
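Putting these pieces together, a minimal sketch of such a net (tanh hidden layer with a configurable number of neurons, and the Euclidean length of the error vector tracked each epoch) might look like this. It is my own toy illustration on a sine series, not the code discussed in the thread; the hidden size of 2 is the stated minimum.

```python
import numpy as np

rng = np.random.default_rng(1)

series = np.sin(np.arange(400) * 0.3)    # toy series instead of real quotes
dx = np.diff(series)
D, H = 4, 2                              # input window, hidden neurons (2 = minimum)
X = np.array([dx[i:i + D] for i in range(len(dx) - D)])
y = np.sign(dx[D:])                      # target: "colour" of the next move

W1 = rng.normal(0, 0.5, (D, H))          # hidden-layer weights
W2 = rng.normal(0, 0.5, (H, 1))          # output-layer weights
lr = 0.05
norms = []
for epoch in range(300):
    h = np.tanh(X @ W1)                  # hidden layer
    out = np.tanh(h @ W2).ravel()        # output layer
    err = out - y
    norms.append(np.linalg.norm(err))    # length of the error vector
    g_out = err * (1 - out**2)           # backprop through output tanh
    g_h = (g_out[:, None] @ W2.T) * (1 - h**2)
    W2 -= lr * h.T @ g_out[:, None] / len(y)
    W1 -= lr * X.T @ g_h / len(y)

print(norms[0], norms[-1])               # the norm should shrink with training
```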

My two-headed vector on EURUSD hourly bars:

 

I sent her off, with two neurons in the hidden layer, to crunch GBPUSD; I've been waiting 30 minutes for her to come back :)

And the "error vector length" is something new... we haven't covered that :)
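For what it's worth, my reading of "error vector length" (an assumption on my part, not a definition given in the thread) is simply the Euclidean norm of the per-sample prediction errors over the training set, watched per epoch as a convergence diagnostic:

```python
import numpy as np

# Hypothetical targets and net outputs for four training samples
# (made-up numbers for illustration only).
target = np.array([1.0, -1.0, 1.0, 1.0])
output = np.array([0.6, -0.4, 0.9, 0.2])

err_len = np.linalg.norm(target - output)  # sqrt(sum((t - o)**2))
print(err_len)
```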

 

How many epochs should it be?

 
300
 
There must be something wrong with her.