Machine learning in trading: theory, models, practice and algo-trading - page 2450

 
Andrey Dik #:

Is the average absolute value of a neural network's weights an indicator of the quality of its training?

Suppose there are two identical networks trained on the same data; one has an average of 0.87 and the other 0.23. Which one is trained better?

In a balanced network it should be closer to 0.5, in theory, if you take a simple network such as an MLP for binary classification.
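For context, a minimal sketch of what this "average weight value, taken in absolute terms" looks like for a small MLP. This is my own illustration, not anything posted in the thread; it uses scikit-learn and synthetic data.

```python
# Illustration only: two MLPs trained on the same synthetic binary-classification
# data, comparing mean absolute weight with in-sample accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

def mean_abs_weight(model):
    # average |w| over all connection weights, biases excluded
    return np.mean([np.abs(w).mean() for w in model.coefs_])

for seed in (1, 2):
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                        random_state=seed).fit(X, y)
    print(f"seed={seed}  mean|w|={mean_abs_weight(net):.3f}  "
          f"accuracy={net.score(X, y):.3f}")
```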
 
Andrey Dik #:

Is the average absolute value of a neural network's weights an indicator of the quality of its training?

Suppose there are two identical networks trained on the same data; one has an average of 0.87 and the other 0.23. Which one is trained better?

This is ridiculous...

Surely the one that predicts better? Shouldn't that be the first thing that comes to mind?

 
Maxim Dmitrievsky #:
In a balanced network it should be closer to 0.5, in theory, if you take a simple network such as an MLP for binary classification.

Please clarify your point.

On the contrary, I think that the closer the value is to 1.0, the more decisively the network is trained, and if it is near 0.0, it is kind of "hesitant", if one can say that about a neural network.

 
mytarmailS #:

This is ridiculous...

Surely the one that predicts better? Shouldn't that be the first thing that comes to mind?

Everyone is smart in hindsight, and neural networks even more so...

 
Andrey Dik #:

Is the average absolute value of a neural network's weights an indicator of the quality of its training?

Suppose there are two identical networks trained on the same data; one has an average of 0.87 and the other 0.23. Which one is trained better?

No. Especially with regression, where you have a bias term (ax + b) in space. What is the point of these indirect metrics anyway? Run the model on points outside the training sample and everything becomes clear.
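A rough sketch of that "points outside the training sample" check, again my own example with scikit-learn and synthetic data:

```python
# Illustration only: judge the network by held-out accuracy,
# not by any statistic of its weights.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
net.fit(X_tr, y_tr)
print("in-sample accuracy:    ", round(net.score(X_tr, y_tr), 3))
print("out-of-sample accuracy:", round(net.score(X_te, y_te), 3))
```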

 
Andrey Dik #:

Please clarify your point.

On the contrary, I think that the closer the value is to 1.0, the more decisively the network is trained, and if it is near 0.0, it is kind of "hesitant", if one can say that about a neural network.

By that primitive logic, half of the neuron activations should go to the first class and half to the other. If it is skewed, perhaps the classes are poorly balanced. And extreme weight values seem to make the gradient explode or vanish.
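As an illustration of the "half the activations per class" intuition, here is my own sketch with a deliberately imbalanced synthetic dataset (the 90/10 split is an assumption for the demo, not anything from the thread):

```python
# Illustration only: with imbalanced classes the mean predicted probability
# drifts away from 0.5, which is the skew described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)  # 90/10 imbalance
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X, y)

print("class frequencies:        ", np.bincount(y) / len(y))
print("mean predicted P(class 1):", net.predict_proba(X)[:, 1].mean())
```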
 
Andrey Dik #:

Everyone is smart in hindsight, and neural networks even more so...

There is also foresight; it is called validation. And the weights have nothing to do with predictive ability; they can be anything...

The weights are, roughly, the formula, and the prediction is the result of evaluating that formula; comparing one to the other == schizophrenia.
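To make the "formula vs. result of the formula" distinction concrete, a manual forward pass through a fitted MLP. This is my own sketch; it assumes scikit-learn's default relu hidden layer and the logistic output MLPClassifier uses for binary classification.

```python
# Illustration only: the weights define the formula; the prediction is what you
# get by evaluating that formula on inputs. The two are not comparable things.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(8,), activation="relu",
                    max_iter=1000, random_state=0).fit(X, y)

def forward(x):
    # the "formula": hidden = relu(x @ W0 + b0), p = sigmoid(hidden @ W1 + b1)
    h = np.maximum(0, x @ net.coefs_[0] + net.intercepts_[0])
    logit = h @ net.coefs_[1] + net.intercepts_[1]
    return 1.0 / (1.0 + np.exp(-logit))

# manual result vs. the library's own prediction for the first sample
print(forward(X[:1]).ravel(), net.predict_proba(X[:1])[:, 1])
```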

 
mytarmailS #:

There is also foresight; it is called validation. And the weights have nothing to do with predictive ability; they can be anything...

The weights are, roughly, the formula, and the prediction is the result of evaluating that formula; comparing one to the other == schizophrenia.

"Zhi" and "shi" are written with an "i" (a Russian spelling rule).

 
Alexei Tarabanov #:

"Zhi" and "shi" are written with an "i" (a Russian spelling rule).

Thank you, but Russian is not my language; I never studied it...

 
mytarmailS #:

Thank you, but Russian is not my language; I never studied it...

It's all still ahead of you.
