What to feed to the input of the neural network? Your ideas... - page 81

 
Ivan Butko #:
At the moment, looking back at all the years of effort with these networks, one thing I can say for sure: there is not enough functionality for model selection.

Well yes, that's the next step of the problem - then data drift and TC stopping criterion.

 
Maxim Dmitrievsky #:

L1 and L2 normalisations (regularisations) are well described. They are usually already built into ML frameworks.

L1/L2-norms can suppress weak but statistically significant patterns.

It is no different from groundlessly replacing the old random weight values with new random ones (as long as the NN has something to sum).
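To make the earlier claim about L1/L2 concrete, here is a minimal NumPy sketch (all coefficients and penalty values are invented for the demonstration): with proximal gradient descent, an L1 penalty larger than a weak but real coefficient drives that coefficient to exactly zero, while plain least squares keeps it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
# one strong pattern (coef 1.0) and one weak but real pattern (coef 0.08)
y = 1.0 * X[:, 0] + 0.08 * X[:, 1] + rng.normal(scale=0.2, size=n)

def fit(X, y, l1=0.0, lr=0.1, steps=2000):
    """Linear regression by proximal gradient descent with an optional L1 penalty."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)                    # gradient step on squared error
        w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)   # soft-thresholding (L1 prox)
    return w

w_ols = fit(X, y)           # no penalty: both patterns are recovered
w_l1 = fit(X, y, l1=0.2)    # L1 penalty: the weak pattern is zeroed out
print(w_ols, w_l1)
```

The weak coefficient survives the unpenalised fit but is suppressed to zero by L1, which is exactly the trade-off being discussed.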
 
Aleksey Vyazmikin #:

Well yes, that's the next step of the problem - then data drift and the TC stopping criterion.

If we achieve a result where the trained model (or group of models) is guaranteed to keep working for roughly the next ±N periods, the problem is solved simply by early shutdown and retraining.

That is, solving the primary problem will essentially solve the secondary one as well.
 
Only a mathematical method works here, i.e. filtering of the input values by priority without changing the values or their number. There should be two synchronous models: one is a filter and the other is a predictor. Endlessly chasing this noise will never do any good. You need to combine everything possible, and more, into this method. I have no idea how to put it together in a proper implementation, but that the information is accurate is beyond question. Whether you believe it or not is up to you.
 
Рaра Нoth #:
Only a mathematical method works here, i.e. filtering of the input values by priority without changing the values or their number. There should be two synchronous models: one is a filter and the other is a predictor. Endlessly chasing this noise will never do any good. You need to combine everything possible, and more, into this method. I have no idea how to put it together in a proper implementation, but that the information is accurate is beyond question. Whether you believe it or not is up to you.
Expand the thought.

I don't understand anything.
 
Ivan Butko #:
Expand the thought.

I don't understand anything.
One model divides the input values by priority; the second model makes a prediction only on those priority values from the first model. The point is that the inputs are decomposed into harmonics and all the noise is removed from them, and the remainder lends itself to accurate training, since it contains no noise. In any data the noise is never constant, so standard neural networks will never work there. How to train these models correctly, and on what exactly, I have no idea; I do know that a two-layer feed-forward structure, without any quirks, is quite enough.
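A toy version of that two-model scheme can be sketched as follows (the data, the correlation-based scoring rule, and all numbers are invented for illustration, not the poster's actual method): a "filter" model ranks each input by priority and keeps only the top ones unchanged, and a separate "predictor" is then fitted on those filtered inputs only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 10
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[2], true_w[7] = 1.0, -0.8          # only inputs 2 and 7 are informative
y = X @ true_w + rng.normal(scale=0.3, size=n)

# Model 1, the "filter": score each input by |correlation| with the target
# and keep the highest-priority ones, without changing their values.
scores = np.abs(X.T @ y) / n
keep = np.sort(np.argsort(scores)[-2:])    # indices of the two top-priority inputs

# Model 2, the "predictor": fit only on the filtered inputs.
w = np.linalg.lstsq(X[:, keep], y, rcond=None)[0]
print(keep, w)
```

The filter correctly isolates the two informative inputs, and the predictor then recovers their coefficients without being distracted by the eight noise columns.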
 
Рaра Нoth #:
Only a mathematical method works here, i.e. filtering of the input values by priority without changing the values or their number. There should be two synchronous models: one is a filter and the other is a predictor. Endlessly chasing this noise will never do any good. You need to combine everything possible, and more, into this method. I have no idea how to put it together in a proper implementation, but that the information is accurate is beyond question. Whether you believe it or not is up to you.

The idea is perfectly sensible, but the implementations do not always satisfy the "grail" criterion :)

https://www.mql5.com/ru/articles/11147 - here is a possible implementation

 
Maxim Dmitrievsky #:

The idea is perfectly sensible, but the implementations do not always satisfy the "grail" criterion :)

https://www.mql5.com/ru/articles/11147 - here is a possible implementation

It is possible to memorise history without these perversions, which do not even come close. Look at how sound is converted: by tweaking some decibels according to a formula, it is decomposed into harmonics, i.e. the pure frequencies of which it consists are obtained. This is roughly the same, but all the neurons do it: they parse the input data, selecting the correct interpretation for each group, comparing the groups with each other and with the quote history, trying to catch the correct logic among all of it.
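The sound analogy can be shown concretely with a Fourier transform (the frequencies, amplitudes, and noise level below are arbitrary illustration values): a signal built from two pure tones plus noise is decomposed, and the two dominant harmonics are recovered from the amplitude spectrum.

```python
import numpy as np

fs = 1000                                   # sample rate, Hz
t = np.arange(fs) / fs                      # one second of signal
# two pure tones ("harmonics") plus noise
sig = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
sig += 0.2 * np.random.default_rng(2).normal(size=fs)

spec = np.abs(np.fft.rfft(sig))             # amplitude spectrum
freqs = np.fft.rfftfreq(fs, d=1 / fs)       # frequency of each spectral bin
peaks = np.sort(freqs[np.argsort(spec)[-2:]])
print(peaks)                                # the two dominant harmonics
```

Despite the added noise, the two tone frequencies stand far above the noise floor in the spectrum, which is the "pure frequencies from which it consists" idea in miniature.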
 
Рaра Нoth #:
It is possible to memorise history without these perversions, which do not even come close. Look at how sound is converted: by tweaking some decibels according to a formula, it is decomposed into harmonics, i.e. the pure frequencies of which it consists are obtained. This is roughly the same, but all the neurons do it.
Did you do it yourself or are you making it up?
 
Forester #:
Did you do it yourself or are you making it up?
I didn't do it myself, but I saw the result with my own eyes and grasped the whole essence at once; whether to believe it or not is your problem.