Machine learning in trading: theory, models, practice and algo-trading - page 3647
A question for the ML experts:
Every time Python finishes training, the final error comes out as a different number.
Every run ends with a different set of weights.
If you loop the training with constant reinitialisation of the weights, is there any chance that after some time the error will drop to the bottom on both the training sample and the test sample?
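For reference, here is roughly what such a loop can look like: repeated training runs with fresh random weights, keeping whichever run does best on held-out data. This is only a sketch under assumptions, not anyone's actual setup; build_model(), the layer sizes and the random data are placeholders.

```python
import numpy as np
from tensorflow import keras

def build_model(n_features):
    # small placeholder network; the real architecture could be anything
    model = keras.Sequential([
        keras.Input(shape=(n_features,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# toy data just to make the sketch runnable
X_train, y_train = np.random.rand(500, 10), np.random.rand(500)
X_test, y_test = np.random.rand(100, 10), np.random.rand(100)

best_err, best_weights = np.inf, None
for restart in range(20):                      # each pass reinitialises the weights
    model = build_model(X_train.shape[1])      # fresh random initialisation
    model.fit(X_train, y_train, epochs=50, verbose=0)
    err = model.evaluate(X_test, y_test, verbose=0)
    if err < best_err:                         # remember the best run so far
        best_err, best_weights = err, model.get_weights()

print("best test error over all restarts:", best_err)
```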
Python error?)
I was talking to the chat.
There, when it runs the NN script (of any architecture), an error value is printed during training, and it keeps decreasing, decreasing... decreasing. And then Python goes, "okay, I'm shutting it down, the error isn't decreasing anymore."
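What is being described sounds like early stopping: training halts once the monitored error stops improving. A minimal sketch of how that is typically switched on in Keras (assuming a Keras workflow; the fit() call is commented out because the model and data here are placeholders):

```python
from tensorflow import keras

early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss",          # watch the validation error
    patience=5,                  # tolerate 5 epochs without improvement
    restore_best_weights=True,   # roll back to the best weights seen
)

# model.fit(X_train, y_train,
#           validation_data=(X_val, y_val),
#           epochs=1000,
#           callbacks=[early_stop])
```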
Well, like, no, it won't. But if you wait for infinity and randomly initialise the weights, without training as such, it might.
Chat said the same thing.
Says wait a billion years.
This is called "sticking" (getting stuck) of the optimisation method used during training, which is why the results are always different, sometimes significantly so.
There are two ways to go about this:
1) Loop the training and wait until a further reduction of the error occurs.
2) Change the optimisation method to a more advanced one, which will get past the problematic place in the search space in a much smaller number of iterations (a rough sketch follows below).
The second method is much less costly in terms of time, energy and nerves.
Optimisation methods are like petrol: if the petrol is bad, the car can simply stall (get stuck), and if it is good, the car goes faster and does not stall. Where exactly the car goes is another matter and depends on the set of metrics used.
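Purely as an illustration of the second way, a hedged sketch: the same network, only the optimiser is swapped, e.g. plain SGD for an adaptive method such as Adam, which often gets past flat or awkward regions in fewer iterations. The architecture, learning rates and input size are placeholders.

```python
from tensorflow import keras

def make_model(optimizer):
    # identical placeholder network for both variants
    model = keras.Sequential([
        keras.Input(shape=(10,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer=optimizer, loss="mse")
    return model

stalling_prone = make_model(keras.optimizers.SGD(learning_rate=0.01))    # can stall
more_robust = make_model(keras.optimizers.Adam(learning_rate=0.001))     # adaptive steps
# Train both on the same data and compare how quickly the loss stops "sticking".
```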
ML is child's play these days.
I watched a few videos. It is doubtful, of course, that it really is a child, but if it is a neural network, it is very impressive: the voice is indistinguishable from a live one, with intonations, turns of phrase, filler words...
And here we are all in our own sandbox ...
The task here, by the way, would be harder for a neural network...
There is a question about variable-length features.
Would that be any different from splitting the time series into several states?
After all, the different lengths of the features should, in effect, differentiate those states for the model.
From that point of view there is no reason to bother with it, is there, if there is already a division into states?