Machine learning in trading: theory, models, practice and algo-trading - page 910

Where to get documentation?
Install RStudio and you will be happy: packages, documentation, examples, theory, and publications
This cannot be the case, by definition. Each run of an ELM neural network produces a network whose weights are initialized randomly, and it does not use backprop. Read the description of this particular neural network model.
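To illustrate the point (a from-scratch sketch, not code from the thread): in an ELM, the input weights and biases are drawn at random and are never trained; only the output weights are solved analytically, so every run produces a different network.

```r
library(MASS)  # for ginv(), the Moore-Penrose pseudoinverse

# Minimal ELM sketch: X is an n x p matrix, y an n x 1 target vector
elm_fit <- function(X, y, n_hidden = 20) {
  W <- matrix(runif(ncol(X) * n_hidden, -1, 1), ncol = n_hidden)  # random input weights
  b <- runif(n_hidden, -1, 1)                                     # random biases
  H <- 1 / (1 + exp(-(X %*% W + matrix(b, nrow(X), n_hidden, byrow = TRUE))))
  beta <- ginv(H) %*% y   # output weights: one analytic solve, no backprop
  list(W = W, b = b, beta = beta)
}

elm_predict <- function(fit, X) {
  H <- 1 / (1 + exp(-(X %*% fit$W +
                      matrix(fit$b, nrow(X), length(fit$b), byrow = TRUE))))
  H %*% fit$beta
}
```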
And which neural networks in R can be trained further? And in which of them can you change the training parameters on the fly, for example for manual annealing?
Install RStudio and you will be happy: packages, documentation, examples, theory, and publications
Thank you! Now I will install it and see what makes me happy :)
And what if you look for good predictors in a set this way:
1. Reduce the number of predictors until the model can no longer be trained (down to 3-5), obtaining a limited base set.
2. Add each remaining predictor to that base set in turn, and record the result.
Consider the predictors whose addition gives the best result to be good/useful (see the sketch below).
Will this method work?
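A minimal sketch of this greedy search, assuming hypothetical `train_model()` / `score_model()` helpers and a data frame `dat` with a `target` column (none of these names come from the thread):

```r
# Step 1's result: a minimal base set the model can still be trained on
base_set   <- c("pred1", "pred2", "pred3")             # hypothetical names
candidates <- setdiff(names(dat), c(base_set, "target"))

# Step 2: add each remaining predictor in turn and record the score
results <- sapply(candidates, function(p) {
  model <- train_model(dat[, c(base_set, p)], dat$target)  # hypothetical helper
  score_model(model)                                       # hypothetical helper
})
sort(results, decreasing = TRUE)  # predictors whose addition scores best
```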
Barabashkin, did you delete your post? Admit it))))
Keep the discussion civil and sanctions will be avoided :)
So, Teacher, you get four models with, say, probabilities 0.51, 0.55, 0.52, 0.59.
You run them through a threshold of 0.5 or something similar, get 0/1 signals, and you're surprised that everything comes out the same?))
Don't be sad, Sensei, everything will be fine, but that's not certain)))))) hilarious...
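For illustration (not from the thread): thresholding those four probabilities at 0.5 collapses them into identical signals.

```r
probs <- c(0.51, 0.55, 0.52, 0.59)  # outputs of the four models
as.integer(probs > 0.5)             # -> 1 1 1 1: the differences vanish
```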
Well, actually: first, my models have an R-score of at least 0.71; second, I take the average of the committee's two polynomials. And yes, in the end all the signals are the same. The signals change only if I change the training interval...
Which neural networks in R can be trained further? And in which of them can the training parameters be changed on the fly, say, for manual annealing?
1. In the darch package (v0.12.0), fine-tuning can be repeated on a new portion of data. I haven't tested how long that keeps working.
In keras/tensorflow, all models can be trained further, from any stage of previous training. Of course, the intermediate training results have to be saved (a minimal sketch follows below).
2. In the course of what "play" do you want to change the training parameters, and which parameters? What is manual annealing?
Good luck
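A minimal sketch of the save/restore-and-continue workflow in the R keras package (the data objects and the file name are hypothetical placeholders; in older keras versions the optimizer argument is `lr` rather than `learning_rate`). Lowering the learning rate by hand between stages is one crude form of "manual annealing":

```r
library(keras)

# Stage 1: train and save the intermediate result
# (x_old, y_old, x_new, y_new are hypothetical placeholders)
model <- keras_model_sequential() %>%
  layer_dense(units = 16, activation = "relu", input_shape = c(10)) %>%
  layer_dense(units = 1, activation = "sigmoid")
model %>% compile(optimizer = optimizer_adam(learning_rate = 1e-3),
                  loss = "binary_crossentropy")
model %>% fit(x_old, y_old, epochs = 10)
save_model_hdf5(model, "model_stage1.h5")

# Stage 2: restore and continue training on new data;
# recompiling with a smaller learning rate anneals the step size manually
model <- load_model_hdf5("model_stage1.h5")
model %>% compile(optimizer = optimizer_adam(learning_rate = 1e-4),
                  loss = "binary_crossentropy")
model %>% fit(x_new, y_new, epochs = 5)
```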
Well, actually: first, my models have an R-score of at least 0.71; second, I take the average of the committee's two polynomials. And yes, in the end all the signals are the same. The signals change only if I change the training interval...
Can you test my set of predictors, in their various combinations, for significance in your script?
In keras, all models can be trained further, from any stage of previous training. Of course, the intermediate training results have to be saved.
Where can I read more about this kind of further training? I thought only Bayesian models retrain well.
Can you test my set of predictors, in their various combinations, for significance in your script?
Sure thing. Go ahead. Just a word of warning: the target should be balanced...
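For reference, one common way to balance a binary target in R, using caret's upSample (an illustrative assumption, not necessarily what the script uses; `dat` and `predictor_cols` are hypothetical placeholders):

```r
library(caret)

# upSample() resamples the minority class until both classes are equal;
# downSample() would shrink the majority class instead.
balanced <- upSample(x = dat[, predictor_cols],
                     y = dat$target)      # `target` must be a factor
table(balanced$Class)                     # classes are now the same size
```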