Machine learning in trading: theory, models, practice and algo-trading - page 3417

No, that's not right. I'll try to explain it again; forget about the models for now...
You have a lot of TSs optimised on the track period, and there is a test period.
Create a dataset for the model:
target = from the test period we see whether the TS kept working there (this is the target, YES/NO).
dataset = (features) the parameters of the TS, the equity curve, the trades, FS, Sharpe (and, if the TS is based on ML, the guts of the model).
Then we train it like a real model, to answer whether a particular TS will work on the test period or not.
I don't get the gist yet; if you make an example, maybe it will click )
or an article
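A minimal, runnable sketch of the setup described above, in Python with sklearn. All the numbers here are random stand-ins: in reality X would come from the track-period optimisation results and y from the test-period outcomes; nothing in this block is from the thread itself.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# synthetic stand-in: 300 optimised TSs, each described by its parameters
# plus track-period statistics (number of trades, profit factor, Sharpe)
params = rng.uniform(-1, 1, size=(300, 6))   # hypothetical TS parameters
stats = rng.normal(size=(300, 3))            # hypothetical track-period stats
X = np.hstack([params, stats])
y = rng.integers(0, 2, size=300)             # target: did the TS work on the test? (made up here)

meta = RandomForestClassifier(n_estimators=500, random_state=0)
meta.fit(X, y)
# for a freshly optimised TS, the meta-model estimates the chance it survives the test period
print(meta.predict_proba(X[:1]))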
It seems to be simple - see comments in Russian.
Thanks, I'll try to figure it out.
What is the average of them all? It is the centre of the cluster along this column.
Probably - most likely just different terminology.
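For what it's worth, a tiny sketch of that point in Python/numpy: a cluster centre is just the per-column mean of the points assigned to the cluster.

import numpy as np

cluster_points = np.array([[1.0, 10.0],
                           [3.0, 14.0],
                           [2.0, 12.0]])
centre = cluster_points.mean(axis=0)  # column-wise average
print(centre)                         # [ 2. 12.] - the centroid coordinates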
I found my kmeans test with a predict function in an old file:
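The attached file didn't survive the copy. As a stand-in, here is a sketch of what such a test could look like with sklearn's KMeans, whose predict() assigns new rows to the nearest learned centroid; the original was a self-written class, so this is only an approximation of the idea.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 4))      # training features
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(train)

new_rows = rng.normal(size=(10, 4))    # "out of sample" rows
labels = km.predict(new_rows)          # index of the nearest centroid for each row
print(labels)
print(km.cluster_centers_[labels[0]])  # the centre the first row was assigned to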
Thanks for the code! Too bad it doesn't compile.
Where does the bias come from?
If there is no bias, the model will work adequately on new data.
Leaves that pass selection usually contain about 5% of the responses of the whole sample; with a sample of a couple of thousand examples that is only around a hundred cases, which is not enough for any interval analysis.
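A quick back-of-the-envelope check of why ~100 cases is too few, using a Wilson confidence interval (statsmodels assumed; the 60-out-of-100 win count is made up for illustration):

from statsmodels.stats.proportion import proportion_confint

# 60 wins out of 100 cases in a leaf -> 95% Wilson interval on the win rate
lo, hi = proportion_confint(count=60, nobs=100, alpha=0.05, method="wilson")
print(round(lo, 3), round(hi, 3))  # about 0.502..0.691 - a ~19-point-wide band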
I did all this last year.
I'll have to tweak it a bit. For example, replace ... with ... ; I had it in the dt class. Maybe something else needs tweaking somewhere else, but I think the point is clear.
Yes, the main thing is that there is a function for prediction, as I understand it - it is self-written and was not originally included in the class.
It's a pity that the standard clustering function doesn't include a seed for randomiser, which is useful for debugging.
Each of the Restarts begins from a different set of starting points. There is randomisation, but (probably) it is not repeatable - I haven't checked. I think this could be fixed if you really need it...
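If the clustering were done with sklearn instead of the self-written class, both points are covered out of the box: random_state fixes the seed (repeatable runs, handy for debugging) and n_init plays the role of Restarts (several random starts, the best inertia wins). A small sketch:

import numpy as np
from sklearn.cluster import KMeans

X = np.random.default_rng(42).normal(size=(300, 3))
km1 = KMeans(n_clusters=4, n_init=20, random_state=7).fit(X)
km2 = KMeans(n_clusters=4, n_init=20, random_state=7).fit(X)
print(np.allclose(km1.cluster_centers_, km2.cluster_centers_))  # True: runs repeat exactly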
There is another non-obvious thing that can affect the training results: for example, train a classifier not only to predict buy/sell labels but, at the same time, train it to classify cats (a rough example). That is, teach it the main task plus various side tasks.
This might have something to do with the internal structure of the model. I haven't seen any studies on this.
That's just ordinary multiclass: not 2-class training but, say, 5 classes. The model will simply give some of its answers/leaves to the other classes and will predict the main ones less often. I think several single-task models are better.
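A sketch of the two setups on synthetic data (sklearn assumed), just to make the contrast concrete: one shared multiclass model versus one binary model per task, where each binary model spends all of its capacity on its own class.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_classes=5,
                           n_informative=8, random_state=0)

# one multiclass model: its trees/leaves are shared across all 5 classes
multi = GradientBoostingClassifier().fit(X, y)

# one binary model per class: each spends all its leaves on its own task
singles = {c: GradientBoostingClassifier().fit(X, (y == c).astype(int))
           for c in range(5)}

print(multi.predict_proba(X[:1]))       # probabilities over all 5 classes
print(singles[0].predict_proba(X[:1]))  # P(not class 0), P(class 0)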