Machine learning in trading: theory, models, practice and algo-trading - page 3417

 
mytarmailS #:

No, that's not right. I'll try to explain it again, forget about the models for now...

You've got a lot of TSes optimised on the train period, and there's a test period.


Create a dataset for the model:

target = from the test we see whether the TS worked on the test period (this is the target, YES/NO).

dataset = (attributes) the parameters of the TS, the equity curve, trades, FS, Sharpe (if the TS is based on ML - the internals of the model).


Then we train it like a real model, to answer whether a particular TS will work on the test or not.

If I can't get the essence across - maybe if I make an example you'll get it )

or an article
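The scheme above can be sketched end to end. Everything in this sketch is a toy stand-in: the stats generator, the "worked on test" rule and all numbers are invented for illustration, not taken from the thread.

```python
import random

random.seed(0)

def simulate_strategy_stats(params):
    """Hypothetical stand-in for running one optimised TS on the train
    period: returns summary attributes of its equity curve."""
    profit_factor = 0.5 + 2.0 * random.random()
    sharpe = -1.0 + 3.0 * random.random()
    n_trades = random.randint(20, 200)
    return profit_factor, sharpe, n_trades

rows, labels = [], []
for ts_id in range(100):                         # many TSes optimised on train
    params = (random.random(), random.random())  # the TS's own parameters
    pf, sharpe, n_trades = simulate_strategy_stats(params)
    # attributes = TS parameters + equity/trade statistics
    rows.append([*params, pf, sharpe, n_trades])
    # target = did the TS also work on the test period? (YES/NO, toy rule)
    worked_on_test = random.random() < (0.2 + 0.2 * max(sharpe, 0.0))
    labels.append(int(worked_on_test))

# 'rows' and 'labels' now form the dataset for a second-level model
# that predicts whether a given optimised TS will survive the test
print(len(rows), len(rows[0]))
```

Any ordinary classifier can then be trained on `rows`/`labels`; the point is only how the dataset is assembled.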
 

There is another non-obvious thing that can affect the training results: for example, training a classifier not only to predict buy/sell labels, but also, at the same time, to classify cats (a rough example). That is, teaching it the main task plus various side tasks.

This might have something to do with the internal structure of the model. I haven't seen any such studies.
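A toy, stdlib-only sketch of the idea: one shared representation feeding two heads, trained on a joint loss so the side task shapes the shared weights. The data, labels and architecture here are entirely made up for illustration.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# toy data: 2 features; main and auxiliary labels depend on
# different directions of the same inputs
data = []
for _ in range(200):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y_main = 1 if x[0] + x[1] > 0 else 0   # "buy/sell" stand-in
    y_aux = 1 if x[0] - x[1] > 0 else 0    # side task ("cats")
    data.append((x, y_main, y_aux))

# shared 2x2 linear representation + one linear head per task
W = [[random.gauss(0, 0.5) for _ in range(2)] for _ in range(2)]
heads = {"main": [random.gauss(0, 0.5), random.gauss(0, 0.5), 0.0],
         "aux":  [random.gauss(0, 0.5), random.gauss(0, 0.5), 0.0]}

lr, losses = 0.1, []
for epoch in range(200):
    total = 0.0
    gW = [[0.0, 0.0], [0.0, 0.0]]
    gH = {k: [0.0, 0.0, 0.0] for k in heads}
    for x, y_main, y_aux in data:
        h = [W[j][0] * x[0] + W[j][1] * x[1] for j in range(2)]
        grad_h = [0.0, 0.0]
        for name, y in (("main", y_main), ("aux", y_aux)):
            a = heads[name]
            p = sigmoid(a[0] * h[0] + a[1] * h[1] + a[2])
            total -= y * math.log(p) + (1 - y) * math.log(1 - p)
            g = p - y                      # d(loss)/d(logit)
            gH[name][0] += g * h[0]
            gH[name][1] += g * h[1]
            gH[name][2] += g
            grad_h[0] += g * a[0]          # both heads pull on the
            grad_h[1] += g * a[1]          # shared representation
        for j in range(2):
            gW[j][0] += grad_h[j] * x[0]
            gW[j][1] += grad_h[j] * x[1]
    n = len(data)
    for j in range(2):
        for i in range(2):
            W[j][i] -= lr * gW[j][i] / n
    for name in heads:
        for i in range(3):
            heads[name][i] -= lr * gH[name][i] / n
    losses.append(total / n)

print(losses[0] > losses[-1])  # joint loss decreased during training
```

The gradient of both tasks flows into the same `W`, which is the mechanism by which a side task can change what the main head learns.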

 
Forester #:

It seems to be simple - see comments in Russian.

Thanks, I'll try to figure it out.

Forester #:

What is the average of them all? It's the centre of the cluster in that column.

Probably - most likely we're just using different terminology.
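Forester's definition above - the cluster centre in a given column is the mean of that column over the cluster's points - can be checked with a tiny sketch (toy numbers):

```python
# toy cluster of 2-D points; the kmeans centre of a cluster is just
# the per-column (per-feature) mean of the points assigned to it
cluster = [
    [1.0, 10.0],
    [2.0, 12.0],
    [3.0, 14.0],
]

n = len(cluster)
centre = [sum(row[col] for row in cluster) / n
          for col in range(len(cluster[0]))]

print(centre)  # the "average of all" in each column: [2.0, 12.0]
```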

Forester #:
I found my kmeans test with a predict function in an old file:


Thanks for the code! Too bad it doesn't compile.

 
mytarmailS #:
Where does the bias come from?

If there aren't enough examples in the leaves and the models will be thrown out anyway, why talk about leaves at all.

If there is no bias, the model will work adequately on new data.

Leaves that fall under selection usually contain 5% of responses from the whole sample, which with a sample of a couple of thousand examples is not enough for any interval analysis.

I did all this last year.
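As a worked example of why 5% is too little: with a couple of thousand examples a selected leaf holds about a hundred responses, and even a simple normal-approximation confidence interval on a win rate is then roughly ±10 percentage points wide. The 60% win rate below is an assumed toy value, not from the thread.

```python
import math

sample_size = 2000
leaf_share = 0.05                  # leaves under selection hold ~5% of responses
n = int(sample_size * leaf_share)  # ~100 examples land in the leaf

p_hat = 0.60                       # observed "win" share in the leaf (toy value)
z = 1.96                           # 95% normal-approximation interval
half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)

print(n, round(half_width, 3))    # ~0.096, i.e. the interval is 0.50..0.70
```

An interval that wide barely separates the leaf from a coin flip, which is the point being made about interval analysis on such small leaves.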

 
Aleksey Vyazmikin #:

Thanks for the code! Too bad it doesn't compile.

I'll have to tweak it a bit. For example, replace dt.MatrixLearn with MatrixLearn - I had it inside the dt class. Maybe something else needs tweaking elsewhere, but I think the point is clear.
 
Maxim Dmitrievsky #:

There is another non-obvious thing that can affect the training results: for example, training a classifier not only to predict buy/sell labels, but also, at the same time, to classify cats (a rough example). That is, teaching it the main task plus various side tasks.

This might have something to do with the internal structure of the model. I haven't seen any such studies.

I'm not sure whether it works for tree-based models.
 
Forester #:

I'll have to tweak it a bit. For example, replace dt.MatrixLearn with MatrixLearn - I had it inside the dt class. Maybe something else needs tweaking elsewhere, but I think the point is clear.

Yes, the main thing is that there is a function for prediction; as I understand it, it is self-written and was not originally part of the class.

It's a pity the standard clustering function doesn't take a seed for the randomiser - that would be useful for debugging.

 
Aleksey Vyazmikin #:

Yes, the main thing is that there is a function for prediction; as I understand it, it is self-written and was not originally part of the class.

It's a pity the standard clustering function doesn't take a seed for the randomiser - that would be useful for debugging.

It's self-written. But there is check code there - when I verified it, the clustering results from it and from KMeansGenerate on the training matrix coincided completely.


Each of the Restarts begins from different starting points. There is randomisation, but (probably) it is not repeatable - I haven't checked. I think this could be added if you really need it...
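The seed-plus-restarts idea being discussed can be sketched in a minimal stdlib kmeans: seed the randomiser once, let each restart draw different starting centres, keep the best inertia - then the whole run is repeatable for debugging. This is an illustrative 1-D toy, not the ALGLIB/MQL5 code from the thread.

```python
import random

def kmeans_1d(points, k, restarts, seed):
    """Minimal kmeans with a seedable randomiser: each restart uses
    different starting centres; the lowest-inertia result wins."""
    rng = random.Random(seed)            # seeded once for all restarts
    best_inertia, best_centres = float("inf"), None
    for _ in range(restarts):
        centres = rng.sample(points, k)  # different start points each restart
        for _ in range(20):              # a few Lloyd iterations
            clusters = [[] for _ in range(k)]
            for p in points:
                j = min(range(k), key=lambda c: (p - centres[c]) ** 2)
                clusters[j].append(p)
            centres = [sum(cl) / len(cl) if cl else centres[j]
                       for j, cl in enumerate(clusters)]
        inertia = sum(min((p - c) ** 2 for c in centres) for p in points)
        if inertia < best_inertia:
            best_inertia, best_centres = inertia, sorted(centres)
    return best_inertia, best_centres

points = [1.0, 1.2, 0.9, 5.0, 5.1, 4.8, 9.0, 9.2, 8.9]
run1 = kmeans_1d(points, k=3, restarts=5, seed=42)
run2 = kmeans_1d(points, k=3, restarts=5, seed=42)
print(run1 == run2)  # same seed -> identical result, handy for debugging
```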

 
Maxim Dmitrievsky #:

There is another non-obvious thing that can affect the training results: for example, training a classifier not only to predict buy/sell labels, but also, at the same time, to classify cats (a rough example). That is, teaching it the main task plus various side tasks.

This might have something to do with the internal structure of the model. I haven't seen any such studies.

Normal multiclass. Not 2-class training but, for example, 5 classes. The model will just give some of its answers/leaves to the other classes and will predict the main ones less often. I think several single-task models are better.
 
Forester #:
Normal multiclass. Not 2-class training but, for example, 5 classes. The model will just give some of its answers/leaves to the other classes and will predict the main ones less often. I think several single-task models are better.
No, 2 classes, but on different datasets with different objects. Combine all of them into one.

That's how I'll be doing causal inference for multicurrency - different pairs. And in general you can add distractor tasks, like the cats.
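What is described - the same 2-class target, but datasets built from different objects (e.g. different pairs) pooled into one training set - can be sketched as follows. The pair names and all numbers are illustrative only.

```python
# toy per-pair datasets: each row = (features, label in {0, 1});
# the pair names and values are hypothetical
datasets = {
    "EURUSD": [([0.1, 0.3], 1), ([0.2, 0.1], 0)],
    "GBPUSD": [([0.4, 0.2], 1), ([0.5, 0.6], 0)],
    "USDJPY": [([0.7, 0.1], 0), ([0.8, 0.9], 1)],
}

combined_X, combined_y = [], []
for pair_id, (pair, rows) in enumerate(datasets.items()):
    for features, label in rows:
        # keep the 2-class target, but tag each row with its source dataset
        combined_X.append(features + [float(pair_id)])
        combined_y.append(label)

print(len(combined_X), len(combined_X[0]))  # 6 rows, 3 columns
```

One binary model is then trained on the combined set, instead of turning the pairs into extra classes of a multiclass problem.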