Machine learning in trading: theory, models, practice and algo-trading - page 1477

 

I also have an idea how to slice the price.

We take the price and cluster it into, say, 10 clusters, train the network, look at the error...

Then we drop one cluster, say the tenth, train the network again, and look at the error. And so we try all combinations until we find something interesting... In the end it may turn out that only clusters 1, 3, and 9 should be kept in the series to make good predictions.
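
For illustration, a minimal Python sketch of that search (my own construction, not code from the thread: the synthetic data, the KMeans features, and a random forest standing in for "the network" are all assumptions):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(size=2000))              # synthetic "price"

# Features: 5 lagged returns per bar; target: sign of the next return.
ret = np.diff(price)
X = np.column_stack([ret[i:len(ret) - 5 + i] for i in range(5)])
y = (ret[5:] > 0).astype(int)

clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

def error_without(dropped):
    """Train and test only on bars whose cluster survives the drop."""
    mask = ~np.isin(clusters, list(dropped))
    Xtr, Xte, ytr, yte = train_test_split(X[mask], y[mask], shuffle=False)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    return 1.0 - model.fit(Xtr, ytr).score(Xte, yte)

base = error_without([])
for c in range(10):                                   # leave-one-cluster-out
    print(f"drop cluster {c}: error {error_without([c]):.3f} vs base {base:.3f}")
# trying "all combinations" is the same call over all 2**10 subsets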

 
mytarmailS:

I also have an idea how to slice the price.

We take the price and cluster it into, say, 10 clusters, train the network, look at the error...

Then we drop one cluster, say the tenth, train the network again, and look at the error. And so we try all combinations until we find something interesting... In the end it may turn out that only clusters 1, 3, and 9 should be kept in the series to make good predictions.

The analogy is throwing leaves out of a tree, the way Alexei does.

But the problem is that one tree always gives worse results than 100-200 trees in a forest.

 
elibrarius:
The analogy is throwing leaves out of a tree, the way Alexei does.

No, that's different...

Throwing out leaves means changing the rules in the decision tree that predicts the process.

I propose to change the process itself.

 
Maxim Dmitrievsky:

It's not crap, it's just a joke )

Koldun starts another four-month vacation, the teacher keeps plowing away to "raise the image". What a hoot))) hilarious...

 
mytarmailS:

Another idea came up on how to slice the price.

Dimensionality can be reduced in many ways, and it could turn out to be a good idea. The simplest example: the thin black line is the close price, the blue dots are the intersections, and the red line is a primitive attempt to reconstruct the original time series from the blue dots. There are plenty of reconstruction methods.

The "quality of thinning" can be estimated, for example, by the simplicity of the function used for reconstruction. Simpler is better...


 
elibrarius:

The analogy is throwing leaves out of a tree, the way Alexei does.

But the problem is that one tree always gives worse results than 100-200 trees in a forest.

Not throwing out, but selection. It's like assembling different mini-strategies into one big pool. And then either a collective decision, or giving each leaf a fixed lot, which is what I do now.

mytarmailS:

Throwing out leaves means changing the rules in the decision tree that predicts the process.

Why would the rules change? No, it is just a selection of the leaves that are more confident in their results, achieved by excluding those that would blurt out any prediction just for the sake of giving one. In other words, a single tree may simply have no solution for a given situation, but when hundreds of different trees are used and a selection is made over their leaves, the chance of having no solution for a situation becomes slim.
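
As an illustration of such leaf selection, a scikit-learn sketch (my construction; the purity and support thresholds are arbitrary): fit a forest, walk every tree's leaves, and keep only those that answer confidently on enough samples.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

selected = []
for i, est in enumerate(forest.estimators_):
    tree = est.tree_
    for node in range(tree.node_count):
        if tree.children_left[node] != tree.children_right[node]:
            continue                                  # internal node, not a leaf
        value = tree.value[node][0]                   # per-class weights in the leaf
        purity = value.max() / value.sum()            # leaf confidence
        support = tree.n_node_samples[node]           # samples reaching the leaf
        if purity >= 0.95 and support >= 20:          # arbitrary thresholds
            selected.append((i, node, int(value.argmax())))

total = sum(est.tree_.n_leaves for est in forest.estimators_)
print(f"kept {len(selected)} confident leaves out of {total}")

Each kept (tree, leaf, class) triple is then a mini-rule that can trade its own fixed lot or vote in a pool.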

 
Vizard_:

Koldun starts another four-month vacation, the teacher keeps plowing away to "raise the image". What a hoot))) hilarious...

You're already talking about yourself in the third person, you've gone completely nuts)) The image is fine.

 
Vizard_:

Dimensionality can be reduced in many ways, and it could turn out to be a good idea. The simplest example: the thin black line is the close price, the blue dots are the intersections, and the red line is a primitive attempt to reconstruct the original time series from the blue dots. There are plenty of reconstruction methods.
The "quality of thinning" can be estimated, for example, by the simplicity of the function used for reconstruction. Simpler is better...


Thanks, that's interesting! Are there scientific names for this "thinning" and for the "reconstruction functions"? It would be interesting to read about them.



Why would the rules change? No, it is just a selection of the leaves that are more confident in their results, achieved by excluding those that would blurt out any prediction just for the sake of giving one. In other words, a single tree may simply have no solution for a given situation, but when hundreds of different trees are used and a selection is made over their leaves, the chance of having no solution for a situation becomes slim.

That is a rule change, but in which direction the rules change is another question.

 
mytarmailS:

I look at your drawing with the crossing MAs and am amazed at how neatly, and how often, the price reverses at the crossings, only in the opposite direction)) against the crowd's signals.

But of course it doesn't always work: because of the market's volatility an adaptive indicator is needed. So I got the idea of teaching a neural network to guess the "right" MA periods in real time, to catch reversals accurately.

Does anyone have ideas about the target, and which price parameters should be taken as predictors?

I've already written above about predicting the optimum of a trading system's properties and results (equity/PnL...).

If done "directly", the principle is the same as for returns or vols: for each sample, split the series into "before" and "after" with a moving point price(t), compute any features on {price(t-N), ..., price(t)} and target figures on {price(t+1), ..., price(t+K)}, and run t through the entire series. In this case the targets would be the optimal MA parameters on {price(t+1), ..., price(t+K)}, i.e. on some window in the future, and the features can be basically anything, from stochastics or momentums of different periods to the optimal MA parameters or other trading systems on the previous period {price(t-N), ..., price(t)}.

 
Farkhat Guzairov:

What version of JPrediction do you use?

14, it seems.