Machine learning in trading: theory, models, practice and algo-trading - page 1291

 
Maxim Dmitrievsky:

There is also a cool piece of software, KNIME: all sorts of boosting, data analysis and visualization.

It's free, and no programming is required.

You can't learn every piece of software.) You can get by with one thing: R, Python, or something else. Settle on something, and that's enough. Well, unless you have to.)

Tree experts: can all these trees and forests build anything other than axis-aligned splits, say something diagonal or oblique? In every implementation I see only combinations of horizontal and vertical divisions.

 
Yuriy Asaulenko:

You can't learn every piece of software.) You can get by with one thing: R, Python, or something else. Settle on something, and that's enough. Well, unless you have to.)

Tree experts: can all these trees and forests build anything other than axis-aligned splits, say something diagonal or oblique? In every implementation I see only combinations of horizontal and vertical divisions.

I don't think they can... as in the poem, they haven't liked the oval since childhood; they've drawn the angle since childhood. These are binary splits, not sigmoids.

Boosting seems like it might be able to, but I'm not sure.
 
Maxim Dmitrievsky:

I don't think they can... as in the poem, they haven't liked the oval since childhood; they've drawn the angle since childhood. These are binary splits, not sigmoids.

I want a polygon: not squares and parallelepipeds, but at least a polygon. Couldn't the split be 2x + 3y − 7z > N? That's just an inclined plane. How do I know in advance that an axis-aligned division is the best?
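For what it's worth, the "inclined plane" point can be demonstrated with a standard CART tree. The data, the plane 2x + 3y − 7z > 0, and the use of scikit-learn's DecisionTreeClassifier below are my own illustrative choices, not anything from the thread: an axis-aligned tree can only staircase-approximate the plane, while a depth-1 tree that is handed the linear combination as an extra feature separates the classes exactly.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.uniform(-1.0, 1.0, size=(2000, 3))
# classes separated by the inclined plane 2x + 3y - 7z > 0
y = (2 * X[:, 0] + 3 * X[:, 1] - 7 * X[:, 2] > 0).astype(int)

# an axis-aligned tree can only approximate the plane with a "staircase"
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# hand the tree the linear combination as a fourth feature:
# a single split on it separates the classes exactly
lin = (2 * X[:, 0] + 3 * X[:, 1] - 7 * X[:, 2]).reshape(-1, 1)
X_ext = np.hstack([X, lin])
stump = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X_ext, y)

print(round(shallow.score(X, y), 3), stump.score(X_ext, y))
```

This is the usual workaround: since standard trees split on one feature at a time, oblique boundaries are given to them as engineered linear-combination features.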

 
Yuriy Asaulenko:

I want a polygon: not squares and parallelepipeds, but at least a polygon. Couldn't the split be 2x + 3y − 7z > N? That's just an inclined plane. How do I know in advance that this division is the best?

Clearly it is better to feed the tree ready-made features for classification, but extracting those features, i.e. creating the predictors, is a feasible task for neural networks.

By the way, are there neural networks for clustering, where the task is to find a more complex trait in the sample, transformed from other, simpler features?
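On the "neural networks for clustering" question: one common stand-in is an autoencoder whose bottleneck representation is then clustered. A minimal sketch, assuming scikit-learn; the manual forward pass is a workaround for MLPRegressor not exposing hidden activations, and the data and layer sizes are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# two simple raw features; any "compound trait" must be built from them
X = rng.normal(size=(500, 2))

# autoencoder: the network reconstructs its input through a 2-unit bottleneck
ae = MLPRegressor(hidden_layer_sizes=(8, 2, 8), activation="tanh",
                  max_iter=2000, random_state=0)
ae.fit(X, X)

def encode(X, model, n_layers=2):
    """Manual forward pass up to the bottleneck (hidden layer 2 of 3)."""
    a = X
    for i in range(n_layers):
        a = np.tanh(a @ model.coefs_[i] + model.intercepts_[i])
    return a

Z = encode(X, ae)                      # learned 2-D representation
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
print(Z.shape, np.unique(labels))
```

Self-organizing maps are the other classic "neural clustering" answer; the autoencoder-plus-KMeans route is just easier to sketch with standard tools.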

 
Yuriy Asaulenko:

I want a polygon: not squares and parallelepipeds, but at least a polygon. Couldn't the split be 2x + 3y − 7z > N? That's just an inclined plane. How do I know in advance that this division is the right one?

On data that is known to be linearly separable it reportedly doesn't work well; otherwise, they say, there is no difference. I've heard somewhere that forests are used more to search for unknown "patterns", while NNs are used to process signals of a known kind.

 
Aleksey Vyazmikin:

By the way, are there neural networks for clustering, where the task is to find a more complex feature in the sample, transformed from other, simpler features?

Quite a feasible task for a NN. The question is the preparation of the data, and whether there is anything to find and transform at all. Otherwise the NN will find something that does not really exist, because any data will always contain some regularities.) It will learn them perfectly and find them reliably, but only on this particular time series.) On other time series they simply don't exist, and the NN will talk nonsense. Some people confuse this with overfitting.

 
Yuriy Asaulenko:

Quite a feasible task for a NN. The question is the preparation of the data, and whether there is anything to find and transform at all. Otherwise the NN will find something that does not really exist, because any data will always contain some regularities.) It will learn them perfectly and find them reliably, but only on this particular time series.) On other time series they simply don't exist, and the NN will talk nonsense. Some people confuse this with overfitting.

Rather, I'm talking about transformation and generalization of predictors. To simplify: suppose we have two predictors, and some simple mathematical operation between them yields the same answer for both; that shared property assigns them to one cluster. Quite primitively: like a number raised to the power zero. There could be a number of such transformations, thanks to the formulas inside the neurons and the way a NN works.

Such clusters, as additional features, could improve classification by the trees/forests/boostings themselves.
 
By the way, another problem with automatic tree building is the loss of logical relationships within groups of predictors. Say you have 10 predictors measuring points in space, and you know that it is the combination of these predictors that should reveal the connections within the group, before connections with other groups of predictors are added.
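The earlier idea of feeding cluster membership to a tree ensemble as an extra feature can be sketched as follows. The synthetic XOR-like data, the cluster count, and the choice of GradientBoostingClassifier are illustrative assumptions; whether the extra column actually helps depends entirely on the data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # XOR-like target

# unsupervised step: cluster labels become one extra feature column
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
X_plus = np.hstack([X, km.labels_.reshape(-1, 1)])

# supervised step: boosting on the augmented feature matrix
Xtr, Xte, ytr, yte = train_test_split(X_plus, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print(round(acc, 2))
```

For new data the same fitted KMeans model would supply the cluster column via `km.predict`, so the augmentation stays consistent between training and use.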
 
Yuriy Asaulenko:

If you know this, and it really exists (call it the development cycle of a phenomenon, which again is a regularly recurring event), then you can easily use it.

I can only see this kind of thing in history, after it has already happened. In real time, I'll pass.) By the way, it is normal for us to identify a signal only after it has ended; in signal processing this happens all the time.

Renat Akhtyamov:

Why only in history?

We look at volumes.

The maximum number of ticks always occurs at the same time of day.

So we can easily predict and pin down the time in which either a trend or a flat strategy will work.

 
Renat Akhtyamov:

Why only in history?

We look at volumes.

The maximum number of ticks always occurs at the same time of day.

So we can easily predict and pin down the time when either a trend or a flat strategy will work.

Volumes help predict the change of state from trend to flat, but not "without difficulty". In general, predicting the trend/flat state is not much more accurate than predicting the direction of the next increment per unit of time: accuracy is somewhere around 57%, so whoever was quoting some incredible numbers, that is clearly the result of an error.
