Machine learning in trading: theory, models, practice and algo-trading - page 1226

 
Martin Cheguevara:


see that little rectangle over there - that is exactly what I have described here.

And since you probably do not understand much about "workability", I can only sympathize with you.

If you think an algorithm will just appear and work, universal for everything and everywhere, then you're very wrong.

And as you can see... or maybe not... what's written in the red rectangle is fully consistent with the current situation on the market.)

You should at least respect those who read your posts, you misfits... there's no interest left in reading all this crap, and on top of that you twist words.

We drove them out of here with a fucking broom, but no, they keep crawling back.
 
Maxim Dmitrievsky:

You should at least respect those who read your posts, you misfits... there's no interest left in reading all this crap, and on top of that you twist words.

Okay) I'll respect you all the more for being underdeveloped.)

 

Kesha Rutov:

You need to know the STATE OF THE MARKET: when there is a trend and when there is a flat. Channels, grids, etc. work in a flat; the "wagons" work in a trend.

toxic:

This idea, to put it mildly, is not new; this kind of "insight" comes to mind about a couple of months after one's first acquaintance with a trading terminal))) But we should remember "no free lunch" and that practice is the criterion of truth.

Building the features and targets for trend/flat detectors with ML is not quite as trivial a task as it seems at first glance.
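As an illustration only: one of many possible ways to define such a trend/flat target is a Kaufman-style efficiency ratio computed over a future window. The sketch below is a minimal version of that idea; the function name, horizon, and threshold are arbitrary placeholders, not anything recommended in this thread.

```python
import numpy as np

def trend_flat_label(prices, horizon=50, threshold=0.3):
    """Toy trend/flat target: efficiency ratio over a FUTURE window.
    Near 1 = directional move (trend), near 0 = price churned in place (flat).
    horizon and threshold are illustrative placeholders."""
    labels = np.full(len(prices), np.nan)
    for t in range(len(prices) - horizon):
        window = prices[t:t + horizon + 1]
        net = abs(window[-1] - window[0])           # net displacement
        path = np.sum(np.abs(np.diff(window)))      # total path length
        er = net / path if path > 0 else 0.0
        labels[t] = 1.0 if er > threshold else 0.0  # 1 = trend, 0 = flat
    return labels
```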

 
Martin Cheguevara:


...what's written in the red rectangle is fully consistent with the current situation on the market)

it's natural, because a straitjacket is supposed to fit the size of insanity.

 
revers45:

This is natural, because the straitjacket is supposed to be the size of insanity.

Insanity has no size, just as chaos has no degrees of freedom; the straitjacket is, in essence, common sense, and that does have a size.

Insanity is the way common sense defines the line beyond which there is no longer any need to measure anything.

Insane is the one who tries to measure the immeasurable ;)

 
toxic:

how do you do preprocessing, in general terms? there are a lot of "features" after all; mostly it's a game of selecting features and labels

 

There are a lot of possible features; in general it's hard to say anything useful, and as for specifics, well, you understand...

There are a number of "rules" that allow you to avoid gross mistakes when building features from time series. One of the most frequently violated is the banal "mixing" of features with targets: features must be built strictly from the past and targets strictly from points in the future, and this separation should live at the level of the algorithm that transforms the series into a dataset, not the way everybody does it, computing all kinds of indicators, then cutting into train and test, shifting something somewhere, and so on. You have to cut the initial series first and then run sliding windows (a past one and a future one) over it, obtaining features and targets separately for train and test, or even for validation. Of course you can do it with indicators too, if you know for certain that the indicator used for features does not look forward and the one used for targets does not look backward. There are a number of even subtler errors, but I won't go into them now.
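A minimal sketch of that separation, assuming nothing more than a 1-D array of prices: the past window feeds the features, the future window feeds the target, the two never overlap, and the chronological train/test/validation cut is made only after the dataset has been built. The names, window sizes, and the toy feature/target choices are placeholders.

```python
import numpy as np

def make_dataset(prices, past_window=32, future_window=8):
    """Turn one series into (features, targets) with a strict past/future split:
    features at anchor t use only prices[t - past_window : t],
    the target at t uses only prices[t : t + future_window]."""
    X, y = [], []
    for t in range(past_window, len(prices) - future_window):
        past = prices[t - past_window:t]          # strictly before the anchor
        future = prices[t:t + future_window]      # strictly from the anchor onward
        feats = np.diff(np.log(past))             # toy features: past log-returns
        target = np.sign(np.log(future[-1] / future[0]))  # toy target: sign of future move
        X.append(feats)
        y.append(target)
    return np.array(X), np.array(y)

# chronological split AFTER the dataset is built, no shuffling
prices = 1000.0 + np.cumsum(np.random.randn(5000))
X, y = make_dataset(prices)
n_train, n_test = int(0.6 * len(X)), int(0.2 * len(X))
X_train, y_train = X[:n_train], y[:n_train]
X_test,  y_test  = X[n_train:n_train + n_test], y[n_train:n_train + n_test]
X_val,   y_val   = X[n_train + n_test:], y[n_train + n_test:]
```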

The transformations themselves vary from the trivial (returns, variation, volume variation, order-book deltas, distribution of deals, etc.) to various exotics: gradients of horizontal levels of traded volume, "patterns" (where would we be without them))), and dozens of specific custom statistics obtained by "inspiration" or by clustering that turned out to be useful, such as the "trend/flat" mentioned by Kesha above, "order/chaos" and the like. Some statistics hold across different time scales, some do not; some features work for some instruments and not for others, and you have to be able to filter and select features for the target. There is a lot of it: standard ARMA models, GARCH... mid- and long-term macro forecasts as features, etc. I haven't got around to NLP/NLU for analysing text streams from social networks and so on; that is exactly where deep learning would be needed.
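For the "trivial" end of that list, a small pandas sketch of returns, rolling variation, and volume deltas; the 'close' and 'volume' column names are assumptions about the input frame, and the window length is arbitrary.

```python
import numpy as np
import pandas as pd

def basic_features(df, window=20):
    """Sketch of the trivial transforms: df is assumed to have
    'close' and 'volume' columns on a regular time grid."""
    out = pd.DataFrame(index=df.index)
    out['ret'] = np.log(df['close']).diff()                    # log returns
    out['vol'] = out['ret'].rolling(window).std()              # variation (rolling volatility)
    out['volume_delta'] = df['volume'].diff()                  # volume variation
    out['volume_z'] = ((df['volume'] - df['volume'].rolling(window).mean())
                       / df['volume'].rolling(window).std())   # volume z-score
    return out
```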

yeah, I wish there were some "magic", like I rearrange the values and the errors drop drastically everywhere :)

ARMA and GARCH for the input - nice, I've done something similar myself but never finished it (in particular, the linear dependence of USDX on EURUSD as a feature works fine).
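One way such a linear USDX/EURUSD dependence could be cast as a feature (only a guess at what is meant, not the poster's actual construction) is a rolling beta plus its residual; the beta is lagged by one bar so the feature itself does not peek forward, and both inputs are assumed to be aligned pandas Series of closes.

```python
import numpy as np
import pandas as pd

def usdx_link_features(eurusd_close, usdx_close, window=100):
    """Rolling linear dependence of EURUSD returns on USDX returns:
    beta over the last `window` bars, plus the residual of the current
    EURUSD return against the PREVIOUS bar's beta (no look-ahead)."""
    r_eur = np.log(eurusd_close).diff()
    r_usdx = np.log(usdx_close).diff()
    beta = r_eur.rolling(window).cov(r_usdx) / r_usdx.rolling(window).var()
    resid = r_eur - beta.shift(1) * r_usdx
    return pd.DataFrame({'beta_usdx': beta, 'resid_usdx': resid})
```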

I don't think I'll parse anything... too lazy )), unless MT5 provides access to the calendar. People make good news bots even without NNs; I haven't traded them myself, but I've seen the reports, and the dealing centers ban them.

 
toxic:

"Only you should not check it in real life, for such magic the doll will beat you badly and you won't have to parsing

Exactly the same "magic" happened recently with LDA (linear discriminant analysis) for transforming features. Who knew that a classifier on top of a classifier shows nice pictures only on train and test, but not on validation.)

It's almost the same with PCA... I don't know what idiot came up with using it for forecasting time series and wrote it up, but a lot of people picked it up. As if tree-based models need that kind of preprocessing ) but I had to check.
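To check whether the "nice pictures only on train and test" come from a leak or from the transform itself, the transform has to be fitted strictly on the training fold of a chronological split. A sketch along those lines, on synthetic placeholder data (so an honest score should hover near 0.5), might look like this.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline

def chrono_split(X, y, train=0.5, test=0.25):
    """Split in time order, no shuffling: train -> test -> validation."""
    n = len(X)
    i, j = int(n * train), int(n * (train + test))
    return (X[:i], y[:i]), (X[i:j], y[i:j]), (X[j:], y[j:])

# placeholder data: random features and labels stand in for a real dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 20))
y = (rng.random(3000) > 0.5).astype(int)

train, test, val = chrono_split(X, y)

# LDA is fitted only on the training fold (inside the pipeline), so the
# test/validation scores are not inflated by the transform having seen them
model = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                      GradientBoostingClassifier())
model.fit(*train)
print('train:', model.score(*train))
print('test :', model.score(*test))
print('val  :', model.score(*val))
```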

 
Maxim Dmitrievsky:

It's almost the same with PCA... I don't know what idiot came up with using it for forecasting time series and wrote it up, but a lot of people picked it up. As if tree-based models need that kind of preprocessing.

I haven't used PCA; purely intuitively, has any justification appeared for it being harmful for time series?

 
elibrarius:

I haven't used PCA; purely intuitively, has any justification appeared for it being harmful for time series?

On a non-stationary market it turns out roughly the same as without it, only worse... The principal components selected on the train set start to "jump" on the OOS, and as a result PCA effectively overfits.

I think this is true for any decomposition method, or conversely for dimensionality reduction. The features have to be strictly standardized and normalized for it, but even that doesn't save it...
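A rough way to see those "jumping" components for yourself: fit PCA on the first half of a feature matrix built from a synthetic non-stationary series, refit it on the second half, and compare the loading vectors. A |cos| well below 1 for matching components means the directions found in-sample are not the ones that matter out-of-sample. The lag count and number of components here are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(size=6000))    # synthetic random-walk "price"
rets = np.diff(prices)

# feature matrix of lagged returns (purely illustrative)
lags = 10
X = np.column_stack([rets[i:len(rets) - lags + i] for i in range(lags)])

half = len(X) // 2
scaler = StandardScaler().fit(X[:half])      # scale with train statistics only
pca_train = PCA(n_components=3).fit(scaler.transform(X[:half]))
pca_oos = PCA(n_components=3).fit(scaler.transform(X[half:]))

for k in range(3):
    # |cosine| between matching loading vectors; sign ambiguity removed by abs()
    c = abs(np.dot(pca_train.components_[k], pca_oos.components_[k]))
    print(f'component {k}: |cos| = {c:.2f}')
```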
