Machine learning in trading: theory, models, practice and algo-trading - page 3305

 
Maxim Dmitrievsky #:

Scale invariance?

Moire effect :)

No) The Slutsky-Yule effect, I found the post. I think some other similar effects were mentioned too, but I don't remember exactly. At least it is now standard to be careful about apparent periodicity, although, of course, the local radio amateurs don't care about any of this)
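A minimal sketch of that effect (Python with numpy, purely illustrative): repeatedly smoothing i.i.d. white noise with a moving average produces gentle "waves" and a pronounced low-frequency spectral peak, although the source series contains no cycles at all.

# Slutsky-Yule effect: a moving average of pure white noise looks periodic.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)            # i.i.d. noise, no cycles by construction

window = 20
kernel = np.ones(window) / window
smoothed = noise.copy()
for _ in range(5):                           # repeated smoothing, as in Slutsky's experiment
    smoothed = np.convolve(smoothed, kernel, mode="same")

spectrum = np.abs(np.fft.rfft(smoothed - smoothed.mean())) ** 2
freqs = np.fft.rfftfreq(len(smoothed))
k = np.argmax(spectrum[1:]) + 1              # skip the zero frequency
print(f"apparent 'period' of smoothed white noise: ~{1.0 / freqs[k]:.0f} bars")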
 
Maxim Dmitrievsky #:

A good paper on how to properly build a time-series representation for neural networks. The FFTs can be dispensed with, of course. And there is also a comparison of different models.

The fundamental difference is that the preprocessing is built into the network architecture. But it can also be done separately.

LSTM is left far behind, because it does not take inter-period variation into account.

Boosting is also somewhere near the bottom of the rankings, judging by their tests.

It seems that the overwhelming majority of ML applications, whatever the tool, are in areas where there is a "natural" relationship between the target (the teacher) and the predictors. For example, weather forecasting: temperature, humidity ...

We, on the other hand, sit here picking our noses, inventing predictors out of our own imagination, and for some reason expect them to predict trade orders.

So any publications with "natural" predictors are of no interest to us. Unfortunately.

 
Aleksey Vyazmikin #:

How is the contrary proven?

In my opinion, there are events tied to time - news, for example. I think if we split them into three sub-samples - as expected, worse, better - and take the context into account, we will notice similar behaviour by market participants.

Another option is seasonality in commodities.

Which is what exactly?

 
СанСаныч Фоменко #:

It seems that the overwhelming majority of ML applications, whatever the tool, are in areas where there is a "natural" relationship between the target (the teacher) and the predictors. For example, weather forecasting: temperature, humidity ...

We, on the other hand, sit here picking our noses, inventing predictors out of our own imagination, and for some reason expect them to predict trade orders.

So any publications with "natural" predictors are of no interest to us. Unfortunately.

Well, this is just about how features are fed to the model, which looks logical. What is done with them afterwards is, of course, an esoteric question.

For example, this approach lets you cram more history into a single sample.
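A minimal sketch of that idea (Python with numpy; the function name and parameters are hypothetical, not taken from the paper): the frequency-domain preprocessing is done as a separate step, and a long history window is compressed into a fixed-size feature vector, so a single sample covers far more history than the same number of raw lags would.

# Hypothetical sketch: spectral features computed as a separate preprocessing
# step rather than inside the network architecture.
import numpy as np

def spectral_features(returns: np.ndarray, n_coeffs: int = 16) -> np.ndarray:
    """Compress an arbitrarily long window of returns into a fixed-size vector
    of the strongest spectral amplitudes."""
    centered = returns - returns.mean()
    amps = np.abs(np.fft.rfft(centered)) / len(centered)
    return np.sort(amps)[::-1][:n_coeffs]    # keep the n_coeffs largest amplitudes

# Usage: a 1024-bar window becomes 16 features, i.e. one training sample
# "sees" much more history than 16 lagged returns would.
rng = np.random.default_rng(1)
window = rng.standard_normal(1024)           # stand-in for real returns
print(spectral_features(window).shape)       # (16,)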
 
Aleksey Nikolayev #:
No) The Slutsky-Yule effect, I found the post. I think some other similar effects were mentioned too, but I don't remember exactly. At least it is now standard to be careful about apparent periodicity, although, of course, the local radio amateurs don't care about any of this)

Well, not necessarily.

There are economic processes with genuine, not invented, periodicity. Harvests, for example. And there are plenty of such processes. There are models for them in which the period is one of the parameters.

Another matter is that the periodicity that really exists has to be separated from the periodicity conjured up with the help of some Fourier analysis, which flourished in Forex - about 10 years ago there was no shortage of radio engineers here. A huge number of people do not understand that a crucial property of any mathematical model is its interpretability: the ability to map this or that parameter or property of the model onto reality. And when, seeing apparent waves of variable period on the charts, we start inventing some supply-and-demand story that comes from who knows where, the result is what you would expect.
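A minimal sketch of one such separation check (Python with numpy; the series here are synthetic stand-ins): a genuinely seasonal process keeps its spectral peak at the same, interpretable period no matter how much data is taken, while a random walk's "dominant period" simply stretches with the window and corresponds to nothing real.

# Hypothetical check: does the "period" stay put when the window changes?
import numpy as np

rng = np.random.default_rng(2)

def dominant_period(x: np.ndarray) -> float:
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x))
    k = np.argmax(spec[1:]) + 1                  # skip the zero frequency
    return 1.0 / freqs[k]

for n in (500, 1000, 2000):
    seasonal = np.sin(2 * np.pi * np.arange(n) / 250) + 0.5 * rng.standard_normal(n)
    walk = np.cumsum(rng.standard_normal(n))     # no periodicity by construction
    print(n, round(dominant_period(seasonal)), round(dominant_period(walk)))
# The seasonal series reports ~250 bars for every window length; the random
# walk's "period" roughly tracks the window length itself.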

 
Maxim Dmitrievsky #:

Well, this is just about how features are fed to the model, which looks logical. What is done with them afterwards is, of course, an esoteric question.

For example, this approach lets you cram more history into a single sample.

With us it is rubbish in, rubbish out - and the overwhelming majority of people have nothing but rubbish at the input.

 
Thinking that Fourier is only about periodicity is like thinking that music is only about rap...

You are not giggling at radio amateurs, you are giggling at your own illiteracy.
 
СанСаныч Фоменко #:

With us it is rubbish in, rubbish out - and the overwhelming majority of people have nothing but rubbish at the input.

What is needed is a hobo algorithm that digs through the rubbish.

"From Rags to Riches" - that is what the series of articles could be called.

 

It seems to be a commonplace, and intuitively obvious, that more thorough training produces a shift from generalisation towards memorisation of the particular sample.

For myself, I explain it like this: if a model whose number of parameters grows is used (a decision tree, for example), then more iterations simply mean more parameters. With models that have a fixed number of parameters it is more complicated, but we can probably say that as the iterations increase, the parameter space gets "used" more fully.

To put it even more simply, the number of options to choose from grows, and it becomes easier to pick exactly what you need. For example, the most trend-like realisation of a random walk chosen from 1000 realisations will look more trending than the one chosen from 100.

PS. This is about this
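A minimal sketch of that selection effect (Python with numpy, purely illustrative): the "most trending" random walk picked out of a larger pool is reliably more trend-like than the best of a smaller pool, even though no candidate has any trend.

# Selection effect: the more random-walk candidates there are to choose from,
# the more "trending" the best of them looks, although all of them are noise.
import numpy as np

rng = np.random.default_rng(3)

def best_trendiness(n_candidates: int, length: int = 500) -> float:
    walks = np.cumsum(rng.standard_normal((n_candidates, length)), axis=1)
    drift = np.abs(walks[:, -1] - walks[:, 0]) / np.sqrt(length)   # net move in sigmas
    return drift.max()

for n in (100, 1000, 10000):
    print(n, round(best_trendiness(n), 2))
# The maximum keeps growing with the pool size: more options to choose from
# make it easier to "find" exactly the behaviour you were looking for.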
 
Aleksey Nikolayev #:
Not so long ago on the forum someone gave the name of the effect (I still haven't found it) because of which series close to a random walk appear to have a period. This effect is associated with many embarrassing moments in science, when periodicity was "found" in processes by means of Fourier analysis, and because of it radio amateurs on this forum will never die out).

This
