Machine learning in trading: theory, models, practice and algo-trading - page 3039

 
Mikhail Mishanin #:

Very correct and competent reasoning, though of course contradictory: the point is not "...the construction of a sustainable TC", but precisely the technology of constructing/selecting/evolving sustainably profitable TC (trading systems), and that is what one should strive for.

It is like going to a shop to buy clothes with no settled idea of what you would like, in what style, for what season; as a result the item will never be worn and the money will be wasted.

 
СанСаныч Фоменко #:

You have a column called "classification error", but now classification is irrelevant.

If you don't print the classification error on the screen, how do you know it's irrelevant? Print whatever is interesting.
Print it and show that even 9% can be random while 10% already drains the account. That would be interesting. Here's an example for you with 20%.

And your graph, as I understand it, we won't get to see.

 
Andrey Dik #:

I knew you would not pass my message by. In fact, I knew you would speak up. Unfortunately, you do not realise that what is highlighted in bold red is an FF... I don't understand why you're so allergic to FFs.

By the way, you can make a lot of useful things out of rubbish: it's called recycling. I'll say more: it is precisely the presence of "rubbish" that still allows one to make a stable profit on the markets; there are examples even on this forum.

A post earlier you wrote:

" The problem is that so far no one has managed to find such rules for FF (I have not seen, at least)."

The task is really difficult and infeasible with off-the-shelf ML models: with the ML model as a black box, to search inside it for something about the predictive power of predictors, with the balance as the goal.

An insanely difficult task.

And unnecessary.

We solve it step by step, which is what I do in practice: I solve the problem of estimating the predictive ability of predictors, then fit the model as a black box, then work with the results of the fit. As of today, already at the EA-testing level, it turns out that my teacher (the target variable) is somewhat odd. I need to work on the teacher.

But the main thing in my scheme is that an already very complex problem, further complicated by the attempt to construct an FF, is broken down into independent stages, and the problem becomes tractable.
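
A minimal sketch of that staged scheme in Python, assuming numpy arrays and scikit-learn; the predictive-ability score (mutual information), the black-box model (a random forest) and all thresholds are illustrative stand-ins, not the author's actual tools:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def staged_pipeline(X, y, keep=10):
    # Stage 1: estimate the predictive ability of each predictor
    # (mutual information is just one possible score).
    scores = mutual_info_classif(X, y)
    selected = np.argsort(scores)[-keep:]  # keep the strongest predictors

    # Stage 2: fit the model as a black box on the selected predictors;
    # shuffle=False keeps the time order of the series intact.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X[:, selected], y, test_size=0.3, shuffle=False)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_tr, y_tr)

    # Stage 3: work with the results of the fit -- compare in-sample and
    # out-of-sample error before trusting the teacher (target variable).
    err_tr = 1 - accuracy_score(y_tr, model.predict(X_tr))
    err_te = 1 - accuracy_score(y_te, model.predict(X_te))
    return selected, model, err_tr, err_te
```

Each stage can then be inspected and replaced independently, which is the whole point of breaking the problem up.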

 
Forester #:

If you don't print the classification error on the screen, how do you know it's irrelevant? Print whatever is interesting.
Print it and show that even 9% can be random while 10% already drains the account. That would be interesting. Here's an example for you with 20%.

And your graph, as I understand it, we won't get to see.

Nothing you wrote makes sense to me.

A classification error (how was it calculated?) of 10% is a clear sign of overfitting. To disprove overfitting, you need the classification error both on the training set and "out of sample": the two should be approximately equal.

I have a perfectly workable Expert Advisor with a classification error slightly below 20%. At the same time, the percentage of losing trades in the tester is just over 20%.

I would like to see consistent classification figures that would prove the absence of overfitting, and to understand how the balance is obtained from the classification.
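
One way to read both requests together: compare the train and out-of-sample errors, then turn the out-of-sample predictions into a balance curve. The sketch below assumes a scikit-learn-style classifier and the crude simplification that every correct prediction is a winning trade of fixed size and every wrong one a losing trade of the same size; all names are invented for illustration:

```python
import numpy as np
from sklearn.metrics import accuracy_score

def report(model, X_tr, y_tr, X_te, y_te, trade_pnl=1.0):
    # Consistent classification figures: train vs out-of-sample error
    # should be approximately equal if there is no overfitting.
    err_tr = 1 - accuracy_score(y_tr, model.predict(X_tr))
    err_te = 1 - accuracy_score(y_te, model.predict(X_te))
    print(f"train error {err_tr:.1%} vs out-of-sample error {err_te:.1%}")

    # Balance from classification: each correct prediction counts as a
    # winning trade, each mistake as a losing trade of equal size.  Under
    # this assumption a ~20% error maps directly to ~20% losing trades.
    pnl = np.where(model.predict(X_te) == y_te, trade_pnl, -trade_pnl)
    return np.cumsum(pnl)  # the balance curve
```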

 
СанСаныч Фоменко #:

A post earlier you wrote:

" The problem is that no one has yet managed to find such rules for FF (I haven't seen any, at least)."

The task is really difficult and infeasible with off-the-shelf ML models: with the ML model as a black box, to search inside it for something about the predictive power of predictors, with the balance as the goal.

An insanely difficult task.

And unnecessary.

We solve it step by step, which is what I do in practice: I solve the problem of estimating the predictive ability of predictors, then fit the model as a black box, then work with the results of the fit. As of today, already at the EA-testing level, it turns out that my teacher (the target variable) is somewhat odd. I need to work on the teacher.

But the main thing in my scheme is that an already very complex problem, further complicated by the attempt to construct an FF, is broken down into independent stages, and the problem becomes tractable.

The part in bold is the point: you ALWAYS use some fitness function. Evaluating predictive ability is a criterion of goodness (stability); fitting is a criterion of fit; working with the results is evaluation by some metrics plus selection. That is, at any stage, whatever we do, there is an FF as an evaluative description of what we want to get as a result. You have broken the overall integral evaluation down into small differential ones, but the essence has not changed: you still use FFs in your work.
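
To make that point concrete, a small sketch in which the implicit fitness function of each stage is written out explicitly; the thresholds and the Sharpe-like score are arbitrary illustrations, not anyone's actual criteria:

```python
import numpy as np

def ff_predictor(score, threshold=0.01):
    # Stage 1 FF: a goodness/stability criterion for a single predictor.
    return score > threshold

def ff_fit(err_train, err_oos, tolerance=0.05):
    # Stage 2 FF: a criterion of fit -- demand comparable errors
    # in-sample and out-of-sample.
    return abs(err_train - err_oos) < tolerance

def ff_selection(balance_curve):
    # Stage 3 FF: a metric for selecting among fitted models,
    # here a Sharpe-like ratio of the balance increments.
    returns = np.diff(balance_curve)
    return returns.mean() / (returns.std() + 1e-9)
```

Whether these are called criteria or fitness functions, each stage is still optimising something, which is exactly the argument above.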

 

Who's to blame if you bought a crappy jacket? The manufacturer? The seller? The sheep? You can go all the way down to the atoms the jacket is made of, but the jacket won't get any better and you still won't understand why it sucks. What's to blame is your criteria for evaluating a jacket when buying it, that is, your FF.

If no jacket fits the FF, then either such a jacket does not exist and you don't need to buy anything, or you need to change the FF)))

 
Andrey Dik #:

Who's to blame if you bought a crappy jacket? The manufacturer? The seller? The sheep? You can go all the way down to the atoms the jacket is made of, but the jacket won't get any better and you still won't understand why it sucks. What's to blame is your criteria for evaluating a jacket when buying it, that is, your FF.

If no jacket fits the FF, then either such a jacket does not exist and you don't need to buy anything, or you need to change the FF)))

Or you can make a jacket: adjust the sleeves to arms of different lengths, fit the hump precisely, and beautifully play up the lump of nerves in front. And there will be a jacket. That's the difference in our approaches. Long live the tailors!

 
СанСаныч Фоменко #:

Or you can make a jacket: adjust the sleeves to arms of different lengths, fit the hump precisely, and beautifully play up the lump of nerves in front. And there will be a jacket. That's the difference in our approaches. Long live the tailors!

A tailor won't help either if the person doesn't know what he needs. A rubber jacket will fit perfectly, but it will make your belly sweat, all because the customer doesn't know what he wants.
 
СанСаныч Фоменко #:

Classification error (how was it calculated?)

Are there any options? ))))

 
Maxim Dmitrievsky #:

Study this:

https://www.arxiv-vanity.com/papers/1910.13051/

The article contains many references to other state-of-the-art methods of time-series classification and of signal and pattern extraction.

There is nothing about inefficiencies, but that, as they say, is homework.

Tried generating x10 and x100 features from the original ones. The error is larger than on the original dataset, and training speed suffers too.

Scrapped it.

But the experience with numba is positive: it computes the kernels very fast.
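
For reference, a minimal numba sketch of the kind of kernel computation the linked paper (ROCKET) describes: random convolutional kernels, each pooled into two features, the proportion of positive values (ppv) and the maximum. It is deliberately simplified (no dilation or padding), and all shapes and parameters are illustrative assumptions:

```python
import numpy as np
from numba import njit

@njit
def apply_kernels(X, weights, biases):
    # X: (n_series, series_len); one random kernel per row of `weights`.
    n_series, series_len = X.shape
    n_kernels, k_len = weights.shape
    n_pos = series_len - k_len + 1
    feats = np.zeros((n_series, n_kernels * 2))
    for i in range(n_series):
        for k in range(n_kernels):
            ppv = 0.0
            mx = -1e18
            for t in range(n_pos):
                s = biases[k]
                for j in range(k_len):
                    s += weights[k, j] * X[i, t + j]
                if s > mx:
                    mx = s
                if s > 0.0:
                    ppv += 1.0
            feats[i, 2 * k] = ppv / n_pos   # proportion of positive values
            feats[i, 2 * k + 1] = mx        # max pooling
    return feats

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 256))   # 100 series of length 256
W = rng.standard_normal((50, 9))      # 50 random kernels of length 9
b = rng.standard_normal(50)
features = apply_kernels(X, W, b)     # fast once JIT-compiled
```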