Machine learning in trading: theory, models, practice and algo-trading - page 3191

 
Aleksey Nikolayev #:

What were your numerous GIFs with steadily rising balance curves for, then? Maybe you just didn't understand the answer to your question?

For me, the GIFs are a new variant of using the quantum cutoffs I described. They showed that 10 quantum cutoffs are enough, for a particular sample, to get a positive balance on the training sample. Accordingly, I said that randomly selecting the first quantum segment (from those selected earlier) makes it possible to find a sequence for which the balance shows positive growth on the test and exam samples. So there are quantum segments that are effective only on the train sample, and there are those that are effective on the other samples as well. If they behave so differently, then presumably some of them describe a stable pattern and some a false one. Hence the question: is it possible, at the stage of searching for/creating these quantum segments, to screen out the false ones? Clearly the criteria I use are not enough to filter out false quantum segments. The idea of shuffling the target essentially serves as a test to estimate the probability of selecting quantum segments on a random walk (SB).
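The shuffled-target check described above amounts to a permutation test: select segments on the real target, then on a shuffled one, and compare how strong a "pattern" pure chance produces. This is an illustrative sketch only; `select_segments` is a hypothetical stand-in for the actual quantum-segment selection criterion, not the author's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_segments(X, y, n_segments=10):
    """Toy stand-in for quantum-segment selection: score each
    feature-value bin by how far its class-1 share deviates from
    the overall mean (a hypothetical criterion)."""
    bins = np.quantile(X, np.linspace(0, 1, 21))
    scores = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (X >= lo) & (X < hi)
        if mask.sum() > 0:
            scores.append((abs(y[mask].mean() - y.mean()), lo, hi))
    scores.sort(reverse=True)
    return scores[:n_segments]

X = rng.normal(size=5000)
# Target with a weak but real dependence on X
y = (X + rng.normal(scale=2, size=5000) > 0).astype(int)

real = select_segments(X, y)
shuffled = select_segments(X, rng.permutation(y))  # shuffling destroys any real link

# Segments selected on the shuffled target estimate how strong a
# "pattern" chance alone can produce; real selections should beat it.
print(real[0][0], shuffled[0][0])
```

If the best score on the real target is no better than on the shuffled one, the selected segments are indistinguishable from those found on a random walk.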

That's why I don't understand: here I've selected quantum segments on a randomly shuffled target, and now I need to construct a balance curve? In the same way as I showed in the GIFs, but without the randomisation?

It's just that the very method of selecting a sequence of quantum segments, from the ones I previously selected in the sample, is not something I consider complete; rather, it shows the potential of the approach.

That's why I don't understand why, and how, to evaluate it through the balance.

 
I don't remember the specific numbers; it was discussed in the Alexander Schrodinger thread. Here is ChatGPT's answer:

The average entropy in this case is expressed as a number equal to the logarithm of the number of possible paths. This number can be any positive value, depending on the number of steps and the length of each step of the walk. For example, if a random walk occurs on a number line with step length 1 and 10 steps, the number of possible paths is 2^10 = 1024, and the average entropy is equal to the logarithm of 1024, i.e. about 6.93.
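The quoted figure is easy to reproduce: a simple random walk with two equally likely choices per step has 2^n paths, and the entropy of a uniform distribution over them is the logarithm of that count (in nats here, which matches the ≈6.93 value).

```python
import math

# A simple random walk makes two choices (+1 / -1) per step,
# so there are 2**n_steps equally likely paths.
n_steps = 10
n_paths = 2 ** n_steps              # 1024

# Entropy of a uniform distribution over n_paths outcomes.
entropy_nats = math.log(n_paths)    # natural log, as in the quoted figure
entropy_bits = math.log2(n_paths)   # exactly 10 bits: one bit per step

print(n_paths, round(entropy_nats, 2), entropy_bits)
```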


For a number of quote series, the figures were on average comparable to a random walk (SB).

 
Aleksey Vyazmikin #:

Right now, if I remember correctly, the predictor in the tree is simply split at the middle of its range, without searching for the best place to split?

By default it is usually split in half. The ALGLIB forest had an option to split into quarters. I extended it myself with an option to split at any step. I don't know about CatBoost offhand, but the thing is simple, it should have it. CatBoost has a search for splits at a random point.
On OOS the best models are usually obtained when splitting in half. With a small step it overfits very quickly.
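The splitting variants described above (default midpoint, any fixed step, random point) can be sketched for a single predictor. This is an illustrative sketch, not code from ALGLIB or CatBoost; the function name and `mode` values are made up.

```python
import numpy as np

def candidate_splits(values, mode="half", step=0.25):
    """Candidate split thresholds over one predictor's range.
    mode='half'   -> single midpoint (the default described above);
    mode='step'   -> thresholds every `step` fraction of the range;
    mode='random' -> one uniformly random threshold (as described
                     for CatBoost's randomised split search)."""
    lo, hi = values.min(), values.max()
    if mode == "half":
        return np.array([(lo + hi) / 2])
    if mode == "step":
        fracs = np.arange(step, 1.0, step)
        return lo + fracs * (hi - lo)
    if mode == "random":
        return np.array([np.random.default_rng(0).uniform(lo, hi)])
    raise ValueError(mode)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(candidate_splits(x, "half"))        # midpoint of the range
print(candidate_splits(x, "step", 0.25))  # quarter-step thresholds
```

A smaller `step` gives the tree more candidate thresholds per predictor, which is exactly why it overfits faster, as noted above.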

 
Maxim Dmitrievsky #:
From an information point of view the market is random, if you compare the amount of information in a random walk (SB) and in quotes. I did the comparison several years ago. From a layman's point of view: the market changes, patterns change over time.

If you are given data recording turbulent flows from an aeroplane wing, and your test shows that it is random (and it will),

can that data be considered random?

 
mytarmailS #:

If you are given data recording turbulent flows from an aeroplane wing, and your test shows that it is random (and it will),

can that data be considered random?

I have no idea. It's not my test; it's just the amount of information in the data, that is, the number of predictable sequences.

It's more about whether quotes can be reliably distinguished from a random walk (SB). My understanding is no.
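One crude way to compare "the amount of information" in two series, in the spirit described above, is the compressibility of the up/down step sequence: the fewer predictable sequences, the closer the compressed size is to the raw size. This sketch uses zlib as a stand-in and is not the actual test referred to in the thread.

```python
import zlib
import numpy as np

rng = np.random.default_rng(42)

def compress_ratio(steps):
    """Compressed size of the packed up/down bit sequence relative
    to its raw size. Near 1.0 means nearly incompressible, i.e.
    random-walk-like; much less than 1.0 means structure exists."""
    packed = np.packbits((steps > 0).astype(np.uint8)).tobytes()
    return len(zlib.compress(packed, 9)) / len(packed)

sb = rng.choice([-1, 1], size=20000)             # pure random-walk steps
trend = np.sign(np.sin(np.arange(20000) / 50))   # strongly structured series

print(compress_ratio(sb), compress_ratio(trend))
```

If quote-increment signs compress about as badly as the random-walk steps, this crude proxy cannot tell the two apart, which matches the claim above.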
 
Maxim Dmitrievsky #:

ChatGPT's response:

The average entropy in this case is expressed as a number equal to the logarithm of the number of possible paths. This number can be any positive value, depending on the number of steps and the length of each step of the walk. For example, if a random walk occurs on a number line with step length 1 and 10 steps, the number of possible paths is 2^10 = 1024, and the average entropy is equal to the logarithm of 1024, i.e. about 6.93.

For a number of quote series, the figures were on average comparable to a random walk (SB).

From personal experience.

It can't be a fluke, as I have many similar pictures left.

The market is not random, that's unequivocal)

 
Aleksandr Slavskii #:

From personal experience.

It can't be a fluke, as I have a lot of similar pictures left.

4 trades are not statistically significant.
Again, if these are ticks, it's hard for me to argue, because I haven't worked with them yet. I ran my tests on closing prices.
 
Forester #:

By default it is usually split in half. The ALGLIB forest had an option to split into quarters. I extended it myself with an option to split at any step. I don't know about CatBoost offhand, but the thing is simple, it should have it. CatBoost has a search for splits at a random point.
On OOS the best models are usually obtained when splitting in half. With a small step it overfits very quickly.

With CatBoost it's done via the quantisation table :)

Maybe you could implement support for such a table at the same time, for compatibility?

I have attached a file in CatBoost's standard format: the first column is the predictor number and the second column is the split value.

Files:
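A minimal sketch of writing a borders file in the two-column format described above (tab-separated: zero-based predictor index, then the split value). The file name and border values here are illustrative; per CatBoost's documentation, such a file can be supplied as custom quantisation borders.

```python
# Hypothetical borders: (predictor index, split value) pairs.
# A predictor may appear on several lines, one per border.
borders = [
    (0, 0.5),
    (0, 1.25),
    (3, -0.7),
]

# Write the tab-separated file in the format described above.
with open("custom_borders.tsv", "w") as f:
    for feature_idx, border in borders:
        f.write(f"{feature_idx}\t{border}\n")

print(open("custom_borders.tsv").read())
```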
 
Aleksandr Slavskii #:

From personal experience.

It can't be a fluke, as I have a lot of similar pictures left.

The market is not random, that's for sure)

Oooh, yes, I wanted to give this example many times too, but it always slipped my mind.

 

Just give me a long trading history, all right?

If you have found your inefficiency, you can only be happy for you, but it does not change the fact that the market is a random walk (SB), or almost one.

Casinos also have hot streaks where people win.