Machine learning in trading: theory, models, practice and algo-trading - page 1387

 
Maxim Dmitrievsky:

this is just the latest return value for each lag; it makes no difference what you call it

You don't have just one example in your sample but many; the sequence of such prices for each feature will be a sequence of increments

Something is off with my terminology. ) Increments, as I understand it, are price differences. I don't have differences; I move the zero of the coordinate axes to the beginning of the sample. It is all a pure affine transformation, just a projection.
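To make the distinction concrete, here is a minimal sketch in Python (NumPy assumed; the synthetic series and variable names are illustrative, not from the thread) contrasting the zero-point shift described above with taking increments: the shift keeps the path's level dependence, the increments remove it.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 1000))   # synthetic price path (random walk)

# Zero-shift variant from the post: move the origin to the first bar.
# The path keeps its shape and its bar-to-bar dependence on the level.
shifted = prices - prices[0]

# Returns variant: bar-to-bar increments. The level is gone.
diff_returns  = np.diff(prices)                    # absolute increments
ratio_returns = prices[1:] / prices[:-1] - 1.0     # relative increments

def lag1_autocorr(x):
    """Lag-1 autocorrelation as a rough 'memory' indicator."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print("shifted path :", lag1_autocorr(shifted))        # close to 1: level memory kept
print("diff returns :", lag1_autocorr(diff_returns))   # close to 0: memory removed
print("ratio returns:", lag1_autocorr(ratio_returns))  # close to 0 as well
```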

 
Yuriy Asaulenko:

Something is off with my terminology. ) Increments, as I understand it, are price differences. I don't have differences; I move the zero of the coordinate axes to the beginning of the sample. It is all a pure affine transformation, just a projection.

What's the difference whether it's a ratio or a difference? They're just different units of measure.

And of course it's not an affine transformation, it's just returns.

With the affine version the non-Markov property remains; with returns it is killed.

You made a process without memory, roughly speaking.

And every return with a different lag ends up in a separate dimension... and the NN (neural network) has to somehow navigate what is almost one and the same thing, but lying in different dimensions? I don't get it...

So the dimension turns out to be essentially the same, but the NN treats it as lying in different dimensions... it's like the looking-glass
 

Here you have a return on the X axis, a return on the Y axis and a return on the Z axis, and the lags differ only by one, i.e. they are strongly correlated with each other.

And what is the NN supposed to find in this data?

Let's simplify: two axes, X and Y, holding two memoryless Markov processes, in effect noise, which are supposed to find something important in each other. And that somehow has to be projected onto the output class. The result is a 50/50 error, as expected. Zero useful information.
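As a hedged illustration of this point (Python with NumPy and scikit-learn assumed; the random-walk data and variable names are hypothetical): lagged returns of a memoryless series are strongly correlated with each other, yet a classifier trained on them scores about 50% out of sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
N = 20000
prices = np.cumsum(rng.normal(0, 1, N))              # memoryless random walk

lags = (1, 2, 3)
t0 = max(lags)
idx = np.arange(t0, N - 1)                           # need t+1 for the target
X = np.column_stack([prices[idx] - prices[idx - k] for k in lags])  # lagged returns
y = (prices[idx + 1] - prices[idx] > 0).astype(int)  # next bar up/down

# Neighbouring lags overlap, so the features are strongly correlated...
print(np.corrcoef(X, rowvar=False).round(2))

# ...but out-of-sample accuracy stays near the 50/50 coin toss,
# because the next increment is independent of everything before it.
split = len(X) // 2
clf = LogisticRegression().fit(X[:split], y[:split])
print("out-of-sample accuracy:", round(clf.score(X[split:], y[split:]), 3))
```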

 
Maxim Dmitrievsky:

What's the difference whether it's a ratio or a difference? They're just different units of measure.

And of course it's not an affine transformation, it's just returns.

With the affine version the non-Markov property remains; with returns it is killed.

You made a process without memory, roughly speaking.

And every return with a different lag ends up in a separate dimension... and the NN has to somehow navigate what is almost one and the same thing, but lying in different dimensions? I don't get it...

So the dimension turns out to be essentially the same, but the NN treats it as lying in different dimensions... it's like the looking-glass

You clearly haven't understood the algorithm.

In simple terms. )) We do the same thing as in photography: we set the shutter speed so that the light fits the sensor's range, and use the zoom so that the object fits in the frame, be it a hut or a high-rise. The transformations are completely equivalent. There is no distortion.
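One common way to read the "shutter speed and zoom" analogy is per-window min-max scaling, sketched below as an assumption (Python; the function name is made up for illustration): every window is mapped into the same fixed range, so a cheap and an expensive instrument end up identically framed.

```python
import numpy as np

def fit_to_frame(window, lo=-1.0, hi=1.0):
    """Affine rescale of one price window into [lo, hi] ('zoom to fit the frame')."""
    w = np.asarray(window, dtype=float)
    span = w.max() - w.min()
    if span == 0:                                  # flat window: put it in the centre
        return np.full_like(w, (lo + hi) / 2)
    return lo + (w - w.min()) * (hi - lo) / span

print(fit_to_frame([100, 101, 103, 102]))          # cheap instrument ("hut")
print(fit_to_frame([10000, 10100, 10300, 10200]))  # expensive one ("high-rise"), same output
```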

 
Yuriy Asaulenko:

You clearly haven't understood the algorithm.

In simple terms. )) We do the same thing as in photography: we set the shutter speed so that the light fits the sensor's range, and use the zoom so that the object fits in the frame, be it a hut or a high-rise. The transformations are completely equivalent. There is no distortion.

But this object is not a Picasso painting or even a Malevich square, it's random relic noise, so you get the same thing at the output.

Although Picasso is a poor contrast to noise.
 
Maxim Dmitrievsky:

But this object is not a Picasso painting or even a Malevich square, it's random relic noise, so you get the same thing at the output.

Although Picasso is a poor contrast to noise.

It makes no difference to me, of course. But with the methods that have been mentioned here you will have to invent your own scaling scheme for every instrument. Moreover, even for a single instrument you will have to change the scale when the price moves significantly, and retrain as well. Otherwise your ML will go unconscious, and you will be surprised that it has stopped working.
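A small sketch of the re-scaling issue raised here, under assumed parameters (Python; the regime jump, window length and names are illustrative): a scale fitted once on old data stops covering the series after a large price move, whereas a rolling scale catches up within one window.

```python
import numpy as np

rng = np.random.default_rng(2)
prices = np.concatenate([100 + np.cumsum(rng.normal(0, 0.5, 500)),
                         200 + np.cumsum(rng.normal(0, 0.5, 500))])   # big level jump

# Scale fixed once on the first half only: the second half falls outside it.
lo, hi = prices[:500].min(), prices[:500].max()
fixed = (prices - lo) / (hi - lo)
print("fixed scale, 2nd half:", round(fixed[500:].min(), 2), "...", round(fixed[500:].max(), 2))

# Rolling min-max over the last `window` bars re-adapts; within `window` bars
# after the jump it is back on scale.
window = 100
rolling = np.array([
    (prices[t] - prices[t - window:t].min())
    / max(prices[t - window:t].max() - prices[t - window:t].min(), 1e-9)
    for t in range(window, len(prices))
])
late = rolling[-300:]                      # well after the regime change
print("rolling scale, late part:", round(late.min(), 2), "...", round(late.max(), 2))
```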

 
Yuriy Asaulenko:

It makes no difference to me, of course. But with the methods that have been mentioned here you will have to invent your own scaling scheme for every instrument. Moreover, even for a single instrument you will have to change the scale when the price moves significantly, and retrain as well. Otherwise your ML will go unconscious, and you will be surprised that it has stopped working.

So you're doing something that by definition cannot work, but you don't care? ) ok

As for the other methods I suggested, that was just an abstract suggestion; it doesn't bind anyone to anything.

 
Maxim Dmitrievsky:

So you're doing something that by definition cannot work, but you don't care? ) ok

As for the other methods I suggested, that was just an abstract suggestion; it doesn't bind anyone to anything.

On the contrary, I'm doing the only thing that can work. )) And that it works was shown earlier, roughly speaking, on a coin toss: each toss runs for 5 minutes and the win/loss is random. The shift in the probability of winning is obvious from the graphs. And it was shown not only for the NN but also by Aleksey Vyazmikin for random forests, thanks to him.

 
Yuriy Asaulenko:

On the contrary, I'm doing the only thing that can work. )) And that it works was shown earlier, roughly speaking, on a coin toss: each toss runs for 5 minutes and the win/loss is random. The shift in the probability of winning is obvious from the graphs. And it was shown not only for the NN but also by Aleksey Vyazmikin for random forests, thanks to him.

Well, that was shown on a scatter diagram, but not on new data in a normal tester.

I.e. no conclusions can be drawn from it... moreover, those were some artificial series.

What's more, it was a blob rather than a line, i.e. completely random, as I recall.

 
Maxim Dmitrievsky:

Well, that was shown on a scatter diagram, but not on new data in a normal tester.

I.e. no conclusions can be drawn from it... moreover, those were some artificial series.

What does a "normal tester" mean? The MT one, or what? I think mine is even more normal.

All right, the main thing here is that I've drawn my own conclusions. )

By the way, those were not artificial series but market ones, the Sber futures. ) And Aleksey was already working with the futures, and he did test properly on an independent sample. You missed something. )
