Testing real-time forecasting systems

 
grasn wrote >>

I guess it'll be too late :o( But that's OK!!! I'll make more predictions, umpteen new trajectories, and one of them will probably turn out to be successful, and we'll justify the entropy! :о))))))))))

It will not be too late. No one is forgotten and nothing is forgotten. :-)

And what you predict will also be useful. :-)))

 

Sell at the market open, target 5438, stop in the 5514 area.

 
Yurixx >> :

A picture, please!

Hi Sergey. Could you, as the hero of this thread, re-post the picture with the forecast you made before the holiday, together with what you actually got?

That is, all the predicted trajectories + the real price movement in one picture.

I still have this indicator in the terminal (I was going to delete it). Here is the picture. It is clear enough. Maybe it will be enough.


 
grasn >> :

By the way, what is the information entropy of false information? :о)

For entropy, any information is equivalent. Take, for example, all forecast realizations with the same entropy: some of them give more correct predictions, some less. That is, the amount of information in them is equal, but the quality of the prediction differs, even though the model is the same. I was talking specifically about entropy as an indicator of prediction quality. It all started with choosing "favourites" by entropy value ;-).
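A minimal sketch of this point, with made-up numbers: two forecast distributions over price bins that carry exactly the same Shannon entropy (one is a permutation of the other) yet put very different probability on the bin the price actually landed in.

```python
# Two forecasts with equal information content but unequal prediction quality.
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

# Hypothetical forecasts over 4 price bins; B is a permutation of A,
# so their entropies are identical by construction.
forecast_a = [0.6, 0.2, 0.1, 0.1]
forecast_b = [0.1, 0.1, 0.2, 0.6]

realized_bin = 0                      # the bin the price actually landed in

print(shannon_entropy(forecast_a))    # ~1.57 bits
print(shannon_entropy(forecast_b))    # ~1.57 bits, same amount of information
print(forecast_a[realized_bin])       # 0.6 -> good forecast
print(forecast_b[realized_bin])       # 0.1 -> poor forecast, same entropy
```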

grasn >> :

Entropy as such has nothing to do with it. It is entirely determined by the embedding of the system (its dimensionality). Only that, nothing else. The higher the dimensionality, the harder the system is to predict, that's all. Well, each dimension brings its own "slice" of entropy. There can be "a lot" of entropy and the system can still be quite "understandable".

Still, according to that formula, the forecast horizon is inversely proportional to the entropy. Perhaps I should not have mixed up quality and forecast horizon; that inaccuracy is on me. But both are needed in equal measure. As for the connection between dimensionality and entropy, that could turn into too long a conversation, so I won't develop the topic here ;-).
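The formula itself is not reproduced on this page, but the standard predictability-horizon estimate from chaos theory has exactly this inverse dependence on the K-entropy; presumably something like

$$ T_{\text{pred}} \sim \frac{1}{K}\,\ln\frac{\Delta}{\delta} $$

where $K$ is the Kolmogorov entropy, $\delta$ the initial measurement error, and $\Delta$ the tolerance beyond which the forecast is considered failed. Halving $K$ doubles the horizon, which is the inverse proportionality mentioned above.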
 
marketeer >> :

For entropy, any information is equivalent. Take, for example, all forecast realizations with the same entropy: some of them give more correct predictions, some less. That is, the amount of information in them is equal, but the quality of the prediction differs, even though the model is the same. I was talking specifically about entropy as an indicator of prediction quality. It all started with choosing "favourites" by entropy value ;-).

That's not what I was talking about :o) Anyway, it doesn't matter.

Still, according to that formula, the forecast horizon is inversely proportional to the entropy. Perhaps I should not have mixed up quality and forecast horizon; that inaccuracy is on me. But both are needed in equal measure. As for the connection between dimensionality and entropy, that could turn into too long a conversation, so I won't develop the topic here ;-).

The K-entropy in that formula is the Kolmogorov entropy, and it is, to put it mildly, strongly related to dimensionality. You have to look at the root of it. :о)
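For reference, the root grasn points to: under suitable conditions (Pesin's identity) the Kolmogorov entropy equals the sum of the system's positive Lyapunov exponents,

$$ K = \sum_{\lambda_i > 0} \lambda_i , $$

so each unstable dimension contributes its own term, which is exactly the "slice of entropy per dimension" picture from above.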

 
marketeer >> :

Not all of them, just the main trajectories. I still have this indicator in the terminal (I was going to delete it). Here is the picture. It is clear enough. Maybe it will be enough.


I understand Yuri asked for a picture of a different forecast (the one where I was playing the shell game :o))

 
grasn >> :

That's not what I was talking about :o) Anyway, it doesn't matter.

The K-entropy in that formula is the Kolmogorov entropy, and it is, to put it mildly, strongly related to dimensionality. You have to look at the root of it. :о)

OK ;-) The topic of selecting the best predictions by entropy value remains unresolved, imho. I'll check on history how the correctness of predictions correlates with the entropy value.
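A minimal sketch of such a history check, assuming that for each past forecast its entropy and its realized error were stored; the arrays below are hypothetical placeholders for data exported from the terminal.

```python
# Rank-correlate forecast entropy with realized forecast error over history.
import numpy as np
from scipy.stats import spearmanr

entropies = np.array([1.2, 0.8, 1.9, 1.5, 0.6, 1.1])     # entropy per forecast
errors = np.array([40.0, 25.0, 90.0, 70.0, 30.0, 55.0])  # realized error, pips

rho, p_value = spearmanr(entropies, errors)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A clearly positive rho would support picking low-entropy "favourites";
# rho near zero would say entropy is not a useful quality gauge here.
```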

 

I expect a precipitous collapse of the pound, at least 600 pips; the nearest support level is 1.6.

 

It's a pity the stop triggered, but otherwise the picture was almost perfect:


 
marketeer >> :

OK ;-) The topic of selecting the best predictions by entropy value remains unresolved, imho. I'll check on history how the correctness of predictions correlates with the entropy value.

There's a subtle philosophy here. Minimum entropy cannot be chosen as the criterion: zero entropy simply means an area with zero probability of the price appearing there. In other words, it makes no sense to expect the price where it cannot be; you have to wait for it where its probability is at a maximum. There's another subtlety: I think you are treating information entropy as if it were physical entropy, and those are slightly different things. :о)
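This criterion can be sketched in code roughly as follows: instead of selecting the minimum-entropy trajectory, pool an ensemble of forecast trajectories, histogram where they end up at the horizon, and wait for the price in the most probable area. The trajectories below are simulated random walks standing in for real forecasts, and the starting price is a hypothetical value near the levels discussed above.

```python
# "Wait where the probability is maximum": take the mode of the ensemble's
# terminal-price distribution as the target area.
import numpy as np

rng = np.random.default_rng(0)
n_traj, horizon, price0 = 500, 50, 5500.0

# Hypothetical ensemble: random-walk stand-ins for predicted trajectories.
steps = rng.normal(0.0, 5.0, size=(n_traj, horizon))
trajectories = price0 + np.cumsum(steps, axis=1)
final_prices = trajectories[:, -1]

# Histogram of where the ensemble ends up at the forecast horizon.
counts, edges = np.histogram(final_prices, bins=30)
mode_bin = np.argmax(counts)
target = 0.5 * (edges[mode_bin] + edges[mode_bin + 1])

print(f"most probable target area: {target:.1f} "
      f"({counts[mode_bin] / n_traj:.0%} of trajectories)")
```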
