Floating market parameters - page 8

 

Interesting


 
Rorschach:

Interesting


As simple as that, and he explains it well )))

 
Valeriy Yastremskiy:

He explains things in a simple way, and it's good.)

I wish everyone had teachers like that.

What was disappointing: the NN performs only at the level of other methods ((( So what about computer superintelligence?

What was surprising: he didn't remove the trend, didn't scale into the ±1 range, didn't remove the seasonality.

Interesting: he fits only the last slice; the maximum forecast is no more than 4 months; to avoid error accumulation he predicts 2 steps ahead at once instead of two consecutive 1-step forecasts (then again, if he were at all an econometrician, he would have used thinning instead).
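The direct-vs-recursive point and the thinning suggestion can be sketched like this (illustrative Python, not the lecturer's code; the naive last-value "models" and the synthetic 108-value series are assumptions for the sake of the example):

```python
import numpy as np

# Hypothetical series of 108 monthly values, as in the discussion.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=108))

# Recursive 2-step forecast: a 1-step model applied twice; the second
# step consumes the first step's (possibly wrong) output, so its error
# accumulates.
def recursive_two_step(history, one_step_model):
    h = list(history)
    for _ in range(2):
        h.append(one_step_model(h))
    return h[-2:]

# Direct 2-step forecast: one model emits both steps at once, so no
# predicted value is ever fed back in as input.
def direct_two_step(history, two_step_model):
    return two_step_model(history)

# Toy stand-ins for the trained models (naive last-value forecasts).
one_step = lambda h: h[-1]
two_step = lambda h: [h[-1], h[-1]]

# Thinning (downsampling): keep every 2nd point, so a 1-step forecast
# of the thinned series already spans two steps of the original one.
thinned = series[::2]
print(len(thinned))  # 54
```

With real learned models the recursive and direct forecasts generally differ; with the naive stand-ins above they coincide, which is exactly why the toy case is only an illustration of the mechanics, not of the error behaviour.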

 
Rorschach:

Everyone should have teachers like that.

What's frustrating: the NN performs only at the level of other methods ((( So what about computer superintelligence?

...

It will never emerge from a neural network. It will emerge from the local self-awareness of one person who transfers his understanding of himself into the computer. )) It will probably be Elon Musk. )))
 
Peter Konow:
It will never emerge from a neural network. It will emerge from the local self-awareness of one person who transfers his understanding of himself into the computer. )) It will probably be Elon Musk. )))

That's what they really do when there's no money to spend.

Did I miss something?

 
Rorschach:

That's what they really do when there's no money to spend.

Did I miss something?

Let me be honest: if God exists, the task of creating real AI will not be solved by dumb copying, but only through full self-awareness, the path to which is long and difficult. Since self-awareness as a quality is inherent in only a few unique individuals, AI will most likely be invented by a loner. Imho.

I'd like to invent it myself, but I don't know how to think, and it won't work without that. It will have to be someone smarter. ))))
 
Rorschach:

I wish everyone had teachers like that.

What was surprising: he didn't remove the trend, didn't scale into the ±1 range, didn't remove the seasonality.

Interesting: he fits only the last piece; the maximum forecast is no more than 4 months; to avoid error accumulation he predicted 2 steps ahead at once instead of two consecutive 1-step forecasts (again, if he were even slightly an econometrician, he would have used thinning instead).

He worked at the highest achievable level of prediction sufficiency. It is a uniform stationary series around some kind of linear moving average. He reduced the expanding series to a uniform one via the logarithm. But, as he said, everything is determined by the recognition and prediction model. Perhaps the latter could do without the logarithm. Any transformation of the series introduces an error into the inverse transformation if a random walk is present.
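The point about the inverse transformation introducing an error can be illustrated with the log transform (a sketch with assumed numbers, not the lecturer's model): if the log-space forecast error is random with variance σ², then by Jensen's inequality E[exp(x)] = exp(μ + σ²/2) > exp(μ), so naively exponentiating the log-forecast is biased low.

```python
import numpy as np

# An "expanding" series: exponential growth plus noise (assumed data).
rng = np.random.default_rng(1)
t = np.arange(108)
series = np.exp(0.02 * t + rng.normal(scale=0.1, size=108))

# The log transform turns it into a roughly uniform series
# (linear trend plus stationary noise).
log_series = np.log(series)

# Forecast in log space (naive last-value forecast here), then invert.
log_forecast = log_series[-1]
naive_inverse = np.exp(log_forecast)

# If the log-space error is N(0, sigma^2), the unbiased inverse needs
# the Jensen correction exp(mu + sigma^2 / 2).
sigma2 = 0.1 ** 2
corrected = np.exp(log_forecast + sigma2 / 2)
print(corrected > naive_inverse)  # True: the naive inverse is biased low
```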

Plus, he talked about splitting the series into a trend and a seasonal component, i.e. into 2 series. There the seasonal series will be horizontal and the trend series will be a moving average (MA). And if the MA is not linear, then it should definitely be removed.

I have not managed to read anywhere how much longer the studied series should be than the prediction range; it is done just by eye.

Regarding one-step-ahead prediction: the error will be bigger there too, but of a different kind, not cumulative but per-step. Just try to find out which is bigger; logic sometimes fails here, although it is logical that over a couple of steps the per-step error is smaller than the cumulative one.

I didn't get the point about thinning: he only has a series of 108 values and one algorithm for training and prediction. He didn't even try comparing different algorithms, which would have been nice.

The target must be changed, or there must be more than one, and something derived from the series should be added to the input data.

 
Rorschach:

That's what they really do when there's no money to spend.

Did I miss something?

I like the forecast ))) There are plans in the works to conquer Broughton )))) Even though it's 2020 and spiking NNs are only just being explored ))))

 

Valeriy Yastremskiy:

I have not yet read anywhere how much longer a series should be studied than the prediction range; it is done just by eye.

I don't understand the point about thinning; he has only 108 values and one algorithm for training and prediction.

They usually stick to a 1-to-4 ratio.

Imho, an NN is very dumb and resource-intensive, so the data needs to be cleaned/prepared in advance to simplify the task and reduce the network size. So all obvious/linear patterns have to be handled manually: remove the trend, remove the seasonality, scale into the ±1 range. If the network forecasts every 2 bars, we will additionally have to filter out the intermediate fluctuations, so those have to be handled manually too. And since everything is manual anyway, you don't need an NN at all, except where it is unclear how to do it with traditional methods.
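The manual pipeline described above (remove trend, remove seasonality, scale into ±1) might look like this (a minimal sketch on assumed synthetic data; the least-squares linear detrend and per-phase seasonal means are one simple choice, not the only one):

```python
import numpy as np

def prepare(series, period=12):
    """Detrend, deseasonalize, and scale a series into the +-1 range."""
    t = np.arange(len(series))
    # 1. Remove the linear trend (least-squares fit).
    slope, intercept = np.polyfit(t, series, 1)
    x = series - (slope * t + intercept)
    # 2. Remove seasonality: subtract the mean of each phase of the period.
    seasonal = np.array([x[i::period].mean() for i in range(period)])
    x = x - seasonal[t % period]
    # 3. Scale into the +-1 range.
    scale = np.abs(x).max()
    # Return the prepared series and the parameters needed to invert it.
    return x / scale, (slope, intercept, seasonal, scale)

rng = np.random.default_rng(3)
t = np.arange(108)
raw = (0.3 * t
       + 2.0 * np.sin(2 * np.pi * t / 12)
       + rng.normal(scale=0.1, size=108))

prepared, params = prepare(raw)
print(np.abs(prepared).max() <= 1.0)  # True: everything fits in [-1, 1]
```

The returned parameters are what you would use to undo the transformations on the network's output, which is also where the inverse-transformation error mentioned earlier creeps back in.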
