I'd call it more of a search for a model... but the search often comes down to curve fitting...
In principle any algorithm can be fitted... so the larger the sample used when searching for a model, the better (for the same error, say)... and this deserves a lot of attention...
I don't know what "credibility" is, but I do know what model stability is. On historical data, you need to achieve model stability.
The probability that the error will not exceed the assigned threshold.
I just don't understand what model stability is. I'd appreciate an explanation :)
The probability that the error will not exceed the assigned threshold.
Fine if the threshold is a constant, but what if it is not? What if it is a random variable, or even a non-stationary random variable?
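A minimal sketch of the "probability that the error will not exceed the threshold" reading of stability, assuming it is estimated empirically as the fraction of rolling test windows whose mean absolute error stays under a fixed threshold (the function name, window scheme, and default window length are illustrative, not from the thread):

```python
import numpy as np

def stability(errors, threshold, window=50):
    """Empirical estimate of P(error <= threshold) on historical data:
    the fraction of rolling windows whose mean absolute prediction
    error stays below `threshold`."""
    errors = np.abs(np.asarray(errors, dtype=float))
    n = len(errors)
    if n < window:
        raise ValueError("need at least one full window")
    # Mean absolute error in each rolling window
    means = np.array([errors[i:i + window].mean()
                      for i in range(n - window + 1)])
    return float(np.mean(means <= threshold))
```

A non-stationary threshold would replace the scalar comparison with an array of per-window thresholds, but the idea is the same.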
Look at the prediction error graph; to get a smooth line you need to run a bunch of tests and make a decision based on them. I can't explain it here. See Brukow, "How to Predict the Dollar Exchange Rate".
Can we continue tomorrow?
A neural network is inconvenient (a black box), though it has its fans. Regression is much more convenient: the coefficient estimates immediately give an estimate of the fit.
With networks, some packages do let you see the input weights... plus a network retrains without problems, and it can (and will) discard inputs that are not needed in a given sample, and so on... but there is no formula...
Regression is simpler... you find what you were looking for, take the formula, and use it...
I'm not sure the coefficient estimates shed much light on the fit... a lot depends on the model...
As for the Prescott-based assessment, I've already given my opinion... what's wrong is the presence of the Prescott filter itself, not the method of assessment... in my opinion, of course...
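For context, assuming "Prescott" here refers to the Hodrick-Prescott filter being discussed as a trend-extraction step, here is a minimal self-contained sketch of it (a dense-matrix implementation for clarity; production code would use sparse matrices or a library routine):

```python
import numpy as np

def hp_filter(y, lamb=1600.0):
    """Hodrick-Prescott filter: split a series into trend and cycle.

    Solves (I + lamb * D'D) * trend = y, where D is the
    second-difference operator and lamb controls trend smoothness.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Second-difference matrix D, shape (n - 2, n)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lamb * D.T @ D, y)
    cycle = y - trend
    return trend, cycle
```

The objection in the thread is not to how the filter is computed but to using such a smoother on price data at all, since it is two-sided and behaves badly near the ends of the sample.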
The Prescott filter is used for lack of anything better. In my opinion, the ideal is wavelets. But that means Matlab: a huge set of tools, though fragmented, and you need to understand what to build, and that understanding is not there yet.
))) the wavelet misleads too (the body gets smeared)... nothing is as simple as it looks anywhere...
But the main thing, to save time: models have to be tested on the difficult areas of the market, that is, on the breaking points... that's what interests us most (the discontinuities).
As for packages, it doesn't matter; the main thing is to save time: you do one thing at a time, etc... the main thing is to have more time left for real tests...
And Matlab is of course the most powerful package... plus the latest algorithms usually appear there first... but I don't use it myself...
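One way to read "test the models on the difficult areas" is to score the model separately on the points where the series moves sharply versus everywhere else; a sketch, where the quantile cutoff for what counts as a "break" is an arbitrary illustrative choice, not from the thread:

```python
import numpy as np

def error_at_breaks(prices, errors, q=0.9):
    """Compare mean |prediction error| at 'break' points (the largest
    absolute one-step moves, above the q-quantile) vs. the rest."""
    prices = np.asarray(prices, dtype=float)
    errors = np.abs(np.asarray(errors, dtype=float))
    # Absolute one-step moves; prepend keeps lengths aligned
    moves = np.abs(np.diff(prices, prepend=prices[0]))
    cutoff = np.quantile(moves, q)
    at_break = moves >= cutoff
    return errors[at_break].mean(), errors[~at_break].mean()
```

A model whose error at the break points is far worse than elsewhere looks fine on average but fails exactly where it matters, which is the complaint being made.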