Fourier connoisseurs... - page 9

 

 

(That is, the ones who don't apply Fourier.)

 
forte928:

The red curve in the bottom picture is the Fourier transform plus a couple of other functions...

Green is the raw data...

The Fourier transform requires selecting a period so that the process is stable at the starting point time[0]...

Beyond that point, the Fourier transform has no further effect on the process...


What if you go further with your method and decompose the residual between the red and green lines in the same way?
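The iterative idea suggested here (fit, subtract, fit the residual again) can be sketched as a matching-pursuit-style loop: take the dominant DFT component of the current residual, remove it, and repeat. The test signal and the number of passes below are invented purely for illustration:

```python
import numpy as np

def extract_dominant(x):
    """Remove the single strongest (non-DC) DFT component from x."""
    X = np.fft.rfft(x)
    k = np.argmax(np.abs(X[1:])) + 1        # strongest bin, skipping DC
    comp = np.zeros_like(X)
    comp[k] = X[k]
    y = np.fft.irfft(comp, n=len(x))        # that component in the time domain
    return x - y, y

rng = np.random.default_rng(0)
n = np.arange(256)
x = (np.sin(2*np.pi*5*n/256)
     + 0.5*np.sin(2*np.pi*17*n/256)
     + 0.1*rng.standard_normal(256))        # two tones plus noise

residual = x.copy()
for _ in range(2):                          # each pass strips one frequency
    residual, _ = extract_dominant(residual)

print(x.std(), residual.std())              # residual variance drops each pass
```

For a basis that is orthogonal over the whole window this is equivalent to simply extending the Fourier series, but the residual-refitting view generalizes to non-orthogonal dictionaries.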

 
Here's who has been thinking about this.

I think it's our case.

https://www.mql5.com/go?link=http://dxdy.ru/topic54592.html

And least squares (МНК) and least modules (МНМ) might be better replaced with maximum likelihood: https://ru.wikipedia.org/wiki/Метод_максимального_правдоподобия
 
Freud:

Here's who has been thinking about this.

I think it's our case.

https://www.mql5.com/go?link=http://dxdy.ru/topic54592.html

And least squares (МНК) and least modules (МНМ) might be better replaced with maximum likelihood: https://ru.wikipedia.org/wiki/Метод_максимального_правдоподобия

I'll tell you a secret: least squares (МНК) and least modules (МНМ) are special cases of maximum likelihood (ММП).
 
And for regressions on non-linear functions there are plenty of iterative methods: Levenberg-Marquardt, L-BFGS, or plain gradient descent in the end, if we solve by least modules...
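As an illustration of one of those iterative methods, here is a sketch using SciPy's Levenberg-Marquardt (`method='lm'` in `scipy.optimize.least_squares`) to fit a non-linear model; the damped-sine model and its parameter values are invented for the example:

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 10, 200)
true = (2.0, 1.3, 0.25)                       # amplitude, frequency, decay
rng = np.random.default_rng(1)
x = true[0]*np.sin(true[1]*t)*np.exp(-true[2]*t) + 0.05*rng.standard_normal(t.size)

def resid(p):
    """Residual vector e[n] = model - data for parameters p."""
    a, w, d = p
    return a*np.sin(w*t)*np.exp(-d*t) - x

# method='lm' is the classic Levenberg-Marquardt from MINPACK
fit = least_squares(resid, x0=[1.5, 1.2, 0.2], method='lm')
print(fit.x)                                  # close to (2.0, 1.3, 0.25)
```

Note that for least modules the objective is not smooth, so Levenberg-Marquardt (a least-squares method) does not apply directly; subgradient or reweighting schemes are used instead.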
 
alsu:
I'll tell you in confidence, least squares (МНК) and least modules (МНМ) are special cases of maximum likelihood (ММП).


I will add, also in confidence, that least squares follows from maximum likelihood under the assumption that the error is Gaussian, while least modules follows from maximum likelihood under the assumption that the error is Laplacian. That is, we have a linear modelling problem:

x[n] = SUM( a[i]*f[i][n] ) + e[n], n=1...N

or

x[n] = y[n] + e[n], where y[n] = SUM( a[i]*f[i][n] ), n=1...N

where x[] is the input data, a[] are the coefficients, f[][] are the regression functions, and e[] is the model error. For example, if f[i][n] = exp(j*2*pi*i*n/N), this formula gives a Fourier series. If we assume that the error e[] is Gaussian, i.e. P(e) ~ exp(-e^2/2/s^2), then maximum likelihood leads to least squares, i.e. finding the coefficients a[] by minimizing the sum of squared errors:

Obj Func = SUM(e[n]^2) = SUM( (x[n] - y[n])^2 ).
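Under the Gaussian assumption this objective has the familiar closed-form solution via the normal equations, and for the basis f[i][n] = exp(j*2*pi*i*n/N) the least-squares coefficients are exactly the DFT coefficients (up to the 1/N normalization). A small numpy sketch with an arbitrary test signal:

```python
import numpy as np

N = 64
n = np.arange(N)
x = 3*np.cos(2*np.pi*4*n/N) + 1.0            # arbitrary test signal

# regression matrix F[n, i] = exp(j*2*pi*i*n/N), i = 0..N-1
F = np.exp(2j*np.pi*np.outer(n, n)/N)

# least squares: a = (F^H F)^(-1) F^H x  (normal equations)
a = np.linalg.solve(F.conj().T @ F, F.conj().T @ x)

# the same coefficients come straight from the DFT
assert np.allclose(a, np.fft.fft(x)/N)
```

Because the complex exponentials are orthogonal, F^H F = N*I and the "matrix inversion" degenerates into the plain DFT sum, which is why nobody solves normal equations for a Fourier fit.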

If we assume that the error e[] is Laplacian, i.e. P(e) ~ exp(-|e|/s), then maximum likelihood leads to least modules, i.e. finding the coefficients a[] by minimizing the sum of absolute errors:

Obj Func = SUM(|e[n]|) = SUM( |x[n] - y[n]| ).

More generally, the error can be described by a super-Gaussian distribution, P(e) ~ exp(-|e|^q). Why does everyone choose the Gaussian? Because the least-squares problem for a linear model is easy to solve: differentiate the objective function and set the result to zero. This is exactly where the Fourier series expansion method comes from. Now try differentiating SUM( |x[n] - y[n]| ).
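Since SUM(|e|) is not differentiable at e = 0, it is minimized iteratively in practice. One standard scheme, sketched below on an invented line with one gross outlier, is iteratively reweighted least squares (IRLS): weighting each squared error by 1/|e| turns the least-squares objective into the sum of moduli:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 50)
x = 2.0*t + 1.0 + 0.05*rng.standard_normal(50)
x[10] += 10.0                                 # one gross outlier

F = np.column_stack([t, np.ones_like(t)])     # linear model y = a*t + b

# least squares: closed form via the normal equations
a_ols = np.linalg.solve(F.T @ F, F.T @ x)

# least modules via IRLS: reweight each point by 1/|e|
a_lad = a_ols.copy()
for _ in range(50):
    w = 1.0 / np.maximum(np.abs(x - F @ a_lad), 1e-8)
    W = F.T * w                               # weighted normal equations
    a_lad = np.linalg.solve(W @ F, W @ x)

print(a_ols, a_lad)
```

The outlier drags the least-squares fit noticeably, while the least-modules fit stays near the true line (2, 1) — one practical reason to care about the Laplace assumption.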

So which error distribution is correct? It depends on the nature of the process we are modelling with our linear model. If you are sure that:

(1) exchange prices are described by a linear model with sines and cosines, and

(2) the model error should obey the Laplace distribution,

then go ahead and minimize SUM( |x[n] - y[n]| ). And don't forget to submit an application for the Fields Medal while you're at it.

 
gpwr:


And don't forget to submit an application for the Fields Medal while you're at it.

You might get a Nobel for that)) in economics))
 
Freud: Mathematics states/describes facts

Mathematics is the language of science. It is not directly related to facts.

But facts can sometimes be very accurately described in the language of mathematics and called, say, physics.

 
Freud:
In short it turns out that physics can always be described through mathematics, but mathematics cannot always be explained by physics, right? If so, then mathematics, as the queen of sciences, has once again punished the rational mind)))

What rational mind? Fitting sine waves to prices? Or doing it by least modules? And what is the physics here? Understand that any series of N values can be expanded over any N orthogonal functions, not only the sines and cosines of Fourier. Then ask yourself why it should be sines and cosines, of all functions, that make physical sense for modelling market prices.
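The claim that any N orthogonal functions expand a series of N values can be checked directly: build a random orthonormal basis with a QR decomposition and reconstruct the series exactly. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 32
x = rng.standard_normal(N)                    # any series of N values

# N random orthonormal basis vectors (columns of Q)
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))

a = Q.T @ x                                   # coefficients: projections onto the basis
x_rec = Q @ a                                 # series rebuilt from the expansion

assert np.allclose(x, x_rec)                  # exact, no sines or cosines involved
```

Reconstruction is exact for any full orthonormal basis, which is precisely the point: orthogonality, not sinusoidality, is what makes the expansion work.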