Machine learning in trading: theory, models, practice and algo-trading - page 3643

 
Evgeniy Chernish #:

Regarding sine plus noise: it is a stationary process with zero expectation.

But in the example you gave, the expectation is taken at one particular point of the process. Naturally, at the point t = 1, for example, we have sin(t) = 0.841... plus noise with zero mean, so the expectation at that particular point is 0.841.

But we observe the process at different moments of time t = 1, 2, 3, 4, ... and we compute the expectation over the whole trajectory of the process for that period of time, not at a particular point.
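
A minimal numpy sketch of that difference (the noise level, the grids and the sample sizes are my own assumptions, purely for illustration):

import numpy as np

rng = np.random.default_rng(0)

# Expectation at a fixed point t = 1: average sin(1) + noise over many realisations
t = 1.0
realisations = np.sin(t) + rng.normal(0.0, 0.5, size=10_000)
print(realisations.mean())   # close to sin(1) = 0.841...

# Mean over one trajectory of the same process observed at t = 1, 2, ..., N
t_grid = np.arange(1, 10_001)
trajectory = np.sin(t_grid) + rng.normal(0.0, 0.5, size=t_grid.size)
print(trajectory.mean())     # close to 0, not to 0.841

The pointwise mean recovers sin(t); the mean over the trajectory does not.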

I wonder who is the author of the textbook?

I don't know, some people.

 

The theorems of Cybenko (1989) and Hornik (1989) on universal approximation, based on the Kolmogorov-Arnold theorem (1957), say that a network with a single hidden layer can approximate any continuous function to any accuracy. There is no requirement for the functions to be stationary or periodic.

It follows from these theorems that if we take any segment of a continuous function of a process, then it is known for certain that there exists a net that approximates this segment and also approximates the whole function, to any accuracy.

These theorems speak only about the existence of such a network, but say nothing about how to find it. This is what I was talking about earlier: there is no theoretical justification for any way of finding a robust set of network weights that will keep approximating with the same accuracy on the OOS.
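
A rough sketch of that last point, assuming a generic one-hidden-layer net (MLPRegressor and all of its settings are my own choices, purely for illustration): the fit on the training segment can be made very accurate, but nothing controls what happens on the continuation.

import numpy as np
from sklearn.neural_network import MLPRegressor

def f(x):
    return np.sin(x).ravel()   # the "continuous function of a process" being approximated

# One segment for training and its continuation as OOS
x_train = np.linspace(0, 2 * np.pi, 500).reshape(-1, 1)
x_oos = np.linspace(2 * np.pi, 4 * np.pi, 500).reshape(-1, 1)

net = MLPRegressor(hidden_layer_sizes=(100,), max_iter=5000, random_state=0)
net.fit(x_train, f(x_train))

print("in-sample MSE:", np.mean((net.predict(x_train) - f(x_train)) ** 2))
print("OOS MSE:", np.mean((net.predict(x_oos) - f(x_oos)) ** 2))

The in-sample error can be driven as low as you like by making the net bigger; the OOS error is not bounded by the approximation theorems at all.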


You can also easily show that all ML practitioners are optimisers, but not all optimisers are ML practitioners.))
 

?

 
Andrey Dik #:
It follows from these theorems that if we take any segment of a continuous function of a process, then it is known for certain that there exists a net that approximates this segment and also approximates the whole function, to any accuracy.

Trolling again, passing off approximation as extrapolation.

 
It has been written 3 times already that you need to load representative data or analytical data into the neural network. Then everything is extrapolated.
 
Aleksey Nikolayev #:

Trolling again, passing off approximation as extrapolation.


Study the theorems, think. Draw conclusions.
 
Maxim Dmitrievsky #:
It has been written 3 times already that you need to load representative data or analytical data into the neural network. Then everything is extrapolated.

You don't know anything about functions.)))

Understand, you can't know in advance how long a "representative sample" should be if there is no analytical formula available. Besides, the process may not have a strict periodicity or any periodicity at all.
 
Andrey Dik #:

You don't know anything about functions.)))

Understand, you can't know in advance how long a "representative sample" should be if there is no analytical formula available. Besides, the process may not have a strict periodicity or any periodicity at all.

I don't know about stationary functions, but apparently you do.

It's not an ML problem, it's a data problem. And a problem of the gap between the ML and the data.

You're just bored, aren't you? Nothing useful to say?
 

You don't know what the function is, but you assume there are periodic components.

You try it.

import numpy as np

x = np.linspace(0, 10, 1_000)   # input grid: not specified in the post, just an example

# Create the features
X = np.column_stack([
    np.sin(x),
    np.cos(x**4),
])

You train it, you test it.
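
For completeness, a small sketch of that train/test step (the target function, the noise, the chronological split and the ridge model are my own assumptions, not something stated here):

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Hypothetical target with the same periodic components as the features
x = np.linspace(0, 10, 2_000)
y = np.sin(x) + 0.3 * np.cos(x**4) + rng.normal(0.0, 0.1, size=x.size)

# The two features from the post above
X = np.column_stack([np.sin(x), np.cos(x**4)])

# Chronological split: first 70% for training, the rest as OOS
split = int(0.7 * x.size)
model = Ridge(alpha=1e-3).fit(X[:split], y[:split])

print("train MSE:", np.mean((model.predict(X[:split]) - y[:split]) ** 2))
print("OOS MSE:", np.mean((model.predict(X[split:]) - y[split:]) ** 2))

If the assumed periodic components really are in the target, both errors stay near the noise level; if they are not, the errors show it.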

What's your problem?

 
I never got graphs going in different directions on the OOS for this function, as originally stated. You just get different errors for different signs.