Machine learning in trading: theory, models, practice and algo-trading - page 3644
Draw conclusions.
Continuous functions that coincide exactly on a segment (the approximations) can differ arbitrarily outside that segment. That is, closeness on the segment does not imply closeness outside it. The same holds for multidimensional functions and multidimensional sets.
There would be a small chance of extrapolation if one could approximate an analytic function by analytic approximations, but Vitushkin's smoothness theorems imply that this is impossible.
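A quick numerical sketch of the first claim (a hypothetical example, not from the thread; sin and its Taylor polynomial stand in for the two coinciding functions):

```python
# Two continuous functions that agree closely on the segment [0, 1]
# but diverge wildly outside it (illustrative choice of functions).
import numpy as np

def f(x):
    return np.sin(x)

def g(x):
    # Degree-7 Taylor polynomial of sin around 0
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

seg = np.linspace(0.0, 1.0, 101)    # the approximation segment
out = np.linspace(5.0, 10.0, 101)   # outside the segment

err_inside = np.max(np.abs(f(seg) - g(seg)))
err_outside = np.max(np.abs(f(out) - g(out)))

print(err_inside)   # tiny: the functions coincide on the segment
print(err_outside)  # huge: closeness on the segment says nothing outside
```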
You can check it yourself. You can substitute other phrases. Optimiser's tears.
So, in my view, the problem is fictitious.
The universal approximation theorems of Cybenko (1989) and Hornik (1989), building on the Kolmogorov-Arnold theorem (1957), state that a network with a single hidden layer can approximate any continuous function to any accuracy. There is no requirement that the functions be stationary or periodic.
It follows from these theorems that if we take any segment of a continuous function of a process, then there is guaranteed to exist a network approximating that segment which also approximates the whole function, to any accuracy.
These theorems only assert the existence of such a network; they say nothing about how to find it. This is what I said earlier: there is no theoretical justification for finding a robust network that will continue the approximation with the same accuracy out of sample (oos).
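Both points can be sketched numerically. Assuming a one-hidden-layer network with random tanh features fitted by least squares (all names and sizes below are illustrative, not from the thread), the net approximates sin on the training segment to high accuracy but fails badly on the next segment:

```python
# One-hidden-layer net with random tanh features; only the output
# weights are trained, by least squares (an "extreme learning machine"
# style sketch, chosen here for simplicity).
import numpy as np

rng = np.random.default_rng(0)

# Target: a continuous function on the segment [0, 2*pi].
x_train = np.linspace(0.0, 2 * np.pi, 200)
y_train = np.sin(x_train)

n_hidden = 200
w = rng.normal(scale=2.0, size=n_hidden)
b = rng.uniform(-2 * np.pi, 2 * np.pi, size=n_hidden)

def hidden(x):
    # Hidden-layer activations for inputs x
    return np.tanh(np.outer(x, w) + b)

beta, *_ = np.linalg.lstsq(hidden(x_train), y_train, rcond=None)

# In-sample: the net approximates the segment very well.
err_in = np.max(np.abs(hidden(x_train) @ beta - y_train))

# Out-of-sample ("oos"): same net, next segment of the same function.
x_oos = np.linspace(2 * np.pi, 4 * np.pi, 200)
err_oos = np.max(np.abs(hidden(x_oos) @ beta - np.sin(x_oos)))

print(err_in, err_oos)  # err_oos is typically orders of magnitude larger
```

The tanh units saturate outside the training segment, so the fitted net cannot track the oscillation it reproduced in-sample.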
Of course there is no such requirement, because we are talking about deterministic processes.
ML is the branch of statistics that deals with random processes, in which the numbers (mathematical expectations) that we write down and see do NOT actually exist; instead there is a set of numbers that should be treated with some level of confidence.
The central limit theorem tells us how many observations of a random variable are needed to narrow the confidence interval (to increase the probability that the random variable falls within that interval) around the number we see.
But this is for stationary processes.
We are dealing with non-stationary processes in which the existence of mathematical expectations and confidence intervals for them is highly doubtful.
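An illustrative numerical sketch of the contrast (assumed setup, not from the thread): for a stationary i.i.d. process the standard error of the sample mean shrinks like 1/sqrt(n), so the confidence interval narrows; for a non-stationary random walk the sample mean does not settle toward any fixed expectation.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 100_000
noise = rng.normal(size=n)   # stationary: i.i.d. N(0, 1), E[x] = 0 exists
walk = np.cumsum(noise)      # non-stationary: random walk, no fixed mean

# Standard error of the sample mean at two sample sizes.
se_1k = noise[:1_000].std() / np.sqrt(1_000)
se_100k = noise.std() / np.sqrt(n)   # ~10x narrower than se_1k

mean_iid_100k = noise.mean()         # close to the true expectation 0
mean_walk_100k = walk.mean()         # far from any fixed "expectation"

print(se_1k, se_100k)
print(mean_iid_100k, mean_walk_100k)
```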
All this has been explained to you many times in different words, including by GPT. Everyone wrote about the same thing. But you reject any information, demonstrating an "inability to learn".
So the natural question is: are you trolling, or are you simply a dull ignoramus in mathematical statistics who does not understand the meaning of the phrase "random variable"?
A long time ago. It is an entire field of mathematics that emerged from Kolmogorov's solution of Hilbert's thirteenth problem.
You're just bored, aren't you? Nothing useful to say?
Maxim Dmitrievsky, 2024.11.18 08:03
I do not know about the function....
Well, you have confirmed my words. ML methods (and the theory confirms it) can approximate any continuous function to any accuracy, but they cannot extrapolate with the same accuracy as in the approximation region (they cannot reliably choose the network that would allow extrapolation out of sample: such a network exists, but there is no way to choose it). And that is the whole point of forecasting: first approximate the available information, then extrapolate. Which is exactly what I was saying.
Wrong.
ML methods, like any other methods of mathematical statistics, give the probability of something, at best with a confidence interval. ML does not and cannot give specific numbers the way deterministic processes do.
Are you extrapolating the curve or approximating the function? ))
Dear sir, what are you doing in the ML thread if you think you are dealing with random processes in the market? Then it is you who is trolling, not me. Go start a separate thread on random processes and post there, about mathematical statistics and so on.
Read the theorems I cited above. If you reject them, then there is nothing to discuss with you in this particular thread. I will ignore your posts from now on.
I've posted a notebook with your task; can you at least find a mistake in it? So we can have a meaningful conversation.
Maybe what you need is extrapolation of the curve from its last points, not approximation of the function?