Machine learning in trading: theory, models, practice and algo-trading - page 3734

[Deleted]  
Ivan Butko #:
You haven't even learnt how to write a prompt yet, and you're already talking about extrapolation.
You'd be more interesting to your brothers in intelligence; go have a chat with them.
 

Good afternoon. When asked about the applicability of neural networks in trading, ChatGPT said the idea is essentially pointless: neural networks are useful in image classification, text processing, etc., where the data are static and the structures stable. Predicting exchange prices remains an extremely difficult (infeasible) task, where "rubbish at the input" (an arbitrary set of data with arbitrary dynamics) is inevitable, just as "rubbish at the output" is inevitable. In general, AI is useful for quickly getting a qualitative overview of a problem and of attempts to solve it.

But, in my opinion, this is not a reason to give up working with the market. You just have to dig deeper - study chaos theory, multidimensional spaces, spatio-temporal symmetry, etc.

 
You just have to start with the philosophical underpinnings:

1). Learning is the transfer of knowledge from one object to another.

2). Learning is NOT research. Learning is NOT the acquisition, NOT the discovery of new knowledge.

3). To teach a model is to teach it "something specific", not "something or other".

4). The process of extracting knowledge is data mining. You need to dig precisely into mining the knowledge, and only then transfer that knowledge to the machine through training.

5). Numerical input data is an encryption of something already encrypted. The model, built around an adder (a weighted sum), perceives any input number as a force acting on its decision-making system. But a price chart is not a quantity; it is a geometric coordinate space.

Therefore, when you feed prices to the input as raw numbers, you give them an unjustified force factor.
When you feed an indicator to the input, you encrypt the already encrypted.
When you preprocess, you encrypt the encrypted a second time.
When you normalise the input values, you encrypt the encrypted a third time.
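The scale argument in that chain can be made concrete. Below is a minimal sketch (my illustration, not from the post) of a neuron's adder: with identical weights, raw quote levels exert a force on the pre-activation millions of times larger than the same information encoded as returns. All numbers here are made up.

```python
# Minimal sketch (my illustration, not from the post): a neuron's "adder"
# z = w.x + b sees every input only as a magnitude, so the scale of the
# encoding decides how much force each feature exerts on the pre-activation.
def adder(weights, inputs, bias=0.0):
    """Weighted sum: the only thing the neuron 'sees'."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

raw_prices = [1792.5, 1794.0, 1791.2]      # hypothetical quotes, fed as-is
returns    = [0.00084, -0.00156, 0.00031]  # the same bars, re-encoded as returns

w = [0.1, 0.1, 0.1]                        # identical weights for both encodings

z_raw = adder(w, raw_prices)
z_ret = adder(w, returns)
print(z_raw)  # ~537.77   -- the raw encoding saturates the pre-activation
print(z_ret)  # ~-4.1e-05 -- the same market information, a tiny force
```

Whether one agrees with the "encryption" framing or not, this is the standard argument for why unscaled price levels make training badly conditioned.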

By loading the input with weights that suppress values the model "considers" destructive while trying to learn, the model kills at the root an entire subrange of numbers (while the system's working behaviour is scattered across values, including numbers close to 0).

And to "pull out" this subrange and use it, neural networks add(!) layers(!) and neurons(!).
This provokes the growth of an astronomically huge weight space, where a "working weight set" will simply never be found in the next 150 years of searching over hyperparameter values.
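For a sense of how fast that space grows, here is a back-of-the-envelope sketch (mine, not Ivan's) counting the parameters of a fully connected net: width multiplies, depth adds, and the space to search explodes accordingly. The layer widths are arbitrary examples.

```python
# Back-of-the-envelope sketch (mine, not Ivan's): weights + biases of a
# fully connected net. Each dense layer from width a to width b contributes
# a*b weights plus b biases.
def mlp_param_count(layer_sizes):
    """Total parameters of a dense net with the given layer widths."""
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

print(mlp_param_count([10, 32, 1]))             # 385
print(mlp_param_count([10, 256, 256, 256, 1]))  # 134657 -- ~350x more
```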

Training, in this light, is only a random probing of a microscopic part of the weight space, where the search for a global minimum is a self-deceiving interpretation of the result.

After all, with a gigantic weight space, some set will always be found that shows plausible growth on the backtest and completely random results on the forward test.

That is, even an attempt to "pull" working subranges out of numerically noisy input data is guaranteed always to lead to overfitting.

Hence the sixth basis:

6). No unreasonable numbers in the input. Only qualitative spatial geometric features (and derivatives thereof).



Departing from these fundamentals guarantees you "crutch" results.
 
Inquiring #:
You just have to dig deeper - study chaos theory, multidimensional spaces, space-time symmetry, etc.

Will you? Or did we just talk and forget?

I hope you will study not chatbot chatter, but more reliable information: textbooks, articles, the documentation of ML models. And practise.

 
Ivan Butko #:
You just have to start with the philosophical underpinnings:


6). No unreasonable numbers in the input. Only qualitative spatial geometric attributes (and derivatives thereof).


I understand that you are very familiar with the principles of neural networks. Do you have any substantive research on "qualitative spatial geometric features", or are these general thoughts?
 
Forester #:

Will you? Or did we just talk and forget?

I hope you will study not chatbot chatter, but more reliable information: textbooks, articles, descriptions of ML model parameters. And practise.

I study, practise, and look for interlocutors with similar ideas and projects.
 

Here is factual confirmation of the deleted definition of prompts.

"Philosophy" instead of knowing the basics of the subject from elementary textbooks. Like the joke about the metalhead who played a virtuoso melody in the course of failed attempts to hit a given note.

 
Inquiring #:
I understand that you are very familiar with the principles of neural networks. On "qualitative spatial geometric features", do you have any substantive developments, or are these general thoughts?
Of course, it is in its infancy.

To my mind, this geometry of the time-series polyline includes such notions as "informativeness", "relevance" and "completeness".

That is, the root lies in these three notions.

Hence fractality, self-organisation (self-markup), wave levels, etc.

Ultimately, data mining methods should be used to format the price chart so that the most recent waves of the overall chart structure are studied.

That is, the microstructure of 10 years ago is NOT relevant today. But the macrostructure of 10 years ago can hint at tectonic shifts, which tell the model whether or not to increase the probability of a signal.

Therefore, the very fact that each successive quote depends on the previous one, and hence that price moves in a serpentine way (with a specific average volatility) generating directed structural movements of different orders, already indicates that information cannot be considered from a single time scale alone.

Timeframes themselves are one big crutch. The markup should be self-similar.
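One way to make "self-similar markup instead of timeframes" concrete is a percentage-threshold zigzag: because the reversal rule is relative to price, the same rule marks swings identically at every scale, with no reference to bars or timeframes. A toy sketch (my illustration; Ivan describes no specific algorithm):

```python
# Toy sketch (my illustration, not Ivan's method): a percentage-threshold
# zigzag extracts swing points with a scale-free rule -- a swing is confirmed
# once price retraces at least pct from the running extreme of the leg.
def zigzag(prices, pct=0.02):
    """Indices of swing extremes confirmed by a reversal of at least pct."""
    pivots = []
    trend = 1                      # simplification: assume the first leg is rising
    ext_i, ext_p = 0, prices[0]    # running extreme of the current leg
    for i, p in enumerate(prices[1:], start=1):
        if trend == 1:
            if p > ext_p:
                ext_i, ext_p = i, p                  # extend the up-leg
            elif p <= ext_p * (1 - pct):
                pivots.append(ext_i)                 # swing high confirmed
                trend, ext_i, ext_p = -1, i, p
        else:
            if p < ext_p:
                ext_i, ext_p = i, p                  # extend the down-leg
            elif p >= ext_p * (1 + pct):
                pivots.append(ext_i)                 # swing low confirmed
                trend, ext_i, ext_p = 1, i, p
    return pivots

swings = zigzag([100, 101, 103, 100.5, 99, 101.5, 102], pct=0.02)
print(swings)  # [2, 4]: the high at 103 and the low at 99
```

Nesting this with several thresholds (e.g. 0.5%, 2%, 8%) gives a markup of waves of different orders on the same price series, which is the self-similar structure the post gestures at.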
 
Aleksey Nikolayev #:

Here is factual confirmation of the deleted definition of prompts.

"Philosophy" instead of knowing the basics of the subject from elementary textbooks. Like the joke about the metalhead who played a virtuoso melody in the course of failed attempts to hit a given note.

"Learning" in ML is defined very simply. It is merely the assignment of specific numerical values to the parameters of a chosen model.

There is no "philosophy" here. But there are plenty of problems in choosing a learning algorithm and tuning it. And there is a huge field of science (and partly art) about how this heap of problems is solved in a particular subject area (in our case, trading).
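That definition can be shown literally. In this minimal sketch (my example, with made-up data), "training" the model y = w*x + b is nothing more than assigning two concrete numbers via the closed-form one-dimensional least-squares solution:

```python
# Literal illustration (my example): "learning" assigns concrete numerical
# values to the parameters of a chosen model -- here, the slope w and
# intercept b of y = w*x + b, fitted by one-dimensional least squares.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]          # made-up data, roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x
print(round(w, 3), round(b, 3))    # 1.94 0.15 -- the "trained" model is just these two numbers
```

Everything else (algorithm choice, tuning, validation) is about how those numbers get assigned well rather than badly.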

In the case of trading, there is also the problem of the closed nature of research results: no one in their right mind openly shares profitable algorithms.

 
Ivan Butko #:
To my mind, this geometry of the time-series polyline includes such notions as "informativeness", "relevance" and "completeness".
The geometry of space is a serious matter, but it is exactly what gives an understanding of what is going on. For that, though, you need to know what space is and what geometry is, then what symmetry is, and so on. Is the desire there?