neural network and inputs - page 8

 
solar:
As for recognizing letters of the alphabet (by analogy): we would have to assume that the market is a closed system. That would make it stationary (to use your terms), i.e. we would have to feed the network everything we know about the market. )))))

And here arises the eternal question of the trading-system builder: is the nonlinear function the network has built from the given inputs and outputs a curve fit, or a genuinely discovered regularity?

And for a NN the question is very serious because, for example, two layers of 10 neurons already means a hundred weights (a hundred parameters of the future EA) to optimize. Try taking an Expert Advisor with a hundred parameters and optimizing all of them over a year of data: the result should most likely be considered curve-fitting.
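The weight count in the example above can be sketched as follows; the helper function is illustrative, not anything from the thread:

```python
# Hypothetical illustration: counting the free parameters ("weights") of a
# fully connected network, as in the two-layers-of-10-neurons example.
def mlp_param_count(layer_sizes, biases=False):
    """Number of weights in a fully connected net with the given layer sizes."""
    count = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        count += n_in * n_out  # one weight per connection between layers
        if biases:
            count += n_out     # optionally one bias per neuron
    return count

# The 10x10 block between two hidden layers of 10 neurons alone
# already contributes 100 weights to optimize.
print(mlp_param_count([10, 10]))  # 100
```

Each of those weights is, in effect, one more optimizable parameter of the EA, which is why the overfitting concern scales so quickly with network size.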

 
alsu: Again, if we knew the nature of the non-stationarity in advance, we could build it into the algorithm and, upon detecting that very non-stationarity, quickly adjust the controller's parameters.

More specifically: the nature of the non-stationarity of financial markets is something we do not know and, unfortunately, cannot know.

The only advantage of a NN is that it is highly nonlinear and very flexible. With a small number of neurons a NN can easily memorize (learn) 10-15 years of minute data for any instrument.

It would seem - what more could you need?

And this is where the trader's skill becomes decisive: feeding in data from symbols that actually contain regularities, and avoiding overtraining the network. If these two conditions are met, the network will work perfectly.

But these two conditions are among the most difficult to satisfy. All it takes is a trader's "gut feeling" )))).

 
LeoV:

More specifically, the nature of the non-stationarity of financial markets is something we do not know and, unfortunately, cannot know.

We can only speculate. And even test our assumptions.)
 
 alsu: We can speculate. And even test assumptions)
Of course we can speculate, but it's like forecasting - no use. "A shot in the dark" ))))
 
LeoV:
We can certainly speculate, but it's like forecasting - no use. "A shot in the dark" ))))

It's not useless at all: if the input (the quote series) is described adequately, it gives quite a tangible statistical edge. To put it formally, the problem is to find a transform that takes the quote series as input and yields stationary Gaussian white noise as the residual at the output. If such a transformation is found, it means the model accounts for all the peculiarities of the quote's behavior. After that it is a matter of technique: analyze the model's current parameters and decide whether they currently give us an opportunity to exploit the situation. The task is a creative one, but so it goes - the "scientific poke" (trial and error) was, is and always will be the main method of scientific synthesis )))
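A minimal sketch of this "whitening transform" idea: fit a simple model to a series and check that the residual looks like stationary white noise (near-zero lag-1 autocorrelation). The AR(1) model and the synthetic series below are illustrative assumptions, not the poster's actual method:

```python
# Sketch, assuming an AR(1) structure: if the fitted model captures the
# series' dynamics, the residual should be close to white noise.
import numpy as np

def ar1_residuals(x):
    """Least-squares AR(1) fit x[t] ~ phi * x[t-1]; return phi and residuals."""
    x_prev, x_next = x[:-1], x[1:]
    phi = np.dot(x_prev, x_next) / np.dot(x_prev, x_prev)
    return phi, x_next - phi * x_prev

def lag1_autocorr(e):
    """Lag-1 autocorrelation; near zero for a whitened residual."""
    e = e - e.mean()
    return np.dot(e[:-1], e[1:]) / np.dot(e, e)

rng = np.random.default_rng(0)
noise = rng.standard_normal(10_000)
x = np.empty_like(noise)
x[0] = noise[0]
for t in range(1, len(x)):            # simulate a toy "quote-like" AR(1) series
    x[t] = 0.9 * x[t - 1] + noise[t]

phi, resid = ar1_residuals(x)
print(round(phi, 2))                  # recovers a value close to the true 0.9
print(abs(lag1_autocorr(resid)))     # small: residual is approximately white
```

In practice the transform would have to be far richer than AR(1), and residual whiteness would be checked with a proper test (e.g. Ljung-Box) rather than a single lag, but the success criterion is the same: quote series in, stationary white noise out.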

 
alsu:

It's not useless at all: if the input (the quote series) is described adequately, it gives quite a tangible statistical edge. To put it formally, the problem is to find a transform that takes the quote series as input and yields stationary Gaussian white noise as the residual at the output. If such a transformation is found, it means the model accounts for all the peculiarities of the quote's behavior. After that it is a matter of technique: analyze the model's current parameters and decide whether they currently give us an opportunity to exploit the situation. The task is a creative one, but so it goes - the "scientific poke" (trial and error) was, is and always will be the main method of scientific synthesis )))


I don't agree with such complications, but never mind.

In fact, in my opinion (and not only mine), everything is much simpler: if there are regularities in the quotes fed to the NN's inputs and outputs, the network will find them and you will be happy. And practically any NN will do. If there are no regularities, it is useless to transform, synthesize, build some kind of all-encompassing NN and occupy yourself with other scientific and mathematical problems - you will not find regularities where there are none )))).

It's like looking for a black cat in a dark room, especially if it isn't there ))))

 
LeoV:


I don't agree with such complications, but never mind.

In fact, in my opinion, everything is much simpler: if there are regularities in the quotes fed to the NN's inputs and outputs, the network will find them and you will be happy. And practically any NN will do. If there are no regularities, it is useless to transform, synthesize, build some kind of all-encompassing NN and busy yourself with all the other scientific and mathematical pokes )))).


I don't agree! The regularity may not be present everywhere, but may appear only at certain short-lived moments, which the NN will fail to detect because of its inertia. Personally, this is exactly my view of the quote series: there are short local pockets of inefficiency, and to work with them you need to detect them right at their start. For the network to be able to do that, it must be not just any network, but one with feedback between the layers - and not chosen at random, but according to some model; i.e., once again, some a priori knowledge has to be built into the NN.
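The "network with feedback between the layers" mentioned above is essentially a recurrent network. A minimal Elman-style cell, where the hidden state feeds back into the next step, can be sketched like this; the weights are random placeholders, and a real inefficiency detector would have its structure and weights chosen from a model and training data, as the poster insists:

```python
# Hypothetical sketch: one step of an Elman-style recurrent cell.
# The recurrent matrix W_rec is the "feedback between layers".
import numpy as np

def elman_step(x_t, h_prev, W_in, W_rec, b):
    """New hidden state from the current input and the previous hidden state."""
    return np.tanh(W_in @ x_t + W_rec @ h_prev + b)

rng = np.random.default_rng(1)
W_in = rng.standard_normal((4, 1)) * 0.5   # input -> hidden weights
W_rec = rng.standard_normal((4, 4)) * 0.5  # hidden -> hidden feedback weights
b = np.zeros(4)

h = np.zeros(4)                            # initial hidden state
for x_t in np.array([[0.1], [-0.2], [0.3]]):  # a toy 3-step input series
    h = elman_step(x_t, h, W_in, W_rec, b)    # state carries memory forward

print(h.shape)
```

Because the state at each step depends on the whole preceding sequence, such a cell can, in principle, react to the onset of a short-lived regime rather than averaging over it.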
 
alsu:

I don't agree! The regularity may not be present everywhere, but may appear only at certain short-lived moments, which the NN will fail to detect because of its inertia. Personally, this is exactly my view of the quote series: there are short local pockets of inefficiency, and to work with them you need to detect them right at their start. For the network to be able to do that, it must be not just any network, but one with feedback between the layers - and not chosen at random, but according to some model; i.e., once again, some a priori knowledge has to be built into the NN.

By the way, in the optimal control theory I mentioned, it has been proven that the problem of finding an optimal control law is, under certain conditions (e.g. the structurally quite simple "Witsenhausen counterexample" for a quadratic controller), NP-complete (i.e. computationally extremely hard), so it is no wonder that people try to solve it with exactly this kind of NN...
 
alsu:
I don't agree! The regularity may not be present everywhere, but may appear only at certain short-lived moments, which the NN will fail to detect because of its inertia. Personally, this is exactly my view of the quote series: there are short local pockets of inefficiency, and to work with them you need to detect them right at their start. For the network to be able to do that, it must be not just any network, but one with feedback between the layers - and not chosen at random, but according to some model; i.e., once again, some a priori knowledge has to be built into the NN.


Maybe, but why go through such investigations when money can be made by much simpler methods?

Your approach has some deep sense to it, but it raises a lot of questions with very few answers: how do you detect these inefficient areas? What exactly are the inefficiencies? How do you identify the nature of the feedback in relation to these models? How do you determine the correspondence between the models and the feedback? What a priori knowledge, and how do you tie it to the models together with the feedback? In short - brain cancer ))))

 
alsu:

By the way, in the optimal control theory I mentioned, it has been proven that the problem of finding an optimal control law is, under certain conditions (e.g. the structurally quite simple "Witsenhausen counterexample" for a quadratic controller), NP-complete (i.e. computationally extremely hard), so it is no wonder that people try to solve it with exactly this kind of NN...

OK, I give up )))) I'm out ))))