Machine learning in trading: theory, models, practice and algo-trading - page 594

Couldn't sleep, so I did a little reading on the Internet. I liked this:
"The fact that increments are used is not really so bad against the general background, most of the logarithmic price is fed to the input, the increments is a step forward, although both and so the fit.
I know people who pulled a grail out of an NS, but those guys are so closed to communication that they won't even hint at what they do, so I'm sure a beginner like me has no chance. I only know that everything there is complicated: it's not WealthLab, not MetaTrader, and not even S#, but C++ and MatLab with some components that decode and interpret data coming from colliders. It turned out to be the same methodology; when I heard that, I got scared. Working with them is a guy who used to grind through terabytes a day at CERN, looking for new particles in quantum chaos."
That's funny. I stand by my opinion: the input to the NS should be pure as a tear, price increments. The increments are the key to everything; they form the basis of the solution to this problem. In fact, in Forex we are following a pseudo-stationary process: the movement of a wave packet (the probability density function) of these increments. And nothing more. (I wrote this paragraph already :)))
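For what it's worth, "price increments" in this context usually means log returns, i.e. differences of log prices. A minimal sketch in Python, with the array of closes made up purely for illustration:

```python
import numpy as np

def log_returns(close: np.ndarray) -> np.ndarray:
    """Log-price increments: r[t] = ln(p[t]) - ln(p[t-1])."""
    return np.diff(np.log(close))

# Toy close prices for illustration only
close = np.array([1.1012, 1.1020, 1.1015, 1.1031])
print(log_returns(close))
```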
It is a remarkable fact from B. Fritzke's biography that in 2001 he ended his scientific career at Ruhr University (Bochum, Germany) after receiving a job offer from the German Stock Exchange (Deutsche Börse). I won't hide that this fact served as an additional incentive for choosing his algorithm as the basis for this article.
https://www.mql5.com/ru/articles/163
In general, I am amazed... to sit down at my computer in the evening... and, why not, write a, uh... growing neural gas... for example...
HOWEVER, it amazes me how smart people are
...an NS that plays with itself in forex...
...well, Yuri already wrote about it...
What kind of neural network can do that? I somehow missed those posts.
Well, reinforcement learning in general, but there may be variations like Q-learning, for example.
It is not exactly neurodynamics, because it is trained in a different way, but the topology and weights of the neurons do not change afterwards, once it is already trained, it seems.
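To make the Q-learning reference concrete, here is a minimal tabular sketch. The states, actions, and reward are all invented stand-ins; a real trading setup would define them from market data and P&L:

```python
import numpy as np

# Tabular Q-learning sketch: states/actions are toy placeholders.
n_states, n_actions = 10, 3       # e.g. a discretized market state; hold/buy/sell
alpha, gamma, eps = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    """Hypothetical environment: returns (next_state, reward)."""
    return rng.integers(n_states), rng.normal()   # stand-in for real feedback

s = 0
for _ in range(10_000):
    # epsilon-greedy action selection
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = step(s, a)
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```

As the post says, once training is done the table (or network) is frozen and only used for action selection.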
So that's it. But you keep going on about MQL, yes MQL. That's what I mean. It's no good if we are going to do DM. Imho.
Everyone is talking about what to feed to the input. But in my opinion, what to feed to the output is just as important. If you feed a zigzag as the target, the network does not train at all, whatever the inputs. If you balance the classes, i.e. remove most of the samples that show no reversal, the result also does not withstand any criticism. If we set as the output whether the average price of the next bar will be higher or lower than the previous one, we get exactly 50% correct answers, which is also no good. What else can we come up with?
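For concreteness, here is one way to build the "next bar's mean higher or lower" target the post describes and to check its class balance. The price series is a made-up random walk, which is exactly why the balance comes out near 50%:

```python
import numpy as np

def make_labels(mean_price: np.ndarray) -> np.ndarray:
    """1 if the next bar's mean price is above the current one, else 0."""
    return (mean_price[1:] > mean_price[:-1]).astype(int)

# Toy data: in practice mean_price could be (high + low) / 2 per bar.
rng = np.random.default_rng(1)
mean_price = np.cumsum(rng.normal(size=1000)) + 100.0

y = make_labels(mean_price)
print("class balance:", y.mean())   # ~0.5 on a random walk, as the post notes
```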
Hello, have you finished the robot with AI?
It's time to test it ))
An interesting thought: https://monographies.ru/en/book/section?id=2465
When modeling neural networks with linear neuron activation functions, it is possible to build an algorithm that guarantees an absolute minimum of the learning error. For neural networks with nonlinear activation functions it is generally impossible to guarantee reaching the global minimum of the error function.
...............
In the case of a linear network model and a sum-of-squares error function, this error surface is a paraboloid with a single minimum, which makes that minimum quite easy to find.
In the case of a nonlinear model, the error surface has a much more complex structure and a number of unfavorable properties; in particular, it can have local minima, flat areas, saddle points, and long narrow ravines.
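The "single minimum" in the linear case is just ordinary least squares: the sum-of-squares error of a linear model is a paraboloid whose unique bottom has a closed-form solution (the normal equations). A quick sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))             # toy features
w_true = np.array([0.5, -1.0, 2.0])
y = X @ w_true + 0.1 * rng.normal(size=200)

# Closed-form global minimum of the sum-of-squares error:
# w* = (X^T X)^(-1) X^T y  -- the paraboloid's unique bottom.
w_star, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_star)   # recovers w_true up to noise; no local minima to get stuck in
```

With a nonlinear activation there is no such closed form, and gradient descent can land in any of the local minima or stall on the flat areas the quote mentions.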
Maybe we should use more neurons for non-linear activation functions? To smooth out all these irregularities.
Everyone is talking about what to feed to the input. But in my opinion, what to feed to the output is just as important.
When you enter a building called "Statistics," it says "Garbage in, garbage out" above the entrance.
))
Firstly, it was not me but Maxim talking about MQL, which is why I actually wrote that it didn't go anywhere.
Secondly, I myself use MQL only as an interface, which means that, if necessary, I can change the interfaces without changing the analysis system.