Machine learning in trading: theory, models, practice and algo-trading - page 2464

 
Dmytryi Nazarchuk #:

And through which broker do you get access to the Moscow Exchange using MT?

Otkrytie, BKS, Finam. There are separate threads about it!
 
Mikhail Mishanin #:

You have interpreted my opinion in exactly the opposite way: in nature the target is the most practical one - the most "necessary" survives and reproduces. And you need to train on the most "practical" target without changing it in any way.

About the data - yes, that is the information fed to the input, but ideally the "eyes", "ears", "nose", etc. should have been formed/selected for it.

1) except that "To train a neural network, you need gigantic sets of carefully selected data".

To create a new neural network, you have to set up an algorithm, run all the data through it, test it, and optimize it repeatedly. This is difficult and time consuming. So sometimes it is easier to use simpler algorithms, such as regression.

2) At first I also thought of it as something simple (regression)... But linear regression is confusing because, as far as I remember, prices are non-linear while returns are linear (or is it the other way around?) - at least for options on futures... And it isn't as simple as ax^2 + bx + c = 0, where b is velocity and a is acceleration - the time factor has to be imposed on top of it. In principle it is imposed when options with different expiration dates are considered, but 3D pictures of that kind don't always justify themselves... And the saddest part of such analysis is that, according to the CME DB, all prices are aligned to the central strike, and I don't see a way to spot imbalances in the report - only by monitoring in real time (and I'm not an arbitrageur to get that involved with it)... And the logic is not so much linear as exponential, and I don't want to cross a crocodile with a rhinoceros (in a two-factor model - time and interest rate)... It seems that in option pricing everything is already crossed anyway...
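For what it's worth, a minimal toy sketch (my own illustration, not from the post) of fitting that quadratic to a log-price series - the prices, the use of np.polyfit and the "velocity/acceleration" reading of the coefficients are all assumptions for illustration, not a pricing model:

```python
# Toy sketch: least-squares fit of a*t^2 + b*t + c to a short log-price series,
# reading b as "velocity" and a as "acceleration" as in the post above.
import numpy as np

log_price = np.log(np.array([100.0, 100.5, 101.2, 100.9, 101.8, 102.4]))  # made-up data
t = np.arange(len(log_price), dtype=float)          # time index (the "time factor")

a, b, c = np.polyfit(t, log_price, deg=2)           # coefficients, highest degree first
print(f"a (acceleration) = {a:.6f}, b (velocity) = {b:.6f}, c = {c:.6f}")

# Crude one-step-ahead extrapolation under this quadratic trend:
t_next = t[-1] + 1
print("extrapolated log-price:", a * t_next**2 + b * t_next + c)
```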

3)

Mihail Marchukajtes #:
Everything about the target is correct: ideally it goes from signal to signal - if the signal is profitable, put 1, if it is unprofitable, put 0, and nothing else; the only caveat is that profit must be calculated with the spread taken into account!!!

This is another possible machine learning algorithm - one using Bayes' theorem (that is how the description read to me)... "These algorithms are used for working with text documents - for example, for spam filtering."... Neural networks, on the other hand, are more complex - layer upon layer (deep learning)...
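As an aside, a minimal toy sketch of the two ideas above - the 0/1 signal-to-signal target with the spread taken into account, and a naive-Bayes classifier of the "Bayes' theorem" kind (in spam filtering the features would be word counts). All prices, features and the spread are made up for illustration:

```python
# Sketch: build the binary target described in the quote, then fit naive Bayes on toy features.
import numpy as np
from sklearn.naive_bayes import GaussianNB

entry_prices = np.array([1.1000, 1.1035, 1.1010, 1.1005, 1.1042])  # price at each signal
directions   = np.array([+1,     -1,     +1,     -1,     +1])       # +1 buy, -1 sell
spread       = 0.0002                                               # cost per trade

# Profit of holding from one signal to the next, net of spread
pnl    = directions[:-1] * np.diff(entry_prices) - spread
target = (pnl > 0).astype(int)        # 1 = profitable signal, 0 = unprofitable
print("target:", target)              # -> [1 1 0 0]

# Toy features observed at each signal, e.g. [momentum, volume change]
X = np.array([[0.4, -0.1], [0.8, 0.3], [-0.5, 0.2], [-0.9, -0.4]])

model = GaussianNB().fit(X, target)
print(model.predict_proba([[0.6, 0.0]]))   # P(class 0), P(class 1) for a new signal
```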

Anyway, looking at these 3 machine learning algorithms, so far I understand that, in terms of probability theory, it is NOT so easy NOT to cross out the target function in your model, lest you end up overfitting, going from best to worst and, most importantly, just relying on history... algorithm + data + conditions for selection into the next layer = seems the most logical (although only when dealing with large amounts of data, and they are not always available).

Probably that's why (with two of the three algorithms falling away) neural-network layers look more promising in trading than plain regression or Bayes' theorem... But in essence, in my opinion, so far it all comes down to banal programming of a decision-making process, only with huge samples used to back the robot's decisions with statistics... which, alas, we don't have much of, and which doesn't pin down a specific output so much as the range of possible outputs... the same Range... in which the price usually floats... (since a floating exchange rate itself generates volatility)...

And you can't code a trader's decision-making process (and his self-learning) without having learned it yourself... If a trader has trained himself, then he will have something to pass on to the robot, but, of course, he cannot pass the error-analysis algorithm on to the math model (as long as it lives only in the coder's brain)... imho

OK, I'll think at my leisure about how not to turn everything upside down (so as not to get overfitted for the worse)... Pr, OI, Volume - that's only part of the data behind a trader's expectations & decisions anyway, and supply & demand are born from them, not from mathematical models... imho

(I have counted 5 factors, not taking into account the fiscal and monetary policy of the 2 countries present in the quote.)

Mikhail Mishanin , thanks for the tip.

Though one nuance seems to slightly contradict your view (it's not entirely clear from the quote at the link):

Essentially, the hidden layers perform some mathematical function. We do not set it - the program learns to produce the result itself.

Sounds like a dumb brute-force approach... (like an algorithm for password cracking, for example)
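For context, a minimal sketch (my own toy example) of what "the program learns the function itself" looks like in practice: the hidden-layer weights are adjusted by gradient-based optimization on the error, not by enumerating candidates like a password cracker. The sin(x) target and all settings here are arbitrary:

```python
# Tiny MLP learning y = sin(x) from samples: the "function" emerges from weight fitting.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))   # inputs
y = np.sin(X).ravel()                   # target function to approximate

mlp = MLPRegressor(hidden_layer_sizes=(16, 16), activation="tanh",
                   max_iter=5000, random_state=0)
mlp.fit(X, y)                           # gradient descent on the squared error
print("mean squared error on the training grid:", np.mean((mlp.predict(X) - y) ** 2))
```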

p.s. And yet:

Igor Makanu #:

ML doesn't remember history; maybe you're talking about retraining the model

That's why, at the current moment, you can't gather enough data to feed the network layers instead of a simple decision-making process... so I'm still leaning towards possible usefulness in RM, but not in TM.

Artificial intelligence, machine learning and deep learning: what is the difference
  • skillbox.ru
A computer can easily diagnose cancer, drive a car and is able to learn. So why haven't machines taken over humanity yet?
 
Before you can teach a neural network, you have to know something yourself. As for trading: if you don't know how to trade with your own hands, a robot won't help.
 

As for the forum: if you don't know how to draw conclusions from feedback, you can make such a robot even WITHOUT machine learning, if your own brain doesn't help... (addressed to yet another troll reposter with no brains).

Machine learning systems make it possible to quickly apply learned knowledge to large data sets, which lets them excel at tasks such as face recognition, speech recognition, object recognition, translation, and many others.
 
JeeyCi #:

As for the forum: if you don't know how to draw conclusions from feedback, you can make such a robot even WITHOUT machine learning, if your own brain doesn't help... (addressed to yet another troll reposter with nothing to say).

+
 
Igor Makanu #:

ML does not remember history; perhaps you are talking about retraining the model

How does it not remember? That is exactly what it does.
Have you come across the expression "neural-network-based databases"? I came across it once, and I think it is the best definition of what NNs/trees are.

One tree can be trained down to the last split, and then it will remember absolutely all of the history with absolute precision (you get an overfitted model).
If you stop splitting not at the very last split but a little earlier (for example, at 10 examples per leaf), you get memory with generalization, averaging the results of those 10 most similar examples. There will be less overfitting. I.e., you have to stop splitting at the point where underfitting starts turning into overfitting. This is the main and most difficult task.
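A small sketch of that trade-off (my own toy example, not from the post): a tree grown to the last split scores perfectly on the training history but worse on fresh data than one stopped at 10 samples per leaf. The data and parameters are arbitrary:

```python
# Fully grown tree memorizes the training set; min_samples_leaf=10 averages the
# 10 most similar examples per leaf and generalizes better on fresh data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 1))
y = np.sign(X).ravel() + rng.normal(scale=0.5, size=300)   # noisy target

full_tree   = DecisionTreeRegressor().fit(X, y)                      # split to the end
pruned_tree = DecisionTreeRegressor(min_samples_leaf=10).fit(X, y)   # stop earlier

X_new = rng.uniform(-1, 1, size=(300, 1))                  # fresh "future" data
y_new = np.sign(X_new).ravel() + rng.normal(scale=0.5, size=300)

print("train R^2, full vs pruned:", full_tree.score(X, y), pruned_tree.score(X, y))
print("new   R^2, full vs pruned:", full_tree.score(X_new, y_new), pruned_tree.score(X_new, y_new))
```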

 
elibrarius #:

How does it not remember? That is exactly what it does.
Have you ever come across the expression "neural-network-based databases"? I came across it once, and I think it is the best definition of what NNs/trees are.

One tree can be trained down to the last split, and then it will remember absolutely all of the history with absolute precision (you get an overfitted model).
If you stop splitting not at the very last split but a little earlier (for example, at 10 examples per leaf), you get memory with generalization, averaging the results of those 10 most similar examples. There will be less overfitting. I.e., you should find the splitting depth with minimal overfitting.


Applied to Forex: it remembers the history and makes a trade based on it?
 
Vladimir Baskakov #:
Applied to Forex: remembering the history and making a trade based on it?

Yes. We hope that history will repeat itself. Probably in vain. But we have nothing else to hope for.

 
elibrarius #:

Yes. We hope history will repeat itself. Perhaps in vain. But we have nothing else to hope for.

They themselves put a warning in Signals: past results do not mean it will be the same in the future. Funny.
 
Vladimir Baskakov #:
They themselves put a warning in Signals: past results do not mean it will be the same in the future. Funny.

What's funny about that? No one can guarantee the actions of other people.

Machine learning, for now, only works on static data - Maximka has just proved it.