Machine learning in trading: theory, models, practice and algo-trading - page 3358

 
Andrey Dik #:
This topic was discussed back in the days of Porksaurus, Matemat, Granite and Metadriver, which was a long time ago.
I haven't seen this topic from them; maybe I just missed it (I used to read Cyberpaw more). It's about models whose outputs are probability distributions rather than specific numerical values. Not that this is a completely new approach, but in recent years there has been a noticeable rise of interest in the topic.
 
Aleksey Nikolayev #:
I haven't seen this topic from them; maybe I just missed it (I used to read Cyberpaw more). It's about models whose outputs are probability distributions rather than specific numerical values. Not that this is a completely new approach, but in recent years there has been a noticeable rise of interest in the topic.

Well, there have been many attempts, but I don't know of any successful public results. The simplest thing that has been done is to treat the output of a single neuron as the probability of a sell/buy in the range [-1.0; 1.0]; nothing good came of it, and applying a threshold doesn't help.
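For reference, a minimal sketch of the thresholding scheme described above (the function name and the threshold value are illustrative assumptions, not anyone's actual code):

```python
import numpy as np

def signal_from_neuron(output: float, threshold: float = 0.3) -> int:
    """Map a single neuron output in [-1.0, 1.0] to a trade signal.

    The magnitude is read as 'confidence': above +threshold -> buy,
    below -threshold -> sell, otherwise stay flat. The threshold value
    is arbitrary; the point above is that no choice of it helps when
    the raw output is not a real probability.
    """
    if output > threshold:
        return 1    # buy
    if output < -threshold:
        return -1   # sell
    return 0        # no trade

outputs = np.array([0.9, 0.1, -0.5, 0.25, -0.95])
print([signal_from_neuron(o) for o in outputs])  # -> [1, 0, -1, 0, -1]
```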

Another thing is that one could use the distribution of neuron outputs as a probability, but I have not seen anyone do it. For example, for the same sell/buy signals of the network's output neuron during training, the distribution of values can be very different, so the behaviour on OOS will differ too.
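One possible reading of "use the distribution of neuron outputs as a probability" (an interpretation, not a known implementation) is to map raw outputs through their empirical CDF on the training set and then check whether OOS outputs still follow the same distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw outputs of the signal neuron on the training set.
train_out = rng.normal(0.0, 0.4, 5000).clip(-1.0, 1.0)
sorted_train = np.sort(train_out)

def empirical_prob(x: float) -> float:
    """Percentile rank of x among the training outputs (empirical CDF)."""
    return np.searchsorted(sorted_train, x) / len(sorted_train)

# On OOS data the same raw value can land in a very different part of
# the distribution -- identical signals, different distribution,
# different OOS behaviour, which is the point made above.
oos_out = rng.normal(0.2, 0.6, 5000).clip(-1.0, 1.0)
print(empirical_prob(0.5), np.mean(oos_out <= 0.5))
```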

Besides, long ago I showed charts of training and OOS behaviour where the equity line continues without breaking, without spread of course, and the inputs were simply increments of a simple moving average from different timeframes, elementary stuff. And then some geniuses suddenly drew the "brilliant" conclusion that the spread affects behaviour on OOS.

 
Andrey Dik #:

Well, there have been many attempts, but I don't know of any successful public results. The simplest thing that has been done is to treat the output of a single neuron as the probability of a sell/buy in the range [-1.0; 1.0]; nothing good came of it, and applying a threshold doesn't help.

Another thing is that one could use the distribution of neuron outputs as a probability, but I have not seen anyone do it. For example, for the same sell/buy signals of the network's output neuron during training, the distribution of values can be very different, so the behaviour on OOS will differ too.

Besides, long ago I showed charts of training and OOS behaviour where the equity line continues without breaking, without spread of course, and the inputs were simply increments of a simple moving average from different timeframes, elementary stuff. And then some geniuses suddenly drew the "brilliant" conclusion that the spread affects behaviour on OOS.

Still, classification is a relatively simple special case: the output distribution is discrete, so everything reduces fairly easily to the usual "point", numerical ML problem.
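To illustrate that special case: a classifier's softmax output is already a discrete distribution, and it collapses to an ordinary point prediction via an expectation or an argmax (a generic sketch, not tied to any particular model):

```python
import numpy as np

logits = np.array([1.2, 0.3, -0.8])            # e.g. classes: buy, flat, sell
probs = np.exp(logits) / np.exp(logits).sum()  # discrete output distribution

class_values = np.array([1.0, 0.0, -1.0])      # numeric value of each class
point_pred = probs @ class_values              # expectation: a single number again
print(probs.round(3), point_pred)
```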

A broader approach is more interesting: models whose output is not a number but an arbitrary (within reason, of course) distribution. Examples are the ML used in reliability theory (where the distribution of lifetimes is studied) or in probabilistic weather forecasting (where a probability distribution is constructed for, say, the possible amount of precipitation).
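A minimal sketch of such a distribution-as-output model, under the assumption of a Gaussian output head trained by negative log-likelihood (reliability theory would more typically use a Weibull for lifetimes):

```python
import numpy as np

def gaussian_nll(params, y):
    """Negative log-likelihood of y under N(mu, sigma^2).

    params = (mu, log_sigma). A distributional model predicts such
    parameters per input and is trained by minimising this loss,
    so the output is a whole distribution, not a point.
    """
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    return np.mean(0.5 * ((y - mu) / sigma) ** 2 + log_sigma
                   + 0.5 * np.log(2.0 * np.pi))

# Toy example: fit an unconditional distribution to skewed data
# (a stand-in for precipitation amounts) by grid search over mu.
y = np.random.default_rng(1).gamma(2.0, 1.5, 1000)
mus = np.linspace(0.0, 10.0, 101)
best_mu = mus[np.argmin([gaussian_nll((m, 0.0), y) for m in mus])]
print(best_mu)  # close to the sample mean of y
```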

 
Aleksey Nikolayev #:

...

A broader approach is more interesting: models whose output is not a number but an arbitrary (within reason, of course) distribution. Examples are the ML used in reliability theory (where the distribution of lifetimes is studied) or in probabilistic weather forecasting (where a probability distribution is constructed for, say, the possible amount of precipitation).


That's exactly what I was saying: try to use the distribution, rather than the output values themselves, as the classifier.
 
Probabilities are obtained from an already trained model. Otherwise, why would you train it on probabilities? If the probabilities are already known, why train at all?

Moments of distributions in regression models for interval estimation? No, you haven't heard of those? Done much forecasting with them?
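For what it's worth, the "moments for interval estimation" idea looks like this as a sketch: if a regression model predicts the first two moments per observation, an approximate prediction interval follows directly (the normal approximation is an added assumption):

```python
import numpy as np

# Hypothetical per-observation predictions of the first two moments.
pred_mean = np.array([0.12, -0.05, 0.30])
pred_var = np.array([0.04, 0.09, 0.01])

# Approximate 95% prediction interval: mean +/- 1.96 * std.
std = np.sqrt(pred_var)
lower, upper = pred_mean - 1.96 * std, pred_mean + 1.96 * std
print(np.c_[lower, upper])
```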

What, you knew all this 20 years ago but were too embarrassed to say so? That's a long time to still not have moved past the over-optimisation phase.

It's sad, 20 years...
 

Forming a probability distribution during training, not after.

And what is the point of doing anything at all after training? A hypothetical machine dunce will not acquire new knowledge by being tweaked with a screwdriver after training.

 
Maxim Dmitrievsky #:

I have already described the example above. There is a classifier that passes OOS, but the returns are split 60/40. You don't like that, so you raise the decision threshold, but the picture doesn't change, and sometimes it even gets worse. You scratch your head over why that is.

The explanation was given: because with real probability estimates the situation would have to change.

A solution was given.
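As a sketch of what such a check and fix could look like, here is a reliability-curve test plus probability calibration with scikit-learn on synthetic data (this illustrates the general technique, not the specific solution referred to above):

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]

# If the scores were real probabilities, frac_pos would track mean_pred;
# only then does moving the decision threshold change the outcome mix.
frac_pos, mean_pred = calibration_curve(y_te, prob, n_bins=10)
print(np.c_[mean_pred, frac_pos])

# One common fix: wrap the model in a probability calibrator.
cal = CalibratedClassifierCV(RandomForestClassifier(random_state=0),
                             method="isotonic").fit(X_tr, y_tr)
cal_prob = cal.predict_proba(X_te)[:, 1]
```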


How do you find morons like this Karpov?

The man's head is a mess; he is incapable of coherent thought. It's just creepy!

From the first minutes he simply states that a classifier does not give probabilities. But where can you get a probability except from what the classifier outputs?

 
СанСаныч Фоменко #:

How do you find morons like this Karpov?

The man's head is a mess; he is incapable of coherent thought. It's just creepy!

And were you invited to work in England too, with that porridge in your head? )

The man doesn't care at all; he's doing fine.

Not getting the idea != the idea being presented wrongly. These are people of a slightly different formation; that's probably the problem.

It's been obvious for a long time that the topic needs new blood; I'm already an oldfag myself. Newcomers will come and show how it's done, provided the forum doesn't go mouldy before then, of course.

The worst part is that I understand what changes occur in the brain with age, and why people reason one way and not another. This obviousness is sometimes hilarious, but there's no getting away from it.

 
Maxim Dmitrievsky #:

And were you invited to work in England too, with that porridge in your head? )

The man doesn't care at all; he's doing fine.

Not getting the idea != the idea being presented wrongly. These are people of a slightly different formation; that's probably the problem.

It's been obvious for a long time that the topic needs new blood; I'm already an oldfag myself. Newcomers will come and show how it's done, provided the forum doesn't go mouldy before then, of course.

The worst part is that I understand what changes occur in the brain with age, and why people reason one way and not another. This obviousness is sometimes hilarious, but there's no getting away from it.

What's England got to do with it?

You seem to be a qualified person, but you are constantly drawn to the rubbish heap.

You very rarely raise substantive objections...

 
СанСаныч Фоменко #:

What's England got to do with it?

You seem to be a qualified person, but you are constantly drawn to the rubbish heap.

You very rarely raise substantive objections...

I took it from his webinar. What is not substantive about that?
Besides, all the method names are there; you can google them. He talked about two of his favourites.
He has done a lot of courses and went to England for a side job, at Google or Meta, I can't remember. For me, the rubbish heap is the local interlocutors :)

I have friends in good positions in IT, although I am far from it myself. One of them stood up an entire banking infrastructure. From time to time they put me in my place over some nonsense; sometimes they are surprised by my knowledge. Hence the interest in ML. So it's all clean and tidy.

I have nothing to do with this field, for the record; it's just for fun. I have zero maths background, I work purely on intuition. I mean, I wouldn't even pass some high-school maths course. Nor do I know any programming patterns.

If you brought a hardcore ML guy in here, he would blow you all to smithereens. So if you don't even understand me, he would be a god to you. But he will definitely not come to this zoo :) and he couldn't care less about your R.

And the first thing he would start with is that you are all cripples here, since you are stuck on this Forex :)