Machine learning in trading: theory, models, practice and algo-trading - page 1393

 
Maxim Dmitrievsky:

The classic version won't work, unfortunately... it's also all about the features/targets.

Don't be so dramatic. The classic NN has been working for more than a year now. There is some progress with NN regression too. I don't know about RL, but there are no problems at all with the classics. You just need to formulate the task properly, not in the spirit of "I want a firebird", and you won't have any problems. You don't have to predict the price of a candle.)
 
Yuriy Asaulenko:
Don't be so dramatic. The classic NN has been working for more than a year now. There is some progress with NN regression too. I don't know about RL, but there are no problems at all with the classics. You just need to formulate the task properly, not in the spirit of "I want a firebird", and you won't have any problems. You don't have to predict the price of a candle.)

I'm talking about my experience.

 
Maxim Dmitrievsky:

I'm talking about my experience.

So far I have only a vague idea of how to apply RL to the market; I don't know what to do with it yet. But the topic is very interesting. If there is no progress in a week or so, I'll drop it. That's all.
P.S. What is interesting about RL is the possibility of making the trading system fully autonomous, including managing and closing the trade. If that fails, then switching from the usual MLP and RF to RL doesn't make much sense. How to do it? I have no idea.
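
As a rough illustration of what "fully autonomous" could mean in an RL framing: the agent itself decides when to open, hold and close a position, and the reward is the realized PnL. Everything below (the toy random-walk prices, the function names, the random policy) is an assumption for illustration only, not anyone's actual trading system.

```python
import random

ACTIONS = ("hold", "open", "close")

def run_episode(prices, policy):
    """Walk a price series; the policy controls the whole trade lifecycle."""
    entry_price = None                 # None = flat, otherwise the entry price
    total_reward = 0.0
    for price in prices:
        state = (price, entry_price is not None)
        action = policy(state)
        if action == "open" and entry_price is None:
            entry_price = price
        elif action == "close" and entry_price is not None:
            total_reward += price - entry_price   # reward only on realized PnL
            entry_price = None
    return total_reward

prices = [100.0]
for _ in range(200):                   # toy random-walk prices
    prices.append(prices[-1] + random.uniform(-1, 1))
print(run_episode(prices, lambda state: random.choice(ACTIONS)))
```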
 
Yuriy Asaulenko:
So far I have only a vague idea of how to apply RL to the market; I don't know what to do with it yet. But the topic is very interesting. If there is no progress in a week or so, I'll drop it. That's all.
P.S. What is interesting about RL is the possibility of making the trading system fully autonomous, including managing and closing the trade. If that fails, then switching from the usual MLP and RF to RL doesn't make much sense. How to do it? I have no idea.

Do you understand the difference between supervised learning and reinforcement learning? They are entirely different approaches. The only thing they have in common is that an NN is used as the approximator.
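
To make the contrast concrete: with supervised learning the target for the approximator is a given label, while in RL (here a TD/Q-learning style update) the target is built from the observed reward plus the approximator's own estimate of the next state. The tiny linear "net" and the numbers below are purely illustrative assumptions.

```python
import numpy as np

w = np.zeros(3)                        # weights of a toy linear approximator
def net(x):
    return float(w @ x)                # the same approximator serves both cases

# Supervised: the (features, label) pair is given in advance.
x, y_true = np.array([1.0, 0.2, -0.5]), 1.0
w += 0.1 * (y_true - net(x)) * x       # step toward the known label

# Reinforcement (TD-style): no label; the target is bootstrapped from the reward.
x_next, reward, gamma = np.array([1.0, 0.1, -0.4]), 0.3, 0.99
td_target = reward + gamma * net(x_next)
w += 0.1 * (td_target - net(x)) * x    # step toward reward + discounted estimate
print(w)
```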

 
Maxim Dmitrievsky:

Do you understand the difference between supervised learning and reinforcement learning? They are entirely different approaches. The only thing they have in common is that an NN is used as the approximator.

Of course I understand.) What does that have to do with it? I'm talking about the final result, not the principles. If the result is the same, there is no point in a more complex solution, and it makes no difference what the underlying principles are. So far I can see that RL, applied to opening a trade, does not give any new quality.
 
Yuriy Asaulenko:
Of course I understand.) What does that have to do with it? I'm talking about the final result, not the principles. If the result is the same, there is no point in a more complex solution, and it makes no difference what the underlying principles are.

Too abstract... a different principle means a different approach to solving the problem, and different results.

People have dedicated their lives to this, Sutton for example, so "quickly" mastering and applying it is out of the question. There is some very complicated stuff in there, some of it quite recent.
 
Maxim Dmitrievsky:

Too abstract... a different principle means a different approach to solving the problem, and different results.

Your results with RL are neither better nor worse than others'. What does the approach have to do with it? It's the results that matter, and they are roughly the same as for a supervised MLP on trade opening. Even if yours are slightly better, it doesn't change anything significantly. What is needed from applying RL is a qualitative leap.
Don't get me wrong, this is not a criticism of your approach at all. You are doing a good job.
 
Maxim Dmitrievsky:

Too abstract... a different principle means a different approach to solving the problem, and different results.

People have dedicated their lives to this, Sutton for example, so "quickly" mastering and applying it is out of the question. There is some very complicated stuff in there, some of it quite recent.

Judging by your article, it isn't such a complicated thing that it takes a long time to master.

Before the first training a random target is set, and then after each training cycle the target is kept if it brought a profit and changed if it brought a loss.
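
A minimal sketch of how such a relabelling heuristic could look: random buy/sell targets to start, then after each cycle the labels of profitable trades are kept and the losing ones are flipped. The profit rule (sign of the per-bar return) and all names here are simplifying assumptions, not the article's exact code.

```python
import random

def make_random_targets(n):
    return [random.choice((0, 1)) for _ in range(n)]      # 0 = sell, 1 = buy

def relabel(targets, returns):
    """Keep a label if the trade it implies was profitable, flip it otherwise."""
    new_targets = []
    for label, r in zip(targets, returns):
        profit = r if label == 1 else -r                   # buy earns +r, sell earns -r
        new_targets.append(label if profit > 0 else 1 - label)
    return new_targets

returns = [0.4, -0.2, 0.1, -0.5, 0.3]                      # toy per-bar returns
targets = make_random_targets(len(returns))
for _ in range(3):                                         # a few training/relabelling cycles
    # ... a classifier would be (re)trained on the current targets here ...
    targets = relabel(targets, returns)
print(targets)
```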

 
Yuriy Asaulenko:
Your results with RL are neither better nor worse than others'. What does the approach have to do with it? It's the results that matter, and they are roughly the same as for a supervised MLP on trade opening. Even if yours are slightly better, it doesn't change anything significantly. What is needed from applying RL is a qualitative leap.
Don't get me wrong, this is not a criticism of your approach at all. You're doing just fine.

As for results: I haven't seen anything in this thread even remotely similar to mine.

The only results I've seen are from fxsaber, and those are not ML in the full sense of the word.

And there's no need to even mention the napkin backtests.

I don't take it as criticism; I'm just saying that it is a very complex approach, and statements like "I'll spend a couple of weeks on it and everything will be fine" amuse me.

 
Elibrarius:

Judging by your article, it isn't such a complicated thing that it takes a long time to master.

Before the first training a random target is set, and then after each training cycle the target is kept if it brought a profit and changed if it brought a loss.

Even about such a seemingly simple thing nobody here had written anything, just as nobody had written about RL in general, the ALGLIB random forests, etc., until I brought up the topic.

So what are we even talking about... You only see that "random target", and you can't think of how to attach something more complicated to it, because looking at a ready-made solution and saying it's easy is always easy; improving it is another matter...

There is only chatter about how smart everyone is, while in fact only the obvious neural network settings get discussed, not the complex approaches.

Asaulenko fed 20 returns into his network and is happy... isn't that funny?
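
For context on the "20 returns" remark: a common way to build such inputs is a sliding window of the most recent price increments per sample. The synthetic prices, the function name and the window handling below are illustrative assumptions, not Asaulenko's actual setup.

```python
import numpy as np

def lagged_return_features(prices, n_lags=20):
    """Each row holds the n_lags most recent returns preceding that bar."""
    returns = np.diff(prices) / prices[:-1]
    rows = [returns[i - n_lags:i] for i in range(n_lags, len(returns) + 1)]
    return np.array(rows)

prices = np.cumsum(np.random.randn(300)) + 1000.0          # toy price series
X = lagged_return_features(prices)
print(X.shape)                                              # (samples, 20) feature matrix
```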
