Machine learning in trading: theory, models, practice and algo-trading - page 627

 
Aleksey Terentev:

I was wondering why he deleted the first post. He had posted the scheme on his blog. =)

Yeah, I just got tired of searching for my own posts on the forum :D at least now I can just give a link
 
Maxim Dmitrievsky:
Yeah, I just got tired of searching for my own posts on the forum :D at least now I can just give a link
That's why I wrote the article: so I don't have to repeat the same thing to different people...
 
Mihail Marchukajtes:

There isn't enough here for an article yet; if the results turn out to be interesting, it can be done later.

Because I'm lazy and slow about it, working with week-long breaks :)

 
Maxim Dmitrievsky:

There isn't enough here for an article yet; if the results turn out to be interesting, it can be done later.

Because I'm lazy and slow about it, working with week-long breaks :)

Mine took about a month with the editing... from the first draft to publication...
 
Maxim Dmitrievsky:

I sketched a scheme of a new network; this is the first description. A continuation will follow (I hope).

I created a blog to keep a record, because I'm tired of hunting for scraps of ideas on the forum

https://rationatrix.blogspot.ru/2018/01/blog-post.html

You're confusing warm with soft, as the saying goes. Feeding the network its own results is bad practice.

You will simply confuse the network, and it will not be able to understand why the same pattern rises in one case and falls in another.

That is because previous results have been mixed into the data. This applies both to the result of the previous trade and to the equity value.

You are trying to build a fitness function into the body of the network itself. But knowing what is good and what is bad is external knowledge; it has nothing to do with the network's mechanism.

If you want to feed it history, move from an MLP to recurrent networks.

But your hypothesis is (IMHO) a cat-dog chimera.

PS And yes, since I'm already here: you wrote that there are problems with overfitting. Modern neural network research solved that problem long ago.

The simplest, clearest and most effective method is cross-validation (google it).
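A minimal sketch of the k-fold cross-validation Nikolay mentions (Python; the fit and score callables are hypothetical placeholders standing in for any model):

```python
import numpy as np

def k_fold_cv(X, y, k, fit, score):
    """Average held-out score over k folds; each fold is validated on once."""
    folds = np.array_split(np.arange(len(X)), k)
    scores = []
    for i, test_idx in enumerate(folds):
        # train on the other k-1 folds, validate on the held-out fold
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        model = fit(X[train_idx], y[train_idx])
        scores.append(score(model, X[test_idx], y[test_idx]))
    return float(np.mean(scores))
```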

 
Nikolay Demko:

You're confusing warm with soft, as the saying goes. Feeding the network its own results is bad practice.

You will simply confuse the network, and it will not be able to understand why the same pattern rises in one case and falls in another.

That is because previous results have been mixed into the data. This applies both to the result of the previous trade and to the equity value.

You are trying to build a fitness function into the body of the network itself. But knowing what is good and what is bad is external knowledge; it has nothing to do with the network's mechanism.

If you want to feed it history, move from an MLP to recurrent networks.

But your hypothesis is (IMHO) a cat-dog chimera.

Judging by this, for you all neural networks boil down to a single concept - the perceptron. IMHO, you are not even aware that it is only the visible tip of the iceberg, and a tiny one at that.

And no one has ever truly beaten overfitting. You don't have much understanding of ML.
 
Nikolay Demko:

You're confusing warm with soft, as the saying goes. Feeding the network its own results is bad practice.

You will simply confuse the network, and it will not be able to understand why the same pattern rises in one case and falls in another.

That is because previous results have been mixed into the data. This applies both to the result of the previous trade and to the equity value.

You are trying to build a fitness function into the body of the network itself. But knowing what is good and what is bad is external knowledge; it has nothing to do with the network's mechanism.

If you want to feed it history, move from an MLP to recurrent networks.

But your hypothesis is (IMHO) a cat-dog chimera.

I'd like to do something along the lines of a reinforcement network. Recurrent networks, unfortunately, have no concept of an environment and no feedback; reinforcement networks do. How to pull that off? The first thing that comes to mind is to kick it when its out-of-sample performance is unsatisfactory, e.g. via equity.

Maybe it is a chimera, I don't know, imho... it's just fun :) and it's easy, it doesn't take much time.
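A minimal sketch of one reading of Maxim's idea, assuming the "kick" is a reward derived from each trade's equity change: the reward scales the weight update but never enters the input features. The toy linear policy and all names here are hypothetical, not his actual network:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=8)            # weights of a toy linear "policy"

def policy_step(features, w):
    return np.tanh(features @ w)              # position size in [-1, 1]

def update(w, features, price_return, lr=0.01):
    position = policy_step(features, w)
    # reward = equity change produced by this trade; gradient ascent on it:
    # d(reward)/dw = price_return * (1 - position**2) * features
    grad = price_return * (1 - position**2) * features
    return w + lr * grad

# toy loop: features and returns are random stand-ins for real market data
for _ in range(100):
    features = rng.normal(size=8)
    price_return = rng.normal(scale=0.01)
    w = update(w, features, price_return)
```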

 
Nikolay Demko:

PS And yes, since I'm already here: you wrote that there are problems with overfitting. Modern neural network research solved that problem long ago.

The simplest, clearest and most effective method is cross-validation (google it).

I know all that; cross-validation is also fitting, just a subtler kind.

A recurrent network also loops back on itself and sometimes cannot learn.

And I don't quite get it - you say you can't feed the network's outputs to its inputs, and then you suggest recurrence... :) that's exactly what it does, it eats its own outputs.

A recurrent network is, in the simplest case, an ordinary MLP that eats its own output.
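A minimal sketch of that point: an Elman-style recurrent cell is just an MLP whose previous output is concatenated back onto its input at the next step (class and variable names here are illustrative only):

```python
import numpy as np

class TinyRNNCell:
    """Elman-style cell: an MLP fed its own previous output at every step."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # one weight matrix over [input, previous hidden] - the "MLP" part
        self.W = rng.normal(scale=0.1, size=(n_in + n_hidden, n_hidden))
        self.h = np.zeros(n_hidden)           # the output the net "eats" back

    def step(self, x):
        self.h = np.tanh(np.concatenate([x, self.h]) @ self.W)
        return self.h

cell = TinyRNNCell(n_in=3, n_hidden=4)
for x in np.random.default_rng(1).normal(size=(5, 3)):   # a 5-step sequence
    h = cell.step(x)                          # fed back in at the next step
```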

 
Maxim Dmitrievsky:

I know all that; cross-validation is also fitting, just a subtler kind.

A recurrent network also loops back on itself and sometimes cannot learn.

And I don't quite get it - you say you can't feed the network's outputs to its inputs, and then you suggest recurrence... :) that's exactly what it does, it eats its own outputs.

A recurrent network is, in the simplest case, an ordinary MLP that eats its own output.

No, what I was saying is that you can't mix market data with the network's own performance.

In other words, the network processes quotes, while you also feed it data on whether the previous trade was successful or not.

And in general, whether the network performed well or not is a separate unit (in GA I used to call it the fitness function; in NNs it is called the error function, but the essence is the same).

Suppose you train the network with backprop: then the error becomes part of the data - butter on butter, a tautology. I hope you understand what I mean.
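A minimal sketch of the separation Nikolay describes, assuming a toy one-layer network: the error function judges the predictions from outside and drives the weight update, but is never appended to the inputs. All names and the toy target are illustrative:

```python
import numpy as np

def forward(w, X):
    return np.tanh(X @ w)                     # the network sees market data only

def train_step(w, X, target, lr=0.1):
    pred = forward(w, X)
    # the error function judges from outside: MSE shapes w, not the inputs
    grad = X.T @ ((pred - target) * (1 - pred**2)) * (2 / len(X))
    return w - lr * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))                  # price-derived features only
target = np.sign(X[:, 0])                     # toy target, not a trade result
w = rng.normal(scale=0.1, size=5)
for _ in range(200):
    w = train_step(w, X, target)              # error drives updates, never X
```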

 
Aleksey Terentev:

Judging by this, for you all neural networks boil down to a single concept - the perceptron. IMHO, you are not even aware that it is only the visible tip of the iceberg, and a tiny one at that.

And no one has ever truly beaten overfitting. You don't have much understanding of ML.

I was talking about the specific network Maxim presented.

And if you could not understand three sentences... well then, why should I explain anything to you ).
