Machine learning in trading: theory, models, practice and algo-trading - page 913

 
Aleksey Vyazmikin:

It follows from these words that training on a large amount of data will give poor results, because the market changes. But then how does a primitive tree manage to create rules for this huge sample with a small margin of error? Randomness?

I would say cramming, which has nothing to do with generalization. It's like a first-grader who has memorized a text and stands at the blackboard reciting it without understanding the essence of what is being said...

 
Mihail Marchukajtes:

I would say cramming, which has nothing to do with generalization. It's like a first-grader who has memorized a text and stands at the blackboard reciting it without understanding the essence of what is being said...

"Cramming" is if there is one rule per line, but if there are 100-300 lines of sampling per rule? Randomness?

 
Maxim Dmitrievsky:

Learn how to do it properly.


Eh, Maximka... You made me do it... I see you've got H4 there - what period of time does that cover???

Learn how it's done properly. And that's just two weeks. What a beauty.... And it's not just one trade - look at the average profit and loss. So... I don't need to be taught, I'm a teacher. But my students have been stubborn lately: you tell them one thing, they do another..... It's not right...


 
Aleksey Vyazmikin:

"Ridiculous" is if there is one rule per row, but if there are 100-300 sample rows per rule? Random?

A crammer with a head like an elephant's. Meaning more memory, but that's not generalization....

 
Mihail Marchukajtes:

Why are you showing me a tester? I'm showing you a week's worth of live trades. What difference does it make which chart I opened for you :)

 
Maxim Dmitrievsky:

Why are you showing me a tester? I'm showing you a week's worth of live trades. What difference does it make which chart I opened for you :)

What tester? Come on, I trade ONLY on a real account...

And so.... Do you recognize it?

I was thinking of recording a video and posting my statement in three months, but I couldn't hold back now that we've started a show-off contest......

 
Mihail Marchukajtes:

Oh, right, the eternal flat... That's because you train your thing on a short period; how many times do I have to tell you to train it on at least six months?

 
Mihail Marchukajtes:

A crammer with a head like an elephant's. Meaning more memory, but that's not generalization....

So just a random guess?

I'm attaching a file covering one month - I hope for an expert opinion.

Files:
Pred_023_1M.zip  120 kb
 
Maxim Dmitrievsky:

Oh, right, the eternal flat... That's because you train your thing on a short period; how many times do I have to tell you to train it on at least six months?

All right, I'll let you in on a secret, but don't tell anyone. Okay???

The fact is that as the training sample grows there is such a thing as a critical sample length: add even one extra value beyond it and the quality of model training starts to plummet. Right away. And no matter how you train after that, you'll still get.... you get the idea. So my job is to make money, not to prove to everyone that I can train a model on half a year of data with a good learning curve.

I train the model so that its quality does not fall below a threshold of 0.71 by R-score, and do you know why? I'll tell you right away: because the entropy of the target is about 0.69, so I get a model that knows more about the output variable than the uncertainty of that variable itself. This is the kind of model that may work in the near future. Yes, that "near future" is estimated at 10-15 signals within 1-2 days. So this is work... If I have to, I will build the models every morning before the European session.

My goal is not to prove something to someone; my goal is to make money. It's not my fault that I have to work this hard for it. You have to accept reality as it is, not try to see what you want in it... I have a feeling I'll be posting a video tomorrow... You drive me crazy, Maxim...
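(A note on the 0.69 figure: for a roughly balanced binary target, the Shannon entropy in nats is ln 2 ≈ 0.693, which matches the number quoted above. The sketch below only illustrates that calculation with made-up data; the comparison against an R-score of 0.71 is the poster's own setup and is not reproduced here.)

import numpy as np

def binary_entropy_nats(y):
    """Shannon entropy of a binary target, in nats (natural log)."""
    p = np.mean(y)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

# Hypothetical balanced buy/sell target (not the poster's data):
# entropy comes out near ln(2) ~= 0.693, i.e. the ~0.69 quoted above.
y = np.random.randint(0, 2, size=1000)
print(round(binary_entropy_nats(y), 3))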

 
Mihail Marchukajtes:

All right, I'll let you in on a secret, but don't tell anyone. Okay???

)))) in short, I'll tell you: no, with this approach the results will be random (in the long term)

and your long-term chart shows that your overall expectation is less than zero - exactly the same as if you had trained your model over the whole period; it's simply not robust

try to think it over again in your head
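(On the "overall expectation is less than zero" remark: per-trade expectancy is win-rate times average win minus loss-rate times average loss. The numbers below are made up purely for illustration, not anyone's actual statistics from this thread.)

# Per-trade expectancy: win_rate * avg_win - (1 - win_rate) * avg_loss.
win_rate = 0.65          # share of winning trades (hypothetical)
avg_win = 4.0            # average profit per winning trade, in points (hypothetical)
avg_loss = 9.0           # average loss per losing trade, in points (hypothetical)

expectancy = win_rate * avg_win - (1.0 - win_rate) * avg_loss
print(expectancy)        # -0.55: many small wins can still sum to a negative expectation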

