Machine learning in trading: theory, models, practice and algo-trading - page 716

 
Alexander_K2:

Michael, how did the experiments with entropy/negentropy end?

My brothers in the mind advised me to install R and sent me a couple of packages with code snippets. As a result, I have dropped all my Excel calculations and now use ready-made packages. In R I calculate the number of input variables that are important for the output, for each row depth of the table and for several candidate outputs. The result depends on the table size and on the output variable; I choose the output that has the maximum number of important inputs at the maximum sampling depth. Then I train the models on the prepared sample. And you know, in all my years of training AIs, this is the first time the tests have shown such stable, satisfactory results. But all the tests are worthless until the signal goes up, and when it goes up and people pay attention to it, everyone will start rereading my article and trying to understand what I did and how. In the end it does not matter at all how I did it, as long as the result is positive; whether a machine or an AI helped does not matter either. The important thing is the end result!!!!!
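A rough sketch of the selection loop described above, purely as an illustration: the use of randomForest importance scores, the above-average threshold, and all the toy data and names are assumptions of this sketch, not necessarily what is actually used.

```r
# Sketch: for several candidate outputs, count how many inputs clear an
# importance threshold, then pick the output with the most such inputs.
# randomForest, the toy data, and the threshold are all assumptions here.
library(randomForest)

set.seed(3)
inputs <- as.data.frame(matrix(rnorm(300 * 8), ncol = 8))
colnames(inputs) <- paste0("x", 1:8)
outputs <- data.frame(out1 = factor(sample(0:1, 300, replace = TRUE)),
                      out2 = factor(ifelse(inputs$x1 + inputs$x2 > 0, 1, 0)))

n_important <- sapply(outputs, function(y) {
  imp <- importance(randomForest(inputs, y))   # MeanDecreaseGini per input
  sum(imp[, 1] > mean(imp[, 1]))               # inputs above average importance
})
n_important                                    # counts per candidate output
best_output <- names(which.max(n_important))   # the output to train models on
```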

 

And here is a well-worn link about where all movement in the market begins. It gets interesting from minute 20 on; that's where he gets to the point...

https://www.youtube.com/watch?v=d4XzMqHNeew

I posted this for those who think my knowledge of the market is ridiculous. I may know very little about the market, but I look at it soberly, without illusions or rose-colored glasses. I wish you the same.....
 
Mihail Marchukajtes:

My brothers in the mind advised me to install R and sent me a couple of packages with code snippets. As a result, I have dropped all my Excel calculations and now use ready-made packages. In R I calculate the number of input variables that are important for the output, for each row depth of the table and for several candidate outputs. The result depends on the table size and on the output variable; I choose the output that has the maximum number of important inputs at the maximum sampling depth. Then I train the models on the prepared sample. And you know, in all my years of training AIs, this is the first time the tests have shown such stable, satisfactory results. But all the tests are worthless until the signal goes up, and when it goes up and people pay attention to it, everyone will start rereading my article and trying to understand what I did and how. In the end it does not matter at all how I did it, as long as the result is positive; whether a machine or an AI helped does not matter either. The important thing is the end result!!!!!

So, in other words: right now all research is on hold. Ready-made templates from R are used, in the hope of even a small +. Next comes opening the signal and, if steady cash appears in the purse, continuing the research beyond the templates. Do I understand the current situation correctly?

 
Alexander_K2:

So, in other words: right now all research is on hold. Ready-made templates from R are used, in the hope of even a small +. Next comes opening the signal and, if steady cash appears in the purse, continuing the research beyond the templates. Do I understand the current situation correctly?

No. Right now the research is in full swing: large-scale tests are under way in connection with the opportunities that have newly opened up. So far the results are more than satisfactory. The signal already exists, now I just need to get it up :-).

In R I preprocess the data and remove the garbage from it. As it turned out, garbage in the inputs badly degrades the model's performance out of sample (OOS). After preprocessing, once R tells me that these particular inputs have a dependence on the output, I search for the dependence itself in the optimizer. I get about 3-5 models, then I run a control test on each of them and choose the one that passes. Then I attach it to the robot and watch how it goes.....
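As a minimal sketch of this kind of garbage removal, assuming a relevance filter such as the Boruta package (the package choice and the toy data are assumptions, not a statement of what is actually used):

```r
# Sketch of removing "garbage" inputs with the Boruta relevance filter;
# the package choice and toy data are assumptions of this sketch.
library(Boruta)

set.seed(42)
dat <- as.data.frame(matrix(rnorm(500 * 10), ncol = 10))
colnames(dat) <- paste0("x", 1:10)
dat$target <- factor(ifelse(dat$x1 + dat$x2 + rnorm(500) > 0, "buy", "sell"))

bor <- Boruta(target ~ ., data = dat)   # marks each input Confirmed/Tentative/Rejected
print(bor)
keep <- getSelectedAttributes(bor)      # only inputs with a confirmed dependence
dat_clean <- dat[, c(keep, "target")]   # garbage-free table for the optimizer
```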

 
Mihail Marchukajtes:

.... As it turned out, garbage in the inputs badly degrades the model's performance out of sample (OOS).

+100

And not only on the OOS.

 
It is clear that getting a 100% model is still an accident rather than any kind of stability. But what is good about this model? It is wrong on the small trades and is always right in the cases where the signal carries a large profit. I'll do some housework, finish my tests, and then show you my approach to an instrument like binary options. It turns out you can earn on them too, provided you have an edge on the market as a whole. That is, a professional approach to the instrument rather than a frenzied assault armed with casino principles. Pure strategy.....!!!
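The "edge" claim for binary options reduces to a one-line expectation. A minimal sketch, assuming an illustrative 80% payout on a winning trade and total loss of the stake otherwise (the numbers are assumptions, not anyone's actual terms):

```r
# Expected value per 1-unit binary-option stake: a win pays 'payout',
# a loss forfeits the whole stake. All numbers are illustrative.
ev <- function(p_win, payout = 0.8) p_win * payout - (1 - p_win)

ev(0.50)    # -0.10 : a coin flip loses to the payout asymmetry
ev(1 / 1.8) #  0.00 : break-even hit rate is 1/(1 + payout), about 55.6%
ev(0.60)    #  0.08 : 60% accuracy yields an 8% edge per stake
```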
 
Mihail Marchukajtes:

No. Right now the research is in full swing: large-scale tests are under way in connection with the opportunities that have newly opened up. So far the results are more than satisfactory. The signal already exists, now I just need to get it up :-).

In R I preprocess the data and remove the garbage from it. As it turned out, garbage in the inputs badly degrades the model's performance out of sample (OOS). After preprocessing, once R tells me that these particular inputs have a dependence on the output, I search for the dependence itself in the optimizer. I get about 3-5 models, then I run a control test on each of them and choose the one that passes. Then I attach it to the robot and watch how it goes.....

so all you have to do is throw out jpredictor and use the multitude of models in R

maybe your features are so hot that any model will do fine on them
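For instance, a sketch of running several model families over the same features with the caret package (the simulated data and the list of methods are placeholder assumptions):

```r
# Sketch: trying several model families on the same features via caret;
# the simulated data and the method list are placeholder assumptions.
library(caret)

set.seed(1)
dat  <- twoClassSim(300)                       # caret's built-in toy classification set
ctrl <- trainControl(method = "cv", number = 5)

for (m in c("glm", "rf", "svmRadial")) {       # three of caret's many methods
  fit <- train(Class ~ ., data = dat, method = m, trControl = ctrl)
  cat(m, "cv accuracy:", max(fit$results$Accuracy), "\n")
}
```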

 
Maxim Dmitrievsky:

So all you have to do is throw out jpredictor and use the multitude of models in R

maybe your features are so hot that any model will do fine on them

But that is a fundamentally wrong statement. The fact is that Reshetov, in the optimizer, tightened all the screws to the limit against overfitting: maximally strict conditions for model selection, not to mention the random construction of the training and test sets. It seems to me the nuts are even over-tightened, because with an abundance of input data the models rarely ended up with even a tenth of all the inputs. BUT what does R do here?

Through preprocessing, R reports that such-and-such inputs have some relationship to such-and-such output. That is, R only asserts that the relationship exists; the search for the relationship itself is done by the optimizer, which, with its strict anti-overfitting rules, builds models in the region of data useful for the output and does not overfit. At least it tries to..... So it's a good symbiosis!!!!
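Purely as an illustration of the principle described here, and emphatically not Reshetov's actual code: a strict gate that accepts a model only if its accuracy on a randomly constructed test half stays close to its training accuracy.

```r
# Illustration only (NOT Reshetov's code): accept a model only if its
# accuracy on a randomly constructed test half stays close to training.
set.seed(7)
dat <- as.data.frame(matrix(rnorm(400 * 5), ncol = 5))
colnames(dat) <- paste0("x", 1:5)
dat$y <- factor(ifelse(dat$x1 - dat$x3 + rnorm(400) > 0, 1, 0))

accept_model <- function(dat, tol = 0.05) {
  idx <- sample(nrow(dat), nrow(dat) / 2)      # random train/test construction
  fit <- glm(y ~ ., data = dat[idx, ], family = binomial)
  acc <- function(d) mean((predict(fit, d, type = "response") > 0.5) == (d$y == "1"))
  train_acc <- acc(dat[idx, ])
  test_acc  <- acc(dat[-idx, ])
  test_acc > 0.5 && (train_acc - test_acc) < tol   # reject in-sample-only "winners"
}
accept_model(dat)
```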

 
Mihail Marchukajtes:

Well then. Criticism does us honor..... Go ahead.....

Just tell me, what is so ridiculous about my post? What exactly is wrong with it????

It's about nothing; just general phrases in the style of Gertschik.


 
Mihail Marchukajtes:

My brothers in the mind advised me to install R...

