Machine learning in trading: theory, models, practice and algo-trading - page 1907

 
Maxim Dmitrievsky:

Here's one. There are only 12 features, not 24

Here are my errors:

>>> print(train_score, " ", tst_score)

1.0 0.5454545454545454

This is a bit of an armchair primer on Maxim's topic. The file you uploaded has only 12 entries. To start from the top, you need 20-24 inputs; then you can begin with 11 of them (the most my hardware can handle) and work through them, reducing the number of inputs and trying the combinations. When I optimize from the top down, I first do ten one-off runs, pick the best run, and start normal training from it. If there are few inputs, I usually start from the bottom with 5 inputs and try to increase their number, selecting the worst of the first ten runs so as to push the starting point into the region of maximum enumeration during training. In the first case I am fiercely saving resources; in the second, by choosing the worst option, I try to cover as many of the possible variants as I can.

The data you obtained there is bullshit. Look at what I got; these are results I trust. What I don't trust are your inputs, which peek into the future. Maxim, how can you not be ashamed of doing that? The kids are watching us, and you mold pictures for them and then wonder why the feedback doesn't work.

Let's do it this way: I'll send you the model and the training file now, so you know which input number corresponds to which of your designations; but if you don't show how the model performs on the balance of the other section, I won't do it for you. Everyone wants to know the result of the work. The shaitan machine doesn't work for free.


Here is just an extract of the main code:

vv0[q]=0;//12    instead of zeros, feed the values of your own inputs (the numbers are the input column numbers)
vv1[q]=0;//10
vv2[q]=0;//8
vv3[q]=0;//2
double Ress1=getBinaryClassificator1(vv0[q],vv1[q],vv2[q],vv3[q]);  // call the polynomial results
double Ress11=getBinaryClassificator2(vv0[q],vv1[q],vv2[q],vv3[q]);

double getBinaryClassificator1(double v0, double v1, double v2, double v3) {
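   // rescale each raw input linearly into [-1, 1] using fixed offsets and ranges
   // (presumably derived from the training data)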
   double x0 = 2.0 * (v0 + 0.00352160000000379) / 0.0060209999999973896 - 1.0;
   double x1 = 2.0 * (v1 + 0.00321680000000524) / 0.006628599999996879 - 1.0;
   double x2 = 2.0 * (v2 + 0.00257640000000836) / 0.00577599999999978 - 1.0;
   double x3 = 2.0 * (v3 + 0.00197520000000417) / 0.00414859999999794 - 1.0;
   double decision = 0.6226912928759895 * x0
  -0.013192612137203167 * x0 * x1
  + 0.9920844327176781 * x2
  + 1.3060686015831136 * x0 * x2
  -3.5395778364116093 * x1 * x2
  -1.1394019349164468 * x3
  + 2.5659630606860158 * x0 * x3
  + 0.5395778364116095 * x1 * x3
  + 0.31090589270008795 * sigmoid(x0)
  + 0.009674582233948988 * sigmoid(x1)
  -0.0839929639401935 * sigmoid(x0 + x1)
  + 0.012313104661389622 * sigmoid(x2)
  + 0.30474934036939316 * sigmoid(x0 + x2)
  -0.5958663148636764 * sigmoid(x1 + x2)
  + 0.002638522427440633 * sigmoid(x0 + x1 + x2)
  -0.05013192612137203 * sigmoid(x3)
  + 0.014951627088830254 * sigmoid(x0 + x3)
  -0.13412489006156553 * sigmoid(x1 + x3)
  -0.006596306068601583 * sigmoid(x0 + x1 + x3)
  + 0.04397537379067722 * sigmoid(x2 + x3)
  + 0.1363236587510994 * sigmoid(x0 + x2 + x3)
  + 0.6952506596306068 * sigmoid(x1 + x2 + x3)
  -0.29331574318381703 * sigmoid(x0 + x1 + x2 + x3)
  + 1.0738786279683377 * sigmoid(1.0 + x0)
  -1.073438874230431 * sigmoid(1.0 + x1)
  -0.4256816182937555 * sigmoid(1.0 + x0 + x1)
  + 1.0708003518029903 * sigmoid(1.0 + x2)
  + 0.9656992084432717 * sigmoid(1.0 + x1 + x2)
  -3.1314863676341247 * sigmoid(1.0 + x3)
  -0.8500439753737907 * sigmoid(1.0 + x0 + x3)
  + 1.0281442392260334 * sigmoid(1.0 + x1 + x3)
  + 0.8544415127528584 * sigmoid(1.0 + x0 + x1 + x3)
  -0.21328056288478453 * sigmoid(1.0 + x0 + x1 + x2 + x3);
   return decision;
}
double sigmoid(double x) {
   // piecewise transform used by the exported model:
   // returns signum(x) when |x| >= 1, otherwise 2*signum(x) - x
   if (MathAbs(x) < 1.0) {
      return 2.0 * signum(x) - x;
   }
   return signum(x);
}
double getBinaryClassificator2(double v0, double v1, double v2, double v3) {
   double x0 = 2.0 * (v0 + 0.00518320000001116) / 0.00871940000000327 - 1.0;
   double x1 = 2.0 * (v1 + 0.00542880000001134) / 0.01145720000000306 - 1.0;
   double x2 = 2.0 * (v2 + 0.00578500000001125) / 0.00872540000000166 - 1.0;
   double x3 = 2.0 * (v3 + 0.00496500000001143) / 0.00698900000000191 - 1.0;
   double decision = -0.17965023847376788 * x0
  + 1.7416534181240064 * x1
  + 0.5389507154213037 * x0 * x1
  + 0.5023847376788553 * x2
  -0.16653418124006358 * x1 * x2
  -0.06836248012718601 * x3
  -0.8191573926868044 * x1 * x3
  -0.029809220985691574 * sigmoid(x0)
  -0.009141494435612083 * sigmoid(x1)
  + 0.00794912559618442 * sigmoid(x0 + x1)
  + 1.7150238473767885 * sigmoid(x2)
  -1.2686804451510334 * sigmoid(x0 + x2)
  + 0.051271860095389504 * sigmoid(x1 + x2)
  + 0.05405405405405406 * sigmoid(x0 + x1 + x2)
  -1.095389507154213 * sigmoid(x3)
  -0.2444356120826709 * sigmoid(x0 + x3)
  + 0.34737678855325915 * sigmoid(x1 + x3)
  + 0.9264705882352942 * sigmoid(x0 + x1 + x3)
  + 0.16176470588235295 * sigmoid(x2 + x3)
  -0.7682829888712241 * sigmoid(x0 + x2 + x3)
  -0.16335453100158984 * sigmoid(x1 + x2 + x3)
  + 0.7551669316375199 * sigmoid(x0 + x1 + x2 + x3)
  -2.048489666136725 * sigmoid(1.0 + x0)
  -0.31756756756756754 * sigmoid(1.0 + x1)
  -0.08982511923688394 * sigmoid(1.0 + x0 + x1)
  + 1.4666136724960255 * sigmoid(1.0 + x1 + x2);
   return decision;
}
double signum(double x) {
  if (x == 0.0) {
    return 0.0;
  }
  if (x > 0.0) {
    return 1.0;
  }
  return -1.0;
}
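
For context, here is a minimal sketch of one way the two polynomial outputs (Ress1 and Ress11) could be turned into a single ternary signal. This is an illustrative assumption, not necessarily the author's exact decision rule; the "DON'T KNOW" answer mentioned below presumably corresponds to the case where the two classifiers disagree.

int GetCommitteeSignal(double res1, double res2)
{
   // both classifiers positive -> buy signal
   if(res1 > 0.0 && res2 > 0.0) return  1;
   // both classifiers negative -> sell signal
   if(res1 < 0.0 && res2 < 0.0) return -1;
   // the classifiers disagree  -> no decision ("DON'T KNOW")
   return 0;
}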

From the attached file you can identify the input numbers according to their designations. I am waiting for the results of the out-of-sample (OOS) test.

Files:
Si_Splice_10.txt  102 kb
 

Well, and in addition, the model's scores.

The predictor numbers correspond to the column numbers in the file.

258 is the total number of vectors. I removed class 0 and kept class 2, renaming it to zero, because it was balanced in count with class 1. 19.60 is the quadratic error, or rather the difference between the linear and quadratic errors, which should tend to zero. 79.141 is the overall generalization ability; as it approaches 100, the difference between the errors decreases. 69.767 is the specificity. In total, the control section scores 75 with an overall generalization ability of 70. The DON'T KNOW answer was obtained on 77 vectors of the total sample, of which 17 were in the control section.
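
For reference, the specificity quoted above is the standard true-negative rate. A minimal sketch of how it is usually computed from confusion-matrix counts (a generic definition, not the author's exact implementation):

// specificity = TN / (TN + FP), i.e. the share of actual negatives classified correctly
double Specificity(int trueNegatives, int falsePositives)
{
   int denom = trueNegatives + falsePositives;
   if(denom == 0) return 0.0;     // undefined when there are no actual negatives
   return (double)trueNegatives / denom;
}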

In fact, even though I got worse results on training, I got much better results on the control section. Moreover, this is not a test section like yours, but a control one, a section the network did not see at all. With a test set, the network is trained on the training set so that the test performs well, which means the network potentially sees the test section during training. The control section it never sees. Questions????

 

Please tell me how to find the order with the maximum profit (mql4).

Thank you very much.

 
a5l3e5x:

Please tell me how to find the order with the maximum profit (mql4).

Thank you very much.

My friend, nobody uses MT4 any more. An order has a profit parameter; you need to enumerate all the orders, read that parameter, and pick the maximum. That's it... in a nutshell...
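
A minimal MQL4 sketch of that enumeration (assuming only currently open market orders are of interest; note that OrderProfit() excludes swap and commission):

// Returns the ticket of the open order with the highest profit, or -1 if there are none.
int MaxProfitOrderTicket()
{
   int    bestTicket = -1;
   double bestProfit = -DBL_MAX;
   for(int i = 0; i < OrdersTotal(); i++)
   {
      if(!OrderSelect(i, SELECT_BY_POS, MODE_TRADES))
         continue;                       // skip if the order could not be selected
      double p = OrderProfit();          // profit of the currently selected order
      if(p > bestProfit)
      {
         bestProfit = p;
         bestTicket = OrderTicket();
      }
   }
   return bestTicket;
}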
 
Mihail Marchukajtes:

Is this task interesting?

Forum on trading, automated trading systems and trading strategies testing

Machine Learning in Trading: Theory, Practice, Trading and Beyond

Rorschach, 2020.07.14 19:21

Who wants to practice? I can't go back to the beginning of the first one. Charts 2 and 4 are the test ones. A few explanations: chart 1 is the increments, chart 3 is the buy signal of some "classic" system. By the way, a question in the back of my mind: is a neural network capable of emulating logic and indicators with memory, something like a ZigZag or a maximum/minimum indicator?



 
Rorschach:

Is such a task of interest?


Let's say it is, but are you suggesting that I key in the numbers from a picture?
 
Mihail Marchukajtes:
Let's say it is, but are you suggesting that I key in the numbers from a picture?

Then I'll prepare the data; will 350 examples and 100 inputs do?

 
Rorschach:

Then I'll prepare the data; will 350 examples and 100 inputs do?

Yes, that's cool in general, but I hold to the idea that there should be three times as many examples as inputs, so the algorithm has something to choose from. I think I'll record one more video; it takes too long to write it all out. There is a square-matrix theory from which I came to this conclusion... Prepare a sample...
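
(For example, with the 100 inputs proposed above, that rule of thumb calls for at least 3 × 100 = 300 examples, so a sample of 350 would just clear it.)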
 

Check it out, folks, not for the sake of advertising. I once attended their open house; I thought it was time to stop being self-taught, in case I'm not picturing all of this correctly, and I wanted to get official confirmation of my knowledge, so to speak. Just the other day I read about Elon Musk and his Neuralink, which will be presented in August, and then this letter arrived as if on cue. I couldn't figure out how to download the content, so just look at the picture.

In general the topic is interesting, so maybe we could quietly get together there and talk about pressing matters? What do you say?

 
Mihail Marchukajtes:
Yes, that's cool in general, but I hold to the idea that there should be three times as many examples as inputs, so the algorithm has something to choose from. I think I'll record one more video; it takes too long to write it all out. There is a square-matrix theory from which I came to this conclusion... Prepare a sample...

My results are described in detail here. For validation, I multiply the original series by -1.
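
A trivial sketch of that validation transform, assuming the series is held in an array:

// Build the validation series by multiplying every element of the original by -1.
void InvertSeries(const double &src[], double &dst[])
{
   int n = ArraySize(src);
   ArrayResize(dst, n);
   for(int i = 0; i < n; i++)
      dst[i] = -src[i];
}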

Files:
Files.zip  4 kb