Machine learning in trading: theory, models, practice and algo-trading - page 878

 
Dmitriy Skub:

IMHO, this is the only way to use NS (neural networks) in trading today. Everything else is a waste of time and effort, considering the current level of so-called AI))

I believe it is the only way not just for today, but the only one in general for MO (machine learning) and NS at any reasonable level of complexity. First we narrow down the field of application, and only then do we apply NS and MO to it.

And solving problems of the "everything in general, all at once" kind - that is a job for AI).

Renat Akhtyamov:

So the NS turns out to be a sort of filter on decisions to enter a trade, a preliminary estimate of the possible result?

An operational tester, so to speak?

Rather, the NS is a trainable decision-making logic. It was originally meant as a substitute for the hand-written logic in standard strategies, so that you don't have to code it yourself.

 

I have a question about the target variable.

If our target variable is the financial result of a trade, then it seemed reasonable to me to normalize that result. But searching the site, everywhere I read that the target variable must take two values - buy or sell. Yet if I will take a loss either way - buy or sell (and that does happen!) - why should I cut out all the negative cases? What if it is precisely the presence of negative outcomes that shapes the statistics?

In general, I would like to know which networks work (and where to get them) - in the minimal case with a three-way trigger (buy / sell / do nothing), and ideally with a function that does ranking (earlier I asked here for a function because I was looking for a theoretical solution; since then I have written a script that gathers the predictors together).
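For what it's worth, the three-way trigger (buy / sell / do nothing) can be built directly from the raw financial results without discarding the negative cases. A minimal Python sketch - the threshold, array values, and function name are my own assumptions, not taken from the attached data:

```python
import numpy as np

# Hypothetical per-bar financial results (in points) of a buy trade
# and a sell trade; negative in both columns means either direction
# would have lost.
buy_result = np.array([-18, 40, -5, 25, -30])
sell_result = np.array([12, -50, 30, -40, -10])

def make_target(buy, sell, threshold=10):
    """Map raw trade results to three classes:
    0 = do nothing, 1 = buy, 2 = sell.
    Bars that lose in both directions become 'do nothing'
    instead of being cut out of the data set."""
    target = np.zeros(len(buy), dtype=int)
    target[(buy >= threshold) & (buy > sell)] = 1
    target[(sell >= threshold) & (sell > buy)] = 2
    return target

print(make_target(buy_result, sell_result))  # [2 1 2 1 0]
```

This way the losing bars stay in the data set and influence the statistics, as the do-nothing class.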

 

Attached is the set of predictors and targets without any transformation - the targets are the first two columns after the serial number. (By the way, does the row order matter? If so, should it run from oldest to newest, or as now - newest at the top, oldest at the bottom?)


N arr_Buy arr_Sell arr_Vektor_Week arr_Vektor_Day arr_Vektor_Don arr_DonProc arr_iDelta_D_1 arr_iDelta_H_1 arr_RSI_Open
52131 -18 -127 1 -1 -1 2 4 3 0
52130 -15 -130 1 -1 -1 1 4 3 0
52129 -31 -113 1 -1 -1 2 4 3 0
52128 -26 -118 1 -1 -1 2 4 3 0
52127 -6 -138 1 -1 -1 1 4 4 -1
52126 -4 -134 1 -1 -1 1 4 4 -1
52125 -6 -116 1 -1 -1 1 4 3 -1
52124 -8 -86 1 -1 -1 1 4 2 0
52123 -13 -60 1 -1 -1 2 4 1 0
52122 -30 -43 1 -1 -1 3 4 1 0
52121 -26 -47 1 -1 -1 2 4 1 0
52120 -6 -67 1 -1 -1 1 4 1 0
52119 -6 -67 1 -1 -1 1 4 1 0
52118 -35 -38 1 -1 -1 3 4 1 0
52117 -32 -41 1 -1 -1 2 4 1 0
52116 -34 -39 1 -1 -1 2 4 1 0
52115 -20 -53 1 -1 -1 2 4 1 0
52114 -20 -26 1 -1 -1 2 4 1 0

Can this be trained (and on what), or do I need to do something else?

Files:
Pred_001.zip  312 kb
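As a rough illustration of what training on such a table could look like: the sketch below uses scikit-learn on synthetic stand-in data (the real columns are in Pred_001.zip; the toy dependence and the binarized target are my own assumptions, not a statement about the attached data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the attached table: 7 integer predictor
# columns (in the spirit of arr_Vektor_Week ... arr_RSI_Open) and a
# toy dependence of the buy result on the first predictor.
X = rng.integers(-5, 6, size=(2000, 7)).astype(float)
buy_result = X[:, 0] * 3 + rng.normal(0, 5, size=2000)

# Turn the continuous financial result into a classification target:
# 1 if the buy trade would have ended in profit, else 0.
y = (buy_result > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)
print("test accuracy:", round(model.score(X_te, y_te), 2))
```

The key step is the same as discussed above: the raw arr_Buy / arr_Sell columns are continuous (a regression target) and have to be converted into classes before a classifier can learn from them.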
 

My advice - read the whole thread from the beginning, because your questions have already been asked many times; there is no point in repeating the same answers.

MO is a systematic approach where you need to know a lot of things - learn everything step by step.

Because of the local flooders it will be hard to read towards the end, but the beginning and the middle are fine :)

 
Maxim Dmitrievsky:

So far I haven't settled on any particular modification (there are already 25 of them), I keep testing... My goal is a large number of deals and a similar equity curve out-of-sample (OOS), on an OOS period at least a third the length of the training one. But I always want to train closer to the current date.

I have noticed that an ensemble of models, each trained on its own features and with individual settings, is more stable OOS (in this TS there are 10 models).

The test demo is being monitored in my profile (since I haven't settled on a particular version and keep improving it, the version on the demo may change periodically).

Fine, let's go!

 
Maxim Dmitrievsky:

Alglib has kfold - has anyone figured out how to work with it? There is almost zero documentation :)


I tested it last summer, along with all the other NS variants. It works, like the other methods - it left no special impression. The one thing that applies to all NS in alglib is that they are ten times slower than in R. And yes, kfold retrains on the same data 10 times, just on different blocks - i.e. another 10 times slower on top)))
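To illustrate where that extra factor of ~10 comes from: k-fold cross-validation literally retrains the model once per fold. A hedged sketch in Python with scikit-learn on toy data (this is not Alglib's API, just the same idea):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy, easily learnable target

# k-fold CV really does retrain the model k times, each time on
# (k-1)/k of the rows and scoring on the held-out block - hence
# the roughly k-fold slowdown compared to a single fit.
scores = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=1).split(X):
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=1)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

print("mean CV accuracy:", round(float(np.mean(scores)), 2))
```

The payoff for the 10 retrainings is an out-of-sample estimate of accuracy that uses every row for testing exactly once.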
 
elibrarius:
I tested it last summer, along with all the other NS variants. It works, like the other methods - it left no special impression. The one thing that applies to all NS in alglib is that they are ten times slower than in R. And yes, kfold retrains on the same data 10 times, just on different blocks - i.e. another 10 times slower on top)))

And does kfold in R give an improvement? My batches are usually up to 1000 examples, so maybe it won't take that long.

If you did, do you have code for saving the mlp structure to a file?

 
Maxim Dmitrievsky:

And does kfold in R give an improvement? My batches are usually up to 1000 examples, so maybe it won't take that long.

If you did, do you have code for saving the mlp structure to a file?

I haven't tried it in R yet. Dr. Trader seems to say it gives an improvement.

Saving in alglib? There are Serialize functions for NS, ensembles, forests, and regressions - each has its own, along with the corresponding recovery functions.
There is also a way (only for NS) of pulling the coefficients out of the NS itself: https://www.mql5.com/ru/articles/2279 - I started with that (as a working example), then switched to serialize.
One more thing: if you do normalization or remove predictors, all of that must be remembered and then applied to new data (thanks to Mr. Vladimir for the tip) - the article above does not do that.
In R's Darch, for example, if you let the network itself do the normalization (center, scale), it remembers the parameters and applies them to future data. Other R packages surely remember everything too.
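The point about remembering the normalization can be sketched like this: compute center/scale on the training set only, serialize them next to the model, and reapply the same statistics to new data. A Python sketch with plain JSON; the helper names are made up for illustration:

```python
import json
import numpy as np

def fit_scaler(X):
    """Compute center/scale on the TRAINING data only, returned as a
    plain dict that can be serialized next to the saved model."""
    return {"center": X.mean(axis=0).tolist(),
            "scale": X.std(axis=0).tolist()}

def apply_scaler(X, params):
    """Reapply the stored training-set statistics to any new data."""
    return (X - np.array(params["center"])) / np.array(params["scale"])

X_train = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
params = fit_scaler(X_train)

# Persist alongside the serialized network (here a JSON string;
# in practice a file saved next to the model)...
stored = json.dumps(params)

# ...and later, normalize NEW data with the SAME statistics -
# never with statistics computed on the new data itself.
restored = json.loads(stored)
X_new = np.array([[2.0, 20.0]])
print(apply_scaler(X_new, restored))
```

The same idea applies to removed predictors: store the list of kept columns with the model and apply it to every new batch.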

Neural network: a self-optimizing Expert Advisor
  • 2016.10.03
  • Jose Miguel Soriano
  • www.mql5.com
Once a trader has settled on a strategy and implemented it in an Expert Advisor, he faces two problems which, if left unsolved, devalue that strategy. Obviously, unlike the parameters that are set in advance (working pair, timeframe, etc.), there are others that will keep changing: the indicator calculation period...
 
Maxim Dmitrievsky:

My advice - read the whole thread from the beginning, because your questions have already been asked many times; there is no point in repeating the same answers.

MO is a systematic approach where you need to know a lot of things - learn everything step by step.

Because of the local flooders it will be hard to read towards the end, but the beginning and the middle are fine :)

I have been following this thread closely for 6 months, and I do not remember seeing similar questions; the smart posts I found useful enough to copy into my notebook number only 3.

Perhaps there is something else in the thread, but given the amount of crap, it is not pleasant to read...

So I treat this thread as a place where a newcomer can get answers to questions about NS; and if people begrudge the 5 minutes it would take to answer, or to post a link to an answer (I searched for one and did not find it), then that's too bad.

 
Aleksey Vyazmikin:

Attached is the set of predictors and targets without any transformation - the targets are the first two columns after the serial number. (By the way, does the row order matter? If so, should it run from oldest to newest, or as now - newest at the top, oldest at the bottom?)

Can this be trained (and on what), or do I need to do something else?

Your targets make this regression, not classification. I have abandoned regression for now. I think it's better to train 2 neural networks, one per target, but I haven't done enough experiments with regression myself - experiment on your own.
The column order isn't important; the main thing is to tell the NS which columns are the targets. As for row order, it is probably better to have the freshest data at the end (but not necessarily): many packages shuffle all rows by default for even training. Otherwise the NS may get stuck somewhere in the middle (a local minimum) and never reach the fresh data. The freshest data (the last 10-20%) can be fed 2-3 times over so that the network learns the latest market behaviour better - also an opinion I haven't tested in practice.
Check out the topic starter's blog - he trained regression there, with a lot of good ideas. But in the end he wrote that he had found an error in his code that invalidated all the results.
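The row-order advice above (freshest data last, shuffle, optionally feed the last 10-20% a few extra times) might look like this in Python - a sketch of the untested opinion, not a recommendation; the function and parameter names are made up:

```python
import numpy as np

rng = np.random.default_rng(2)

def oversample_recent(X, y, recent_frac=0.2, repeats=2, shuffle=True):
    """Duplicate the most recent rows `repeats` extra times so the
    network sees the latest market regime more often, then shuffle
    so training doesn't stall before reaching the fresh data."""
    n_recent = int(len(X) * recent_frac)
    X_extra = np.tile(X[-n_recent:], (repeats, 1))
    y_extra = np.tile(y[-n_recent:], repeats)
    X_out = np.vstack([X, X_extra])
    y_out = np.concatenate([y, y_extra])
    if shuffle:
        idx = rng.permutation(len(X_out))
        X_out, y_out = X_out[idx], y_out[idx]
    return X_out, y_out

X = np.arange(20, dtype=float).reshape(10, 2)  # 10 rows, oldest first
y = np.arange(10)
X2, y2 = oversample_recent(X, y)
print(len(X2))  # 10 original + 2 recent rows repeated twice = 14
```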

So there are no clear and unambiguous answers, that's why everybody keeps silent.)
