Machine learning in trading: theory, models, practice and algo-trading - page 612

 
Yuriy Asaulenko:
He wasn't trying, he was studying.) I'm glad you liked it. It really saves a lot of time - no need to reinvent the wheel. Everything has been invented before us.

It seems that the input predictors are converted into images and the network is trained on them.

Here.

 

Good document. I saw this link on gotay a couple of months ago; it's not so much about ML itself as about the data they use, with a large collection of links to sites - really valuable info.

 
Maxim Dmitrievsky:

I've switched to Wiener - I wonder... will these books ever end :) He also tried to make predictions.

I too used to fret that I won't get through all the books I have planned even in 10 lifetimes, but then I looked at it from the other side, the positive one: as long as there is a physiological motivation to "binge" on scientific literature, the brain is clearly in no danger of going senile. And seriously, the answer is RANKING: put the list in Excel, assign each item an importance rating, sort it (periodically), and read the top items you haven't read yet, without worrying about the total count, knowing that you are reading the most important ones.
 
elibrarius:

On the test set, like for most people here, the error hovers around 50%. But at least it computes dozens of times faster than Alglib: here a model takes 40-100 minutes to calculate, while in Alglib I waited more than a day for the same structure, never got a result, and killed the calculation.
Although if I now have to pick models in a loop, that will again take a lot of time.... And I still have to program it.
In general it all takes a long time, whatever time limits you set yourself for the ML work.

Interesting - I'll dig into it)

I'm surprised. What kind of model takes more than an hour to compute?

It should take 1-2 minutes at most.

 

Brothers, how do you expect to get, with 1-2 minutes of optimization, a model that will stay relevant to a market as complex as forex????

In my opinion this is logically absurd. Building a model consumes computational resources, and those can be translated into cost, so every model has a value: the resources spent on creating it. And now a question. Do you want to make money on models that cost pennies? I guess you can make pennies, but no more than that.... IMHO

 

The link above was posted by

toxic:

http://www.valuesimplex.com/articles/JPM.pdf

And here is a picture from this document, authored by a very respected bank:



Why doesn't anyone besides me discuss what is labeled here as model Instability?

 
SanSanych Fomenko:

The link above was posted by

toxic:

http://www.valuesimplex.com/articles/JPM.pdf

And here is a picture from this document, authored by a very respected bank:



Why doesn't anyone besides me discuss what is labeled here as model Instability?

What is this?
 
Mihail Marchukajtes:

Brothers, how do you expect to get, with 1-2 minutes of optimization, a model that will stay relevant to a market as complex as forex????

In my opinion this is logically absurd. Building a model consumes computational resources, and those can be translated into cost, so every model has a value: the resources spent on creating it. And now a question. Do you want to make money on models that cost pennies? I guess you can make pennies, but no more than that.... IMHO

No. It was, as I understand it, about the training time, not about optimization. Optimization is 20-30 minutes, of course.
 
SanSanych Fomenko:

For trading, the idea of optimizing a model (a trading system) seems very dubious, because any optimization looks for peaks/troughs, and we do not need those. Ideally, we need flat plateaus, as large as possible, with one wonderful property: changes in the model parameters should NOT push the model off the plateau.

This is about optimization.

In fact, we should add here the problem of the stability of the model parameters: if they change at all, it should be within a fairly narrow (5%) confidence interval. As it seems to me, stable parameters mean the model's performance sits on a plateau, and if we suddenly get a very good result while testing the model, it means we hit an extremum point - an unstable state that will never occur in practice; moreover, there will be a stop-out in the neighborhood of that optimal point.

PS.

By the way, the developers have provided exactly this possibility in the tester: searching for a plateau by color. Personally, I use the tester as a finishing tool and take the parameters that correspond to a square surrounded by squares of the same color. That is a visual expression of my concept of a "plateau". A rough sketch of the same idea in code is below.
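A minimal R sketch of that plateau search (R to match the functions mentioned later in the thread). The grid of scores is toy data standing in for tester results, and the 0.2 flatness threshold is an arbitrary assumption; instead of taking the single best cell, it keeps only cells whose whole 3x3 neighborhood scores about the same - the same-color-squares idea in code.

# Toy 20x20 grid of optimization scores (stand-in for tester output)
grid <- outer(1:20, 1:20, function(i, j) sin(i / 3) + cos(j / 4)) +
        matrix(rnorm(400, sd = 0.05), 20, 20)

best <- NULL
for (i in 2:(nrow(grid) - 1)) {
  for (j in 2:(ncol(grid) - 1)) {
    nb <- grid[(i - 1):(i + 1), (j - 1):(j + 1)]  # 3x3 neighborhood
    flatness <- max(nb) - min(nb)                 # spread across neighbors
    # prefer the highest score, but only among sufficiently flat neighborhoods
    if (flatness < 0.2 && (is.null(best) || grid[i, j] > best$score))
      best <- list(i = i, j = j, score = grid[i, j], spread = flatness)
  }
}
print(best)  # NULL if no neighborhood was flat enough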

I have often seen good trading parameters in MetaTrader form plateaus in the optimization function. If the parameters include an MA or RSI period or some other coefficient, then changing the parameter by a small amount did not noticeably change the final result.

And that is logical: most of those parameters are used in the formula that calculates an indicator, so a small change will only slightly shift the result, which is still calculated on the same prices.

In machine learning it is the opposite: parameters can have an avalanche effect on the entire learning process, and even a small change leads to a completely different result. For example, the number of neurons in a hidden layer: as their number grows, the number of weights grows too, and the weight-initialization function, drawing from a PRNG, will assign the values in a slightly different order, which leads to a different result.
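For illustration, a minimal R sketch of that avalanche effect, using the nnet package (my choice here, not something discussed above): with an identical seed, adding one hidden neuron shifts every subsequent PRNG draw during weight initialization, so the whole fit changes, not just the extra neuron.

library(nnet)  # assumed installed

set.seed(7)
d <- data.frame(x = runif(200))
d$y <- sin(6 * d$x) + rnorm(200, sd = 0.1)  # noisy toy data

fit_with_size <- function(size) {
  set.seed(42)  # identical PRNG state before each training run
  nnet(y ~ x, data = d, size = size, linout = TRUE,
       trace = FALSE, maxit = 500)
}

m5 <- fit_with_size(5)
m6 <- fit_with_size(6)  # one extra hidden neuron
# the two fits can differ far more than a one-neuron change would suggest
cat("RMSE size=5:", sqrt(mean(residuals(m5)^2)),
    " size=6:", sqrt(mean(residuals(m6)^2)), "\n")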
Changing some parameters will still trace out a plateau in the optimization function. You can study, for each parameter, whether its influence on the model's final score is smooth or stochastic, and for the smoothly acting parameters additionally use a derivative-based optimizer (the functions optim(method="L-BFGS-B") and optimize() in R), as sketched below.
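For the derivative-based part, a minimal sketch with a toy score() function standing in for a real model evaluation (the smooth shape is an assumption):

# one smoothly acting parameter: optimize() searches an interval directly
score <- function(p) -exp(-((p - 50) / 20)^2)  # toy objective, minimum at p = 50
opt1 <- optimize(score, interval = c(1, 200))
print(opt1$minimum)

# several parameters: optim() with L-BFGS-B and box constraints
score2 <- function(par)
  -exp(-((par[1] - 50) / 20)^2) - exp(-((par[2] - 0.5) / 0.3)^2)
opt2 <- optim(par = c(10, 0.1), fn = score2, method = "L-BFGS-B",
              lower = c(1, 0.01), upper = c(200, 1))
print(opt2$par)

L-BFGS-B only makes sense when the parameter really does act smoothly on the score; for the stochastic ones, grid or random search over the plateau remains the safer choice.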

 
SanSanych Fomenko:

The link above was posted by

toxic:

http://www.valuesimplex.com/articles/JPM.pdf

And here is a picture from this document, authored by a very respected bank:



Why doesn't anyone besides me discuss what is labeled here as model Instability?

This is about the error on the training data versus the error in prediction. The essence of the picture is that minimizing the training error leads to overfitting; the whole point of building and tuning a model is to bring the error down to its optimal value on new data (i.e., to avoid overfitting).

Good illustration.
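The same picture is easy to reproduce. A minimal R sketch with toy data: training error keeps falling as the polynomial degree grows, while the error on held-out data bottoms out and then rises again - that rising region is the instability/overfitting zone.

set.seed(1)
x  <- seq(0, 1, length.out = 60)
y  <- sin(2 * pi * x) + rnorm(60, sd = 0.3)  # noisy toy data
tr <- sample(60, 40)                          # train/test split

for (deg in c(1, 3, 8, 15)) {  # growing model complexity
  fit  <- lm(y ~ poly(x, deg), data = data.frame(x = x[tr], y = y[tr]))
  pred <- predict(fit, newdata = data.frame(x = x))
  rmse <- function(idx) sqrt(mean((y[idx] - pred[idx])^2))
  cat(sprintf("degree %2d: train %.3f  test %.3f\n",
              deg, rmse(tr), rmse(-tr)))
}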