Machine learning in trading: theory, models, practice and algo-trading - page 10

 

Thanks, I looked at the columns; in principle I've already done the same - deltas, min, max, time, etc.

The question of how to create a forex trading model only gets more complicated for me; simple methods do not give stable results. At the moment I see it like this:

1) Export data from MT5: OHLC, time, indicators. At this stage I will not add deltas.

2) Load the data into R and add a large number of new columns derived from the original ones by addition, subtraction, min, max, etc. This is easier to do in R than in MT5.

3) Somehow select subsets of inputs (by columns). I can follow Alexey's example using GenSA, or use genetic optimization with the GA package. Since I just need a binary on/off result per input, GA in my opinion has an advantage: it has a binary mode of operation. But I need to try and compare both packages (a sketch of steps 3 and 4 follows this list).

4) Analyze the subsets of inputs. This is also in Alexey's example. But I'll risk training the model on a subset of inputs right away, and use the validation error as the result. As long as the model's training time is no more than a couple of seconds.

5) Go back to step 2, add new derived inputs, perform the remaining steps, and repeat these cycles until the result stops improving.
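A minimal sketch of steps 3 and 4 with the GA package, assuming a data frame d with predictor columns and a factor target y (the names, the 70/30 split and the rpart model are my illustrative assumptions - any fast model will do):

```r
library(GA)     # genetic optimization with a binary mode
library(rpart)  # a fast model, so each fitness call stays cheap

split <- round(0.7 * nrow(d))
train <- d[1:split, ]
valid <- d[-(1:split), ]
predictors <- setdiff(names(d), "y")

# fitness: validation accuracy of a model trained only on the switched-on columns
fitness <- function(bits) {
  cols <- predictors[bits == 1]
  if (length(cols) == 0) return(0)            # empty subset is useless
  fit  <- rpart(y ~ ., data = train[, c(cols, "y")])
  pred <- predict(fit, valid, type = "class")
  mean(pred == valid$y)                       # GA maximizes, so return accuracy
}

res <- ga(type = "binary", fitness = fitness,
          nBits = length(predictors), popSize = 50, maxiter = 100)
predictors[res@solution[1, ] == 1]            # the selected subset
```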

I also experimented a bit with the article about Principal Component Analysis. The example in the article has the peculiarity that the result can be computed accurately from the input data. In my case, when the input data is clearly insufficient, the method starts using noise for learning. It turns out that if the method achieves an r-squared of 0.95 with only a couple of components, then the predictors used in the model are most likely the right ones. If the r-squared is still below 0.95 even when all components are used, then the model is including noise in its calculations. It is also noteworthy that as additional noise predictors are added, the r-squared gradually decreases. I think this gives a way to compare subsets of predictors with each other: the larger the r-squared, the better the subset.
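A minimal sketch of that r-squared check, assuming a data frame d with numeric predictors and a numeric target y (principal components regression via prcomp and lm; the names are illustrative):

```r
# principal components regression: how much R^2 do the first k components give?
pca <- prcomp(d[, setdiff(names(d), "y")], center = TRUE, scale. = TRUE)

r2_for_k <- function(k) {
  comps   <- as.data.frame(pca$x[, 1:k, drop = FALSE])
  comps$y <- d$y
  summary(lm(y ~ ., data = comps))$r.squared
}

sapply(seq_len(ncol(pca$x)), r2_for_k)  # ~0.95 with a couple of components
                                        # suggests the predictors are relevant
```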

 
Dr.Trader:


I also experimented a bit with the article about Principal Component Analysis. The example in the article has the peculiarity that the result can be computed accurately from the input data. In my case, when the input data is clearly insufficient, the method starts using noise for learning. It turns out that if the method achieves an r-squared of 0.95 with only a couple of components, then the predictors used in the model are most likely the right ones. If the r-squared is still below 0.95 even when all components are used, then the model is including noise in its calculations. It is also noteworthy that as additional noise predictors are added, the r-squared gradually decreases. I think this gives a way to compare subsets of predictors with each other: the larger the r-squared, the better the subset.

So far, in terms of preselecting predictors that are "relevant" to the target variable, we seem to have this: Alexey has a certain set of techniques and I have mine. It wouldn't hurt to have another one. PCA is very attractive because it is widely known and there is plenty of literature on it. Note, however, that there are a large number of algorithms that calculate the "importance" of predictors, and on their own I have not been able to make use of them. But in combination with my algorithm, almost any of these standard algorithms gives good results - the error is reduced by 5-7%.

And the number of initial predictors, I think, should be a few dozen, with several thousand observations. In statistics, if there is not enough of something, there is no statistics.

 
Dr.Trader:

Thanks, I looked at the columns; in principle I've already done the same - deltas, min, max, time, etc.



You're welcome. You can also practice on my set. The design of the experiment there is of good quality, the data are good, and getting a positive result on validation is not easy at all.

1) Export data from MT5: OHLC, time, indicators. At this stage I will not add deltas.

2) Load the data into R and add a large number of new columns derived from the original ones by addition, subtraction, min, max, etc. This is easier to do in R than in MT5.

3) Somehow select subsets of inputs (by columns). I can follow Alexey's example using GenSA, or use genetic optimization with the GA package. Since I just need a binary on/off result per input, GA in my opinion has an advantage: it has a binary mode of operation. But I need to try and compare both packages.

Here I would advise keeping two points in mind. First, the inputs should all be quasi-stationary with respect to their mean. Second, about the mechanism of column enumeration: yes, GA has a binary mode; GenSA does not, but I simulated binary selection there.
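A minimal sketch of how one might check that in R, assuming a numeric input vector x (the rolling window of 500 and the ADF test are my illustrative choices):

```r
library(zoo)      # rollmean
library(tseries)  # adf.test

# a quasi-stationary input's rolling mean should stay close to its global mean
roll <- rollmean(x, k = 500)
plot(roll, type = "l")
abline(h = mean(x), col = "red")

# a formal unit-root test: a small p-value favors stationarity
adf.test(x)
```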

4) Analyze the subsets of inputs. This is also in Alexey's example. But I'll risk training the model on a subset of inputs right away, and use the validation error as the result. As long as the model's training time is no more than a couple of seconds.

Good idea! But keep in mind that the whole process can still take a long time, since thousands of iterations are usually needed. Such a fitness function is the most expensive kind, because an entire model is trained inside it. I, on the other hand, use a so-called filtering fitness function, in which there is no training as such, but there are heuristics that estimate how strongly the inputs affect the output in a broad sense. The calculation can be faster than two seconds, but not by an order of magnitude.
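A minimal sketch of both points, assuming a numeric matrix X of inputs and a numeric target y: GenSA has no binary mode, so the on/off switches are simulated by thresholding continuous parameters at 0.5, and the fitness is a cheap filter heuristic (mean absolute correlation with the target - my illustrative stand-in, not the author's actual heuristics):

```r
library(GenSA)

# filter-style fitness: no model training, just a relevance heuristic
fitness <- function(par) {
  on <- par > 0.5                             # simulated binary switches
  if (!any(on)) return(0)
  -mean(abs(cor(X[, on, drop = FALSE], y)))   # GenSA minimizes, hence the minus
}

res <- GenSA(par = rep(0.5, ncol(X)), fn = fitness,
             lower = rep(0, ncol(X)), upper = rep(1, ncol(X)),
             control = list(max.time = 60))   # stop after 60 seconds
colnames(X)[res$par > 0.5]                    # the selected inputs
```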

The question of how to create a forex trading model only gets more complicated for me; simple methods do not give stable results. At the moment I see it like this:

So far I have gotten as far as adding new predictors and all sorts of small improvements to the design of the experiment.

I have an idea that I haven't implemented yet. The thing is that we - you and everyone else - usually predict one specific condition for closing a trade, for example, 3 hours ahead, or when the price reaches the take-profit or stop level.

And the results come out lousy. Yet when in MT4 I set several parallel closing conditions, combined through an OR operator, I manage to get a positive expectation even out of sample.

So I wondered how to simulate in R the forecasting of a trade's outcome based on several conditions at once. For example: if the price reaches the TP within 3 hours, close the trade. But the 3-hour parameter should not be rigidly fixed, because it too should be chosen optimally. And we can also add a condition: if the price is in profit within the 3 hours but has not reached the TP, pull the stop up to breakeven.

In this case the entry into the position is also predicted by the trading robot.

That is the task I have in mind.
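A minimal sketch of such a simulation, assuming vectors open, high and low of M5 bars, with TP and SL distances in price units; the function walks forward from a hypothetical long entry and returns the outcome label (the breakeven rule and all names are my illustrative assumptions, and horizon is a parameter precisely so that it can be optimized):

```r
# outcome of a hypothetical long entry at bar i:
#   1 - TP reached within 'horizon' bars
#   0 - stop (pulled up to breakeven once the trade went positive) or timeout
label_trade <- function(i, open, high, low, tp, sl, horizon) {
  entry    <- open[i]
  be_armed <- FALSE
  for (j in (i + 1):min(i + horizon, length(open))) {
    if (high[j] >= entry + tp) return(1)      # take-profit hit
    stop_level <- if (be_armed) entry else entry - sl
    if (low[j] <= stop_level) return(0)       # SL or breakeven stop hit
    if (high[j] > entry) be_armed <- TRUE     # in profit: arm the breakeven stop
  }
  0                                           # horizon expired without TP
}

# example: 3 hours = 36 M5 bars; tp and sl in price units
# y <- sapply(1:(length(open) - 36), label_trade, open, high, low,
#             tp = 0.0030, sl = 0.0100, horizon = 36)
```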

 
Alexey Burnakov:

So I wondered how to simulate in R the forecasting of a trade's outcome based on several conditions at once. For example: if the price reaches the TP within 3 hours, close the trade. But the 3-hour parameter should not be rigidly fixed, because it too should be chosen optimally. And we can also add a condition: if the price is in profit within the 3 hours but has not reached the TP, pull the stop up to breakeven.

From my experience with trading EAs, I do not like TP and SL. If a trading system is built correctly, its own exits from trades will be more effective than exits by TP and SL.

The optimal values of SL and TP change over time; there is no stable constant value. We can find values that improve the trading system's efficiency on a certain segment, but beyond that segment the strategy's efficiency will drop. It is better not to move the SL but to keep it at a "something went wrong" distance, and if the price does reach the SL, to stop trading altogether for error analysis and optimization of the trading strategy.

But there are some smart self-optimizing EAs that use TP. As I understand it, a constant TP should be introduced into the training sample itself. For example, in my case the result of model training is 0/1 - the price has increased/decreased over the next bar. But if the price first rose and reached the TP level, and then by the end of the bar fell below the initial level, the training set gets the result "1", not "0" (because the trade would be closed with a profit at the TP, and there would be no further trades until the end of the bar). The TP is usually small, less than 50 points (5 pips on five-digit quotes). The SL is dozens of times larger, just in case "everything went wrong". The TP cannot be optimized for the fronttest or live trading - only the one used when creating the training sample applies. I have seen such successful strategies; I think this is the direction to dig in.
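A minimal sketch of that labeling rule, assuming per-bar vectors close and high and a TP given in price units (the names and the long-only framing are my illustrative assumptions):

```r
# 0/1 targets with a constant TP baked in: label 1 if the next bar's high
# reaches entry + tp (the trade would close at TP even if the bar closes down),
# otherwise label by whether the next bar closed above the entry
make_labels <- function(close, high, tp) {
  entry      <- close[-length(close)]  # entry at the close of the current bar
  next_high  <- high[-1]
  next_close <- close[-1]
  ifelse(next_high >= entry + tp, 1L, as.integer(next_close > entry))
}

# y <- make_labels(close, high, tp = 50 * 0.00001)  # 50 points on a 5-digit pair
```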

 
Dr.Trader:

From my experience with trading EAs, I do not like TP and SL. If a trading system is built correctly, its own exits from trades will be more effective than exits by TP and SL.


And how are its own exits formulated? By time only?

If you say "close in an hour, but if the TP is reached within that hour, the outcome is 1", then complex closing conditions are already being used.

A TP of 5 pips with an SL dozens of times larger is an option, but such a small TP will eat into the profit.

 

This week and last week you and I have been separated by time zones, so I can't manage a live dialogue. I'm in California for work.

Anyway, I think I (and maybe you too) can already plan the experiment, train, and validate the results properly. Predictors can be gathered redundantly, too.

I think the catch is that predicting a trade based on a fixed target, like closing in an hour, is not optimal, and the results are weak.

In the MT tester I optimize on balance and recovery factor. In R I optimize the accuracy of guessing the direction or of predicting price differences. These are different things, no matter how you look at it.

Maybe try writing your own loss function in R for the learning method - one that maximizes profit, for example. Such a function can be plugged into some learning methods.
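A minimal sketch of one way to plug that in, assuming a caret workflow: caret's trainControl accepts a custom summaryFunction, so model tuning can maximize profit instead of accuracy (the one-point-per-correct-call profit formula and all names are my illustrative assumptions):

```r
library(caret)

# custom tuning metric: profit of trading the predicted direction,
# +1 per correct call, -1 per wrong one
profitSummary <- function(data, lev = NULL, model = NULL) {
  c(Profit = sum(ifelse(data$pred == data$obs, 1, -1)))
}

ctrl <- trainControl(method = "cv", number = 5,
                     summaryFunction = profitSummary)

# fit <- train(y ~ ., data = d, method = "rf",
#              metric = "Profit", maximize = TRUE, trControl = ctrl)
```

Note that this only changes what the tuning selects for; to make the learner itself optimize a custom loss, something like xgboost's custom objective would be needed, which must supply a gradient and a hessian.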

 
Alexey Burnakov:

This week and last week you and I have been separated by time zones, so I can't manage a live dialogue. I'm in California for work.

Anyway, I think I (and maybe you too) can already plan the experiment, train, and validate the results properly. Predictors can be gathered redundantly, too.

I think the catch is that predicting a trade based on a fixed target, like closing in an hour, is not optimal, and the results are weak.

In the MT tester I optimize on balance and recovery factor. In R I optimize the accuracy of guessing the direction or of predicting price differences. These are different things, no matter how you look at it.

Maybe try writing your own loss function in R for the learning method - one that maximizes profit, for example. Such a function can be plugged into some learning methods.

Lately I've been implementing the following plan.

I took my old trend EA based on indicators. It is a full-fledged Expert Advisor that trades on a real account.

Next.

I look for its weak points and try to improve on them using R.

For example.

I take the general direction from a higher-timeframe bar. But if you look closely at the timing, there is a considerable lag in terms of the lower timeframe's bars. If the higher timeframe is D1 and I trade on M5, it turns out that for the direction I am almost using the day before yesterday's data. Even a one-step-ahead prediction for D1 with a 30% error has radically improved the EA's profitability and, most importantly, increased confidence that it will not fail.

Maybe this is the way: use R to predict some elements of a ready-made, even a bad, EA in order to upgrade it?

How do you like this idea?

 

I recently wrote an article for smart-lab. A weak community, but I got something useful out of it:

http://smart-lab.ru/search/topics/?q=%D0%B4%D0%BB%D1%8F+%D0%BB%D1%8E%D0%B1%D0%B8%D1%82%D0%B5%D0%BB%D0%B5%D0%B9+fx

 
SanSanych Fomenko:

Lately I've been implementing the following plan.

I took my old trend EA based on indicators. It is a full-fledged Expert Advisor that trades on a real account.

Next.

I look for its weak points and try to improve on them using R.

For example.

I take the general direction from a higher-timeframe bar. But if you look closely at the timing, there is a considerable lag in terms of the lower timeframe's bars. If the higher timeframe is D1 and I trade on M5, it turns out that for the direction I am almost using the day before yesterday's data. Even a one-step-ahead prediction for D1 with a 30% error has radically improved the EA's profitability and, most importantly, increased confidence that it will not fail.

Maybe this is the way: use R to predict some elements of a ready-made, even a bad, EA in order to upgrade it?

How do you like this idea?

The idea is interesting. I also have working EAs. Maybe I will think about how to update them. But it is not clear to me what exactly needs to be improved. What do I have to teach my EA?

The Expert Advisor has rigid logic for opening and closing positions. In machine learning the decision is made somewhat differently.

So it is not entirely clear what exactly you would do.

 
Alexey Burnakov:

The idea is interesting. I also have working EAs. Maybe I will think about how to update them. But it is not clear to me what exactly needs to be improved. What do I have to teach my EA?

The Expert Advisor has rigid logic for opening and closing positions. In machine learning the decision is made somewhat differently.

So it is not entirely clear what exactly you would do.

What you would do is clear. What is not clear is why it is needed. If there is a system that analyzes the correctness of another system and/or controls its parameters, then the controlled system becomes unnecessary, because using them together makes the total efficiency of the pair immediately drop below that of either system used separately. For example, suppose there is a system that gives 70% correct decisions, and a system that can control one or more parameters with 99% accuracy; then the final efficiency will be:

0.7 × 0.99 = 0.693

which is lower than either system on its own.

Amen. It is better to try to improve the original system without any "controllers".
