Taking Neural Networks to the next level

 

Hello ;)

About the news processing: I want the network to 'learn' when a high-volatility news event is coming up and then stop trading, or close the current trade.

I'm planning to use the system with the DAX index. I feel comfortable trading the DAX; it moves well and has a small spread. I just want to earn a little every day and get out.

The system is divided into 3 parts. The first is a MetaTrader indicator that dumps the needed info to a file every 5 minutes. The next part is a Python script with all the neural network logic, which writes its prediction to a text file. And the last component is an expert advisor that picks up the info from that file and trades on it. It's planned this way in order to rent the system out with minimal CPU load and resource usage on the client's side - only if the system works fine, of course.
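
For illustration, the expert advisor side of such a file hand-off could look something like this minimal MQL5 sketch (the file name "prediction.txt" and the one-number-per-line format are just assumptions for the example, not the actual project):

```
//--- minimal sketch: read the latest prediction written by the Python script
//--- (file name and format are hypothetical)
double ReadLatestPrediction()
  {
   double prediction = 0.0;
   // FILE_COMMON: shared terminal folder, reachable by external processes
   int handle = FileOpen("prediction.txt", FILE_READ | FILE_TXT | FILE_COMMON);
   if(handle == INVALID_HANDLE)
      return(0.0);                                          // no file yet -> no signal
   while(!FileIsEnding(handle))
      prediction = StringToDouble(FileReadString(handle));  // keep the last line
   FileClose(handle);
   return(prediction);
  }
```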

I have 9 years of programming experience in MQL4 and MQL5 - all kinds of indicators and experts, plus scripts to automate backtests - but I've never developed a neural network for trading, and this is my first attempt.

I'm still playing with the inputs. I hope to learn from your experiences here and to see how people are doing with this kind of system. I'll also share my progress here.



Chris70:

Greetings ;-),

Always nice to hear about other projects in the field.

Yes, you can of course apply the sin/cos method to any cyclic variable. But keep in mind that you always need sin AND cos, because, for example, sin(20°)=0.342, but sin(160°) is also(!) 0.342. Cos(20°) is 0.94, but cos(160°) is negative(!) 0.94, so if you use both in combination, any degree/angle of the circle is always assigned unambiguously. Still, two numbers are much less than e.g. 365 numbers for a year, which I'd consider a huge simplification.

Note: the formulas sin(2*pi*x/n) and cos(2*pi*x/n) refer to angles in radians, not degrees (in degree notation: sin(360°*x/n) and cos(360°*x/n)).
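
In Mql5 code, such an encoding could look like this (a minimal sketch; the function name is just for illustration):

```
// encode a cyclic variable x with period n (e.g. day of year, n=365)
// as the two inputs sin(2*pi*x/n) and cos(2*pi*x/n)
void CyclicEncode(const double x, const double n, double &enc_sin, double &enc_cos)
  {
   double angle = 2.0 * M_PI * x / n;   // radians
   enc_sin = MathSin(angle);
   enc_cos = MathCos(angle);
  }
```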

I also thought about using news as additional inputs. I wouldn't go so far as to try to automatically interpret the directional implications of any specific news event (=from a fundamental point of view), but a simple method could be to feed in the time distance (in seconds) since the last event that affects the currency in question, its impact level, and the same for the next (scheduled) upcoming event. I already wrote the code for such a function class that returns this number of seconds a few months ago, so I only need to combine it with my neural networks. There are about 90,000 news events in the history available through MetaTrader's built-in news event functions. This should be enough for some reasonable neural network training.
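
A rough sketch of how such a lookup could be done with the built-in calendar functions (the one-week search window and the assumption that the returned values come back sorted by time are mine, not from the original code):

```
// sketch: seconds since the last and until the next calendar event
// for one currency, via MQL5's built-in economic calendar
bool NewsDistances(const string currency, const datetime now,
                   double &sec_since_last, double &sec_until_next)
  {
   MqlCalendarValue values[];
   // search one week back and one week ahead (assumption: window is wide enough)
   if(!CalendarValueHistory(values, now - 7*86400, now + 7*86400, NULL, currency))
      return(false);
   sec_since_last = -1; sec_until_next = -1;
   for(int i = 0; i < ArraySize(values); i++)
     {
      if(values[i].time <= now)
         sec_since_last = (double)(now - values[i].time);   // assuming time-sorted
      else
         if(sec_until_next < 0)
            sec_until_next = (double)(values[i].time - now);
     }
   return(sec_since_last >= 0 && sec_until_next >= 0);
  }
```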

Do you also implement AI in pure MQL only, or do you interface MQL with another programming language? I'm really interested in what you have achieved so far.


 

@Brian: manual trading is not my thing anymore; I did it for many years, but I just prefer the added consistency and objectivity that I get from 100% mechanical systems. But yes, I don't see why a "forecasting indicator" based on neural networks shouldn't be helpful with manual trading, too (not for me, but in general), for example for confirming entry decisions.

@2duros: your project sounds interesting and you obviously have the experience to succeed with something like that. I wish you all the best!

When I started with neural networks, I also went down the Python route at first, but then decided to do it all in Mql. No matter how it's done, it's always some kind of compromise: in Python we have the powerful Keras/Tensorflow libraries; in Mql you need to code it all from scratch. Of course there already exist many other examples in Mql5, but most of them are very basic - too basic, if you ask me. With the Python+Mql combination, on the other hand, we of course have some challenges with interfacing, and I also see the risk that people who do it in Python don't understand the details of what's important for proper training and network design, just because some powerful, readily available functions do so much for us in a black-box style. But that's just my personal view.

I don't want to discourage anybody from doing it with Python. But I don't believe that it's much of a shortcut to useful final results.

For anybody who tries to work with neural networks in Mql, probably the most important advice I can give is to keep it all as dynamic as possible right from the start: number of layers, neurons per layer, activation functions, loss functions, optimizers, weight initialization methods... It's a good idea to work as much as possible with arrays and enumerations (see the sketch below). Then you can quickly modify the whole network architecture with just a few lines of code or by changing some input variables.
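
Just to illustrate what I mean by "dynamic" - a minimal sketch (the names and the layer layout are arbitrary examples, not an actual class):

```
// the whole architecture is described by arrays, so changing it is a one-line edit
enum ENUM_ACTIVATION {ACT_TANH, ACT_RELU, ACT_SIGMOID, ACT_LINEAR};

int             neurons[]     = {64, 32, 16, 1};                          // neurons per layer
ENUM_ACTIVATION activations[] = {ACT_TANH, ACT_TANH, ACT_TANH, ACT_LINEAR};

// total number of weights + biases for a given layer layout
int TotalWeights(const int &layers[], const int n_inputs)
  {
   int total = 0, prev = n_inputs;
   for(int l = 0; l < ArraySize(layers); l++)
     {
      total += prev * layers[l] + layers[l];   // weights + biases
      prev   = layers[l];
     }
   return(total);
  }
```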

Although my neural network class over time has added up to about 4000 lines of code, if you keep it flexible right from the start, building a new network later is as simple as a few lines of code.

And getting started really isn't that complicated: an initialization function for the weights & biases matrix, input scaling, a loop through the network à la "sum of weighted inputs, add bias, activate", label scaling, calculating the errors/hidden errors, calculating the weight deltas and updating the weights... plus functions for saving and loading the weight matrix. In their basic form, each of these functions is possible in about 10-20 lines, so getting a working neural network is absolutely possible within a few days. Things like exotic activation functions, optimizers, alternative loss functions, statistical evaluation functions... can be added later if one wishes, but they are not necessary to get started.
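
As a minimal sketch of that inner loop - one layer of the forward pass, with a hard-coded tanh activation for simplicity (in a real class the activation would of course be one of the dynamic choices mentioned above):

```
// one layer of the forward pass: "sum of weighted inputs, add bias, activate"
void ForwardLayer(const double &in[], const double &w[], const double &bias[],
                  const int n_in, const int n_out, double &out[])
  {
   ArrayResize(out, n_out);
   for(int j = 0; j < n_out; j++)
     {
      double sum = bias[j];
      for(int i = 0; i < n_in; i++)
         sum += w[j * n_in + i] * in[i];        // weights stored row-major
      // tanh activation (clamped to avoid overflow in MathExp)
      if(sum > 20.0)       out[j] = 1.0;
      else if(sum < -20.0) out[j] = -1.0;
      else { double e2 = MathExp(2.0 * sum); out[j] = (e2 - 1.0) / (e2 + 1.0); }
     }
  }
```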

With regard to "playing with the inputs": keep in mind that neural networks are "universal function approximators", so any indicators that are derived from price and volume don't add information that a neural network - given enough "depth" - couldn't find by itself. On the contrary: using indicators instead of pure price/volume can reduce the available information and lead to unwanted feature reduction. Keep it simple. All the complex stuff is the neural network's problem, not ours.

By simple I mean sticking to general information classes and not information derived from them. If you work with Python, you can probably also easily implement things like automated sentiment analysis of Twitter feeds or news feeds in order to get an edge over pure price/volume alone. If we as retail traders want to compete against institutions, we should use every tool we can find (my opinion).

----

Please keep us updated about your project. Maybe you could start a thread like this one, or a blog article(?).

 
Hi again,

Nice to read your advice. I agree with you about the synthetic data generated by indicators. I'll use just a few, to get data from timeframes other than the one I trade on; for example, I'll trade on the 1-hour timeframe, but I want to know what happened on the 5- or 15-minute charts - I mean: is the market nervous at this moment?

The reason for using Python with Keras and TensorFlow is just that it's easy to use, even as a black box, and easy to run on GPUs to speed up the training.

I'll publish some of my results here :)
 

@2duros: another solution for incorporating different timeframes would be to take prices on a non-linear time scale as inputs, i.e. the more recent, the higher (e.g. exponentially) the sampling frequency, if that makes sense.
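
A minimal sketch of what I mean (the power-of-two spacing is just one possible choice):

```
// exponentially spaced lookback: recent bars sampled densely, older ones sparsely
void ExponentialOffsets(const int n_samples, int &offsets[])
  {
   ArrayResize(offsets, n_samples);
   for(int i = 0; i < n_samples; i++)
      offsets[i] = (int)MathPow(2.0, i);   // 1, 2, 4, 8, 16, ... bars into the past
  }
```

Each offset could then be fed into the network with e.g. iClose(_Symbol, PERIOD_M5, offsets[i]).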

GPU is a good argument, although I'm not sure it makes much of a difference if the programming language itself is slower. It would be interesting to see a "face to face" competition between Python and Mql with equally sized networks.

Don't get me wrong just because I said these backtests take "forever". I think it's still pretty amazing what can be achieved with Mql5. To give you some numbers from the log of a training session I did yesterday, where I trained two networks in parallel with 45,141 total weight connections: 84,788 backpropagation iterations took 130 minutes; that's 10.8 iterations per second, or almost half a million weight updates per second (even with 100 time steps for the LSTM network) - just Mql5, no GPU (Intel Core i7-6700 @ 2.6 GHz, 16 GB RAM). I guess that's fast ;-).

 
positionmarket:
I think the predicted vs. real close line is quite obvious; furthermore, the candles indicate a stronger connection to the algo's performance, i.e. you can see the market depth of that currency pair's strength/volume assumptions, whereas the line will be hard to understand. A screenshot with market volume bars would be more ideal if you really wanted a "clear picture" of the real close and the predicted line. The line doesn't tell you s--- apart from how closely the prediction and the real close coincide.

Actually you are wrong on all counts. Unless self-deception is the goal.



Properly visualized, it can be seen that the prediction lags by 1 time step. Lime is the real close, purple is the close "prediction".

The same goes for the high and low, though I could not be bothered to draw them out.

 

We can't be sceptical enough, which is why I appreciate your comment and the work you put into redrawing those lines.

I don't know about "positionmarket", but for myself: I recognize the evident similarity between the lime and purple lines, but you're probably wrong about my expectations and interpretations.

It is obvious that the predictions are much closer to the level of the last close than to any "correct" prediction, starting with the fact that the last close is the reference point for the prediction (prediction = close[1] + directional bias). It's not a 100% exact copy, but a similar copy - and the essential information is in this little difference.

Needless to say, it's impossible to know the next price. This is only about the directional bias, and this bias is small! Sometimes it goes up to ~5 pips for a 15m candle, but often it's even below 1.0 pips. If you take your "1 bar lag line", the directional bias is the difference between the prediction and this lagged copy. And yes, this difference is small, but it does exist, and its direction is correct more often than not. That's all I'm saying.

To support this: for the metalabels, which have binary labels, the statistical expectation under the null hypothesis would be a mean absolute error of 0.5, because each prediction either is a true positive or it isn't - if we were just flipping a coin, our estimate would be wrong 50% of the time. In reality, in my last training session I got a mean absolute error of 0.26 for these values, which means that the ability to correctly predict whether a directional bias is a true positive is significantly beyond randomness - and this can be used for trading.

I never said that I can make anything even close to exact predictions. This is only about having an edge beyond randomness, and that holds true whatever you say.

 

Update: I added a visual analysis tool for the neural network's input and output data (sample distribution histograms) in order to discover any potential problems associated with data scaling. It meant putting in some work, but it seems to be helpful for getting the network's parameters right.

[Screenshot: neural network data analysis]
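
For anyone who wants to try something similar, a bare-bones histogram function is enough to get started (a sketch, not the actual tool; the bin count is up to you):

```
// bucket one input/output feature into a histogram to spot scaling problems
// (e.g. everything clumped into one or two bins, or extreme outliers)
void Histogram(const double &data[], const int n_bins, int &bins[])
  {
   int n = ArraySize(data);
   if(n == 0 || n_bins <= 0) return;
   double lo    = data[ArrayMinimum(data)];
   double hi    = data[ArrayMaximum(data)];
   double width = (hi - lo) / n_bins;
   ArrayResize(bins, n_bins);
   ArrayInitialize(bins, 0);
   for(int i = 0; i < n; i++)
     {
      int b = (width > 0) ? (int)((data[i] - lo) / width) : 0;
      if(b >= n_bins) b = n_bins - 1;   // clamp the maximum value into the last bin
      bins[b]++;
     }
  }
```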

 
Let's say I created an expert advisor. The win/loss percentage of this expert advisor is 57 percent without machine learning. Then I use the buy/sell signals of the expert as input to a machine learning algo, and the machine learning algo predicts the signals with, let's say, 75 percent accuracy. By how much would the win percentage of the expert increase? Is there any such calculation?
 

I'm trying to understand what you want to achieve.

I guess by "win and loss percentage 57%" you mean winrate 57%, i.e. 57% winners out of 57+43=100% all trades? First of all, I think we always need to be careful where those numbers come from (real life average result over many trades? optimized backtest?...).

But let's just assume these numbers are true.

There is a logical problem with those signals being inputs for a machine learning algo and its predictions at the same time. Inputs and outputs usually are different (except for the special case of autoencoders). The outputs, on the other hand, correspond to the labels that the algo has been trained on in the "supervised learning" setting. And those labels are completely up to you: you can declare anything you want as your labels, depending on what you want to investigate, which is why you won't find any general calculation.

I'm actually doing something very similar. In my case the trading signals are generated by a multilayer LSTM network with inputs from multiple currencies. Then, as a second step, I look for confirmation by evaluating the quality of the signals that I get from this network. This is what the second network (=metalabel network) is for. It has all the outputs of the primary network as inputs, plus time and day info. One could say that the second network has the job of evaluating the single-currency results of the first network in the context of the other currencies' outputs and the time context. It doesn't answer "how high is the signal?" but "how true is the signal?" with respect to its value in the context of the other values. The concrete outputs of this second network are the "confusion maps", i.e. the probabilities of true positive / false positive / true negative / false negative for the individual currencies and the hypothesis I want to confirm. I tested different methods, but what works best for now is the simple hypothesis "chance is > risk".

Put into practice with a numerical example: if from the LSTM network I get the result "next high for GBPJPY 6 pips above, next low 4 pips below" and the second network tells me that "in this context the probability of it actually being true that we have a favorable reward/risk ratio is 60% out of all positives, the precision is 65% and the accuracy is 62%", then I know this is a trade worth considering.

Please be careful with terms like "accuracy". There's often some confusion about the difference between e.g. accuracy and precision:

      - accuracy = (TP+TN) / (TP+TN+FP+FN) --> measure of "trueness"/validity vs. bias (=constant error)

      - precision = TP / (TP+FP) --> measure of reproducibility/reliability vs. variability (=variable error)

      - positive predictive value (PPV) = TP / (TP+FP)

      - negative predictive value (NPV) = TN / (TN+FN)

[note that precision and positive predictive value are actually the same if we look at the same hypothesis and not its negation]

Let's say you're shooting a gun at a target: accuracy then relates to the systematic error, i.e. for example the gun on average tends to shoot a little too high every time. Precision is the variable error component, i.e. the diameter of the circle over which the individual hits are scattered. A gun could shoot with perfect precision but always at the wrong spot (=perfect precision, low accuracy), or scatter exactly around the spot that was aimed at (=perfect accuracy, low precision).
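
For illustration, these metrics are trivial to compute from the raw confusion-matrix counts - a small sketch (it assumes the denominators are non-zero):

```
// classification metrics from raw confusion-matrix counts
void ConfusionMetrics(const double TP, const double TN,
                      const double FP, const double FN,
                      double &accuracy, double &precision, double &npv)
  {
   accuracy  = (TP + TN) / (TP + TN + FP + FN);   // overall "trueness"
   precision = TP / (TP + FP);                    // = positive predictive value
   npv       = TN / (TN + FN);                    // negative predictive value
  }
```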

Back to trading: what we want is a signal, in concrete numbers, that is true, accurate and precise. Machine learning can help us a little with evaluating what we can expect, e.g. for accuracy and precision, in a given context.

You ask how much the winning percentage would increase: as always, high-probability setups are rarer than bread-and-butter setups. I think machine learning can help eliminate some false signals. But fewer signals don't necessarily imply trading less if we compensate by diversifying over different markets: better a few high-quality setups across different markets than taking every single trade in one market. If you see it in this broader way, there is no simple answer to "how much does the winning percentage increase?".

 

I still don't understand what is meant by a high-probability setup. The market probability is always the same: whatever the TP:SL ratio is, that will be the probability of winning or losing, not taking fees into account. Even buying at the bottom of a Bollinger band and selling at the top, for example, might work 5 times in a row and then, on the 6th time, lose 5x the amount of a single win. If you put in a stop loss to avoid this, then the probability of losing increases.


If you try to trade only according to the trend, then you notice the change in trend too late, and it still does not help.


This is the problem I have with trading; I cannot overcome it.
