Taking Neural Networks to the next level - page 11

 

"The market probability is always the same".

Is it really? Or are there periods when a trend is more stable/predictable? Intuitively I think that context informations like time of day, upcoming news events, recent volatility, price behaviour of correlated other currency pairs etc. might contribute to the probability actually not being always the same. I don't know though and less do I know a formula how to calculate such complex influences  - that's why I ask a neural network. The primary network makes the predictions and the secondary ("metalabel") network gives me an estimation about the precision and accuracy that I can expect in the given context. If this was always the same, the secondary network over time would learn to always give the same answers, namely those that go with the least average error, but the reality is that this information is dynamic.

If it helps - I also think trading isn't easy and don't know no holy grail.
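As a rough illustration of this two-network ("metalabeling") setup, here is a minimal sketch in Python/numpy on toy data. The logistic regressions are stand-ins for the two networks, and all data and names are my own assumptions, not the actual implementation discussed in the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data (purely illustrative): 4 context features; the direction label
# depends on feature 0 plus noise, so predictability varies with context
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=300):
    """Batch gradient-descent logistic regression - a stand-in for a network."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# primary model: predicts trade direction from the context features
w1, b1 = fit_logistic(X, y)
direction = (sigmoid(X @ w1 + b1) > 0.5).astype(float)

# metalabel: 1 where the primary prediction turned out correct
correct = (direction == y).astype(float)

# secondary ("metalabel") model: estimates the chance that the primary
# model is right, fed here with the feature magnitudes as context
ctx = np.abs(X)
w2, b2 = fit_logistic(ctx, correct)
confidence = sigmoid(ctx @ w2 + b2)

# only act when the expected accuracy is above average
mask = confidence > confidence.mean()
print("overall accuracy:        ", correct.mean())
print("high-confidence accuracy:", correct[mask].mean())
```

The point of the sketch: when correctness genuinely varies with context, the secondary model's confidence filter selects a subset of trades with above-average accuracy; if it did not vary, the filter would converge to a constant answer, exactly as described above.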

 
Chris70:

"The market probability is always the same". Is it really? Or are there periods when a trend is more stable/predictable? [...]

After thinking about it, I guess I could use the moments when price touches the envelopes as decision points, to perhaps have less noise and avoid overtrading - it may help. With EAs, though, whenever I program them they have always given me just the odds implied by the TP:SL ratio (or worse), no matter which technical analysis strategy I tried.

I think for manual trading it would be nicer to buy low and sell high than to catch trends. For me, trends are the one thing that causes uncertainty. They can be recognized over a little bit of time, but are subject to change before one can understand that they have changed. I might still try some manual trading for fun on a demo account, but with real money I honestly would not try to trade patterns based on what I have seen from backtests. Obviously I don't know about neural networks, so I hope it works for you.


I will share the one final idea I have for programming an EA:

I am going to try to use the gambler's fallacy: I am going to look for outliers in runs of consecutive losses.

For example, if TP:SL = 50 pips : 50 pips and the maximum run of losses expected from backtests is 8, then the EA will open a trade after the seventh loss. The EA will just keep checking how many losses in a row would have occurred if it had traded (without actually taking those 'check' trades). The EA will use martingale scaling for any losses it experiences on its actual trades (starting at, say, loss 7).

I already tried this as an EA once before and it did work, but it only takes maybe 5 trades in 3 years or so.


My theory is to add the following to it: at any point in time there should always be some combination (sequence) of trade directions that leads to, say, 7 losses. For example it could have been:

Buy,Buy,Buy,Buy,Buy,Buy,Buy

Sell,Sell,Sell,Sell,Sell,Sell,Sell


Buy,Sell,Buy,Buy,Buy,Buy,Buy

Sell,Buy,Sell,Sell,Sell,Sell,Sell


etc. I think there should be 2^7 = 128 combinations to check if I use a 7-trade sequence. (The sequence starts again from the beginning for the 8th trade.)

Therefore I would hope to always find some sequence that is currently an outlier, and so have many more trading opportunities.
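Whether this can work is easy to check by simulation: if the trades are independent coin flips (which is roughly what a symmetric 50:50 TP:SL produces), the win rate after six straight losses is still 50% - which is exactly why it is called a fallacy - and checking all 2^7 = 128 direction sequences only multiplies the number of streaks found, not the conditional odds. A quick illustrative sketch, not an EA:

```python
import random

random.seed(42)

# simulate independent coin-flip trades (TP == SL, spread ignored)
N = 200_000
trades = [random.random() < 0.5 for _ in range(N)]  # True = win

# collect the outcome of every trade that follows 6 consecutive losses
after_streak = []
streak = 0  # current run of losses immediately before this trade
for won in trades:
    if streak >= 6:
        after_streak.append(won)
    streak = 0 if won else streak + 1

print("unconditional win rate: ", sum(trades) / N)
print("win rate after 6 losses:", sum(after_streak) / len(after_streak))
```

Both printed rates come out close to 0.5: conditioning on a prior losing streak does not move the odds of the next independent trade. Any backtest improvement from this filter would have to come from the market actually being non-independent, not from the streak counting itself.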


This is somewhat for fun, though - usually something goes wrong when I test EAs - but it is my final attempt.


Another, separate idea I had was to use an EA that is not profitable but produces zig-zag or mean-reverting results, then wait for it to lose a few trades and trade it for real until it reverts. In other words, the idea is to use an EA to create a synthetic instrument that is more predictable for technical analysis than the actual underlying market. I have not looked much into this either, though, but I might.

 
Brian Rumbles:

I am going to try to use the gambler's fallacy. I am going to look for outliers in runs of consecutive losses [...] Therefore I would hope to always find a sequence that is an outlier and have many more trading opportunities.

This is called the gambler's FALLACY for a reason.

 
Your experiments with programming neural networks in MQL are simply a waste of time. For this area (neural networks and machine learning) you need to use languages intended for it. For example, R or Python.

Not for discussion. Hint.

Good luck

 

I worked with neural networks in Python before. Doing it in Mql wasn't an accident, but a choice made for the advantages of a language that lives inside a dedicated trading environment.

Yes, the journey wasn't easy, but Mql5 is totally capable of doing the job. I know this because it's not something I plan on doing - it's already done, with all the major neural network functions, architectures and learning optimizers - so I don't see the point in going back now, in hindsight, just because of the personal opinion of some guy on the internet. Constructive critique is always welcome, but sorry if this "I know better, not for discussion" (hint) attitude feels a little arrogant to me.

[Meaning: of course you can share your opinion - that's what comments are for - but offering an unsolicited opinion while opening with "not for discussion" is just rude.]
 
Vladimir Perervenko:

Your experiments with programming neural networks in MQL are simply a waste of time. [...]

MQL is a fairly complex programming language; I don't see why it would not work for NNs. And how will you incorporate a NN you created in Python into your trading system? There is a problem there.

It's much more work, but I think implementing all the NN algorithms in MQL is the way to go; then you can monitor everything you need in your EA.


Jeff

 
Jean Francois Le Bas:

MQL is a fairly complex programming language; I don't see why it would not work for NNs. [...]

 

MQL <=> Python : https://www.mql5.com/en/articles/5691

 

Thanks for the link to Siraj Raval's video. What I find interesting about it is that although the code example is written in Python, only simple functions that have a direct equivalent in Mql are used. Basic Python IMHO really isn't that different. The real power comes with the imported libraries like keras, numpy, matplotlib, scipy...

And then there is often the argument about using GPU power, which in Mql is only possible via OpenCL. But in my personal experience, Mql as a compiled (i.e. more "medium-level") language can more than compensate for that compared to a "high-level" interpreted language like Python, which is certainly better suited for human beings but less so for machines.

Of course there is no direct equivalent to matrix operations in numpy, or to adding neural network layers with just one line of code as in Keras. But that is not because Python is inherently more powerful - in the sense that it can do things that can't be done with Mql - but because other people have already done 99% of the work for us. With Mql, on the other hand, we first need to build these libraries ourselves. Is it a lot of work? Definitely. Is it a waste of time? I don't think so. The work only needs to be done ONCE, and not with the full complexity of Python's libraries - only for the functions we actually use - and functionality grows over time with the requirements of individual projects.
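To illustrate the "build it once yourself" point: the core building blocks are just loops over arrays, which translate almost line-for-line into Mql5. Here is a deliberately library-free sketch of one dense layer's forward pass (illustrative only, not the implementation discussed in this thread; the layer sizes and weights are arbitrary):

```python
import math
import random

random.seed(1)

def dense_forward(x, W, b):
    """One fully connected layer with tanh activation, written with plain
    loops so it maps almost one-to-one onto MQL5 arrays and for-loops."""
    out = []
    for j in range(len(b)):          # one pass per output neuron
        s = b[j]
        for i in range(len(x)):      # weighted sum of the inputs
            s += W[j][i] * x[i]
        out.append(math.tanh(s))     # squashing activation
    return out

# tiny 3-4-1 network with random weights, purely for demonstration
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(1)]
b2 = [0.0]

h = dense_forward([0.2, -0.5, 0.1], W1, b1)  # hidden layer
y = dense_forward(h, W2, b2)                  # output layer
print(y)
```

Backpropagation and the optimizers are more of the same kind of loop; nothing in it requires an interpreted language, which is the argument being made here.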

I'm pretty sure that anybody who isn't a professional data scientist will only ever use the tip of the iceberg of what can theoretically be done with Python anyway.

A real problem with machine learning in Mql is that, as of today, the standard library and the examples in the codebase are pretty much useless for serious machine learning applications.

 
Chris70:

Thanks for the link to Siraj Raval's video. [...] This work only needs to be done ONCE [...]

Your autoencoder:

Is it still an MLP, or have you tried LSTM autoencoders?
If I understand correctly, you are only autoencoding the current index. Or is there also a 3rd dimension (i.e. the whole sequence back through your prediction window for every index, the way an LSTM does it)?
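For reference, the distinction the question is getting at - autoencoding only the current bar versus the whole look-back window as a sequence - shows up in the input shapes. A shapes-only sketch (the 16-bar, 4-feature window is an arbitrary example of my own):

```python
import numpy as np

# toy price window: 16 bars x 4 features (e.g. OHLC), purely illustrative
window, features = 16, 4
seq = np.arange(window * features, dtype=float).reshape(window, features)

# MLP autoencoder input: one flat vector per sample -> shape (1, 64)
mlp_input = seq.reshape(1, -1)

# LSTM autoencoder input: keep the time axis -> shape (1, 16, 4),
# i.e. (batch, timesteps, features) as sequence models expect
lstm_input = seq[np.newaxis, ...]

print(mlp_input.shape, lstm_input.shape)
```

The MLP variant compresses each flattened window as an unordered vector, while the LSTM variant sees it as an ordered sequence, which is what the "3rd dimension" refers to.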

Edit:

Additionally, a test without the autoencoder would also be valuable and informative.

+ Are any trailing stops or similar involved?

Reason: