Statistics of an anti-grid like system - page 5

 
zzuegg:
arr, we are leaving the topic :( bring it back

Well lol, sorry to point out but I feel this topic too is at a dead end. I take it your original question "are the results useful for an anti-grid like system and what are the key factors of such systems?" was more or less answered: drawdowns. If you're looking to evaluate the system for things like optimal bet size / Kelly, sorry, I don't know how to calculate those when the lot sizes are variable and the trades form a dependent string. However, if you're looking for answers as to how this system stacks up against other winning systems (trending or otherwise), then that's the path it's on right now.

Since you have all the win-loss data per trade, I think the ball is in your court to provide things like the variance and standard deviation, which can be used in other statistical calculations like return, rate and risk. One final question: did you try running this system on all the price data you have available (other currencies included)? If you did, did the system ever crash? As far as I'm concerned, it's not a question of whether a system like this will crash, but how often.
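For what it's worth, here's a minimal sketch (Python) of the per-trade variance and standard deviation I mean. The profit figures are made-up placeholders; substitute the real per-trade results exported from the tester.

```python
# Minimal sketch: per-trade variance and standard deviation.
# The profit figures below are made-up placeholders.
profits = [120.0, -45.0, 80.0, -150.0, 60.0, 30.0, -20.0, 95.0]

n = len(profits)
mean = sum(profits) / n

# Sample variance (n - 1 denominator), the usual choice when the
# trades are a sample of the system's behaviour, not the whole population.
variance = sum((p - mean) ** 2 for p in profits) / (n - 1)
std_dev = variance ** 0.5

print(f"mean profit per trade: {mean:.2f}")
print(f"variance:              {variance:.2f}")
print(f"standard deviation:    {std_dev:.2f}")
```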

 

zzuegg:

> These are the danger zones: ranging larger than the grid size but not twice as large

How has it done since 25th July?

-BB-

 

I apologise to zzuegg for apparently sending this thread off course yesterday. However, the main point of my post was to show that the statistical performance of the anti-grid system was superior to that of simple trend-following systems, so I was surprised to be on the receiving end of ubzen's vigorous (if somewhat off-target) attack.

@ubzen, at a glance, your thread on your trend-following method looks interesting. I'll make sure to look at it. Presumably the fact that you also posted links to the website where I got the software I used to do the analysis in my post means you take back your earlier categorical advice against optimisation.

You bring up the interesting issue of the frequency of re-optimisation. I have come to the logical conclusion that in live trading there is no harm in re-optimising very frequently (with a method that has good walk-forward performance), but I have not yet convinced myself empirically that it results in much improvement. What is much more important is the length of the optimisation period - it is easy to make it too short. With MetaTrader and the walk forward analyzer there is a further reason that short testing periods are misleading for systems which trade infrequently: any trade still open at the end of the period is unrealistically closed out at midnight. I don't like this "feature" - in my opinion the tester should let trades run until the rules exit them, but that's what we have to work with. This distorts the results by an amount which increases the fewer trades there are in the testing period.
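To make the schedule concrete, here is a rough sketch of how walk-forward windows can be laid out. The period lengths are purely illustrative, and this is not how the walk forward analyzer is implemented internally:

```python
from datetime import date, timedelta

def walk_forward_windows(start, end, opt_days, test_days):
    """Yield (optimisation_start, optimisation_end, test_end) triples.

    Each optimisation window is immediately followed by an out-of-sample
    test window; the whole frame then steps forward by the test length.
    """
    cursor = start
    while cursor + timedelta(days=opt_days + test_days) <= end:
        opt_end = cursor + timedelta(days=opt_days)
        test_end = opt_end + timedelta(days=test_days)
        yield cursor, opt_end, test_end
        cursor += timedelta(days=test_days)

# Illustrative numbers: 1-year optimisation window, 3-month test step.
for opt_start, opt_end, test_end in walk_forward_windows(
        date(2008, 1, 1), date(2011, 8, 1), 365, 90):
    print(f"optimise {opt_start} .. {opt_end}, trade {opt_end} .. {test_end}")
```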

Yes, my testing really was as accurate as using every tick (underlined because I have stated this before, and it remains true). The reason is that it is perfectly practical to execute trades at bar open, and in fact this is exactly what my code did. All logic was based on the values of indicators on bars with index 1 or greater, i.e. closed bars. Incidentally, this is a practical and popular way to avoid the silly situation of catching several signals in the same bar. If you want to catch signals that frequently, use smaller bars! Good point about the occasional giant 15-minute bar, but they are not a source of inaccuracy in this case.
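In Python-flavoured pseudocode (just the principle, not my actual MQL4 code), the "decide at bar open, from closed bars only" idea looks like this:

```python
def sma(values, length):
    """Simple moving average of the last `length` values."""
    return sum(values[-length:]) / length

def signals_at_bar_open(bars, fast=2, slow=3):
    """One decision per bar, taken at the bar's open, using only
    bars that have already closed (MQL4 shift >= 1)."""
    out = []
    for i in range(slow, len(bars)):
        closes = [b["close"] for b in bars[:i]]   # closed bars only
        side = "buy" if sma(closes, fast) > sma(closes, slow) else "sell"
        out.append((i, side, bars[i]["open"]))    # execute at this bar's open
    return out

# Toy data: list of bars, oldest to newest.
bars = [{"open": o, "close": c} for o, c in
        [(1.40, 1.41), (1.41, 1.42), (1.42, 1.41), (1.41, 1.43)]]
print(signals_at_bar_open(bars))
```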

Thank you for your kind words about the quality of the results of my example systems, but they are not good enough for my purposes, and pale compared to zzuegg's systems, for example. A lot of improvement is needed, so I have much to learn! I have always found EURUSD more amenable than other markets, getting my best results there both in manual trading and with rule-based trading. But I occasionally experiment with other markets and will continue to do so. One idea I have been working on for a few years involves analysing every pair from a basket of 4 to 6 currencies (6 to 15 pairs) before picking the pair to trade.
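The pair counts are just combinatorics; a throwaway sketch (the basket below is only an example):

```python
from itertools import combinations

basket = ["EUR", "USD", "GBP", "JPY", "CHF", "AUD"]   # example basket
# Order within each pair here is arbitrary; real symbols follow
# market convention (e.g. EURUSD, not USDEUR).
pairs = ["".join(p) for p in combinations(basket, 2)]
print(len(pairs))   # 15 pairs for 6 currencies; a basket of 4 gives 6
```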

Don't ever stop learning!

 
@Elroch: Sorry, it was not my intention to attack anyone. I was trying to keep my answers short, so maybe it came out that way. I myself have a lot to learn about MT4. Currently I'm learning to program baskets of currencies. I also really wanna do self-optimizing EAs and neural-network EAs; hopefully I can get into gear and start those. It's quite easy to say what doesn't or didn't work. It's much tougher to say what will work. You live, you learn, I guess. Everything I ever say here is my opinion. Just because something didn't work for me does not mean it will not work for you is the attitude I tend to take. Hence, that's why I find myself re-inventing the wheel instead of taking other people's advice blindly.
 
BarrowBoy:

zzuegg:

> These are the danger zones: ranging larger than the grid size but not twice as large

How has it done since 25th July?

-BB-

Hi BB, the system works as expected. Even when the market was ranging, the range size was around 200 pips plus something. As the standard grid size is 50 pips, I have no problems in such a phase. It also looks like the ranging periods were quite optimal for this system.

There was basically no danger zone in this time. Here is a test from July till now:

Note: the big jump up in balance is due to the recent changes in the exit criteria: I have changed the exit from a simple exit at the profit target to 'exit by equity trailing'.
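To give an idea what I mean by 'exit by equity trailing', a very rough sketch (illustrative only, the numbers and rule are not my actual criterion):

```python
def should_close_basket(open_profit, peak_profit,
                        min_gain=500.0, giveback=0.30):
    """Rough sketch of an 'exit by equity trailing' rule: arm the trail
    once open profit has reached `min_gain`, then close the whole basket
    if profit falls back by `giveback` of its peak. Illustrative only."""
    if peak_profit < min_gain:
        return False                               # trail not armed yet
    return open_profit <= peak_profit * (1.0 - giveback)

# e.g. peak open profit 800: trail armed, close once profit <= 560
print(should_close_basket(550.0, 800.0))  # True
print(should_close_basket(700.0, 800.0))  # False
```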


@Elroch, constant re-optimisation does of course sound very good, especially with trend-following systems. The problem for me is that you need to specify boundaries for when market conditions have changed and a re-optimisation is needed. You can of course use optimisation on the fly (there is a nice article on this in the MQL5 section), but all this still requires that market conditions stay the same for a longer period. Every change costs. The more tightly fitted your optimisation is, the smaller the change in market conditions has to be for it to fail. I think I am not going that way; my AdaptiveStrength system, for example, has no inputs and no defined periods for its indicators. As a basis I have programmed an indicator which shows me the average length of up and down cycles, and the other parameters are derived from those results. The hope was to get a system which automatically adapts to current market conditions. It looks good in the tester but, as said, up to now the live results are not good at all. (Still, I let the EA run, since long-term results count, and it is running on a small side account.)
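To give an idea, a crude sketch of the kind of thing the cycle indicator computes (simplified Python, not the real MQL4 code):

```python
def average_cycle_lengths(closes):
    """Average length (in bars) of consecutive up runs and down runs
    in a series of closes. Simplified: flat bars extend the current
    run, and the final unfinished run is ignored."""
    up_runs, down_runs = [], []
    run_length, direction = 0, 0
    for prev, curr in zip(closes, closes[1:]):
        step = 1 if curr > prev else -1 if curr < prev else direction
        if step == direction:
            run_length += 1
        else:
            if direction > 0:
                up_runs.append(run_length)
            elif direction < 0:
                down_runs.append(run_length)
            direction, run_length = step, 1
    avg = lambda runs: sum(runs) / len(runs) if runs else 0.0
    return avg(up_runs), avg(down_runs)

closes = [1.10, 1.11, 1.12, 1.11, 1.10, 1.12, 1.13, 1.14, 1.12]
print(average_cycle_lengths(closes))  # (avg up length, avg down length)
```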

@ubzen, yeah, NeuroNets might be THAT thing. Since my degree was based on this topic, I truly believe that such systems can adapt quickly to new market conditions. I dream about a NeuroNet analysing the market conditions and choosing, or automatically retraining, a new NeuroNet based on those specific market conditions. Still, I am far away from solving this issue. Just programming one state-of-the-art net is a big task.


//z

 

@ubzen, that's cool. It's easy to get the wrong impression in Internet discussions. Arguing is quite a good way to clarify understanding, as has been known since Ancient Greece. :-)

@zzuegg, nice work! Do you feel your new exit method offers superior performance?

Incidentally, very frequent optimisation does not demand anything more of the market's behaviour than less frequent optimisation, but it may not be worth the bother. My first feeling was that if the parameters of my system right now are influenced by the behaviour of the market over a past period of a certain length, I want that period to be as recent as possible, which very frequent re-optimisation achieves. For example, if you re-optimise every week using 1 year of data, you are always using pretty close to the most recent data possible, but if you re-optimise every 3 months, some of the time you are using an optimisation period that is 3 months old.

However, I believe the difference in performance is likely to be very small, for several reasons. Firstly, there is a big overlap between the most recent year and the year that ends 3 months back - in fact 3/4 of the data is the same. Secondly, optimisation is a very imprecise process: a large fraction of the difference between the results of an optimisation over the last year and over the year ending 3 months back is likely to be due to chance rather than a real difference between the characteristics of the market in the two periods. Thirdly, the market characteristics whose changes might be captured by optimisation probably change slowly over time. Fourthly, the correlations between whatever characteristics the optimisation is influenced by and the characteristics of the out-of-sample data are quite low, diluting the effect on results further. Finally, whatever market characteristic we are trying to track will only explain part of the system's results on the out-of-sample data.

It would be good to test this scientifically with a real example, looking at statistical differences in performance, but it would need to be a pretty sizable test to reduce the random variation in results, and a very large test to identify a small improvement due to very frequent re-optimisation, IMO.
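The overlap point is just arithmetic; a two-line sketch:

```python
def window_overlap(window_days, staleness_days):
    """Fraction of an optimisation window shared between the freshest
    possible window and one that ended `staleness_days` ago."""
    return max(0.0, (window_days - staleness_days) / window_days)

# 1-year window, re-optimised quarterly: just before the next
# re-optimisation, the window in use ended ~3 months ago, yet it still
# shares about 3/4 of its data with the freshest possible window.
print(window_overlap(365, 91))  # ~0.75
```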

 

> NeuroNets might be THAT thing

I understand the excitement around NN but I've always thought that as the training & retraining comes from... historical data, we're just going round the same old loop but with more CPU cycles...

-BB-

 

> same old loop but with more CPU cycles...

Yeah, I kinda share the same sentiment but thinking happy thoughts %:)

Well, I'm hoping the NN does my system development process better than me. By that I mean the process of learning from a path which failed and trying another route.

 

A couple of thoughts. Historical data is historical data, but it's pretty much all we've got :-) NNs have a bit of a mystique, but really they are a sort of regression machine. By this I mean a NN encodes a class of functions between its inputs and its outputs, and the process of training involves determining the function's free parameters to fit it to the training set.
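To make the "regression machine" point concrete, here is a toy example (numpy only, nothing market-specific): a one-hidden-layer net is just a parameterised function y = W2·tanh(W1·x + b1) + b2, and training merely fits those free parameters to data.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.uniform(-1, 1, size=(200, 1))                 # inputs
y = np.sin(3 * X) + 0.1 * rng.normal(size=X.shape)    # noisy target

hidden = 16
W1 = rng.normal(scale=0.5, size=(1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.5, size=(hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    h = np.tanh(X @ W1 + b1)              # forward pass
    pred = h @ W2 + b2
    err = pred - y                        # mean-squared-error residual
    grad_W2 = h.T @ err / len(X)          # backward pass
    grad_b2 = err.mean(axis=0)
    grad_h = (err @ W2.T) * (1 - h ** 2)
    grad_W1 = X.T @ grad_h / len(X)
    grad_b1 = grad_h.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final MSE:", float((err ** 2).mean()))
```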

The issue of historical data reminds me of one of my favourite nonsense trading dogmas: "indicators lag, price doesn't". The current price does not lag, true, but try trading using just the current price and ignoring all others (it's 1.41665 - do you want to buy or sell? No other information available). Previous prices rather obviously lag. Do they lag less than indicators? Well, suppose you have a set of SMAs with lengths 1, 2, 3, and so on. The first N of these determine the last N prices, so in any real sense the SMAs don't lag any more than price does. Someone who subscribes to this popular dogma will believe that when you talk about the support created by a price extreme that occurred N bars back, that's an example of price not lagging, but if you talked about the relationship of price to an N-period SMA, that would be a lagging indicator. Amusing.
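To spell out the claim that the first N SMAs determine the last N prices: SMA_1 is the current price, and k·SMA_k - (k-1)·SMA_(k-1) recovers the price k-1 bars back. A quick check (the prices are just example numbers):

```python
# The last N prices can be recovered exactly from the SMAs of length
# 1..N taken at the current bar, so the SMAs carry no less (and no
# more) information than the raw prices.
prices = [1.4155, 1.4160, 1.4158, 1.4163, 1.41665]  # oldest .. newest

N = len(prices)
# sma[k] = average of the k most recent prices (k = 1..N)
sma = {k: sum(prices[-k:]) / k for k in range(1, N + 1)}

recovered = []
for k in range(1, N + 1):
    # k*SMA_k - (k-1)*SMA_{k-1} isolates the price k-1 bars back
    prev = (k - 1) * sma.get(k - 1, 0.0)
    recovered.append(k * sma[k] - prev)

# recovered[0] is the newest price, recovered[k-1] is k-1 bars back
assert all(abs(a - b) < 1e-12
           for a, b in zip(recovered, reversed(prices)))
print(recovered)
```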

 

> It's 1.41665 - do you want to buy or sell?

Wow, that's pretty good, I've never thought about it like that before. Man, I like this guy even more, he makes me think. Allow me to add my dogma to the list: RSI, MACD, CCI, ADX, SMA or whatever your favourite indicator has moved 100 points in the up direction - do you want to buy or sell?
