Statistics of an anti-grid-like system - page 3

 

Sorry to correct you, ubzen, but mathematics is simply true. It's the misuse of mathematics that can cause problems. If you really want to disagree with the truth of mathematical theorems, you're almost certainly banging your head against a brick wall and in the unlikely event that you are not, I reckon you need to improve your knowledge to postdoc level. For example, the Kelly criterion is a mathematical truth about the probability distribution of the returns achieved by using a certain amount of leverage on a sequence of opportunities with certain statistical properties. It can be of practical use in situations where you have what you believe to be good estimates of the necessary statistical properties. But the most important thing the theory (and my empirical studies) tells me is that you can expect horrendous variance in return if you use anything close to the maximum leverage that the criterion would suggest. Pretty much the only way the theory can mislead someone who understands it is if they plug in estimates of the statistical properties of their trades that are too inaccurate. Garbage in, garbage out is an old truth. If you put incorrect data into a correct program, don't expect correct results.

As for the Gaussian distribution, the fact that some sloppy people made incorrect assumptions about the mathematical model to use for correlation between assets in certain applications, and that this mistake led to big financial problems is rather old hat now, and seems about as exciting to me as the Y2K problem (which was already sending me to sleep by about 1990, especially as I had always avoided using 2 digit years in my programs).

I aspire to not make wild claims like yours (eg "billions of independent events"), but to actually work out how meaningful the information I have is. For example if the equity curve achieved by zzuegg's system really is as amazingly good as it appears, this is very strong evidence for its effectiveness unless it has been optimised on that data. I have never produced an equity curve as good as that without it being nonsense or optimised using the data. Whenever you see a final profit that is a very large multiple of the maximum drawdown in equity, it is highly indicative of quality. When it is a small multiple, the chance of it being a fluke is a lot higher. This is a simple rule of thumb, but it's not a bad one. It can be improved upon by looking at the statistics in more detail.


Incidentally, if George Soros went broke this year, I would say "his money management was abysmal". While George Soros will never make the mistake of risking his entire wealth, a good example of someone who did do so is the famous Jesse Livermore. He was bad enough at money management to become poor after being extremely rich. It doesn't take any mathematics to realise he was foolish to do so, probably due to a combination of psychological flaws combined with a lack of methodical management of his wealth. Livermore was clearly a very good trader, but there was room for improvement in his money management, wouldn't you say?

 

With all due respect, could you help us with substance? Twenty paragraphs explaining why we disagree is not gonna help this thread. I hope you can respond to my questions with a direct answer; I'll even accept "I don't know" or "I'll look into it".

1) Please show us the math which proves or disproves the anti-grid approach.

2) Please provide an estimate of a statistically valid # of trades.

3) Please provide formulas or references which support your opinions which I disagreed with.

As you said, it's the misuse of mathematics that can cause problems. IMO, using conventional statistics toward trading like it's a Law is a misuse of the mathematics. I'm really begging you to please provide formulas this time around.

In response to:

Livermore was clearly a very good trader, but there was room for improvement in his money management, wouldn't you say?

I'd say the ends justify the means. Had he not gone broke, we would have kept thinking he knew his money management.

From wikipedia: On September 16, 1992, Black Wednesday, Soros's fund sold short more than $10 billion worth of pounds,[27] profiting from the UK government's reluctance to either raise its interest rates to levels comparable to those of other European Exchange Rate Mechanism countries or to float its currency.

Where do you think they got the statistics which supported the edge on the trade, which supported the money management of $10 billion worth of pounds? If Soros had lost that bet, we'd say he didn't know money management too, because the Kelly and bankroll didn't support the bet.

If the math says you have a 0.01% (1/10,000) risk of ruin, and you lost all your money, does that mean you don't know money management? Or does it mean you had bad luck, which hits 1 out of 10,000 people?

 

@ubzen, interesting points. I don't think we're as much in disagreement as you might believe. And discussion improves understanding, hopefully for all of us.


1) I haven't made any of these (four?) claims and would not do so. I would not want to or attempt to prove or disprove grid or anti-grid systems in general. Simply infeasible. Even with one particular system, all that is possible is to make statistical statements about its performance.

2) This question brings up an issue which is of fundamental importance in interpreting evidence. The basic idea is to estimate how likely it is that the results might have been achieved by chance, so as to avoid being deceived. As a simple analogy, suppose you had a method for predicting coin flips and you managed to achieve 14/20; you can easily work out how likely such a result would be if you were really scoring 50%. 14/20 could be thought of as a profit factor of 1.75, so it doesn't need much of a sample to become significant (in fact there is about a 1 in 17 chance of getting at least 14 out of 20 right if you have no edge).
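For the curious, that 1-in-17 figure is easy to verify; here is a minimal check (Python used purely for the arithmetic):

```python
from math import comb

# Probability of at least 14 heads in 20 fair coin flips,
# i.e. the chance of scoring 14/20 with no real edge.
p = sum(comb(20, k) for k in range(14, 21)) / 2**20
print(p)          # about 0.0577, i.e. roughly 1 in 17
```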

It turns out that lower profit factors (eg 1.1) need much larger samples to become significant. The precise process to follow is to create a null hypothesis which might be something like "the trades have the same variance as the actual results, but the average profit is zero" and then calculate the probability that you would get the actual result by chance. A very useful general procedure, I hope you would agree. In the case of zzuegg's results, my analysis was not sufficient to disprove the null hypothesis (even with the optimistic assumption I made about the variance) but the amazingly good equity curve makes it clear to me that doing analysis using data I did not have would show the results could definitely not be ascribed to chance.
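A minimal sketch of the null-hypothesis procedure described above, using made-up trade profits and a normal approximation for the p-value (a real analysis would use the actual trade list and might prefer an exact or bootstrap test):

```python
import math

# Made-up per-trade profits; the null hypothesis is "same variance,
# but true average profit is zero".
profits = [12, -8, 30, -15, 22, 5, -10, 18, 7, -3]
n = len(profits)
mean = sum(profits) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in profits) / (n - 1))

# z-score of the observed mean under the null, and a one-sided
# p-value from the normal approximation.
z = mean / (sd / math.sqrt(n))
p = 0.5 * math.erfc(z / math.sqrt(2))
print(f"mean {mean:.1f}, z {z:.2f}, p {p:.3f}")
# With only 10 trades, even a healthy-looking average profit
# is not statistically significant.
```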

There is an interesting point when you test several systems (say in some sort of optimisation). If you performed this analysis on the best of your runs, you could mislead yourself, because of the selection that has taken place. You need to strengthen the criterion to take into account that you have done several runs. For example if you do 100 independent runs and one of them comes up with results that you might expect to occur by chance 1 in 200 times, that is hardly any evidence at all that the method is profitable. Hopefully it is clear why.
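The arithmetic behind that warning is simple; with the numbers from the example above:

```python
# If each of 100 independent runs has a 1-in-200 chance of looking
# this good by luck alone, the chance that at least one run does is:
runs, p_single = 100, 1 / 200
p_at_least_one = 1 - (1 - p_single) ** runs
print(p_at_least_one)   # about 0.39 -- hardly evidence of anything
```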

3) ok, finally addressing your three disagreements quoted below.

ubzen:

I Disagree: It is a mathematical fact that the way to achieve the best long term performance (with a reasonable definition of this including your risk-aversion) from a series of trading opportunities with similar statistical characteristics is to use a similar amount of leverage for each of them:

I Disagree: Admittedly, there are probably quite a few people who are willing to accept (or ignore) a small probability of losing very heavily in order to have a high probability of healthy profits, but this is best classified as pure gambling.

I Disagree: there is no theoretical reason to allow what happens to the first trade to influence the size of the position for the second trade, (except that it may be changed in proportion to the equity in the account).

The first one is related to the Kelly criterion, with which you are familiar. If you have a series of trading opportunities with the same statistical characteristics (say, two possible outcomes with the same probabilities each time), you are aware that you achieve the maximum mean log growth with the Kelly fraction - i.e. the same specific fixed leverage every time you get one of the opportunities - but that this gives extremely high variance in the results. You are probably also aware that if you use a lower fixed leverage, you achieve a lower mean growth but a much lower variance.

If for a sequence of similar opportunities you use 2 or more different levels of leverage for subsets of them, the result will be a geometric mean of what you would get with each of the different levels of leverage separately (if this isn't obvious, I can elaborate). The reason that you get worse variance with the mixture of leverage than with a single choice of leverage is that the graph of variance versus return is concave (each point corresponding to a single choice of leverage). If you think of this curve of variance against return, an average of two or more points on the curve will be above the curve (simply because it is concave). This means it is always best to use a single level of leverage.

There is obviously a huge problem in that we don't know the probabilities in trading, but if we did, Kelly would point us towards the way to vary exposure in order to provide an optimum compromise between return and variance.
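A toy simulation illustrates the return/variance trade-off described above, assuming a simple even-money bet with a 55% win rate (all parameters are illustrative, not from any real trading system; the Kelly fraction for this bet is 2p-1 = 0.10):

```python
import math
import random
import statistics

def simulate(fraction, p_win=0.55, n_bets=1000, n_paths=500, seed=1):
    """Mean and spread of final log-equity when betting a fixed
    fraction of equity on an even-money bet with win probability p_win."""
    rng = random.Random(seed)
    up = math.log(1 + fraction)      # log-return of a win
    down = math.log(1 - fraction)    # log-return of a loss
    finals = []
    for _ in range(n_paths):
        log_eq = 0.0
        for _ in range(n_bets):
            log_eq += up if rng.random() < p_win else down
        finals.append(log_eq)
    return statistics.mean(finals), statistics.stdev(finals)

# Mean log growth rises as leverage approaches the Kelly fraction (0.10),
# but the variance of outcomes rises sharply with it.
for f in (0.02, 0.05, 0.10):
    mean, sd = simulate(f)
    print(f"fraction {f:.2f}: mean log growth {mean:.2f}, sd {sd:.2f}")
```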

It is not trivial to apply this to the situation where you have a variable size net position, which is the case in grid and anti-grid systems. This is very different to the situation, say, where you have a series of bets on a biased coin. On reflection, in this type of situation we could make the convenient artificial definition of a trade as the time over which a position stays constant. Turning a blind eye towards the spread, you can pretend that you simply open and close such positions at the times when position sizes are changed. The exposure you have and the time over which you have it is obviously the most important and interesting thing.

With this viewpoint, theory would suggest variations in position size should be associated with variations in the statistical performance (you want fairly low exposure (in terms of the effective leverage) when you have a slight edge and high exposure when you have a huge edge).

I guess the second disagreement could be viewed as personal preference. But the probabilities and the sizes of the wins and the losses are important too. The sort of situation I was thinking of was where someone had a 90% chance of a 10% gain and a 10% chance of a 90% loss or worse. i.e. no edge, but winning most of the time. This is the sort of situation you get from martingale staking without an edge.

The third disagreement comes from thinking of the effect on the final equity. Can we agree that, in principle, the only thing that should be taken into account in picking the exposure at a particular time is how that exposure affects the probability distribution of your final equity? As far as I can see, the only things that are relevant are the probabilities of things happening (price reaching different levels later) and the equity at the point the opportunity occurs. The only way the result of some earlier trade affects this is by the effect it has had on the equity. Of course if there is already a position open, this affects how the appropriate level of exposure can be achieved. Perhaps that is what you meant.

 

@ubzen, on the other points:

Surely you can't think it's good money management for someone with a vast fortune to lose it all and more? To quantify this, people generally have a concave utility function, especially at the low end, where the first part of their wealth is worth far more than additional wealth. E.g., the utility difference between $0 and $1M is much more than the difference between $9M and $10M. This fact can be used to show that utility is lost by risking too much of one's wealth. The reason is that the marginal profits have much lower utility than the marginal losses. But surely this is common sense: if you have $100M (in today's money), you should keep at least some of it safe to ensure you have most of what you want. The utility of squeezing out the last penny of potential profits by maximising exposure is nowhere near high enough to justify risking it all.
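As a toy illustration with log utility (one standard concave utility function; the dollar figures are made up):

```python
import math

# A 50/50 gamble between $10M and $250M has a higher expected *wealth*
# than a sure $100M ($130M vs $100M), but lower expected log *utility*:
safe_utility = math.log(100e6)
gamble_utility = 0.5 * math.log(10e6) + 0.5 * math.log(250e6)
print(safe_utility > gamble_utility)   # True: keeping the fortune safe wins
```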


I think you have guessed wrongly about the riskiness of Soros' bet. If the position was $10B, the risk was lower than 4.5% of this ($450M) because of the upper bound on the ERM, but probably a lot less than this because entry and exit prices would not have been at the worst points possible. He had identified a trade with a good risk/reward ratio, since the upside if the overvalued pound left the ERM was very high. Since Soros' fund probably had > $10B funds at the time of the trade (e.g. we know that about $6B was invested by the Rothschilds in 1969), the leverage used for the trade can be considered to be conservative according to the Kelly criterion, assuming any reasonable edge. [Incidentally, the pound dropped about 25%, and Soros' bet must have caught quite a large fraction of this move, which is impressive. I wonder what the exit strategy was? :) ]

 

Nice, well explained. Now I understand where you're coming from. I agree with all of the explanation provided. Allow me to play the devil's advocate against this system for a moment. On appearance, the system seems Random. No forecasting, no edge and no algorithm. Most people with a mathematical background would look at such a system and say the system's Expectation=0. Because of the mathematical Law which I agreed with on the list, this system is doomed to fail. Why on earth is this system using variable-size bets anyway? If it's truly profitable, according to the math and how I understand it, it should be profitable using flat bets as well. Also, why is it using 20k instead of the standard 10k starting account? Gordon provided some nice math and comments to disarm such systems.

Now I'll attempt to support this system. I think Zzuegg put his justifications nicely: indicators lag and prices don't. What the system is doing is using the OrderProfit as the indicator. Once I saw his curve, I knew he was hedging the same currency. I've seen curves where Zzuegg has worked wonders with drawdown using hedging. He apparently knows something I don't. Hedging is something 7bit, the first person I heard about this Anti-Grid from, would not consider. However, he did consider using Grid and Anti-Grid together, because one has linear growth (Grid) and the other has quadratic growth (Anti-Grid). All in all, some nice mathematics has been provided to support this system by 7bit.

To the issue of Variable-Lot-Size vs Similar: Doesn't make much difference; Zzuegg could have used 0.1 lots throughout the entire test and still achieved the same results, only this time he'd have 10x more trades. I've seen him do it before. The reason why math guys like fixed lots is that it makes it easy to calculate the Kelly and other stats. But that's not what's important with such systems; what's more important is whether the orders are dependent or independent events. Yes, I know it's not impossible to calculate Kelly and SD curves with dependent events, it's just more difficult, and I appreciate your comments trying to explain some of that.

To the issue of Draw-Down & 20k Bankroll: I think even Zzuegg realizes that his system requires a much bigger investment capital than typical. The relative drawdown will get lower and lower whenever the starting capital is increased. I just wanted to point out the importance of comparing apples to apples.

To the issue of Statistically Valid # of Trades: Boy, I wish you had the answer to that :(. Anyway, all hope is not lost. He's got other currency pairs which may show different characteristics that he could test on. I'm more than confident that such a system would work better across different charts than a textbook trend-following or sideways system, for that matter. Now sure, again it might die on one of those ranging pairs.

To the issue of Random Trading 0-Expectancy: My belief is that the market is random a high percentage of the time. The effects of its non-random nature are mostly realized on a larger scale/time-frame. However, it's within the smaller time-frames where most of our bankrolls must weather the storm. Also, trading is not a static process like a card game or chess, where algorithms can exploit the game (to a statistical degree of near certainty) because all the known variables are visible. Technical analysis and price charts only show one dimension of markets; the fundamental and emotional aspects are not apparent. One can calculate all the variables and even emotions (if you believe that too is reflected in price) up to the point you place the order. But after that, the trade is at the mercy of new price movements. I think such a system allows you to adjust to the before and after within a randomly changing market.

In Conclusion: Did Zzuegg crack the code on the 7-Bit Anti-Grid system using Hedging? Well, we'll wait and see.

Cheers. -Zen-

 

You are right, ubzen, the system also works with constant lot sizes, but it recovers very slowly after ranging phases, so the profit decreases. Even though it might look safer, I don't think it is. Recovering fast after a ranging period, at best as soon as the breakout happens, is a good way of not falling into the next ranging period without locking in the profit.

To the issue of Draw-Down & 20k Bankroll: I think even Zzuegg realizes that his system requires a much bigger investment capital than typical. The relative drawdown will get lower and lower whenever the starting capital is increased. I just wanted to point out the importance of comparing apples to apples.

In the 20k test I used my live account, meaning the minimum lot size is 0.1. If you find a broker (and there are a lot, even ECNs) which allows you to trade 0.01, the bankroll would only need to be as high as 2k. Still high for such small lot sizes, but affordable.

A strict anti-grid would require opening a position in both directions after an X-pip movement. The main difference between an anti-grid and a grid is that in the first one you are setting a stoploss and in the second one a takeprofit. There are pros and cons of hedging; as I said, the same system can also be implemented in a non-hedging way, but the coding is far easier with it. 7bit's anti-grid works fine (and if I remember correctly, he was also selling and buying simultaneously, a long time ago), but you need to define exit criteria; if you miss the exit, the whole anti-grid goes against you. This would require an indicator. Since I strictly do not want any indicator, I needed to find another way to automate this. The main addition to the strict anti-grid is that the system locks in even some of the profit if the market goes against the net position (not much, but at least some pips). The 'large' drawdown is still a problem; avoiding only a few trades in the ranging phases would not only decrease the drawdown but also increase the profit, since the recovery would happen faster. Unfortunately I currently don't know how to do that without relying on indicators.

Also, MT5 multicurrency backtests seem trickier than I thought. :(

BTW, I reversed the system to see whether I am exploiting a betting strategy or truly the nature of trending markets, and the reversed system failed every time; the starting time does not matter. This leads to the conclusion that even if some small progressive element is included, I am exploiting the fact that markets trend.

 

@ubzen, actually, "hedging" provides exactly none of the edge or profitability! It is merely a convenient way to keep stop and take profit orders open. There is no other advantage to having two positions open in opposite directions at the same time at the same broker (because account equity changes in precisely the same way as if you had the net position). In some cases, hedging can increase spread costs (any time you open and close opposite positions at essentially the same time). Of course, with a bit more trouble you can produce the same results as any hedged system by just ensuring you have the same net positions at all times and the same stop and limit orders open (as separately managed orders, not OCOs attached to a position).
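A trivial numeric check of that claim, with made-up lot sizes and point value (spread costs ignored, as noted above):

```python
# A "hedged" book (long 1.0 lot plus short 0.6 lot) and the equivalent
# net position (long 0.4 lot) gain exactly the same equity for any move.
lot_value_per_point = 10.0            # assumed dollar value of 1 point per lot
for move in (25, -40, 7):             # price moves in points
    hedged = (1.0 * move - 0.6 * move) * lot_value_per_point
    net = 0.4 * move * lot_value_per_point
    assert abs(hedged - net) < 1e-9
print("hedged and net equity changes match")
```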

I thought my answer on statistically valid number of trades was what you needed. To look at it in a slightly different way, whenever you have a sample of trades, you have uncertainty in the underlying performance (as opposed to the sample performance), which declines with sample size. You can find things like a 95% confidence interval on performance using the technique I described. The only reason you need more trades when the profit factor is low is that you need to make the interval smaller to be confident you are profitable. Of course, the uncertainty goes down with the square root of the number of trades.
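As a small illustration of the square-root law, with an assumed per-trade standard deviation (the numbers are arbitrary):

```python
import math

# The 95% confidence-interval half-width on mean profit per trade shrinks
# with sqrt(n): halving the interval width requires 4x the trades.
sd_per_trade = 50.0   # assumed standard deviation of per-trade P/L
for n in (100, 400, 1600):
    half_width = 1.96 * sd_per_trade / math.sqrt(n)
    print(f"n={n:5d}: 95% CI half-width ~ {half_width:.2f}")
```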

With regards to the thoughts about what makes a strategy profitable, bear this trivial thing in mind. Some of the time you have a long position, some of the time you have a short position. For profitability the market has to move in the same direction as your trades more than it goes in the wrong direction (with weighting if the position size varies). So you do need to predict the direction of the market to profit (i.e. be long when it is going to go up, be short when it is going to go down), however complicated your system. :)

 

@zzuegg, what were the red green and blue data on your chart posted on 2011.08.08 21:19?

I'm still utterly amazed that the equity line can hardly have any downside, yet it still goes up such a lot over the entire period. Can you indicate roughly what it is in the system that causes this? The antigrid I have looked at from 7bit seems very different (and nowhere near as good), at least with me putting in random numbers. [For some reason it backtests very slowly, which doesn't help.] Also, the balance line only heads south until the end of the test period, which seems a bit odd!

 

@Zzuegg:

7bit's anti-grid works fine (but if I remember correctly he was also selling and buying simultaneously, a long time ago): Yeah, maybe a long time ago when he was experimenting. Per his statement here: "opposing positions will of course be closed immediately, I thought this was self-evident? Why on earth would i want to be long and short the same instrument at the same time?". The final version of the Snowball system, much like MT5, would just not allow it. But I agree, it's easier to code with the hedge, as you'll have access to all the information stored in the orders without having to use global variables or dummy orders.

Also MT5 multicurrency backtests seem trickier than I thought. :(: Yeah? lol, I thought I'd feed you to the wolves first. Anyway, if you're still trying to get some idea of how it'll perform in multi-currency without using MT5, I still recommend the tool described earlier. It can merge multiple report files to show equity effects. 1) One problem with this approach: if you're using percentage of equity to determine anything (risk/reward, for example), then this approach would be misleading. 2) This solution cannot look at what's happening in, for example, EUR/USD to determine the outcome of EUR/GBP.

The 'large' drawdown is still a problem, avoiding only a few trades in the ranging phases...: Well, true to the system, you can try using Order_OpenPrices to determine the range. However, in my tests, for as many times as it helps you, it also hurts you. Here's a sample code.

//~~~~~~~~~~Stack-Tech:
for(x=1;x>=-1;x-=2){
    if(Order_Manage(x*iMagic,10)>0){
        if(Order_Manage(x*iMagic,10)<OrMax){
        //~~~~~~~~~~
            if(Last_Or==(x*Atg_Magic) && Zone_Out()){
                if(Order_Manage(x*iMagic,20)<-Neg_Gv*OrderLots()){
                    Atg_TimeStamp=Trade(-x,'f',Lots);
                    Last_Or=(-x*Atg_Magic); break;
                }
            }
        }
    }
}
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
//~~~~~~~~~~Zone_Capture:
if(!No_Orders && Order_Manage(0,2)>0){
    if(Zone_Hi==0 || OrderOpenPrice()>Zone_Hi){
        Zone_Hi=OrderOpenPrice();
        //Zone_Lo=Zone_Hi-Grid*Point2Pip;
    }
    if(Zone_Lo==0 || OrderOpenPrice()<Zone_Lo){
        Zone_Lo=OrderOpenPrice();
        //Zone_Hi=Zone_Lo+Grid*Point2Pip;
    }
}
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
//~~~~~~~~~~Zone_Out:
bool Zone_Out(){
    if(Zone_Hi!=0 && Zone_Lo!=0){
        if(Mid_Price>Zone_Hi+Grid*Point2Pip
        || Mid_Price<Zone_Lo-Grid*Point2Pip){
            return(true);
        }
    }
    return(false); // default: zone not yet set, or price still inside the zone
}
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
Elroch:

@zzuegg, what were the red green and blue data on your chart posted on 2011.08.08 21:19?

I'm still utterly amazed that the equity line can hardly have any downside, yet it still goes up such a lot over the entire period. Can you indicate roughly what it is in the system that causes this? The antigrid I have looked at from 7bit seems very different (and nowhere near as good), at least with me putting in random numbers. [For some reason it backtests very slowly, which doesn't help.] Also, the balance line only heads south until the end of the test period, which seems a bit odd!


The equity does go down. The chart posted on 08.08 is the recorded true equity, showing the peak (blue), the current equity on a daily basis (high/low) and, in red, the maximum drawdown, which was 28-something percent.

I will write more tomorrow.
