I think it is not necessary to recalculate these reference values for each pair inside the Expert Advisor itself. You could simply make a script that runs for a couple of days and writes the reference values to a file, and let the robots read from that file. A year later, recalculate. These distributions are unlikely to change over such a period, and even if they do, they won't change much.
The bare mathematics will not change at all, so I suggest caching these calculations. I have attached a file with calculations of those factorials, probabilities, sums of probabilities and the calculated average amplitude, with all of it cached for the duration of the run. Someone else besides me might find it useful too. With the cache, calculating these values even on every bar is no longer a burden.
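A minimal sketch of that kind of runtime cache, assuming a pure symmetric random walk over N unit blocks; the class and method names are illustrative, not the API of the attached file:

```mql5
// Illustrative sketch only: cached random-walk statistics per block count N.
// CWalkStatsCache / ExpectedAmplitude are example names, not the attached file's API.
class CWalkStatsCache
  {
private:
   double            m_amplitude[];   // cached E|S_N| per block count N, -1 = not computed
public:
   void              Init(const int max_n)
     {
      ArrayResize(m_amplitude, max_n + 1);
      ArrayInitialize(m_amplitude, -1.0);
     }
   // Expected absolute displacement of a symmetric N-step walk (even N):
   // E|S_N| = N * C(N, N/2) / 2^N  (about 3.87 for N = 24).
   double            ExpectedAmplitude(const int n)
     {
      if(m_amplitude[n] >= 0.0)
         return(m_amplitude[n]);                 // cache hit, no recomputation
      double p = 1.0;                             // builds C(n, n/2) / 2^n incrementally
      for(int k = 1; k <= n / 2; k++)
         p *= (double)(n / 2 + k) / k / 4.0;      // multiply by (n/2+k)/k, divide by 2^2
      m_amplitude[n] = n * p;
      return(m_amplitude[n]);
     }
  };
```

With the value memoised per block count, asking for it on every bar costs a single array lookup.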
But rebuilding the blocks over history, anew on each new bar, and for several block sizes at once, is a burden. The larger the block size, the more bars have to be processed to accumulate the number of blocks of that size needed for analysis.
I used OHLC to build the blocks, not just Close. In my opinion, assuming that price travels O -> H -> L -> C within a down bar and O -> L -> H -> C within an up bar is somewhat more useful than assuming only the close. The number of calculations does not grow much (in absolute time), but there are advantages. For example, the required number of backward blocks is accumulated faster. When switching from M1 to M5, the resulting picture changes insignificantly, and moving to M15 and higher the changes are absent or even smaller than for M5, which also leaves room for improving performance. Building forward blocks is less fun, because "in motion" you can build one picture (for example, straight from ticks), and on recalculation another one. Then again, if forward blocks are supposed to be drawn at the close of the next bar, OHLC could be used there as well (a sketch of this intrabar path assumption follows the results below).
"I am certainly not a master", as Panda used to say, but what I knocked together on my knees based on the article, after several optimisations (the first being the cache of statistical calculations), gives approximately the following results:
- USDJPY, M1, at open prices; sets of 24 to 40 blocks; block size from 90 pips, each next size +10% over the previous one (256 pips max), 12 sizes in total.
- Over a calendar month, a plain run takes on average 2.5 minutes, of which trade processing accounts for only 10%. The stricter the trend-detection requirements, the less often an open signal is found and the more the search overshoots towards larger block sizes and longer search times. The more frequent the signal, the more often the back blocks are fixed and only the forward blocks are recalculated, which is very fast.
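A minimal sketch of the intrabar path idea mentioned above, under two assumptions of mine: the block rule here is a simple "complete a block on every full-height move" threshold, and the CBlockBuilder/ProcessPrice/FeedBar names are illustrative, not the actual code behind these results:

```mql5
// Illustrative sketch only: a minimal threshold block builder fed with the
// assumed intrabar path instead of Close prices alone.
class CBlockBuilder
  {
private:
   double            m_height;      // block height in price units
   double            m_last_level;  // level at which the last block completed
   int               m_dirs[];      // +1 / -1 per completed block
public:
   void              Init(const double height, const double start_price)
     {
      m_height     = height;
      m_last_level = start_price;
      ArrayResize(m_dirs, 0);
     }
   void              ProcessPrice(const double price)
     {
      // complete as many blocks as the move from the last level allows
      while(MathAbs(price - m_last_level) >= m_height)
        {
         int dir = (price > m_last_level) ? +1 : -1;
         int n   = ArraySize(m_dirs);
         ArrayResize(m_dirs, n + 1);
         m_dirs[n]     = dir;
         m_last_level += dir * m_height;
        }
     }
  };

// Assumed intrabar path: O -> L -> H -> C for up bars, O -> H -> L -> C for down bars.
void FeedBar(CBlockBuilder &builder, const MqlRates &bar)
  {
   builder.ProcessPrice(bar.open);
   if(bar.close >= bar.open)
     {
      builder.ProcessPrice(bar.low);
      builder.ProcessPrice(bar.high);
     }
   else
     {
      builder.ProcessPrice(bar.high);
      builder.ProcessPrice(bar.low);
     }
   builder.ProcessPrice(bar.close);
  }
```

Feeding the open, then the assumed order of the extremes, then the close lets a single wide bar complete several blocks in sequence, which is presumably why the backward blocks are accumulated faster than with Close-only input.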
With a run taking 2.5 minutes per month of calculation, it is difficult to pin down nuances or optimise parameters, but it is possible :)
The idea in the article is interesting and seems simple and clear. But, as one girl said, "I didn't realise whether I understood it...". Everything above is not about the idea itself or its evaluation, only about its concrete implementation (incomplete, of course, and based on my own understanding).
P.S. And it all started with doubts that over 24 blocks the average wandering would be 3.8 :)
Yes, my maths is cached at runtime. The tests are very long... I was preparing tests for the next article and started them before the New Year. The tester is single-threaded, so I split the tests into 2-year sections (28 currency pairs each) and loaded 5 terminals. Everything had been running for 14 days, then my router broke down; I connected the Internet by cable directly and got "send error 10054" in all the testers... So the next article will not be soon... By the way, does anyone know what "send error 10054" is and how to avoid it in the future?
Not everything is described in the article yet; almost everything is described in the ToR, but not everything is there either)). I am now making a much better version.
By the way, how did the doubts about the figure 3.8 disappear)?
For now, I will post the results for 1.5 years, from 2010.06.25 to 2012.02.27. This is without optimisation, for 28 instruments at the same time. The parameters are the same for all instruments.
As for how the doubts about the number 3.8 disappeared: the doubts are gone, it is just maths.
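For reference, a minimal check under the pure random-walk reading (this derivation is my own, not quoted from the poster): for a symmetric walk of N unit blocks the expected absolute displacement is E|S_N| = N * C(N, N/2) / 2^N for even N, so E|S_24| = 24 * C(24, 12) / 2^24 ≈ 3.87, and the asymptotic sqrt(2N/pi) gives about 3.9, which matches the quoted figure of roughly 3.8.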
As for 10054, it is presumably a network protocol error caused by loss of the network connection. The fact that the cable was plugged back in does not help. The problem usually appears in many programs when you pull the link from the network card (or disconnect the router). In that case (especially if the IP is obtained via DHCP), Windows decides it no longer has a working network device, so network services break. At the same time, programs that talk to themselves locally over network connections, for example to the local address 127.0.0.1, also break.
The following helped me: I created a virtual network device (a loopback adapter, you can google it), assigned it an IP address such as 127.0.1.1, and wrote my PC's name and this address into the hosts file. As a result there is always an active network device in the system, the PC can reach itself by name and by this address, network services start, and so do programs that communicate with themselves via TCP/IP.
Well, I always disable ipv6 as well.
Then again, I have not run into this for a long time; either I am always connected, or something has changed somewhere...
Ah, thanks... I think I've got it: I accidentally disconnected the network card, and I guess that is where Windows panicked and the tester stopped. Thanks!
Please attach a sample MT5 EA so we can follow your ideas and test your theory.
The algorithm is not publicly distributed, but in the next article I will show how it works using backtests as an example.
Greetings.
Some thoughts and their results. A bit long.
There are two approaches to analysing block deviations: either the chart is a random walk, and we use statistical functions (factorials and so on), or the chart is a not-at-all-random movement, and we use statistics gathered from past movements. If you have watched the formation of the block chart for a while, you have probably noticed that it depends on the initial construction point. If we build the chart at each candle open, we get a slightly different chart at candle N than at N+1. Hence there are two options for generating statistics on past blocks: either also at each candle open, or periodically, for example once per hour/day/week, but with the block chart always plotted from the same zero point. The approach of recalculating everything at each bar is possible, but it is very slow.
With this variant I gave up on trying to pick or test anything in the tester, and instead checked all guesses and candidate parameters on demo accounts, online, also monitoring the results by the Equity/Balance chart online. I must say that over those short periods (the latest one from 21/01 to the present) the algorithm (well, as I understood and implemented it) is quite stable. By the way, I opened not on every bar with a signal, but only on a new signal: for example, there was a buy signal at block height H=150p, I bought; the next position was opened on a Buy signal at H=165p, and only if the first one had gone into the negative, since with favourable development the imbalance in the wandering decreases and the signal disappears. Only one demo account out of 8 went to stopout: the one with very aggressive settings, a $1000 balance and 1:1000 leverage, using 48 symbols (currency pairs, metals, 2 oils), without currency hedge control (when you buy USDJPY, you cannot also buy USDCAD, but you can buy EURUSD, and so on across the whole symbol set, as the article has it). $10 was allocated per trade; the lot is adaptive and depends on the block size; the exit is only on accumulated profit over the sum of trades on a symbol, or on the sum of profit over all open symbols.
Stable, yes, but the drawdowns are significant. When we have gone "in the wrong direction" and keep building up the position while in the red, that is understandable. But the less aggressive settings did not kill the other accounts.
Going back to the way the blocks are calculated. I now use the following variant. We take a reference point, the previous Friday's close. From it we build (once!) the historical sets of blocks with heights as in the article (*1.1); as the initial height I took iATR(sy, PERIOD_M1, 1440) * 5 at the starting point. We immediately build sets of length StepMax+MAX_STAT (for example, 48+1000) and calculate the distribution of run frequencies and so on for each length in blocks (say, from 24 to 48). Then, on each new bar, we only extend the forward sets, look for a signal for each H over the range of lengths (for example [24;48] blocks) guided by the previously calculated statistics, choose a suitable one, and decide whether to open a new position, close or top up an old one, and so on (a sketch of this setup is below). This approach gives a significant increase in calculation speed and, I believe, does not differ much from the one proposed in the article (calculation "from scratch" at each bar). At the same time, some additional possibilities appear. In particular, if the levels do not change during the week, then when there is a signal for, say, H=180p, you can place a grid of orders in the expected direction of movement.
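A minimal sketch of this weekly-anchored setup in MT5 terms; the StepMax/MAX_STAT names follow the post, while the helper functions and the choice of the current week's open as the anchor are my own simplifications:

```mql5
// Illustrative sketch only: weekly-anchored block setup as I read the description.
input int    StepMax  = 48;     // longest chain of blocks used for signals
input int    MAX_STAT = 1000;   // extra blocks collected for the statistics

// Initial block height: 5 x ATR(1440) on M1, taken near the reference point.
double InitialHeight(const string sy)
  {
   double atr[1];
   int handle = iATR(sy, PERIOD_M1, 1440);
   if(handle == INVALID_HANDLE || CopyBuffer(handle, 0, 1, 1, atr) != 1)
      return(0.0);
   return(5.0 * atr[0]);
  }

// Reference point: the previous Friday's close. Hypothetical simplification:
// use the open time of the current W1 bar as the anchor, which sits right
// after that close.
datetime ReferencePoint(const string sy)
  {
   return(iTime(sy, PERIOD_W1, 0));
  }
```

Because the anchor and the block heights stay fixed for the whole week, the heavy historical pass runs once, and the per-bar work reduces to extending the forward sets.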
Further. This approach made it possible not to rely on pure theory, but to measure the actual distributions and expectations for a specific symbol and block size. That slightly improved the results compared to "pure wandering". At the same time, the data collected was on the absolute length of the wandering (in the statistical functions I also used the absolute length of the movement, and it is clear that a random walk is, in general, symmetric about zero). I should note separately that a sample of 1000 observations can be called "statistically reliable" only conditionally, but that is not the point. A sketch of this kind of measurement is below.
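A minimal sketch of such a measurement, assuming a chain of completed block directions (+1/-1) is already available; the sliding windows and the function name are my assumptions:

```mql5
// Illustrative sketch: empirical distribution of |displacement| over windows
// of 'length' blocks, measured on a chain of block directions (+1 / -1).
// hist[k] counts how many windows ended with |sum of directions| == k.
void CollectRunHistogram(const int &dirs[], const int length, int &hist[])
  {
   ArrayResize(hist, length + 1);
   ArrayInitialize(hist, 0);
   int total = ArraySize(dirs);
   for(int start = 0; start + length <= total; start++)
     {
      int sum = 0;
      for(int i = 0; i < length; i++)
         sum += dirs[start + i];
      hist[MathAbs(sum)]++;          // statistics on the absolute displacement
     }
  }
```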
Well, then I tried to collect the distribution of run lengths in blocks taking their sign (up or down) into account. And the results got worse. Unexpectedly.
I checked it on the "28 majors" symbol set and on sets built around each major (*AUD*, etc.). This is already in MT5. I checked different periods, block construction parameters, various entry/exit nuances...
The general conclusion (from the results available) is that when the direction of the block run lengths is taken into account, the efficiency of the system decreases. If someone can explain this, I would be glad to hear what and why. Also, if someone has similar experience but the opposite result, I would be glad to discuss and compare.
Yes, the chart depends on the initial construction point. This is done on purpose, for two reasons: first, so that the algorithm, despite the errors introduced by rounding prices into blocks, better finds the maximum scale; and second, to avoid fitting to history. When the chart changes depending on the starting point, you can assess the quality of the algorithm itself more accurately.
I tested it in the tester and will show many tests of how it works in the next article. On demo it takes too long and is not informative. A 1-year test on 28 currency pairs takes me 15 days (Ryzen 3700). In the form described, the returns are not great, but it passes backtests very steadily. I am improving the algorithm now.
There is a subtlety regarding the statistical characteristics of the instrument. You can measure them and adjust the overweighting percentage based on them, but you need large samples to evaluate the character of the instrument as a whole. Local deviations can also be captured, but for other purposes. In my tests, separate overweighting parameters for long and short positions work well on stocks, i.e. when the asymmetry is taken into account, the results improve; I will also show this briefly in the next article.
The initial block size should also be adjusted to the current market situation. Initially I derived it from volatility, but now I am reworking it to take market peculiarities into account.
But the idea is to build a basic algorithm first and then layer statistics on top of it. Statistics cannot simply be measured: we have to account for the fact that we measure statistical characteristics over a large window, and if they deviate from a random walk, they will tend back towards it in the future, so we should use deliberately suboptimal parameters in advance. And the statistics should not be bare, but should take the theory into account; I did not write about the theory because I have not proved it, and I do not like writing about what is not proved.
In general, the algorithm is a rough blank, and at this stage its task is not to blow up the account and to earn at least something.
GREAT work,
thank you!