Discussion of article "Extract profit down to the last pip" - page 21

To filter out false "grail" results, I threw ticks with zero spread out of the tick history. I never thought that it would lead to such a mishap.
If ticks with zero spread are ignored in the trading logic on the real tick history, the result improves by 15%. I.e. there was a profit of 1900 pips before such logic, and with it there are 2150 pips at once. And this is with the same number of trades.
At first I thought the TS must be complete rubbish, since such a trifle has such a significant effect on the result. But then I decided to investigate a little.
It turned out that there are 187K such ticks out of 7500K. But 19K of them influenced the formation of local extrema.
And that put everything in its place. The zero-spread filter is a mistake. The spread is a fiction that has nothing to do with profitability, yet out of old habit I decided to use it for filtering. You can't do that. Or at the very least, filter only a negative spread, which indicates a technical failure when writing to the tick archive.
The situation requires at least some justification: why do ticks with zero spread appear at local extrema? Perhaps it is some peculiarities of price aggregation.
In general, do not filter by zero spread.
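As a minimal sketch of that conclusion (assuming a simple tick record with bid/ask fields; the struct and function names are illustrative, not any platform's actual API): keep zero-spread ticks, since they can sit at local extrema, and discard only ticks with a negative spread, which can only appear through a failure when the tick archive was written.

```cpp
#include <vector>

// Hypothetical tick record; field names are assumptions, not a platform API.
struct Tick
{
    double bid;
    double ask;
    long   time_msc;
};

// Keep ticks with zero spread (ask == bid) - they may form local extrema.
// Drop only ticks with a negative spread, which can only come from a
// technical failure when the tick archive was written.
std::vector<Tick> FilterTicks(const std::vector<Tick>& history)
{
    std::vector<Tick> clean;
    clean.reserve(history.size());

    for (const Tick& t : history)
        if (t.ask - t.bid >= 0.0)   // zero spread stays, negative spread goes
            clean.push_back(t);

    return clean;
}
```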
Why should there be any justification? The zero spread is probably evenly distributed in the price flow.
And at the moment of a tick with a zero spread, nobody knows that it will be an extremum.
Makes sense, I agree.
Interestingly, when there is a market pattern, it is quite easy to write a profitable TS. You can write quite different TSs, but they will give similar results precisely because of the presence of that market regularity.
I.e. the point is not even to find the optimal TS, but simply to spot the presence of the regularity in time.
I have tried everything from absolutely dumb to very complicated trading logic. It all comes out roughly the same; I cannot get a qualitatively different result. It turns out that when you know a pattern is present, it is almost impossible not to trade it, no matter what logic you come up with.
And from this we can conclude that a profitable TS is not something unique that has found some complex regularity. Its logic can only help a little not to drain, rather than to earn, because even a fool will make money on a profitable TS. It is harder not to lose.
What is the point of all this? This dependence on 19K ticks is still a bit alarming: isn't the logic of the TS full of holes if the result changes by 15%? I.e. it is not the pattern that is weak, but one of the many TS variants that trades it.
I think the explanation is very simple: "If you torture a set of data long enough, you can get any result" (c). Something like that.
I had a similar situation today. There are two similar TSs. One trades at 2 and 3 o'clock, the other at 6 and 7. Sometimes a trade hangs on from 3 o'clock, so at 6 o'clock something has to be done with it (close it if we open an opposite one, and not only then). I wrote a gluing algorithm a long time ago. Everything works. But if the algorithm receives all deals (for the 2, 3, 6 and 7 o'clock hours) sorted by time, it throws out the wrong one. I constantly forget about this. As a result, today after such gluing the profitability turned out one and a half times higher. I am also sitting here thinking: maybe that is how it should be, maybe there is some deep meaning there and profitability really can be increased! But probably not.
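For illustration only, here is a rough sketch of what such a gluing step might look like, assuming each deal has been reduced to open/close times, a direction and prices (all names and the exact closing rule are my assumptions, not the poster's actual algorithm): when a deal is still hanging at the moment an opposite-direction deal opens, the hanging deal is cut short at that moment.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical deal record; all names here are illustrative assumptions.
struct Deal
{
    long   open_time;    // seconds since epoch
    long   close_time;
    int    direction;    // +1 buy, -1 sell
    double open_price;
    double close_price;
};

// Minimal "gluing": walk the merged, time-sorted deal list and, whenever a
// deal is still open when an opposite-direction deal opens, cut the hanging
// deal short at the moment (and price) the opposite one starts.
std::vector<Deal> GlueDeals(std::vector<Deal> deals)
{
    std::sort(deals.begin(), deals.end(),
              [](const Deal& a, const Deal& b) { return a.open_time < b.open_time; });

    for (size_t i = 0; i + 1 < deals.size(); ++i)
    {
        Deal&       cur  = deals[i];
        const Deal& next = deals[i + 1];

        const bool hanging  = cur.close_time > next.open_time;
        const bool opposite = cur.direction != next.direction;

        if (hanging && opposite)
        {
            cur.close_time  = next.open_time;   // close the old deal...
            cur.close_price = next.open_price;  // ...at the new deal's entry
        }
    }
    return deals;
}
```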
The luck could just as easily have gone the other way, so it is still better to refine the gluing. But it is the kind of thing that is difficult to debug reliably, because it is almost always real-time. It is the same story with synchroniser debugging.
As for the deep meaning of broken gluing: you need to make a portfolio of TSs and watch it in a backtest. That way there will at least be some arguments for reassurance.
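A minimal sketch of that check, assuming each TS's backtest has already been reduced to an equity curve sampled on a common time grid (the function name and representation are illustrative): the portfolio curve is just the point-wise sum of the individual curves, and can then be read like any single backtest.

```cpp
#include <vector>

// Sum the equity curves of several TSs sampled on the same time grid to get
// a portfolio curve. Everything here is a generic sketch, not a specific tool.
std::vector<double> PortfolioEquity(const std::vector<std::vector<double>>& curves)
{
    if (curves.empty())
        return {};

    std::vector<double> total(curves.front().size(), 0.0);
    for (const std::vector<double>& curve : curves)
        for (size_t i = 0; i < total.size() && i < curve.size(); ++i)
            total[i] += curve[i];

    return total;
}
```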
Someone is the first to notice
It is a great result if identification happens within a month of the pattern starting. Solutions like MultiTester are very cool in this sense.
Sometimes it is possible to catch it after two weeks. But the shorter the period, the more it is intuition rather than arguments.
Attached are the full reports from a single run of this script.
There's an example in there that's been cleaned up.
I can't figure out what the catch is.
then a bunch of imitators comes along and keeps making money for a while.
I thought that would happen with this article. But it didn't.
There are plenty of monitoring accounts with night scalpers. Everything there is plain as day: the trading intervals and symbols are clearly visible. There is only one minus: sluggishness. On average, one trade per symbol per day, or even less.
In this sense, the article went a bit further, clearly showing not only the method of finding gold-bearing ore, but also handing over one such vein with a high frequency of trades. Yet no one went to the trouble of writing even a rudimentary sieve for sifting that ore. This, apparently, is a consequence of one of the phenomena that sociologists study. There is a huge number of TS variants that could squeeze a profit out of it, but readers are fixated on the idea that there is some unsolved mystery here.
The whole article is a wicked prank on its readers, which they will realise one day. In the meantime, it's fine.
Calculations have provided arguments in favour of this hypothesis. I took a symbol which, when optimised on some sections, shows a remarkable plus there. Not just a plus, but one deserving the prefix "magnificent".
And it has an annual interval where everything is terrible on the best passes from the optimisation on the excellent sections.
So I took only this disgusting annual interval and optimised it using BestInterval. On it I got what is probably a fitted plus. And then I checked how that variant behaves on the gorgeous intervals.
It behaves almost as magnificently!
I.e. it is very hard to avoid the super-profit spots in such a way as not to pick up that super-profit on them.
PS It is very interesting that on the disgusting annual interval BestInterval showed exactly the same trading time as is optimal on the excellent intervals. It feels as if the bad year contained a mixture of losing and profitable parts, and BestInterval dug up the profitable part. Outside of this year, it is as if the losing part of price formation had been switched off.
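To make the idea concrete (this is only my assumption of what such an interval optimisation boils down to, not how BestInterval is actually implemented): bucket deal profits by the hour each deal was opened, then scan every contiguous window of hours for the one with the largest total profit. All names below are illustrative.

```cpp
#include <array>
#include <utility>
#include <vector>

// Assumed per-deal summary: opening hour and resulting profit.
struct HourDeal
{
    int    open_hour;   // 0..23
    double profit;
};

// Find the contiguous [from, to] hour window with the largest total profit.
std::pair<int, int> BestTradingWindow(const std::vector<HourDeal>& deals)
{
    std::array<double, 24> by_hour{};        // profit accumulated per opening hour
    for (const HourDeal& d : deals)
        by_hour[d.open_hour] += d.profit;

    int    best_from = 0, best_to = 0;
    double best_sum  = by_hour[0];

    for (int from = 0; from < 24; ++from)
    {
        double sum = 0.0;
        for (int to = from; to < 24; ++to)
        {
            sum += by_hour[to];
            if (sum > best_sum)
            {
                best_sum  = sum;
                best_from = from;
                best_to   = to;
            }
        }
    }
    return {best_from, best_to};             // inclusive hour range
}
```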
fxsaber:
Waiting for the drain.
It arrived. Trading has been halted.