From theory to practice - page 1081

 
Maxim Kuznetsov:

When the MA turns around, it's safe to say you are already too late to enter the trade.



There are trends that last six months or more, so there's no point entering? Eh... too late; well, I'll enter next year, maybe I'll catch the extremum). And this year it's all gone, I'm too late)))

 
khorosh:

There are trends that last six months or more, so there's no point entering? Eh... too late; well, I'll enter next year, maybe I'll catch the extremum). And this year it's all gone, I'm too late)))

If your capital (position volume above all, plus your money-and-risk policy) allows you to hold a position for half a year, then you can afford to wait :-) And if not, then year-long trends shouldn't concern you much.

 

An MA is a good thing. And the dream is simple: to always trade with the trend by predicting just the MA, which is much smoother than the price. From my experiments with neural networks in NSDT I remember that shifting an MA of almost any period 1 bar into the future gives you a grail, but if you use a neural network to predict that same MA, even with 0.999 correlation, you get either chop or losses.

I tried a thinned (decimated) MA, and there is even some logic to it: bars often alternate up and down, so you can build an MA without increasing its period by computing it not on every bar but on every second bar with a shorter period. I even wrote a recursive function: you set the period and the number of bars to skip, and it returns the already-calculated array. It was convenient to keep feeding it new parameters... It even looked interesting))) but I couldn't find any money in it. It was nice to alternate: regular, then through one bar, then regular again, then through two, and so on in a circle))
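The "grail" observation above can be illustrated with a minimal Python sketch (the function name and toy data are mine, for illustration only): an MA shifted 1 bar into the future contains future data by construction, which is exactly why it is a benchmark target rather than anything tradable.

```python
def sma(prices, period):
    """Simple moving average; None until enough history accumulates."""
    return [sum(prices[i - period + 1:i + 1]) / period if i >= period - 1 else None
            for i in range(len(prices))]

closes = [1.10, 1.12, 1.11, 1.14, 1.13, 1.16, 1.15, 1.18]  # toy data
ma = sma(closes, 4)

# The "grail": the MA value one bar ahead. On bar i it already includes
# closes[i + 1], i.e. future data -- usable only as a prediction target.
ma_ahead = ma[1:] + [None]
```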
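A rough sketch of the thinned MA described above, in Python (the function name and parameters are my own guess at the idea, not the author's code): instead of averaging consecutive bars, average every step-th bar, keeping the period short.

```python
def thinned_ma(prices, period, step):
    """MA that samples every step-th bar instead of consecutive bars.

    prices: list of closes, newest last
    period: number of sampled bars to average
    step:   1 = regular MA, 2 = every second bar, ...
    """
    out = []
    for i in range(len(prices)):
        # sample indices i, i - step, i - 2*step, ... (period samples)
        idx = [i - k * step for k in range(period)]
        if idx[-1] < 0:
            out.append(None)  # not enough history yet
            continue
        out.append(sum(prices[j] for j in idx) / period)
    return out
```

Alternating step between 1 and 2 from bar to bar, as the post describes, is then just a matter of picking a different `step` on each call.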

 
vladevgeniy:

An MA is a good thing. And the dream is simple: to always trade with the trend by predicting just the MA, which is much smoother than the price. From my experiments with neural networks in NSDT I remember that shifting an MA of almost any period 1 bar into the future gives you a grail, but if you use a neural network to predict that same MA, even with 0.999 correlation, you get either chop or losses.

I tried a thinned (decimated) MA, and there is even some logic to it: bars often alternate up and down, so you can build an MA without increasing its period by computing it not on every bar but on every second bar with a shorter period. I even wrote a recursive function: you set the period and the number of bars to skip, and it returns the already-calculated array. It was convenient to keep feeding it new parameters... It even looked interesting))) but I couldn't find any money in it. It was nice to alternate: regular, then through one bar, then regular again, then through two, and so on in a circle))

An NN predicts quite successfully at 5m on the 1m TF. Even if you use only the NN, without any add-ons, there is already some money there.
 
Yuriy Asaulenko:
An NN predicts quite successfully at 5m on the 1m TF. Even if you use only the NN, without any add-ons, there is already some money there.

That's rubbish, not money)))) compared with simply shifting the same MA bluntly by the predicted period.

Conclusion: almost all the money is at the MA's turning points, and those are unreachable.

 

Perhaps periodicity should be avoided altogether, i.e. MAs and fixed sliding windows. There are separate considerations here related to the fundamentals :-)

I don't know exactly how yet :-) My thoughts revolve around the principal components method applied to convergent + divergent zigzags.

A converging zigzag is quite simple: over a very large interval (almost the entire available history), find the min and max that form the first knee of the zigzag. Then the next extremum is sought on the remaining interval, and so on. It converges very quickly: just a few reversals and "hello, the present". The resulting shape is quite familiar, something like an exponentially decaying harmonic. It can be interpolated and decomposed into components.

A reverse pass gives a "divergent zigzag" and its components. The idea is that by removing the components of both zigzags, a less noisy representation can be obtained.
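The converging-zigzag construction reads as follows in a minimal Python sketch (my interpretation of the description above, not the author's code): take the global min and max as the first knee, then repeatedly search the remaining tail for the next opposite extremum until the last bar is reached.

```python
def converging_zigzag(prices):
    """Return indices of successive zigzag pivots, converging toward the last bar."""
    lo = min(range(len(prices)), key=prices.__getitem__)  # global min index
    hi = max(range(len(prices)), key=prices.__getitem__)  # global max index
    first, last = sorted([lo, hi])        # first knee: earlier then later extremum
    pivots = [first, last]
    # if the later pivot is a low, the next leg looks for a high, and vice versa
    looking_for_max = prices[last] < prices[first]
    start = last + 1
    while start < len(prices):
        tail = range(start, len(prices))
        nxt = (max if looking_for_max else min)(tail, key=prices.__getitem__)
        pivots.append(nxt)
        looking_for_max = not looking_for_max
        start = nxt + 1                   # each pivot moves strictly right, so it converges
    return pivots
```

Because each new extremum is taken over an ever-shorter tail, only a few reversals remain before the last bar, matching the "just a few reversals and hello, the present" behaviour described above.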

but that's just sharing my thoughts :-)


 
Maxim Kuznetsov:

Perhaps periodicity should be avoided altogether, i.e. MAs and fixed sliding windows. There are separate considerations here related to the fundamentals :-)

I don't know exactly how yet :-) My thoughts revolve around the principal components method applied to convergent + divergent zigzags.

A converging zigzag is quite simple: over a very large interval (almost the entire available history), find the min and max that form the first knee of the zigzag. Then the next extremum is sought on the remaining interval, and so on. It converges very quickly: just a few reversals and "hello, the present". The resulting shape is quite familiar, something like an exponentially decaying harmonic. It can be interpolated and decomposed into components.

A reverse pass gives a "divergent zigzag" and its components. The idea is that by removing the components of both zigzags, a less noisy representation can be obtained.

but that's just sharing my thoughts :-)

Imho, it's a variation of the same channel strategy.
 
Yuriy Asaulenko:
Imho, it's a variation of the same channel strategy.
Did you see the word STRATEGY somewhere? It doesn't seem to be there...
 
I got distracted and didn't finish. The only advantage of training the net on an MA, imho, is that there is very little overfitting, and what there is is weak, unlike searching for some kind of strategy (training for profit). But what's the use. Maybe someone could pull it off; I could not))
 
Maxim Kuznetsov:
Did you see the word STRATEGY somewhere? It doesn't seem to be there...

Well, if you don't intend to build a strategy on that basis, then it won't work).
