Random Flow Theory and FOREX - page 18

 
OK, so the fit is kind of perfect. Now we need to replace a*x+b with something more meaningful that actually removes the trend, like 'ang PR (Din)-v1'.
 
Cool!!! I tried to do something similar once, but I didn't have enough knowledge :)
A pity I can't draw the shadows BarsCount bars back during initialization, because they carry information too.
But everything runs very slowly when I work on it :).
And there is another question: does trend/flat alternation exist on forex, or doesn't it? It has been discussed a lot, and there are as many opinions as there are people.
You could do as you suggest: "Now we need to replace a*x+b with something more meaningful that actually removes the trend, like 'ang PR (Din)-v1'".
Define the reference time. And I really liked the solution in time_avg_v1.0.mq4 regarding deviations from the "trend".

The most interesting point here is that the correlation coefficient is a fundamentally bounded quantity. The first solution to the problem of normalising the indicator was suggested by Yurixx in 'Stochastic Resonance'. Can't find the picture, damn it.
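(The bound itself is just the Cauchy-Schwarz inequality: |corr(X,Y)| = |cov(X,Y)| / (σ_X · σ_Y) ≤ 1, so a correlation-type indicator arrives already normalised to [-1, 1].)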

There's not much left to do: teach it to correlate with market extremes. But it looks like the entry/exit points should be sought at the zeros of this indicator.

P.S. No, not exactly. Negative values of the indicator (if it is, in some sense, a sort of ACF) do not indicate a falling trend. It could just as well be the wildest flat.

 
Mathemat:
P.S. No, not exactly. Negative values of the indicator (if it is, in some sense, a sort of ACF) do not indicate a falling trend. It could just as well be the wildest flat.
No, they don't :) I ran it for 3 hours yesterday.
And again: a flat, how big is it? 25 points or 50? And what about a higher timeframe? time_avg_v1.0.mq4 gives some figure on this question.
 
Mathemat:
OK, so the fit is kind of perfect. Now we need to replace a*x+b with something more meaningful that actually removes the trend, like 'ang PR (Din)-v1'.
Mathemat, but trends really are linear, at least at first glance. Besides, linear regression is faster to compute (like searching under the streetlight because it's brighter there :) ). It seems more tempting to me to try to determine the interval over which the LR is calculated. That means it should be variable, depending on the current situation.
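Not a worked-out method, just a minimal sketch of what a variable LR interval could look like (Python/numpy; the candidate windows and the significance score are my own assumptions): pick the window where the fitted slope is most significant relative to the residual noise.

```python
import numpy as np

def adaptive_lr_window(x, candidates=(20, 50, 100, 200)):
    # Pick the LR window whose fitted trend is most significant:
    # score ~ t-statistic of the slope, |b| * n^1.5 / residual std.
    best_n, best_score = None, -np.inf
    for n in candidates:
        if n > len(x):
            continue
        t = np.arange(n)
        b, a = np.polyfit(t, x[-n:], 1)        # x ~ a + b*t over the last n bars
        resid = x[-n:] - (a + b * t)
        score = abs(b) * n ** 1.5 / (resid.std() + 1e-12)
        if score > best_score:
            best_n, best_score = n, score
    return best_n

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=400)) + 0.05 * np.arange(400)
print(adaptive_lr_window(x))                   # tends to pick 200 when drift dominates
```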
 
Candid, I was referring to dynamic linear regression, i.e. an indicator that linearly predicts the value on the next bar from a given number of previous values. I was interested in it once, when I was playing with neural networks, and even derived it analytically; it's a linear combination of SMA and LWMA with equal periods, rather small ones, not 1000. When I remember it, I'll post the calculation formula or the indicator itself.
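Since the formula isn't given here, a sketch of what such an indicator could look like (Python/numpy for brevity rather than MQL4; the closed-form coefficients are my own derivation, so the assert cross-checks them against a direct regression): the one-bar-ahead LR forecast over n bars is indeed a fixed linear combination of SMA(n) and LWMA(n).

```python
import numpy as np

def sma(p, n):
    return p[-n:].mean()

def lwma(p, n):
    w = np.arange(1, n + 1)                 # newest bar gets the largest weight
    return np.dot(p[-n:], w) / w.sum()

def lr_forecast(p, n):
    # Fit a least-squares line to the last n points, extrapolate one bar ahead.
    t = np.arange(n)
    b, a = np.polyfit(t, p[-n:], 1)
    return a + b * n

def lr_forecast_ma(p, n):
    # The same forecast as a linear combination of SMA and LWMA of equal period.
    return (3 * (n + 1) * lwma(p, n) - 2 * (n + 2) * sma(p, n)) / (n - 1)

rng = np.random.default_rng(0)
prices = 100.0 + np.cumsum(rng.normal(size=500))
for n in (5, 13, 34):
    assert np.isclose(lr_forecast(prices, n), lr_forecast_ma(prices, n))
```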

Yes, the trends are linear, but very crudely and only on the largest TFs, like weeks. Take a look for yourself.

Then let's try to understand what the author of this thread (and not only he) is trying to achieve when he detrends the chart. Prival probably proposes to do it in order to first remove the unpredictable regular component (the trend) from the initial quotes, leaving something close to a random process whose expectation does not deviate too far from zero (measured in standard deviations), and then to analyze the ACF properties of this process (the autocovariance, not the autocorrelation), using the ACF itself to predict those very trends that are unknown to us. Prival, where have you gone? Tell me, is this logical or not?
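A minimal sketch of that pipeline as I read it (Python/numpy; the synthetic series, the single detrending window and the biased ACF estimator are my assumptions, not Prival's actual procedure):

```python
import numpy as np

def detrended_acf(x, max_lag):
    # 1) Remove the least-squares linear trend (the "regular component").
    t = np.arange(len(x))
    b, a = np.polyfit(t, x, 1)
    r = x - (a + b * t)                     # residuals, zero-mean by construction
    # 2) Normalized autocovariance of the residuals.
    c0 = np.dot(r, r) / len(r)              # lag-0 autocovariance = variance
    return np.array([np.dot(r[:len(r) - k], r[k:]) / len(r) / c0
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
x = 100.0 + np.cumsum(rng.normal(size=1000)) + 0.05 * np.arange(1000)
acf = detrended_acf(x, 50)
print(acf[0])                               # exactly 1: the series vs. itself
```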

Detrending makes sense only when the deviation of the "trend line" from the chart itself is not too big, which dictates a small "smoothing" period for the regression itself (not for the ACF). Otherwise shallow local trends will remain within the detrending interval, which is exactly what we wanted to get rid of. (Hypothesis: perhaps this way we reduce the Hurst coefficient of the initial process, bringing it closer to a Gaussian one?)

Now look at the 5-minute chart and tell me: are the trends linear or not? The 'Stochastic Resonance' thread, it would seem, should disprove this notion, if such a phenomenon exists.

P.S. By the way, have you paid any attention to why our indicator shows one on the zero bar?
 
Mathemat:
Candid,
P.S. By the way, have you paid any attention to why our indicator shows one on the zero bar?

The question wasn't addressed to me, but since I'm watching the thread anyway...

Prival once tried to appeal to common sense here. Well, by common (or physical) sense, the ACF equals one at the zero bar simply by definition, because it shows the correlation of a time-series sample with itself. Its falling to zero can be treated as a practical loss of correlation between time-series values separated by the corresponding number of bars. Only I still don't understand what we want to get out of it. Note that Wikipedia gives another definition of the ACF which, as I understand it, is closer to Mathemat's heart, but here we consider only the first one.
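(In formulas, under this first definition: ρ(k) = E[(x_t − μ)(x_{t+k} − μ)] / σ², so ρ(0) = σ² / σ² = 1 identically, for any series whatsoever; the zero-bar value carries no information about the market.)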

P.S. Prival, I still don't have the inner impulse to start coding, because I don't have a clear understanding of the task. By the way, I don't consider myself an MQL wizard and never have been, but practice shows that anything can be coded once the task is understood. And there are masters here on the forum.

P.P.S. I attribute the drop in forum activity to the users' desire to quickly master probabilistic neural networks (Prival among them, I hope), or else everyone has run off to open a real account while Better's signals can still be used free of charge :-).

 

I just got some free time. I will try to answer the questions here tonight.

rsi, try to answer the question about that "...probabilistic...": to the probability of what does Better's neural network tune itself?

By evening I will try to program everything and post my ideas in pictures.

 
Prival:

rsi, try to answer the question about that "...probabilistic...": to the probability of what does Better's neural network tune itself?

I've only recently started on nets (as usual, in no hurry). There are experts on the subject here on the forum (Vinin, Leo, klot and many others, including, of course, Better himself). (Although I see there's already a new thread on the subject, which is not surprising.) But since you asked, I'll try. As the saying goes: if you can't do it yourself, at least teach others! :-) A lot of people have gathered in this thread too, so I hope they will correct me if I mess up.

As far as I understand it today, a probabilistic network is built using the Bayesian approach: each output of the network receives an estimate (which is then obviously compared with a threshold or with the other outputs; Prival, maybe two thresholds are possible too :-)), a scalar sufficient statistic, defined up to a constant factor, given by the likelihood function of the input vector. Thus (to answer the question), the network (each output) is tuned to the maximum likelihood that the input vector matches the decision (output).

The network itself is usually three-layered: the input layer, the radial layer and the output layer. Vectors are fed in for classification (in Better's case, as we already know, they are most likely linear combinations of several moving averages, though Boolean functions may also be used). The number of outputs corresponds to the required dimensionality of the decision, for example 4 if we need the decisions buy, sell, closebuy, closesell. In training, the "winner takes all" principle is implemented, i.e. no more than one output can be close to the maximum. This is the creative part of the work: finding the most suitable width (sigma) and number of neurons in the middle layer. Some input vectors are the most plausible match for a successful buy, others for the other outputs. The radial layer is so called because, instead of a sigmoid transfer function, it uses a so-called radial basis: a bell-shaped function of the Euclidean distance (or, in the more general case of correlated input-vector components, the Mahalanobis distance).
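Not Better's actual network, of course, but a minimal sketch of the architecture just described (Python/numpy; the Gaussian kernel, the toy clusters and the sigma value are illustrative assumptions):

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma, n_classes):
    # Radial layer: one bell-shaped unit per training vector,
    # a Gaussian of the Euclidean distance to the input.
    d2 = ((train_X - x) ** 2).sum(axis=1)
    k = np.exp(-d2 / (2.0 * sigma ** 2))
    # Output layer: sum the kernels per class; "winner takes all".
    scores = np.array([k[train_y == c].sum() for c in range(n_classes)])
    return scores.argmax(), scores / scores.sum()

# Toy usage: 4 outputs, as in the buy/sell/closebuy/closesell example.
rng = np.random.default_rng(0)
y = rng.integers(0, 4, size=200)
X = y[:, None] + rng.normal(scale=0.5, size=(200, 3))   # 4 clusters
cls, probs = pnn_classify(X[0], X[1:], y[1:], sigma=0.7, n_classes=4)
print(cls, probs)                                        # should recover y[0]
```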

As we can see, this approach is very similar to trading by hand: the trader evaluates the input vector (price, a TA pattern, indicator readings, etc.) and makes a decision if, by his estimate, the criterion is met.

About the training: I haven't studied it yet. That is a separate question; once I understand it, I may present it later :-). I remember that even before the Championship, Better wrote that his Expert Advisor used a year's worth of data for training.

 
Mathemat:
Candid, I was referring to dynamic linear regression, i.e. an indicator that linearly predicts the value on the next bar from a given number of previous values. I was interested in it once, when I was playing with neural networks, and even derived it analytically; it's a linear combination of SMA and LWMA with equal periods, rather small ones, not 1000. When I remember it, I'll post the calculation formula or the indicator itself.

I call it moving linear regression and I have such an indicator, I can share it myself :)

P.S. By the way, have you paid any attention to why our indicator shows one on the zero bar?

You mean on the first bar (the zero bar is not processed)? Well, the correlation of each sample with itself is taken as one, on the assumption that this is the maximum possible degree of correlation. That assumption is justified further on :). But since you surely knew this without me, the question apparently contained a hint that I haven't grasped yet.

Then let's try to understand what the author of this thread (and not only he) is trying to achieve when he detrends the chart. Prival probably proposes to do it in order to first remove the unpredictable regular component (the trend) from the initial quotes, leaving something close to a random process whose expectation does not deviate too far from zero (measured in standard deviations), and then to analyze the ACF properties of this process (the autocovariance, not the autocorrelation), using the ACF itself to predict those very trends that are unknown to us. Prival, where have you gone? Tell me, is this logical or not?

Prival promised to give his opinion, but I think the idea is not to leave only the random component, but to remove the "long" trend and get a series with zero expected value. This series will still contain shorter trends corresponding to the expected horizon of the game. Remembering that mathematical statistics (and not only R/S analysis) prefers to work with exactly such series, we obtain a more or less correct reduction of the problem to a "search under the streetlight".

Yes, the trends are linear, but very crudely and only on the largest TFs, like weeks. Take a look for yourself.

Detrending makes sense only when the deviation of the "trend line" from the chart itself is not too big, which dictates a small "smoothing" period for the regression itself (not for the ACF). Otherwise shallow local trends will remain within the detrending interval, which is exactly what we wanted to get rid of. (Hypothesis: perhaps this way we reduce the Hurst coefficient of the initial process, bringing it closer to a Gaussian one?)

Now look at the 5-minute chart and tell me: are the trends linear or not? The 'Stochastic Resonance' thread, it would seem, should disprove this notion, if such a phenomenon exists.

And now for dessert :). As written above, I understand the meaning of detrending in exactly the opposite way, namely: the task is precisely to remove the global trends and obtain, in pure form, the local trends corresponding to the chosen horizon of the game. In this sense, talking in terms of timeframes is rather a distraction. There is simply a time series; we can view it on different timeframes, but trends as an objective reality do not depend on the choice of scale. And here is what suddenly occurred to me: linear transformations have a clear physical meaning. The result of such a transformation is another inertial reference frame, i.e. a system in which the same forces act as in the original one. With a nonlinear transformation, as we know from physics, we obtain a world in which "illogical" and hard-to-describe forces can appear and disappear in unexpected ways. Interestingly, taking returns is also a linear transformation, but the zero-expectation condition is fulfilled with a substantially greater error than when detrending with linear regression.
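A quick toy check of that last claim (Python/numpy; the drifting random walk is just a stand-in for real quotes, so only the comparison, not the numbers, is meaningful):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(2000)
x = 100.0 + np.cumsum(rng.normal(size=2000)) + 0.02 * t   # random walk with drift

b, a = np.polyfit(t, x, 1)
resid = x - (a + b * t)          # detrended with linear regression
rets = np.diff(x)                # first differences ("returns")

for name, s in (("LR residuals", resid), ("returns", rets)):
    # mean/std measures the deviation from zero expectation in s.d. units;
    # for LR residuals it is zero by construction, for returns it is the drift.
    print(f"{name}: mean/std = {s.mean() / s.std():+.4f}")
```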
