Why is the normal distribution not normal?

 
getch >> :

It would be interesting to hear about the practical side of these academic considerations.

This particular study, like most of the others, is not an end in itself but a by-product (if I may put it that way) of the search for arbitrage opportunities in the market. For example, constructing the autocorrelation series of adjacent readings of the first difference of a price series is an attempt to trade the colour of the next candle. The matrix of pairwise correlation coefficients is an attempt to catch arbitrage ahead of strong news-driven moves. Studying the distribution function of the price-series increments is about maximizing the profitability of a TS subject to limiting risks. And so on.
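For the first of these, a minimal sketch (illustrative code, not taken from the study itself) of the lag-1 autocorrelation of the first differences of a close series, i.e. the statistic behind trading the colour of the next candle:

```python
# Illustrative sketch: lag-1 autocorrelation of the first differences of a
# price series. The synthetic data below stands in for real quotes.
import numpy as np

def lag1_autocorr(close: np.ndarray) -> float:
    """Correlation between adjacent first differences of a price series."""
    d = np.diff(close)                              # first differences (candle bodies)
    return float(np.corrcoef(d[:-1], d[1:])[0, 1])  # adjacent pairs

rng = np.random.default_rng(0)
prices = 100.0 + np.cumsum(rng.normal(size=10_000))  # a pure random walk
print(lag1_autocorr(prices))                         # near 0: no colour to trade
```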

So the interest is quite practical and far from idle curiosity (though not without a share of it). The market is a complex system with numerous feedbacks, and hoping to rein it in with stops and MAs is useless. So one has to dig in.

 
You mean statistical arbitrage based on correlations?
 
Neutron wrote >>

This material is proof of my assertion above. A separate question is the significance of the dependencies found. Unfortunately for us as traders, these relationships have practical value only if the product of the instrument's volatility and the correlation coefficient (CC) for the selected TF exceeds the transaction costs (the brokerage spread). And that is not observed.

But I find the CC(TF) chart on page 15 interesting. Only the estimate of its practical value is not quite clear. Could you show the mathematics? It would be very interesting.

 
Colleagues, why do you investigate only the increment a(n)-a(n+1)? Try something like a(n)-a(n+5) or a(n)-a(n+30). For constructing predictive designs, the lag step doesn't matter. Check it out! I can assure you that you will be pleasantly surprised (given the title of the topic)...
 
muallch >> :
Colleagues, why do you investigate only the increment a(n)-a(n+1)? Try something like a(n)-a(n+5) or a(n)-a(n+30). For constructing predictive designs, the lag step doesn't matter. Check it out! I can assure you that you will be pleasantly surprised (given the title of the topic)...

In my graph the horizontal axis is the TF, constructed from minute bars by the following algorithm: TF1: a(n)-a(n+1), TF2: a(n)-a(n+2), ..., TFk: a(n)-a(n+k). So, colleague, we are doing exactly what you advise.
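A sketch of that construction (names and details are illustrative, assuming an array of 1-minute closes as input): for each lag k, take non-overlapping k-minute increments and correlate each increment with the next.

```python
# Illustrative sketch of the TF sweep described above.
import numpy as np

def cc_by_tf(minute_close: np.ndarray, max_k: int) -> np.ndarray:
    """|CC| between adjacent k-minute increments, for k = 1..max_k."""
    cc = np.empty(max_k)
    for k in range(1, max_k + 1):
        inc = np.diff(minute_close[::k])  # non-overlapping k-minute increments
        cc[k - 1] = abs(np.corrcoef(inc[:-1], inc[1:])[0, 1])
    return cc
```

Plotting the returned array against k gives a CC(TF)-style curve like the one discussed here.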

Doctor. wrote >>
But I find the CC(TF) chart on page 15 interesting. Only the estimate of its practical value is not quite clear. Could you show the mathematics? It would be very interesting.

By definition, the pairwise correlation coefficient between adjacent readings of the first differences of a price series is the probability of correctly predicting the colour of the next candle, or in other words the efficiency of the MTS. To convert percentages to points we need to know the volatility of the instrument on the selected timeframe. Multiplying the efficiency by the volatility gives an estimate of the profitability of the TS as the average number of points per transaction. This must be compared with the brokerage commission (the spread). If the profitability on some TF exceeds the spread, profitable trading is possible.
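A worked sketch of this estimate; the CC and volatility figures are hypothetical and serve only to show the comparison against the spread:

```python
# Hypothetical numbers illustrating the profitability estimate above.
cc = 0.05          # |correlation coefficient| on the chosen TF (assumed)
volatility = 20.0  # average candle size on that TF, in points (assumed)
spread = 2.0       # DC commission in points (the 2005-2006 figure quoted below)

points_per_trade = cc * volatility  # estimated average return per transaction
print(points_per_trade, spread)     # 1.0 vs 2.0: the spread eats the edge
```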

Here, for example, data on the EURCHF pair:

Red shows the correlation coefficient between candles as a function of the TF (in this example it is taken modulo). Blue shows the instrument's volatility. Lilac is the estimate of the average return. The data are from 2005-2006, hence the 4-digit quotes; the spread for this pair in those days was 2 points. We can see that with this approach we do not beat the DC commission on any of the given TFs (we lack statistics for TF > 100 min, but the CC is sure to fall there, dragging overall profitability down with it). The bars show the confidence interval corresponding to the statistical spread of the input data.

All this sadness stems from trying to exploit the stationary properties of the price series: the DC's spread deliberately covers all of them, and the field is trampled like a pasture. The only way out seems to be the search for quasi-stationary features that would allow one to "outplay" the DC.

getch wrote >>
You mean statistical arbitrage based on correlations?

Yes, that was the idea, but the technical means available do not allow it to be realised.


 
It is not clear why there is this constant reference to time, n. What difference does it make (for trading purposes) whether the price covered the figure in an hour or in a day?
 

Absolutely right, it doesn't matter! But I don't quite understand the question...

Of course, the analysis can be done in terms of the trading horizon (a price horizon, not a time horizon). The only thing that matters, as you correctly noted, is the price movement: it determines our interests and style of trading. Pipsers work on short price ranges, medium-term traders work in the 100-500 point range (for 4-digit quotes), and so on.

 

You are analysing a time series in which n is time. That is a conceptual error. a(n) should be the price value at a local extremum (ZigZag), or the price sampled at equal accumulated traded volumes of the instrument.
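A minimal sketch of one such price-based (rather than time-based) sampling, assuming a simple fixed price step; a real ZigZag or equal-volume breakdown would be more elaborate:

```python
# Illustrative sketch: resample a price series on a fixed price step,
# discarding time entirely. `step` is in price units.
import numpy as np

def price_step_series(close: np.ndarray, step: float) -> np.ndarray:
    """Record a new reading each time price moves `step` from the last one."""
    a = [float(close[0])]
    for p in close[1:]:
        if abs(p - a[-1]) >= step:  # price covered a full step: new reading
            a.append(float(p))
    return np.asarray(a)
```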

 

Just good to see (well, read) you, colleague!

Exactly. I myself use only the price (vertical) scale to partition the price series and exclude time from the analysis completely. Time adds nothing (apart from the time dependence of the instrument's volatility, but those are details, a small second-order effect).

Still, in general we cannot do without time. This parameter will inevitably reappear when estimating the maximum profitability of a TS per unit of real time (we live in the real world and need to earn faster than over a million years).

 

Great! After all, market time is a measure of the change in financial volume. What I don't understand is: what is a(n) in your reasoning?

I don't agree about the need to take human time into account. One argument is the ReverseSystem Expert Advisor, which has no concept of human time.
