Market phenomena

 
joo:
Not that it's wrong. It's right, as right as the saying 'buy cheap, sell dear'. But correctness isn't the only thing that matters; formalisability matters too. There is no point in constructing clever philosophical near-market constructs if they are about as useful as milk from a billy goat.
Do you think it is difficult to formalise a time lag after taking a loss? Or something else?
 
paukas:
Do you think it is difficult to formalise a time lag after taking a loss? Or something else?
Yes, you can, of course, no problem. You could also prohibit trading when volatility is low.
 
gpwr:

Thanks. I'll be thinking about SOM at my leisure.

The article at the link provides an overview of time series segmentation methods. They all do about the same thing. Not that SOM is the best method for forex, but it's not the worst either, that's a fact ))

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.115.6594&rep=rep1&type=pdf
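For anyone who wants to poke at the idea, here is a minimal sketch of SOM-based segmentation of a price series. It is purely illustrative: the synthetic random-walk "price", the window length, the grid size and the learning schedule are all my own assumptions, not the setup from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
price = np.cumsum(rng.standard_normal(2000) * 1e-4)   # stand-in for EURUSD M15
window = 16
X = np.lib.stride_tricks.sliding_window_view(price, window)
X = X - X.mean(axis=1, keepdims=True)                 # remove level, keep shape

n_nodes = 8                                           # 1-D SOM grid
W = rng.standard_normal((n_nodes, window)) * X.std()
grid = np.arange(n_nodes)

n_iter = 5000
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = int(np.argmin(((W - x) ** 2).sum(axis=1)))  # best-matching unit
    lr = 0.5 * (1.0 - t / n_iter)                     # decaying learning rate
    sigma = 0.5 + 3.0 * (1.0 - t / n_iter)            # decaying neighbourhood
    h = np.exp(-((grid - bmu) ** 2) / (2.0 * sigma ** 2))
    W += lr * h[:, None] * (x - W)                    # pull nodes towards the sample

labels = np.array([np.argmin(((W - w) ** 2).sum(axis=1)) for w in X])
print("segment label of each window:", labels[:20])
```

Each window of the series ends up labelled by its nearest SOM node, which is the segmentation.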

 

Unfortunately, work does not allow me to devote more time to trading, but I found some spare time and decided to post this (partly for my own benefit, so I won't forget it :o; I will come back to it later, when I have more free time).

The essence of the phenomenon.

Let me remind you of the essence of this phenomenon. It was discovered while analysing the influence of "long tails" on future price deviations. If we classify the long tails and look at the time series without them, we can observe some curious effects, almost unique to each symbol. At the heart of the phenomenon is a very specific classification, based in a certain way on a "neural" approach. In effect, this classification decomposes the raw data, i.e. the quoting process itself, into two sub-processes, conventionally called "alpha" and "beta". Generally speaking, the initial process can be decomposed into more sub-processes.

System with random structure

This phenomenon lends itself very well to systems with random structure. The model itself looks very simple. Let us walk through an example. The initial series is EURUSD M15 (we need a long sample and as small a timeframe as possible), counted from some "now":

Step 1: Classification

The classification is performed and two processes, "alpha" and "beta", are obtained. The parameters of the control process are determined (the process that handles the final "assembly" of the quote).
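The actual classifier here is "neural" and not fully disclosed in the thread, so the sketch below is only a stand-in: it splits the increments into "alpha" (small) and "beta" (large, the "long tails") by a fixed threshold and extracts crude switching statistics as the "control process" parameters. The threshold and the synthetic series are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
increments = rng.standard_normal(5000) * 6e-4     # stand-in for M15 increments
LAMBDA = 6e-4                                     # assumed split threshold

is_alpha = np.abs(increments) <= LAMBDA
alpha = np.where(is_alpha, increments, 0.0)       # gaps counted as zeros
beta = np.where(~is_alpha, increments, 0.0)

# crude "control process" parameters: empirical regime persistence
control = is_alpha.astype(int)                    # 1 = alpha, 0 = beta
p_stay_alpha = control[1:][control[:-1] == 1].mean()
p_stay_beta = 1.0 - control[1:][control[:-1] == 0].mean()
print(p_stay_alpha, p_stay_beta)
```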

Step 2: Identification

For each sub-process a model based on a Volterra network is defined:

Oh what a pain to identify them.

Step 3: Sub-process prediction

A forecast is made 100 samples ahead for each process (at 15 minutes per sample, i.e. just over a day).
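A minimal sketch of Steps 2 and 3 under my own assumptions: since the "Volterra network" is not specified beyond its name, I fit a generic second-order discrete Volterra model with memory M by ordinary least squares and then run it recursively to forecast 100 samples ahead. The synthetic sub-process and M = 4 are placeholders.

```python
import numpy as np
from itertools import combinations_with_replacement

def volterra_design(x, M):
    """Rows of [1, linear terms, quadratic cross-terms] over the last M samples."""
    rows, targets = [], []
    for n in range(M, len(x)):
        past = x[n - M:n][::-1]                   # x[n-1], x[n-2], ..., x[n-M]
        quad = [past[i] * past[j]
                for i, j in combinations_with_replacement(range(M), 2)]
        rows.append(np.concatenate(([1.0], past, quad)))
        targets.append(x[n])
    return np.array(rows), np.array(targets)

rng = np.random.default_rng(2)
x = rng.standard_normal(2000) * 6e-4              # stand-in for one sub-process

M = 4                                             # assumed model memory
A, y = volterra_design(x, M)
theta, *_ = np.linalg.lstsq(A, y, rcond=None)     # identified Volterra kernels

# Step 3: recursive forecast, 100 samples ahead, feeding predictions back in
hist = list(x[-M:])
for _ in range(100):
    past = np.array(hist[-M:])[::-1]
    quad = [past[i] * past[j]
            for i, j in combinations_with_replacement(range(M), 2)]
    hist.append(float(np.concatenate(([1.0], past, quad)) @ theta))
forecast = hist[M:]
print(forecast[:5])
```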

Step 4: Simulation modelling

A simulation model is built that generates the required number of future realisations. The scheme of the system is simple:


Three sources of randomness: an error term for each model, plus the process-transition conditions. Here are the realisations themselves (starting from zero):
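Here is how I read the simulation stage, as a sketch: two sub-process forecasts, each perturbed by its own model-error randomisation, glued together by a randomised transition (control) process. All the numbers (forecasts, error sigmas, persistence probability, path count) are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
H = 100                                           # forecast horizon in samples

alpha_fc = np.full(H, 2e-5)                       # stand-in alpha forecast (increments)
beta_fc = np.full(H, -3e-4)                       # stand-in beta forecast (increments)
sd_alpha, sd_beta = 5e-5, 4e-4                    # assumed model-error sigmas
p_stay = 0.9                                      # assumed regime persistence

n_paths = 500
paths = np.zeros((n_paths, H))
for k in range(n_paths):
    state, level = 1, 0.0                         # start in alpha, paths "from zero"
    for t in range(H):
        if state == 1:
            level += alpha_fc[t] + rng.normal(0.0, sd_alpha)   # randomisation 1
        else:
            level += beta_fc[t] + rng.normal(0.0, sd_beta)     # randomisation 2
        if rng.random() > p_stay:                              # randomisation 3
            state = 1 - state
        paths[k, t] = level
```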

Step 5: The trading decision

A bias analysis of these realisations is performed. This can be done in different ways. Visually, you can see that a large mass of trajectories is shifted. Let us look at the actual outcome:
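The simplest bias measure I can think of is the share of trajectories that finish shifted to one side at the horizon. A self-contained demo on dummy drifted paths (in practice you would feed in the (n_paths, H) array from the simulation sketch above):

```python
import numpy as np

rng = np.random.default_rng(4)
paths = np.cumsum(rng.normal(-1e-5, 3e-4, size=(500, 100)), axis=1)  # dummy bundle

up = (paths[:, -1] > 0).mean()                    # share finishing above zero
side = "up" if up > 0.5 else "down"
print(f"{max(up, 1 - up):.0%} of realisations shifted {side}")
```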


Preliminary testing

I took about 70 "measurements" at random points (it takes a long time to compute). About 70% of the deviations detected by the system turned out to be correct. That doesn't prove anything yet, but I hope to come back to this track in a couple of months, although I have not finished the main project yet :o(.

 
Maybe this is not quite correctly put: on what principle is the classification performed, and into which processes is the decomposition supposed to go?
 

to sayfuji

Maybe this is not quite correctly put: on what principle is the classification performed, and into which processes is the decomposition supposed to go?

No, everything is correct. It was one of the subjects discussed over several dozen pages of this thread. Everything I considered necessary, I wrote. Unfortunately, I have no time to develop the topic further. Besides, this particular phenomenon, though interesting, is not very promising. The "long tails" phenomenon shows up on long horizons, i.e. where trajectories deviate strongly, but to exploit it one would need to forecast the alpha and beta processes (and the others) far ahead. And that is impossible. No such technology exists...

:о(

to All

Colleagues, it turns out there are posts I haven't answered. Forgive me, I simply cannot keep up right now.

 

Prohwessor Farnsworth, please tell us which program you use for your research.

And also... does anyone have a Russian-language manual or a localisation patch for OriginPro 8.5.1 (http://originlab.com/)?

 
It's Matlab, if I'm not mistaken.
 
Farnsworth:
We will hopefully get to more serious "fractal" mathematics in the study of "fat tails". It will take some more time, but for now I am posting a near-scientific study that has given me some thoughts.
Model assumptions.
There is reason to assume that there are several processes sitting inside the quotes, and that is what I want to find. The main, "carrier" process is presumably some kind of rising/falling trend, which is interleaved, by some stochastic algorithm, with another process (or processes). The starting idea is simple: remove those increments that theoretically belong to the "fat tails" (or some other sub-process) and see what happens. The first, easiest way to classify is to "filter out" everything that sits inside +/- LAMBDA.
Open(n)-Open(n-1) increments, M15, EURUSD:
I sweep LAMBDA from 0.0001 to 0.025 in steps of 0.0001; for each LAMBDA I keep only those increments that fall within the +/- LAMBDA channel, cumulatively sum them, and compute the coefficient of determination of a linear regression fit. Yes, clearly there will be gaps (I count them as zeros), but for now I just want to look at the process itself.
Coefficient of determination (CoD) vs. LAMBDA:
Let me remind you that the CoD, put simply, is a percentage showing how much of the data the model explains. The maximum (0.97) is reached at LAMBDA = 0.0006.
The filtered increments can be added together to give two processes:
The value 0.0006 is slightly less than the RMS of the increment process. For comparison, here is the second local extremum, at LAMBDA = 0.0023 (about 3 RMS):
Such "trends" can be identified on all quotients, and some (and most of them) are upward and some are downward. It is clear that this method is near-scientific, but on the other hand it gave some ideas, an alternative representation of systems with a random structure.

An interesting result.

Could this phenomenon be due to the fact that the historical data are Bid prices? (Lambda in the experiment is comparable to the spread).

Don't you think it makes more sense to test the quality of the resulting "trend" process using a linear regression whose coefficients are piecewise constant functions of time?
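For what it's worth, here is one way to read that suggestion as code: fit a separate linear regression on consecutive time segments of the filtered "trend" process and check how stable the slope is from segment to segment. The filter threshold and segment count are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
dx = rng.standard_normal(8000) * 7e-4
trend = np.cumsum(np.where(np.abs(dx) <= 6e-4, dx, 0.0))   # assumed LAMBDA filter

n_seg = 8                                                  # assumed segment count
slopes = [np.polyfit(np.arange(len(seg)), seg, 1)[0]
          for seg in np.array_split(trend, n_seg)]
print(["%.2e" % s for s in slopes])   # stable slopes => genuinely linear "trend"
```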

 
Prof, your chart from page 22, figure 2, is very similar to the monthly euro-dollar chart, very similar indeed.



