Stochastic resonance - page 11

 

The first approach to the shell...

Identifying the channel

This is where things get tricky: channels classified as flat are not that easy or unambiguous to detect, let alone identify. There aren't many variants for this case, though, and the first approximation is as follows: the centre of the channel is the mean value of the time series (BP), and the channel borders are defined as mean +/- k*SCO, where SCO is the standard deviation (RMS) and "k" is a scale factor. The search method is very simple: I take a historical sample, stand on some bar, and move to the right in steps of +1 bar, i.e. into the future. For the resulting time series (BP) I determine the mean and the SCO. Then, for the same series, I run a nested iteration over the coefficient "k", from 1 to 3 with a step of 0.1, and check the criterion. In the general case, at some coefficient the criterion may fire (all values of the series fall inside the channel mean +/- k*SCO). In that case I stop the nested cycle and record the data for the current channel: if the criterion fired at 2.1*SCO, all coefficients greater than 2.1 will also satisfy it, so there is no point in trying them. After finishing the cycle on "k", I keep extending the BP along the main cycle until I have tried all future readings from the current bar. Then I move to a new bar and start over. I hope I explained it clearly; if anything is unclear, please ask.
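The nested search described above can be sketched as follows (a minimal Python sketch, not the author's MathCad code; the function and variable names are my own):

```python
import numpy as np

def find_flat_channels(prices, k_min=1.0, k_max=3.0, k_step=0.1, min_len=3):
    """Scan all forward windows from each bar; a window is a 'flat channel'
    when every value lies inside mean +/- k*SCO for some k in [k_min, k_max]."""
    channels = []
    n = len(prices)
    for start in range(n):                           # main cycle: starting bar
        for end in range(start + min_len, n + 1):    # extend the window into the "future"
            window = prices[start:end]
            mean, sco = window.mean(), window.std()  # channel centre and SCO
            if sco == 0:
                continue
            # nested cycle over k; stop at the first k that satisfies the
            # criterion, since any larger k will satisfy it too
            k = k_min
            while k <= k_max + 1e-9:
                if np.all(np.abs(window - mean) <= k * sco):
                    channels.append((start, end, mean, sco, round(k, 1)))
                    break
                k += k_step
    return channels
```

On real data most windows will eventually violate the criterion for all k up to 3, which is what prunes the search.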

It is obvious that the matrix of channels with their parameters will be "full", including the non-flat channels. Never mind, there will be something to compare with, especially since the next step is to collect statistics on the "process" that develops after each channel found by the method described above. I haven't got down to that yet; I'm still thinking about how to parameterize the BP that appears after a channel "terminates". If you have any thoughts on this, I would appreciate them; the task is not trivial.

Channel parameters

So far I haven't wired up the prepared functions to separate the wheat from the chaff, i.e. the signal from the noise, and to calculate the parameters of the resulting noise. That's all ahead of me, but for now, on a not very large segment (5000 points) taken at random, I collected the following:

  • channel length
  • price mean value (I suppose that in future the mean value of the channel series will be used as the signal)
  • standard deviation
  • amplitude of the obtained channel (max(y)-min(y))
  • channel boundary in the form of the "k" coefficient
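The parameters listed above might be computed per channel like this (a sketch; the helper name and dictionary keys are my assumptions, and it assumes a non-constant window):

```python
import numpy as np

def channel_params(window):
    """Collect the per-channel parameters listed above for one window."""
    mean = window.mean()                     # price mean value (candidate signal)
    sco = window.std()                       # standard deviation (SCO/RMS)
    amplitude = window.max() - window.min()  # amplitude: max(y) - min(y)
    # smallest k on the 0.1 grid such that all points lie inside mean +/- k*sco
    k = np.ceil(np.abs(window - mean).max() / sco * 10) / 10
    return {"length": len(window), "mean": mean, "sco": sco,
            "amplitude": amplitude, "k": k}
```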

Preliminary results

They are not even preliminary; it's more like opening the door and peeking in timidly. I present the data without much commentary, correlation calculations, or other tricks. It's all quite simple so far, nothing interesting and all to be expected:

Channel length - RMS

Channel length - amplitude

RMS - amplitude

Channel length - "k"

A nice picture

The colour indicates the average value of a time series bounded by a channel (scale below).

Axes:

  • Channel length (N-Bin)
  • Amplitude (R-Bin)
  • Standard deviation (SCO-Bin)
 
grasn:

The first approach to the shell...


I want to go back to my old work: linear approximation, but picking the value and the period based on the change in RMS; maybe there is something hidden here too. But one problem arises: I don't know what to do when the BP is new. What period should I take for the calculations so that the old data does not interfere?
 
Vinin:
grasn:

The first approach to the shell...


I want to go back to my old work: linear approximation, but picking the value and the period based on the change in RMS; maybe there is something hidden here too. But one problem arises: I don't know what to do when the BP is new. What period should I take for the calculations so that the old data does not interfere?

I have already written about the principle I follow: if you don't know something, ask the market. I cannot get distracted, especially since my main focus now is not stochastic resonance but another model. I don't need much from SR: the noise, together with the fluctuations of the flat level, should answer several questions.

 
grasn:
Vinin:
grasn:

The first approach to the shell...


I want to go back to my old work: linear approximation, but picking the value and the period based on the change in RMS; maybe there is something hidden here too. But one problem arises: I don't know what to do when the BP is new. What period should I take for the calculations so that the old data does not interfere?

I have already written about the principle I follow: if you don't know something, ask the market. I cannot get distracted, especially since my main focus now is not stochastic resonance but another model. I don't need much from SR: the noise, together with the fluctuations of the flat level, should answer several questions.


By the way, what do you use to make these "pretty pictures"?
 

to Yurixx

It's about a variant of interpreting the physical sense. If it makes sense :-), then you can trade on every harmonic, or only on the main one. It all depends on how much money you need.

I'll save this concept for a last resort, in case you ever need to buy the entire Pacific Ocean with its islands, and the Moon. :о)

to Candid

Concerning my previous post with the picture: I wonder how much our problem statements coincide at that level?

In the picture I don't quite understand what the numerous lines symbolize, and on what basis you draw the conclusion about predictability.

PS: I remember you boasted that you had collected statistics on the channels, more than 40 parameters. Did you collect the noise parameters too?

to Vinin

By the way, what do you use to make such "pretty pictures"?

This is MineSet, developed by SGI and sold here: http://www.purpleinsight.com/ after SGI collapsed. You can download a demo version. It has the necessary set of Data Mining tools, including visualization capabilities (after all, SGI created it).

 
Is it possible to do full-fledged research with the demo version? How is it worse than the paid one?
 
grasn:

to Candid

Regarding my previous post with the picture: I wonder how much our problem statements coincide at this level?

In the picture I don't really understand what the multiple lines symbolize, and on what basis do you conclude about predictability?

PS: I remember, you boasted that you collected statistics on channels, for more than 40 parameters. Maybe you've collected noise parameters as well? Share.

The numerous lines are automatically calculated trends, and they symbolize the predictability of targets :). In principle, I understood that our problem statements, as usual, were different from the very beginning :). Here is roughly the same picture, where I have marked by hand what seem to me to be stable states. As you can see, there are no horizontal flats among them :)

A bit about your approach: 5000 points, LR starts with three, the matrix is triangular, so (4998*4999)/2 channels. Continue each one and calculate noise parameters for each ... yes, you would need to filter, though. In this regard, it would be interesting to look at the (channel length)-(number of midline crossings) chart. Regarding my statistics: I didn't count noise parameters (as well as many other things), because it would be unrealistic to test such an algorithm on history. Also, I only worked with selected channels, i.e. you, as I understand it, have a wider capture in mind.
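The window count can be checked directly: for a 5000-point series and segments of at least three points, there are 5000 - L + 1 starting positions for each length L, which sums to (4998*4999)/2 windows:

```python
# Number of (start, end) windows of length >= 3 in a 5000-point series.
n, min_len = 5000, 3
count = sum(n - L + 1 for L in range(min_len, n + 1))
# closed form: (n - min_len + 1) * (n - min_len + 2) / 2 = 4998 * 4999 / 2
assert count == (n - min_len + 1) * (n - min_len + 2) // 2
print(count)  # 12492501, i.e. about 12.5 million channels
```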

Vinin:
I want to go back to my old work: linear approximation, but picking the value and the period based on the change in RMS; maybe there is something hidden here too. But one problem arises: I don't know what to do when the BP is new. What period should I take for the calculations so that the old data does not interfere?


In a well-known thread on a parallel forum the question was solved by a total enumeration of all possible segments and subsequent selection. The selection criteria mainly revolved around minimum RMS, but nobody, in my opinion, disclosed the details :). The canonical one, as far as I understood it, was the minimum RMS, provided that the out-of-sample RMS (over the next half-length of the segment) did not exceed the base one.
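That canonical criterion might look like this in code (a sketch under my own assumptions: linear-regression channels, RMS of residuals around the fitted line, and an out-of-sample check over the next half-length; all names are hypothetical):

```python
import numpy as np

def lr_rms(y):
    """RMS of residuals around a least-squares line fitted to y."""
    x = np.arange(len(y))
    a, b = np.polyfit(x, y, 1)
    return float(np.sqrt(np.mean((y - (a * x + b)) ** 2))), a, b

def canonical_segment(prices, end, min_len=10):
    """Among segments ending at bar `end`, pick the minimum-RMS one whose
    out-of-sample RMS over the next half-length does not exceed the base RMS."""
    best = None  # (rms, length) of the best qualifying segment
    for length in range(min_len, end + 1):
        seg = prices[end - length:end]
        rms, a, b = lr_rms(seg)
        half = length // 2
        if end + half > len(prices):
            continue  # not enough future data to run the check
        # extrapolate the fitted line over the next half-length
        x_out = np.arange(length, length + half)
        future = prices[end:end + half]
        out_rms = float(np.sqrt(np.mean((future - (a * x_out + b)) ** 2)))
        if out_rms <= rms and (best is None or rms < best[0]):
            best = (rms, length)
    return best
```

Note that the out-of-sample check needs future data, so this form of selection only works on history, which is presumably why it was used for backtested enumeration rather than real-time trading.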

 

to Yurixx

Is it possible to do full-fledged research with the demo version? How is it worse than the paid one?

Yes, it's just limited to 30 days. The full version is quite expensive (at least it was). In Russia this tool is used by the President's analysts, Chubais, and the gas and oil producers.

to Candid

In principle I understood that our problem statements, as usual, don't agree from the start.

It seems to me that they are similar; I just did not immediately understand what the lines mean. But now we seem to be dealing with sloped channels, and you do not seem to deny their existence.

A bit about your approach: 5000 points, LR starts with three, the matrix is triangular, so (4998*4999)/2 channels. Continue each and calculate noise parameters for each ... yes, you would need to filter, though. In this regard, it would be interesting to look at the (channel length)-(number of midline crossings) chart.

LR is not used for the channels; I have described it: mean +/- k*SCO. Down the road, you could limit the number of channels by a condition on the "b" coefficient in the LR equation (there would be very few of them). The dead band starts at 24. A lot of channels satisfying this condition will be found, and it seems to me that this is the point. I will try not to forget about the midline crossings. :о)
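The (channel length)-(midline crossings) statistic suggested above is cheap to compute; a possible sketch (the helper name is mine), counting sign changes of the series around the channel mean:

```python
import numpy as np

def midline_crossings(window):
    """Count crossings of the channel midline (the window mean):
    a crossing is a sign change of (price - mean) between adjacent bars."""
    d = np.sign(window - window.mean())
    d = d[d != 0]  # ignore bars sitting exactly on the midline
    return int(np.sum(d[1:] != d[:-1]))
```

A flat channel filled with noise should cross its midline often; a trending stretch misclassified as flat should cross it only once or twice, which is what makes this count a useful filter.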

Regarding my statistics: I did not calculate the noise parameters (and many other things), because it would be unrealistic to test such an algorithm on history.

It would take a long time to compute - that's a medical fact :o).

Besides, I only worked with selected channels, i.e., as I understand it, you have a wider capture in mind.

Yes, and that includes the "scale" I'm looking for.

 
grasn:
Also, I was only working with selected channels, which means you're referring to a wider capture, as I understand it.

Yes, that's what I'm looking for in terms of "scale".


In principle, there's certainly something to this "wide capture + channel continuation" approach :), except that there's no supercomputer.
 
lna01:
grasn:
Also, I was only working with selected channels, which means you're referring to a wider capture, as I understand it.

Yes, that's what I'm looking for in terms of 'scale'.


In principle, there's certainly something to this "wide capture + channel continuation" approach :), except there's no supercomputer.

I hope there are many of us, and we can put together a cluster :o)))) By the way, I'm still working with EURUSD in this study, but I'll have to run the algorithm on other symbols soon - if, of course, something is found on EURUSD.

By the way, who is interested in this research and ready to parse quotes and contribute computing power, i.e. run the statistics-gathering algorithm? At least raise a hand :o)

PS: I'm preparing the algorithm in MathCad for now, but it can be rewritten for MT; there's nothing complicated there.


PS: I've decided to simplify the search logic a bit - I'm really wary of the scale. I'll post the sequel soon :o)
