Edge effect on the way to the GRAAL - page 4

 

Stand down. Got it figured out.

All that's left is to figure out how to form the DLL.

To mathemat:

An interesting idea. But. :) The point is that, being imperfect, humans inherently invent things that work only under certain boundary conditions.

From the wheel for land and the oar for sea, to trend and flat strategies. We build our systems from several subsystems:

the trading strategy itself, the filter system that defines its boundary conditions, and the money-management subsystem, designed to limit the failures of the first two.

That's just the way we are used to doing it. But a single trading strategy (method) based on fundamental (basic) price properties

makes the other "patch" subsystems unnecessary. And this single system must be simple.

Unfortunately (or fortunately) no one has found it yet. And if they have found it, we won't know about it :)

Back to work.

 
mql4com wrote >>

If you look for a pattern, it is in the price itself.

This is correct!

Our main mistake is that we try to apply differential calculus (Taylor series, etc.) to time series like the price series. Of course this is impossible, because the price series is not smooth (its first difference alternates in sign), and so we take the next "ingenious" step - we smooth the original series with moving averages or wavelets and then do whatever we like with the smooth series, forgetting that this procedure adds no useful information to what we already had. Figuratively speaking, we are trying to pull ourselves out of the swamp by our own hair. You cannot smooth a price series, build a forecast on it (by any method), and obtain information that was not in the original series.

That is why the only way not to waste time and effort is to work with the original price series, without using differential-calculus methods directly or indirectly; it makes sense, for example, to use the apparatus of neural networks (NN), regression methods, etc.
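As a toy sketch of the "regression on the raw series" route (my illustration, not Neutron's actual method): an ordinary least-squares autoregression fitted to raw price increments, with a synthetic random walk standing in for real quotes.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical price series: a random walk stands in for real quotes.
price = np.cumsum(rng.normal(size=500))
dp = np.diff(price)               # first differences of the raw series

p = 3                             # number of lagged increments used
X = np.column_stack([dp[i:len(dp) - p + i] for i in range(p)])
y = dp[p:]                        # the next increment, to be predicted

# Ordinary least-squares autoregression on the raw increments.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef

# Share of correctly predicted increment signs (near 0.5 on a pure random walk).
hit_rate = np.mean(np.sign(pred) == np.sign(y))
print(round(hit_rate, 2))
```

On real quotes the question is whether `hit_rate` rises reliably above 0.5 out of sample; on the synthetic walk here it should not.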

 
Neutron wrote >>

This is correct!

Our main mistake is that we try to apply differential calculus (Taylor series, etc.) to time series like the price series. Of course this is impossible, because the price series is not smooth (its first difference alternates in sign), and so we take the next "ingenious" step - we smooth the original series with moving averages or wavelets and then do whatever we like with the smooth series, forgetting that this procedure adds no useful information to what we already had. Figuratively speaking, we are trying to pull ourselves out of the swamp by our own hair. You cannot smooth a price series, build a forecast on it (by any method), and obtain information that was not in the original series.

That is why the only way not to waste time and effort is to work with the original price series, without using differential-calculus methods directly or indirectly; it makes sense, for example, to use the apparatus of neural networks (NN), regression methods, etc.

No one is claiming that transformation methods add any information to what is already there.

I am saying the opposite: transformation is a way of removing redundant information and focusing on the useful part of it.

By the way, you can't train an NN on raw price data. It will still have to be normalized and smoothed somehow. And that is already a transformation :)
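A minimal sketch of what such a "transformation" typically looks like in practice (my example - increments plus z-scoring - not anything Desperado specifies):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical quotes: a small random walk around 1.3.
price = 1.3 + 0.001 * np.cumsum(rng.normal(size=300))

# Step 1: go from prices to increments (already a transformation).
ret = np.diff(price)

# Step 2: z-score normalization so the NN inputs are of order 1.
norm = (ret - ret.mean()) / ret.std()
```

After this, `norm` has zero mean and unit variance, which is what most training schemes expect at the inputs.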

I'm not familiar with regression methods, so I'm not going to argue with them.

 
Desperado wrote >>

I am saying the opposite: transformation is a way of removing redundant information and focusing on the useful part of it.

By the way, you can't train an NN on raw price data. It will still have to be normalized and smoothed somehow. And that is already a transformation :)

You're right to say so.

By the way, an NN can be trained on absolutely any data; the only question is how long it takes... Training is a very resource-intensive process, and our task is to prepare the input data so as to make the job as easy as possible for the NN, while not solving it for it :-)

As for pre-smoothing the data for the NN, that is nonsense: the phase lag unavoidable in this procedure will completely deprive the NN of its predictive qualities - more precisely, it will give it nothing new. But I am repeating myself.
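The phase-lag claim is easy to check numerically. A sketch (my own, with a pure sine standing in for a price swing): a causal simple moving average of window w delays the signal by (w - 1) / 2 samples, so every smoothed feature arrives that much too late.

```python
import numpy as np

n, w = 400, 21
t = np.arange(n)
x = np.sin(2 * np.pi * t / 100.0)          # "price swing" with a 100-sample period

# Causal simple moving average: sma[k] averages x[k .. k+w-1],
# i.e. it is the filter output at original time k + w - 1.
sma = np.convolve(x, np.ones(w) / w, mode="valid")

peak_x = np.argmax(x[:150])                # first peak of the raw signal (t = 25)
peak_sma = np.argmax(sma[:150]) + w - 1    # same peak on the original time axis
print(peak_sma - peak_x)                   # → 10, i.e. (w - 1) / 2 samples of lag
```

The smoothed peak is a deterministic function of the raw samples, so the delay buys no new information; it only shifts the old information later in time.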

 

By the way, an NN can be trained on absolutely any data; the only question is how long it takes...

But if the data is noisy, shouldn't learning be less successful? Besides, the examples change over time, and if you choose a long training period the data will be inconsistent. The network constantly needs to be retrained as the rules change and as the masses' reaction to events changes.

Training is a very resource-intensive process, and our task is to prepare the input data so as to make the job as easy as possible for the NN, while not solving it for it :-)

I agree :)

As for pre-smoothing the data for the NN, that is nonsense: the phase lag unavoidable in this procedure will completely deprive the NN of its predictive qualities - more precisely, it will give it nothing new. But I am repeating myself.

Have you really managed to train the network on unsmoothed data and have it keep working for some time outside the training sample?

 
Desperado wrote >>

But if the data is noisy, shouldn't learning be less successful?

Do you take it upon yourself to judge what is noise and what is useful information? I would not be so sure of my knowledge of the truth; let the NN solve that worthy task itself.

Besides, the examples change over time, and if you choose a long training period the data will be inconsistent. The network constantly needs to be retrained as the rules change and as the masses' reaction to events changes.

I agree 100%.

Have you really managed to train the network on unsmoothed data and have it keep working for some time outside the training sample?

I retrain the network at every forecast step (on every sample) - or rather, I don't train it from scratch each time, I retrain it, and I do so precisely on the unsmoothed data.

Right now I'm studying how the share of correctly recognized price-movement directions (ordinate) depends on the number of training epochs (abscissa). The data are for a two-layer nonlinear NN with 8 neurons in the hidden layer and 3 inputs. The red shading is the training sample, the blue shading is the test sample (data not seen in training). Each point is the result of statistical processing of 100 independent experiments.
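For anyone wanting to reproduce the general idea, here is a rough sketch of such a setup: a 3-input, 8-hidden-neuron, two-layer tanh network trained on the sign of the next increment, tracking the hit rate on train and test sets per epoch. The synthetic random-walk data, squared-error loss, and plain gradient descent are my assumptions, not Neutron's actual code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: 3 lagged raw increments in, sign of the next increment out.
price = np.cumsum(rng.normal(size=600))
dp = np.diff(price)
X = np.column_stack([dp[i:len(dp) - 3 + i] for i in range(3)])
y = np.sign(dp[3:])
n_train = 400
Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

# 3-8-1 network, tanh in both layers.
W1 = rng.normal(scale=0.5, size=(3, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
lr = 0.01

def forward(X):
    h = np.tanh(X @ W1)
    return h, np.tanh(h @ W2)

hit = []  # (train hit rate, test hit rate) per epoch
for epoch in range(50):
    h, out = forward(Xtr)
    err = out - ytr[:, None]
    # Backpropagation through the two tanh layers (squared-error loss).
    d2 = err * (1 - out ** 2)
    d1 = (d2 @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d2 / n_train
    W1 -= lr * Xtr.T @ d1 / n_train
    tr = np.mean(np.sign(forward(Xtr)[1][:, 0]) == ytr)
    te = np.mean(np.sign(forward(Xte)[1][:, 0]) == yte)
    hit.append((tr, te))
```

Averaging `hit` over many reruns with fresh weights would give curves of the kind described: training accuracy creeping up with epochs while test accuracy shows whether anything generalizes.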

 
Desperado >> :

Installed Matlab 7.01. Powerful stuff.

Found the wavelets.

But how do I load my signal into the system?

Is there a converter, for example from a text file into Matlab?

Why not the latest, 7.7? It has bug fixes, in particular in DLL handling. A DLL from 7.1 hangs periodically for me; I got tired of hunting for the reason and never found it. With 7.7 it works fine, and besides there are no extra folders with files. If you bought the disk, I advise you to replace it with the latest, R2008b.

 

Am I right in concluding from the figure that the network guesses the direction about 30% of the time?

Have you tried working with a committee of nets? For example 3 or 5, to refine the decision.

Or with a pair of nets: one guesses only upwards, the other only downwards.

By the way, why exactly 3 (or 5? I'm confused ;) ) inputs? I've only come across networks with 4, 7 or 15 inputs :)

p.s.

I once did an experiment: I memorized all the history I had and searched it for the situations most similar to the current one,

using the vector-distance method (with normalized vectors, of course). In 60% of cases history repeated itself :)

But it still depends on the forecast horizon and the vector length.
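That experiment can be sketched roughly like this (all data and parameters here are illustrative; Desperado does not give his actual settings):

```python
import numpy as np

rng = np.random.default_rng(7)
price = np.cumsum(rng.normal(size=1000))   # hypothetical stored history

L, horizon = 10, 5                          # vector length and forecast range
segs = np.lib.stride_tricks.sliding_window_view(price, L)
# Keep only patterns with room for a 'horizon'-bar outcome after them.
segs = segs[: len(price) - L - horizon + 1]

def normalize(v):
    """Remove the level and scale to unit norm, so only shape is compared."""
    v = v - v.mean()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

current = normalize(price[-L:])
hist = np.apply_along_axis(normalize, 1, segs)

# Euclidean distance between normalized vectors; smallest = most similar.
d = np.linalg.norm(hist - current, axis=1)
best = np.argsort(d)[:5]                    # 5 closest historical analogues

# Direction of the move 'horizon' bars after each analogue ended.
outcomes = [np.sign(price[i + L + horizon - 1] - price[i + L - 1]) for i in best]
```

Counting how often the majority sign of `outcomes` matches the actual subsequent move gives the "history repeats itself" percentage; 60% would mean the analogues agree with reality in 6 cases out of 10.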

 
vladevgeniy wrote >>

Why not the latest, 7.7? It has bug fixes, in particular in DLL handling. A DLL from 7.1 hangs periodically for me; I got tired of hunting for the reason and never found it. With 7.7 it works fine, and besides there are no extra folders with files. If you bought the disk, I advise you to replace it with the latest, R2008b.

This is just the first one I found. I'll replace it with 7.7 later.

I analyzed the wavelets in the toolbox. Meyer definitely fits better than Daubechies.

But it still gets it wrong sometimes. For example, it shows a clear maximum at the moment of stasis before an upward thrust :).

Although the quick thrust was indicated by the last level - it was at the low.

I want to build an indicator out of the synthesized signal and two detail levels and look at the dependencies.

At the moment I'm working out how to form the DLL.

 
Desperado, see private message.