Is there a pattern to the chaos? Let's try to find it! Machine learning on the example of a specific sample.

 
RomFil #:

I added a column: "1" for buy, "-1" for sell. I think I did it right, but I didn't check ... :) Lazy.

On the chart, without spread and commission, this is the result in points:

Results: PR=157488 +trades=778 -trades=18 (profit in points, number of winning and losing trades)

Spread of 0.00050:


Results: PR=117688 +trades=629 -trades=167

Spread of 0.00100:

PR=77888 +trades=427 -trades=369

Spread of 200 points (0.00200):


PR=-1712 +trades=241 -trades=555
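
For reference, a minimal sketch of how this kind of check could be reproduced in Python. The file name and the 'Close'/'Signal' column names are assumptions on my part, and per-bar results stand in for individual trades:

```python
import pandas as pd

def evaluate(df, spread=0.0):
    """Profit for a 1/-1 signal column, in the same units as the price.

    Holds the signalled position from one bar's close to the next and
    charges the spread on every change of position.
    """
    move = df['Close'].diff().shift(-1)        # next bar's price change
    pnl = move * df['Signal']                  # signed by trade direction
    flips = df['Signal'].diff().abs() > 0      # position changed this bar
    pnl = (pnl - spread * flips).dropna()
    return pnl.sum(), int((pnl > 0).sum()), int((pnl < 0).sum())

df = pd.read_csv('sample.csv')                 # hypothetical file name
for spread in (0.0, 0.00050, 0.00100, 0.00200):
    pr, wins, losses = evaluate(df, spread)
    print(f"spread={spread}: PR={pr:.5f} +trades={wins} -trades={losses}")
```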

Congratulations on your super oscillator!

I got these results after taking Target_P into account (signals that do not match its direction are excluded). What puzzles me is the fluctuation at the beginning followed by such rapid growth.
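
The direction filter itself is a one-liner; a sketch, assuming the same column names as above and that Target_P uses the same 1/-1 encoding:

```python
# Keep a signal only where its direction matches Target_P; bars that
# disagree become 0 (no trade). Column names are assumptions.
df['Signal'] = df['Signal'].where(df['Signal'] == df['Target_P'], 0)
```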

If there is no error in your code, you can consider yourself a millionaire!

Can you tell me what the secret is? As I understand it, this is essentially a polynomial that you have somehow squeezed into the range of an oscillator.

 
Aleksey Vyazmikin #:

Congratulations on your super oscillator!

I got these results after taking Target_P into account (signals that do not match its direction are excluded). What puzzles me is the fluctuation at the beginning followed by such rapid growth.

If there is no error in your code, you can consider yourself a millionaire!

Can you tell me what the secret is? As I understand it, this is essentially a polynomial that you have somehow squeezed into the range of an oscillator.

There are almost no secrets. I've told you everything above. No polynomials.

I'll give you a few hints:

1) The target. It should be determined by the data itself, i.e. you should filter the series so that the potential profit is maximised on both strong and flat movements. I used to use the Daubechies wavelet for series decomposition, and I will even say it is the most correct variant (though I have since found a less resource-intensive one). The most important thing in filtering is not to do it "on the fly", and to exclude at least 10-20% of the data at the edges of the sample: edge data must be removed to avoid edge effects (see the wavelet sketch after this list).

2) The oscillator. It really can be the simplest one. For example, the all-but-forgotten RVI ... :) Only the right period has to be selected (an RVI sketch also follows the list).

3) But the most important thing is the "right" neural networks and the algorithm for applying them ... :) Including the correct interpretation of the networks' output.
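
To illustrate hint 1, here is a sketch of Daubechies-wavelet smoothing with edge trimming, using PyWavelets; the wavelet name, decomposition level, and 15% trim are my own choices, not the parameters actually used:

```python
import numpy as np
import pywt

def smooth_target(close, wavelet='db4', level=4, trim=0.15):
    """Wavelet-denoise a price series and drop its edges, where the
    reconstruction suffers most from boundary effects."""
    coeffs = pywt.wavedec(np.asarray(close, dtype=float), wavelet, level=level)
    coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]   # keep approximation only
    smooth = pywt.waverec(coeffs, wavelet)[:len(close)]
    k = int(len(close) * trim)                            # cut 15% on each edge
    return smooth[k:len(close) - k]
```

And for hint 2, the textbook RVI calculation with its standard 1-2-2-1 weighting; which period is "right" is exactly what is left unsaid:

```python
import pandas as pd

def rvi(open_, high, low, close, period=10):
    """Relative Vigor Index with the standard 1-2-2-1 weighting;
    inputs are pandas Series of OHLC prices."""
    co, hl = close - open_, high - low
    num = co + 2 * co.shift(1) + 2 * co.shift(2) + co.shift(3)
    den = hl + 2 * hl.shift(1) + 2 * hl.shift(2) + hl.shift(3)
    return num.rolling(period).mean() / den.rolling(period).mean()
```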


An error in the code could only be in the timing of the signal's appearance (although I have already checked eight times and found no error). But even if the signal's appearance is shifted to the right (i.e. the signal is artificially delayed by 1-2 steps), there is still a profit - smaller than the original, of course, but the starting figures from the first post are large.
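
That delay check is easy to express in code; a sketch reusing the evaluate() helper and df from the earlier snippet:

```python
# Shift the signal to the right by 1-2 bars; if the profit survives the
# artificial delay, a look-ahead error in signal timing is less likely.
for lag in (1, 2):
    delayed = df['Signal'].shift(lag).fillna(0)
    pr, wins, losses = evaluate(df.assign(Signal=delayed), spread=0.00050)
    print(f"lag={lag}: PR={pr:.5f} +trades={wins} -trades={losses}")
```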

 
RomFil #:

There are almost no secrets. I've told you everything above. No polynomials.

I'll give you a few hints:

1) The target. It should be determined by the data itself, i.e. you should filter the series so that the potential profit is maximised on both strong and flat movements. I used to use the Daubechies wavelet for series decomposition, and I will even say it is the most correct variant (though I have since found a less resource-intensive one). The most important thing in filtering is not to do it "on the fly", and to exclude at least 10-20% of the data at the edges of the sample: edge data must be removed to avoid edge effects.

2) The oscillator. It really can be the simplest one. For example, the all-but-forgotten RVI ... :) Only the right period has to be selected.

3) But the most important thing is the "right" neural networks and the algorithm for applying them ... :) Including the correct interpretation of the networks' output.


An error in the code could only be in the timing of the signal's appearance (although I have already checked eight times and found no error). But even if the signal's appearance is shifted to the right (i.e. the signal is artificially delayed by 1-2 steps), there is still a profit - smaller than the original, of course, but the starting figures from the first post are large.

The first hint - it's about my data, so let's skip it for now.

The second hint - I wonder: did you use the train sample?

The third hint - and here I don't understand: why so many neural networks? What do you feed to the input then - returns, or something else?

 
Aleksey Vyazmikin #:

The first hint - it's about my data, so let's skip it for now.

The second hint - I wonder: did you use the train sample?

The third hint - and here I don't understand: why so many neural networks? What do you feed to the input then - returns, or something else?

2) Yes, the fitting is done only on the train sample, since the train sample and the test sample are assumed to be the product of one and the same process. If the processes are different, then naturally nothing will come of it.

3) It's very simple. Have you ever worked with genetics (genetic optimisation)?

When genetics solves, say, an equation with ten variables, the same result (or a very close one) can be obtained with different sets of variables. It's the same with neural networks: create and train two neural networks on the same samples, then look at their errors and their weight coefficients. They will be different!
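
This is easy to demonstrate; a sketch with scikit-learn on synthetic data (the architecture and data are placeholders, not the networks discussed here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                 # ten "variables"
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=500)

# Same data, same architecture -- only the random initialisation differs.
for seed in (1, 2):
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                       random_state=seed).fit(X, y)
    print(f"seed={seed}: R^2={net.score(X, y):.4f}, "
          f"sample weights={np.round(net.coefs_[0][0, :3], 3)}")
# Near-identical scores, visibly different weight matrices.
```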

Also, different parts of the chart need different depths of samples fed to the networks' inputs. That is, neural networks with different sampling depths have different accuracy on different parts of the chart. So the "right" committee makes it possible to respond correctly along the whole length of the sample. And, crucially, the committee itself determines this correctness. Perhaps these are already the rudiments of AI ... :)
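
One minimal reading of the committee idea: several models, each fed windows of a different depth, with forecasts combined by a plain average. How the "right" committee actually weighs its members is not disclosed, so everything below is illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def windows(series, depth):
    """Sliding windows of the given depth as inputs, next value as target."""
    X = np.lib.stride_tricks.sliding_window_view(series[:-1], depth)
    return X, series[depth:]

series = np.cumsum(np.random.default_rng(0).normal(size=1000))  # toy series
members = []
for depth in (5, 20, 60):                       # different sampling depths
    X, y = windows(series, depth)
    members.append((depth, MLPRegressor(hidden_layer_sizes=(8,),
                                        max_iter=2000,
                                        random_state=0).fit(X, y)))

# Plain average of the members' next-step forecasts:
forecasts = [m.predict(series[-d:].reshape(1, -1))[0] for d, m in members]
print("committee forecast:", np.mean(forecasts))
```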

 
RomFil #:

3) It's very simple. Have you ever worked with genetics (genetic optimisation)?

When genetics solves, say, an equation with ten variables, the same result (or a very close one) can be obtained with different sets of variables. It's the same with neural networks: create and train two neural networks on the same samples, then look at their errors and their weight coefficients. They will be different!

Also, different parts of the chart need different depths of samples fed to the networks' inputs. That is, neural networks with different sampling depths have different accuracy on different parts of the chart. So the "right" committee makes it possible to respond correctly along the whole length of the sample. And, crucially, the committee itself determines this correctness. Perhaps these are already the rudiments of AI ... :)

You are confusing me - I can't work it out: do you have an oscillator formula with free coefficients that are selected through genetics? Or is the genetics implemented on a neural network (I don't know of such a variant)?

Understood, but how did you assemble it and fit the coefficients - on train or on another sample?

Do I understand correctly that the input is raw values with some offset, or even whole windows, but of different sizes for different networks?

 
Aleksey Vyazmikin #:

You are confusing me - I can't work it out: do you have an oscillator formula with free coefficients that are selected through genetics? Or is the genetics implemented on a neural network (I don't know of such a variant)?

Understood, but how did you assemble it and fit the coefficients - on train or on another sample?

Do I understand correctly that the input is raw values with some offset, or even whole windows, but of different sizes for different networks?

The genetics is ordinary, without a neural network (to be honest, genetics implemented on a neural network is also unknown to me).

Everything is determined only on the train sample. The committee itself determines all the coefficients.

Yes, almost raw values, different depths, different windows, etc.

 
RomFil #:

The genetics is ordinary, without a neural network (to be honest, genetics implemented on a neural network is also unknown to me).

Everything is determined only on the train sample. The committee itself determines all the coefficients.

Yes, almost raw values, different depths, different windows, etc.

Was the oscillator mentioned earlier a collective image, not something used in reality?

 
Aleksey Vyazmikin #:

Was the oscillator mentioned earlier a collective image, not something used in reality?

No, the oscillator is not a collective image but a real one (placed in the indicator subwindow below the chart):


The oscillator itself plus the dots is the forecast. The dots appear on the first ticks of a new bar. But the appearance of a dot is not a signal to open a deal - it is only a warning. The subsequent price movement is then analysed, and only after that is the decision on the deal made. By the way, this chart also shows a stop (the red mark), which in 98-99% of cases is not broken through - it is a defence against any sharp fluctuations. This is actually the buy signal ... :)
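
Read as a protocol, the description boils down to warning, confirmation, then entry with a protective stop. A hypothetical sketch of that flow; all names and conditions are mine, since the actual confirmation logic is not disclosed:

```python
def on_tick(dot_appeared, price, dot_price, stop_level):
    """Hypothetical decision flow: the dot is only a warning; entry
    requires confirming price movement, guarded by a protective stop."""
    if not dot_appeared:
        return 'wait'                 # no warning yet
    if price <= stop_level:
        return 'stand_aside'          # the rare 1-2% case: stop breached
    if price > dot_price:
        return 'buy'                  # movement confirms the warning
    return 'watch'                    # keep analysing the move
```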

 
RomFil #:

No, the oscillator is not a collective image but a real one (placed in the indicator subwindow below the chart):


The oscillator itself plus the dots is the forecast. The dots appear on the first ticks of a new bar. But the appearance of a dot is not a signal to open a deal - it is only a warning. The subsequent price movement is then analysed, and only after that is the decision on the deal made. By the way, this chart also shows a stop (the red mark), which in 98-99% of cases is not broken through - it is a defence against any sharp fluctuations. This is actually the buy signal ... :)

Well, this is purely your own system, and it has nothing to do with the data I provided - you did not use any other data for the analysis?

I am attaching a file - please apply the model you trained earlier to it; I am interested in the result.

So your genetics is responsible for selecting the data fed to the networks' inputs? And the data itself is offsets of the time series?