Market etiquette or good manners in a minefield - page 20

 
Neutron >> :

Give me a hint as to what your problem is with the correlation of the input signals. What are you inputting, and why do you think there is a problem? After all, it's easier to make sure a problem doesn't exist than to solve it :-)

That is not the problem; the problem is my mathematical illiteracy.

You ask how I built the SP? You won't believe it! Having divided the -/+1 range in advance into equal intervals of 0.02, I ran through the whole BP in a loop, and whenever the BP value on a given bar fell into one of those intervals (say, -1.00 to -0.98), I dropped one "chatl" = 0.01 into that "pocket". Then I displayed the contents of these 100 pockets over the last 100 bars. And to normalise the SP, I multiplied the BP under study by a coefficient K > 1.

Here is my code:

for(int i = limit; i >= 0; i--)
{
   if(first)
   {
      for(int k = 200; k > 0; k--)
         Ind2[k] = 0.0;                              // initialise the pockets with zeros
      first = false;
   }
   //---------------------------------------------------
   res = th(kf*(Close[i] - Close[i+1])/Close[i+1]);  // input BP: relative increment through th()
   Ind1[i] = res;
   //---------------------------------------------------
   pos = -1.0;
   for(int j = 200; j > 0; j--)
   {
      if((res > pos) && (res < (pos + step)))        // MO (expectation) estimate: find the pocket
      {
         if(i > 2)
            Ind2[j] = Ind2[j] + chatl;               // add one "chatl" to this pocket
         break;
      }
      else
         pos = pos + step;                           // move to the next interval
   }
}


That is, I pick this coefficient "manually", using an indicator, and then insert it into the NS code as a constant. This works, but only if the input BP increments are taken at equal time intervals - for example, at every bar or every n bars. But if the input increments are taken at unequal time intervals - for example, according to the Fibonacci series: 2, 3, 5, 8... - then each such reading needs its own MO normalisation coefficient! In short, the way I do it works, but it is not intelligent -:)
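For illustration only - this is not what the thread itself does - the coefficient could be estimated automatically instead of picked by hand: scale each increment by the standard deviation of increments measured at the same lag, so readings taken at unequal intervals (e.g. Fibonacci lags) get comparable amplitudes before th() squashes them. A minimal MQL4-style sketch; the function name, the window parameter and this form of th() are my assumptions:

double th(double x) { return(1.0 - 2.0/(MathExp(2.0*x) + 1.0)); } // tanh that saturates cleanly

// th()-squashed relative increment at bar i over 'lag' bars,
// normalised by the standard deviation of 'window' past increments.
double AutoScaledInput(int i, int lag, int window)
{
   double mean = 0.0, dev, var = 0.0;
   int k;
   for(k = 0; k < window; k++)
      mean += (Close[i+k] - Close[i+k+lag])/Close[i+k+lag];
   mean /= window;
   for(k = 0; k < window; k++)
   {
      dev  = (Close[i+k] - Close[i+k+lag])/Close[i+k+lag] - mean;
      var += dev*dev;
   }
   double sigma = MathSqrt(var/window);
   if(sigma == 0.0) return(0.0);                     // flat window: nothing to normalise
   return(th((Close[i] - Close[i+lag])/Close[i+lag]/sigma));
}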

 

I see.

So, what's the correlation on the NS inputs?

 
Neutron >> :

I see.

So, what's the correlation on the NS inputs?

Apparently there is little or no correlation at all. See the indicator below the chart. Each blue line is one of the inputs.



What is surprising is that the NS performs remarkably well! For two months since "sharpening" it has been growing, with a very acceptable (27%) drawdown.

 

Here's another question I wanted to ask:

The output neuron of the grid has no FA, since we take the amplitude from it and use it as a measure of the probability of success of the planned trade; but that amplitude is not normalised either.

Is normalisation needed on the output? If not, how do we evaluate the resulting probability, given that it generally falls outside the +/-1 range?

 
paralocus wrote >>

Apparently there is little or no correlation at all. See the indicator below the chart. Each blue line is one of the inputs.

What is surprising is that the NS performs remarkably well! For two months since "sharpening" it has been growing, with a very acceptable (27%) drawdown.

So there's no need to whiten anything!

In fact, it would be good to get a numerical measure of the pairwise correlation coefficient for the two selected BPs. This is what a universal pairwise correlation meter looks like:

Port it to MQL, feed it two BPs (X, Y) of the same length n, and you get a number in the +/-1 range at the output. At a quick guess, the correlation coefficient by indices reaches +0.6.
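Such a meter is just the standard Pearson pairwise correlation, so an MQL version might look like the sketch below (the function name and the zero-variance guard are mine, not from the thread):

// Pearson pairwise correlation of two series X[] and Y[] of length n;
// returns a number in the range -1 ... +1.
double PairCorrelation(double &X[], double &Y[], int n)
{
   double mx = 0.0, my = 0.0, sxy = 0.0, sxx = 0.0, syy = 0.0;
   int i;
   for(i = 0; i < n; i++) { mx += X[i]; my += Y[i]; }
   mx /= n;
   my /= n;
   for(i = 0; i < n; i++)
   {
      sxy += (X[i] - mx)*(Y[i] - my);
      sxx += (X[i] - mx)*(X[i] - mx);
      syy += (Y[i] - my)*(Y[i] - my);
   }
   if(sxx == 0.0 || syy == 0.0) return(0.0);         // a constant series has no defined correlation
   return(sxy/MathSqrt(sxx*syy));
}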

As for the output neuron: if its output is used to estimate the probability of an event, it must have a non-linear FA. Any FA will do (this is proved rigorously), so put th.

 
Neutron >> :


...

I don't get it, how do you define the value of n??????

 

I think it's simpler and clearer :o):



In this case it doesn't make much sense to write a loop; the summation operator will do the same job in the same time.

PS Also, this is not an "FAC" at all, the meaning is a bit different.

 
Neutron >> :

As for the output neuron: if its output is used to estimate the probability of an event, it must have a non-linear FA. Any FA will do (this is proved rigorously), so put th.

That's what I do. I have everything via th() -:) pun intended, however.

Doing everything as you explained:

1. All neurons have a single bias input (+1) (in addition, of course, to the other data inputs)

2. All neurons except the last (output) one have a non-linear FA at the output - I use th(x)

Now, if I want to get a probability at the output, this output should lie within +/-1 (as far as I understand). Let's see how this last output is formed.

It is formed like this: OUT1*W1 + OUT2*W2 + OUT3*W3 + W4. If we take into account that OUT1 - OUT3 are the hypertangent outputs of the previous layer's neurons and take values in the range -/+1, and that W1 - W4 are weights taking values in the range -/+1, then the output neuron's amplitude stays within -/+4. Fine, but if you consider that the weights can vary over a wide range - as you said, +/-20 is the norm - then the amplitude range of the grid's output disappears over the horizon.

Questions:

1. How to estimate transaction probability in such conditions?

2. Do you need to normalise the output signal and if so, with what?


As for the outputs from MathLab - unfortunately, it's a dark forest for me... and so is MathLab itself.

I haven't understood a lot of what you told me a few posts ago, but I've been turning it over in my head for an hour and a half.

And forgive my intrusiveness, but I still have more questions than answers about the ORO (error backpropagation) algorithm.

With whitening it seems clear - everything works.

I understand the architecture.

The epochs and optimal sampling are almost clear:

I've tried to calculate the optimal sample size for your "nice scheme" and ended up with this:

number of inputs - 4

number of synapses - 4*6 + 3*3 + 4 = 37

factor of 4.

P = k*w*w/d = 4*37*37/4 = 1369 .... and it should be 1500.

Question: do the synapses of the single (+1) inputs have to be counted? I counted them.

 
grasn wrote >>

I don't understand, how do you determine the value of n??????

From outside.

Hi, Sergei!

You have five independent cycles, and I have only one! Which is better: to look elegant, or to be elegant always?

paralocus wrote >>

Questions:

1. How do you estimate the probability of a transaction under these conditions?

2. Is it necessary to normalize the output signal and if yes, with what?

Put th() on the output of the last neuron and all your problems will disappear! And it won't make training the grid any harder.
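For concreteness, here is a minimal sketch of that fix: with th() on the last neuron, the sum OUT1*W1 + OUT2*W2 + OUT3*W3 + W4 is squashed into (-1, +1) no matter how large the weights grow. The function names and this saturating form of th() are my assumptions:

double th(double x) { return(1.0 - 2.0/(MathExp(2.0*x) + 1.0)); } // tanh that saturates cleanly

// Output neuron: weighted sum of the hidden-layer outputs plus the
// bias weight w4, squashed by th() into (-1, +1).
double NetOutput(double out1, double out2, double out3,
                 double w1, double w2, double w3, double w4)
{
   double s = out1*w1 + out2*w2 + out3*w3 + w4;
   return(th(s));                                    // usable directly as a probability-like score
}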

I've tried to calculate the optimal sample size for your "nice scheme", and ended up with this:

number of inputs - 4

number of synapses - 4*6 + 3*3 + 4 = 37

factor - 4

P = k*w*w/d = 4*37*37/4 = 1369 .... and it should be 1500.

Question: do the synapses of the single (+1) inputs have to be counted? I counted them.

Let's calculate the optimal training-vector length for each committee member. Since all members have the same architecture, we find it for one:

1. Number of inputs d=3+1=4.

2. Number of synapses w=2*4+3+1=12

Optimal length of training vector P=4*w*w/d=4*12*12/4=144

Thus, to train the committee of networks we need three training vectors (one per committee member), each of length 144. If you, paralocus, think this value is small, increase the number of neurons in the hidden layer from two to 4 or 8 and you will get about 500 or 2000 samples at once! Remember that you have to retrain all of this at every sample.
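The same sizing rule as a tiny helper, with the numbers from the example above as a check (the function name is mine):

// Optimal training-vector length P = k*w*w/d from the rule above.
int OptimalVectorLength(int k, int w, int d)
{
   return(k*w*w/d);                                  // e.g. k=4, w=12, d=4 -> 4*12*12/4 = 144
}

With w = 2*4+3+1 = 12 synapses and d = 3+1 = 4 inputs this gives 144 samples, matching the calculation above.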

 
Neutron wrote >>

From outside.

Hi, Sergei!

You have five independent cycles, and I have only one! Which is better: to look elegant, or to be elegant always?

I guessed it :o)

While I'm creating, I need to look like an artist! But when I'm coding ... :o)))

PS: Serega, it's pretty much all the same in MathCAD; in C++/FORTRAN it would be different, but in MathCAD, to put it mildly, it is not (for this particular case). And if you use mean, it will count even faster; and if you use corr, for example, your "fast" algorithm will crawl like a turtle :o). How they do it, I don't know, but the "summation" operator is much faster than a loop. If yours doesn't behave that way, install a fresh version.
