Market etiquette or good manners in a minefield - page 16

 
paralocus wrote >>

Neutron, I'll take a short time-out. I need to rethink it all once more and put it into code, at least for one neuron. A day or two, and then we'll continue.

I really appreciate it!

Ok.

 
Neutron >> :

Ok.

Hello, Neutron. Here is what I've made.

I put together a small grid for the ORO (error backpropagation) algorithm:



Actually, here is the code of the grid itself (it's the header file Neyro_test.mqh):

extern int neyrons = 3;
extern int in = 5;

double Data[3][5] = {0.0,0.0,0.0,0.0,1.0,
0.0,0.0,0.0,0.0,1.0,
0.0,0.0,0.0,0.0,1.0 };

double W[4][5] = {-0.786, 0.359,-0.186, 0.891, 0.238,
0.711,-0.923, 0.088, 0.417,-0.112,
-0.867,-0.229, 0.321, 0.921,-0.941,
0.995,-0.712, 0.012,-0.625, 0.0 };

//----------------------------
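// Output neuron: a linear combination of the three hidden tanh neurons plus the bias weight W[3][3]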
double OUT(int bar = 1)
{
int i;
double res = 0.0;

GetInd(bar);
res = W[3][0]*RSI_1() + W[3][1]*RSI_2() + W[3][2]*RSI_3() + W[3][3];

return(res);
}
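// Three hidden neurons RSI_1..RSI_3: each takes the same five inputs Data[0][0..4]
// (the last one stays 1.0 as the constant input) with its own row of weights and passes the sum through th()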

double RSI_1()
{
int i;
double res = 0.0;

for(i = 0; i < 5; i++)
res += W[0][i] * Data[0][i];

return(th(res));
}

double RSI_2()
{
int i;
double res = 0.0;

for(i = 0; i < 5; i++)
res += W[1][i] * Data[0][i];

return(th(res));
}

double RSI_3()
{
int i;
double res = 0.0;

for(i = 0; i < 5; i++)
res += W[2][i] * Data[0][i];

return(th(res));
}

//----------
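// ORO step as originally written: the raw output error passed in as de is simply added
// to every output-layer weight (this is the point discussed below)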

void OPO(double de)
{
int i;
for(i = 0; i < 4; i++)
W[3][i] += de;
}

//---------------------------------------------------------------------
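// Fill Data[0][0..3] with RSI readings rescaled to roughly the [-1;1] range via th();
// Data[0][4] stays 1.0 as the constant (bias) input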

void GetInd(int i)
{
int dt = 7,per = 14, prc = 0;
double ue_in = 0.02,kf = 3.0;

Data[0][0] = th((iRSI(NULL,0,per,prc,i)*ue_in - 1.0)*kf);
Data[0][1] = th((iRSI(NULL,0,per,prc,i+dt)*ue_in - 1.0)*kf);
Data[0][2] = th((iRSI(NULL,0,per,prc,i+dt*2)*ue_in - 1.0)*kf);
Data[0][3] = th((iRSI(NULL,0,per,prc,i+dt*3)*ue_in - 1.0)*kf);
}
//-------------------------------------------
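// Hyperbolic tangent: th(x) = (e^x - e^-x)/(e^x + e^-x), keeps values in (-1;1)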

double th(double x)
{
double sh,ch;
sh = MathExp(x) - MathExp(-x);
ch = MathExp(x) + MathExp(-x);
return(sh/ch);
}



Here is how I tried to train it in an otherwise empty Expert Advisor:

extern int cikl = 10;
static int prevtime = 0;


#include <Neyro_test.mqh>
//-----------------------------------
int init()
{
return(0);
}

int deinit()
{
return(0);
}
//-------------------------------------
int start()
{
static double test, control;

if(Time[0] == prevtime)
return(0);
prevtime = Time[0];

int pr = 14, pc = 0, i;
double in = 0.02, k = 3.0;
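// one training pass over bars cikl..3: OUT(i) is the grid forecast,
// control is the target (the rescaled RSI of the next, already known, bar i-1)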


for(i = cikl; i > 2; i--)
{
test = OUT(i);
control = th((iRSI(NULL,0,pr,pc,i-1)*in - 1.0)*k);
OPO(test - control);
}

Comment("\r\n OUTPUT NEURON WEIGHTS: ",W[3][0],", ",W[3][1],", ",W[3][2],", ",W[3][3]);
return(0);

}



Basically, this grid "goes off the rails": the weighting coefficients become ridiculously large after about 10 steps (bars). I searched for a long time for an error in the code and found that it works exactly as written.

That is, if we simply add the error obtained at the grid output to all the weights of the output layer, those coefficients grow far too quickly.

The scale just can't hold them... :-) So far I have only tried to propagate the error to the weights of the output layer.

Question one: What have I done wrong?

Question two:

I want the grid output to give me the probability of a successful buy/sell, or a recommendation to stay out of the market ("go smoke bamboo"). But this grid is trained to predict the RSI value at bar n+1...

What good does that do me?

 

in the ORO function, the weights are not modified correctly

Read the theory, at least here

 
maxfade >> :

in the ORO function, the weights are not modified correctly

read the theory, at least here

Thank you, but I don't understand anything there.

 

The error in the output layer should be calculated by the formula e = OUT*(1-OUT)*(TEST-OUT) (this is for the logistic transfer function; for the hyperbolic tangent the formula is slightly different, but also not complicated).

The neuron weight should then be modified as w += nju*e*OUT, where nju is the learning step.

If the step is too big, the network will be unstable and the weights will grow without bound (as in your case: you use (TEST-OUT) without all the other multipliers).

If it is too small, the network will take too long to learn and may get stuck in a local minimum.
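To make this concrete, here is a minimal sketch of what a corrected OPO() could look like for the grid posted above. It assumes the output neuron is also given a tanh activation (in the posted header OUT() is linear, so there the derivative factor would simply be 1); for the tanh transfer function the derivative factor is (1 - OUT^2) rather than OUT*(1-OUT). The parameter nju is a hypothetical learning step that is not in the original code; target is what the formula above calls TEST (the control variable in the Expert Advisor), and out is the grid output (the test variable there):

extern double nju = 0.1;                     // learning step (illustrative value)

void OPO(double target, double out)
{
   double e = (1.0 - out*out)*(target - out); // error times the derivative of th()
   // each output-layer weight gets its own correction: the error times its own input
   W[3][0] += nju*e*RSI_1();
   W[3][1] += nju*e*RSI_2();
   W[3][2] += nju*e*RSI_3();
   W[3][3] += nju*e;                          // the constant (bias) input is 1.0
}

In the Expert Advisor the call would then become OPO(control, test) instead of OPO(test - control).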

 
maxfade >> :

The error in the output layer should be calculated by the formula e = OUT*(1-OUT)*(TEST-OUT) (this is for the logistic transfer function; for the hyperbolic tangent the formula is slightly different, but also not complicated).

The neuron weight should then be modified as w += nju*e*OUT, where nju is the learning step.

If the step is too big, the network will be unstable and the weights will grow without bound (as in your case: you use (TEST-OUT) without all the other multipliers).

If it is too small, the network will take too long to learn and may get stuck in a local minimum.


Thanks, I'll try it now.

 
maxfade >> :

If the step is too big, the network will be unstable and the weights will grow without bound (as in your case: you use (TEST-OUT) without all the other multipliers).

If it is too small, the network will take too long to learn and may get stuck in a local minimum.


What confuses me a little is that after the modification the "relative position" of the weights stays the same, because all the incoming weights change by the same amount as a result of the modification, while the original ratio of the weights was set randomly. Should it be like this?

 

Hello, paralocus.

Let's take it one step at a time. You have a training vector of n samples, and we need to form an error (correction) vector for all the weights of the network. Clearly, the length of this vector equals the number of weights, counting the constant-level (bias) weights. The correction vector is only formed by the end of the first (second, third, and so on) training epoch. It is formed as follows:

1. At the first step of training (there are n steps in each epoch in total) we form an individual correction for each weight, but we do not apply it. At the second step we form a similar correction and add it to the previous one, and so on, n times. We obtain a total correction (with the sign of each term taken into account) for each weight. Here is the important point: this final correction must not be applied as it is, or the weights will blow up! It has to be divided by the norm of the correction vector. To get the norm, accumulate, within one epoch and separately for each weight, the sum of the squares of every individual correction. As soon as the epoch (n cycles) is finished, take the square root of that sum of squares for each weight separately, divide each weight's accumulated correction by its norm, and only then correct the weights.

2. Train for 10 to 1000 epochs, depending on the purpose of the training (sign or amplitude), proceeding in the same way. An important point: moving from epoch to epoch you should strictly enforce a monotonic decrease in the amplitude of the weight correction, so that the grid is not lazy about searching for a deeper and deeper minimum of the error functional. This is done simply: at the end of an epoch, before the weights are corrected, multiply the correction by 1 - j/N, where N is the number of training epochs and j is the current epoch number.

3. To prevent the inevitable saturation of some of the network's weights during training, apply the hyperbolic tangent to all the weights immediately after each weight correction. This procedure is not obvious, but it is extremely effective: thanks to the smoothness and monotonicity of the chosen activation function, all the weights always stay within the +/-1 range, and the operation does not cause "hysterics" in the grid.
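A rough sketch of how this epoch scheme might look for the output layer of the grid posted above (the function name EpochTrain, the arrays corr[] and norm[], and the hard-coded RSI parameters are illustrative and not from the posted code; since OUT() there is linear, the derivative factor of the activation function is omitted):

void EpochTrain(int n, int N)                // n - steps in one epoch, N - number of epochs
{
   double corr[4], norm[4], in_[4];
   int    i, j, k;

   for(j = 0; j < N; j++)                    // epochs
   {
      ArrayInitialize(corr, 0.0);
      ArrayInitialize(norm, 0.0);

      for(k = n; k >= 1; k--)                // n training steps; weights are not touched yet
      {
         double out    = OUT(k);                                      // grid forecast
         double target = th((iRSI(NULL,0,14,0,k-1)*0.02 - 1.0)*3.0);  // what actually happened
         in_[0] = RSI_1(); in_[1] = RSI_2(); in_[2] = RSI_3(); in_[3] = 1.0;

         for(i = 0; i < 4; i++)
         {
            double dw = (target - out)*in_[i];  // individual correction for weight i
            corr[i]  += dw;                     // accumulated over the epoch
            norm[i]  += dw*dw;                  // sum of squared corrections for the norm
         }
      }

      for(i = 0; i < 4; i++)                 // end of epoch: normalise, damp, correct, squash
         if(norm[i] > 0.0)
         {
            W[3][i] += (1.0 - 1.0*j/N)*corr[i]/MathSqrt(norm[i]);
            W[3][i]  = th(W[3][i]);          // keep the weight inside the +/-1 range (step 3)
         }
   }
}

A call like EpochTrain(100, 200) would then replace the bar-by-bar OPO(test - control) loop in the Expert Advisor.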

That's all for now. Digest and ask questions.

As for the question of how to go from RSI to Buy/Sell, that is more a question for you: you came up with that move yourself. I, for example, enter on what I forecast (and I forecast exactly the Buy/Sell), so there is no contradiction. Whereas you are trying to predict the colour of the wallpaper in an arbitrary flat from the colour of the car parked in front of its kitchen window...

 

As I was writing that post from memory, recalling the details of the implementation, a few inaccuracies crept in.

I looked at the code, and it turns out I applied the activation function to the weights only once, when switching to a new forecast, i.e. not after every epoch but only when the network had to be retrained on newly arrived data.

One more thing. The correction of a weight is the product of the output error, the derivative of the activation function, and the output (amplitude, sign included) of the neuron from which the signal arrives at that weight.

Here is what it looks like for a single perceptron with a non-linear output (as an example):

Here the epochs are numbered by the index L. I show it in MathCad on purpose, it is clearer that way. In is the number of inputs, x is the input vector. The rest should be clear.
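Since the MathCad screenshot is not reproduced here, a rough reconstruction of the update it describes for a single tanh output neuron (the symbols below are assumptions made for this summary, not copied from the screenshot): with target $d_k$, output $\mathrm{OUT}_k$ and inputs $x_{i,k}$ at step $k$ of an epoch of length $n$,

$$\delta_k = (d_k - \mathrm{OUT}_k)\,(1 - \mathrm{OUT}_k^2), \qquad \Delta w_i = \sum_{k=1}^{n} \delta_k\, x_{i,k}, \qquad w_i \leftarrow \mathrm{th}\!\left(w_i + \Bigl(1 - \tfrac{L}{N}\Bigr)\frac{\Delta w_i}{\sqrt{\sum_{k=1}^{n} (\delta_k\, x_{i,k})^2}}\right),$$

where $L$ is the current epoch number and $N$ the total number of epochs.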

 
Neutron >> :

As for how to switch from RSI to Buy/Sell, that is more a question for you: you came up with that move yourself. I, for example, enter on what I forecast (and I forecast exactly the Buy/Sell), so there is no contradiction. Whereas you are trying to predict the colour of the wallpaper in an arbitrary flat from the colour of the car parked in front of that flat's kitchen window...

Hello, Neutron.

I'm working through what you wrote right now. It's just that the maths really creaks along in my head; programming is a bit easier. About the wallpaper you're absolutely right, yet I still don't know what to feed the grid apart from the same wallpaper from a different factory (I suspect the result will be the same). Here on the forum I read this article, and it is what got me interested in neural nets. In that article, as you know, clever fellows from the forum feed the AO indicator (or AC, I constantly mix them up) into a single poor perceptron, and, to have something to look at, they split it into "clusters" by spreading the perceptron inputs along the chart of said indicator (dt/dl). They call all of this "NEUROSETTING" or even "AUTOMATED NEURAL NETWORK TRA... (no, that's not what I meant) DEPOSIT-DRAINING". Which earns a lot of money... in a tester.

All right... that was a digression.

So, after that article and a couple of similar ones (found via its links), I started experimenting with perceptrons in all sorts of ways. Anyway, it's like in that song:

"Vanka sits on the bench, pounding three kopecks with his dick.

Wants to make three rubles - Nothing comes out!"

Still, my interest in neural networks for use in trading has remained and has even grown over time. The fact that I haven't yet come up with anything better than RSI clustering is nobody's fault but my own, and it is a double-edged sword: to know what to feed the grid and how, I need to know how these grids are built, and not just know it but be able to build them myself, at least in some fairly simple form, yet good enough for stable work on a real account. That is why I turned to you.


P.S. You don't mind that I've switched to the informal "you", do you? It feels friendlier that way.
