What to feed to the input of the neural network? Your ideas... - page 69

 
Ivan Butko #:

If it weren't for the dummies, I'd be spending my time on the ML god, Gizlyk, right now :)

Thinking is bad for your health; borrow experience instead ))

 
Maxim Dmitrievsky #:

Thinking is bad for your health; borrow experience instead ))

Also true

 
Ivan Butko #:

The input is not the strength of the signal.

Its strength is given to it by the weights. But the input number itself already(!) carries a strength component a priori: its magnitude.


Earlier I raised the problem of understanding input data.

By feeding a number to the input, we already endow that input with a strength value from the start. This is a mistake, because the whole point of our NS is precisely to find the strength factor and distribute it among the inputs.

I don't quite understand: are you training with a teacher (supervised learning)?
 
Andrey Dik #:
I don't quite understand: are you training with a teacher (supervised learning)?

In the MT5 optimiser.

And I tried supervised training in NeuroPro.

The essence of the theoretical problem comes down to the following:

If the input range is from 0 to 1, the NS will never find the grail when losing patterns from the upper range are buried among the numbers of the lower range, because it will have to "choke off" the upper range, and with static weights everything below it goes under the knife as well.

As a result, the adder receives a number composed of both losing data and working data, 50/50.

And if the input range is from -1 to 1 and the grail is somewhere in the middle, the same thing happens: the NS suppresses the extremes and the grail is washed out along with them.




But suppose you create a filtering module in which the number 0.9 "turns" into 0.01, or even into 0, and the number 0.63 into 0.99, and so on. I assume this method is at the very least better than the standard one, and at best genuinely promising.

And it is these numbers that should be fed to the NS, which will then build its rules on input data "cleaned" of noise.
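A minimal sketch of such a filtering module in MQL5, assuming hand-picked thresholds (the breakpoints and replacement values below are illustrative, not something fixed in this post):

// Hypothetical filtering module: remaps an input so that its apparent
// "strength" can be inverted (0.9 becomes 0, 0.63 becomes 0.99).
double FilterInput(double in)
  {
   if(in > 0.85) return(0.0);    // strong-looking numbers such as 0.9: suppressed
   if(in > 0.60) return(0.99);   // mid-range numbers such as 0.63: boosted
   return(in);                   // everything else passes through unchanged
  }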

 
Ivan Butko #:


The essence of the theoretical problem comes down to the following:

If the input range is from 0 to 1, the NS will never find the grail when losing patterns from the upper range are buried among the numbers of the lower range, because it will have to "choke off" the upper range, and with static weights everything below it goes under the knife as well.

As a result, the adder receives a number composed of both losing data and working data, 50/50.

And if the input range is from -1 to 1 and the grail is somewhere in the middle, the same thing happens: the NS suppresses the extremes and the grail is washed out along with them.


But suppose you create a filtering module in which the number 0.9 "turns" into 0.01, or even into 0, and the number 0.63 into 0.99, and so on. I assume this method is at the very least better than the standard one, and at best genuinely promising.

And it is these numbers that should be fed to the NS, which will then build its rules on input data "cleaned" of noise.

Yeah... I think I see what you're talking about (correct me if I'm wrong). A neuron of a conventional MLP sums the products of its inputs and the corresponding weights, shifts the sum, and then transforms it nonlinearly (the shift is applied to the sum as a whole, regardless of which signals arrive). That is, there is no way to shift each individual input linearly on its own. If the task is to correct the network's input values, the MLP can be improved a little by adding an extra correction layer (without an activation function) between the input layer and the first hidden layer, whose only job is to linearly correct the values of the input layer. So one extra weight and one extra offset would be required per network input. In fact, only one such additional layer is needed, and after it everything proceeds as usual.

Interesting, and not difficult to implement.

Clearly, if it were known in advance how the input values should be transformed, you would just transform them up front (easy as sending two bytes), but when it is not known, such an additional layer makes sense.

IMLP - improved MLP
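A minimal sketch of that correction layer in MQL5 (the function and array names are mine, for illustration): every network input gets its own weight a[i] and offset b[i], applied linearly before the usual layers take over.

// Hypothetical correction layer: one extra weight a[i] and one extra
// offset b[i] per network input, applied linearly, with no activation.
void CorrectInputs(const double &in[], const double &a[],
                   const double &b[], double &out[])
  {
   int n = ArraySize(in);
   ArrayResize(out, n);
   for(int i = 0; i < n; i++)
      out[i] = a[i] * in[i] + b[i];   // per-input linear scale and shift
  }

The corrected out[] then goes to the first hidden layer as usual, and a[] and b[] are trained (or optimised) together with the rest of the weights.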
 
Ivan Butko #:


If the input range is from 0 to 1, the NS will never find the grail when losing patterns from the upper range are buried among the numbers of the lower range, because it will have to "choke off" the upper range, and with static weights everything below it goes under the knife as well.

As a result, the adder receives a number composed of both losing data and working data, 50/50.

Your problem is easily solved:
Divide each feature into the required number of ranges (3, 5, 10, 50...) and feed them in as separate features. Each range will then have its coefficients adjusted individually.
If your grail is hidden in the upper third, that individual feature will find it.
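A minimal sketch of that range splitting in MQL5 (uniform bins over [0, 1] and the function name are my assumptions): one feature becomes several separate inputs, and only the bin that contains the value is non-zero, so each range ends up with its own weights.

// Hypothetical range splitter: x in [0, 1] is spread across `bins`
// separate features; only the bin that contains x carries a value.
void SplitIntoRanges(double x, int bins, double &out[])
  {
   ArrayResize(out, bins);
   ArrayInitialize(out, 0.0);
   int idx = (int)MathFloor(x * bins);
   if(idx >= bins) idx = bins - 1;   // x == 1.0 falls into the last bin
   if(idx < 0)     idx = 0;
   out[idx] = x;                     // or 1.0 for a pure one-hot encoding
  }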

 
Andrey Dik #:

Yeah... I think I see what you're talking about (correct me if I'm wrong). A neuron of a conventional MLP sums the products of its inputs and the corresponding weights, shifts the sum, and then transforms it nonlinearly (the shift is applied to the sum as a whole, regardless of which signals arrive). That is, there is no way to shift each individual input linearly on its own. If the task is to correct the network's input values, the MLP can be improved a little by adding an extra correction layer (without an activation function) between the input layer and the first hidden layer, whose only job is to linearly correct the values of the input layer. So one extra weight and one extra offset would be required per network input. In fact, only one such additional layer is needed, and after it everything proceeds as usual.

Interesting, and not difficult to implement.

Clearly, if it were known in advance how the input values should be transformed, you would just transform them up front (easy as sending two bytes), but when it is not known, such an additional layer makes sense.

IMLP - improved MLP

Yes, this is one kind of filtering.

The easiest way, and also the most efficient, is simply to add an "if" condition:
if N1 < IN < N2, then IN = Filter[i];

I did it this way, with a loop:

double CalculateNeuron(double &in[], double &w[])
  {
   double NET = 0.0;

   for(int n = 0; n < ArraySize(in); n++)
     {
      NET += /*x[n] **/ W(in[n], w);   // each input contributes its range-selected weight
     }

   return(NET);
   // return(ActivateNeuron(NET));
  }

double W(double in, double &w[])       // w[] must hold at least 81 elements
  {
   double x = MathAbs(in);
   // double x = in;

   double r1 = 1.0;
   double r2 = 1.0;
   double z  = 0.0125;           // covers the whole range from 0 to 1
   // double z  = 0.00625;          // range from 0.5 to 1
   // double z  = 0.003125;         // range from 0.75 to 1
   // double z  = 0.0015625;        // range from 0.875 to 1
   // double z  = 0.00078125;       // range from 0.9375 to 1
   // double z  = 0.000390625;      // range from 0.96875 to 1
   // double z  = 0.0001953125;     // range from 0.984375 to 1
   r2 -= z;

   double res = 0.0;             // initialised so that x == 0 cannot return garbage

   int i = 0;

   if(x >= r1)
      res = w[i];                // x at the very top of the range

   for(i = 1; i < 80; i++)       // slide the window [r2, r1) down by z each step
     {
      if(x < r1 && x >= r2)
         res = w[i];
      r1 -= z;
      r2 -= z;
     }

   if(x < r1 && x > 0)
      res = w[i];                // anything left below the last window gets w[80]

   if(in < 0)
      res = -res;                // if the input was negative, multiply the corrected value by (-1) as well

   return(res);
  }
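For context, a call might look like this (the values are my illustration; note that with the loop above W() indexes w[] up to element 80, so the weight array needs at least 81 values from the optimiser):

double in[3] = {0.42, -0.17, 0.90};   // hypothetical normalised inputs
double w[81];                         // per-range weights filled in by the optimiser
// ... load w[] from the optimiser's parameters ...
double net = CalculateNeuron(in, w);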


If the inputs are fed all sorts of channel boundaries, from which corrections are highly probable, the range can be divided into two parts: one is ignored (zeroed out) and the other is corrected more intensively.
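A minimal sketch of that two-part split, assuming the midpoint 0.5 as the dividing line (the threshold and rescaling are illustrative, not from the post):

// Hypothetical two-part split: the lower half of the range is zeroed out
// (ignored); the upper half is kept and rescaled for further correction.
double FilterHalf(double in)
  {
   double x = MathAbs(in);
   if(x < 0.5)
      return(0.0);                    // lower part: nulled
   double res = (x - 0.5) * 2.0;      // upper part: stretched back to [0, 1]
   return((in < 0.0) ? -res : res);   // keep the sign, as W() does above
  }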

That said, this method is the fastest route to curve fitting (overfitting) there is. Though no, the Q-table comes first and this method second.

MLP, on the other hand, is a very... very specific tool for forex. I even think it is a destructive one.


Forester #:

Your problem is easily solved:
Divide each feature into the required number of ranges (3, 5, 10, 50...) and feed them in as separate features. Each range will then have its coefficients adjusted individually.
If your grail is hidden in the upper third, that individual feature will find it.

I think that's roughly how I did it
 
Maxim Kuznetsov #:

Any luck with NN and DL? Anybody here...

---

apart from Python charts and fitted tests :-) At least an "EA trades on demo and in profit".

---

or is there a feeling that this is a dead-end branch of evolution, and all the output of machine learning and neurons goes into advertising, spam and "mutual mailings".

I'd say the doubts are unfounded. Don't judge progress by the fake-grail Python crowd, and we'll see.

 
Ivan Butko #:

MLP, on the other hand, is a very... very specific tool for forex. I even think it is a destructive one.

Depends how you look at it. I would say, on the contrary, that it is the most adapted for DEM. Everything else is pure fitting to imaginary labels, which actually mean nothing and are a figment of the imagination of deterministic clustering approaches that have nothing to do with the live market.

 
The evolution of AI according to the forum. Rosenblatt perceptron -> Reshetov neuron -> Butko neuron 😄

Reshetov is ahead of the curve so far. He even wrote separate software for it :)