Discussion of article "Neural Networks: From Theory to Practice" - page 6

 
fyords: Of course, I am not against discussing the subtleties of neural networks in this thread, but the article was originally planned for beginners. It omits some details precisely because those details can confuse a novice. The article also does not cover the various methods of training (fitting) neural networks, but that is not necessary at the initial stage. Once you realise that neural networks are not that difficult, you no longer have an excuse to turn away and say "this is too hard, not for me". If you already know more, that's great; then the article is probably not for you.

So far, it is precisely the absence of the details I asked about above that has left me, an ordinary beginner, stumped. I reread the article three times, but found the answers I needed only on the forum.

fyords: At the moment, the second part will cover work with multilayer neural networks. If you have any wishes about its content, please write them briefly.
Only one question interests me: how to create a self-learning program that can do without an "external" optimizer. If such a thing is possible at this stage, of course.
 
fyords:

After some deliberation, it was decided to write the second part of the article.

At the moment, the second part will cover work with multilayer neural networks.
If you have any wishes about its content, please write them briefly.
The ideas that I can explain in simple terms will be described in the article.

Thank you.

I would like to see the gradient learning method explained in simple terms.
 
Yedelkin:

I "naively assume" that among native Russian speakers it is not customary to call a process of independent learning "parameter fitting", just as it is not customary to call the selection of parameters for a system (by external processes) "learning".

However you label fitting, it will not cease to be fitting.

Optimisation, fitting and learning are synonyms for neural networks dealing with non-stationary data, because all three terms mean the same thing: selecting weighting coefficients on past historical data (the training sample) so as to minimise the error at the network's output. If it were possible to feed the network future data, that would be a different matter. But office-equipment shops do not sell time machines yet, so we have to fit to the past.

 
Reshetov:

However you label fitting, it will not cease to be fitting.

Overfitting is detected in an elementary way, so there is no point blaming the mirror.
 
Yedelkin:

Only one question interests me: how to create a self-learning program that can do without an "external" optimiser. If such a thing is possible at this stage, of course.

It's simple. The EA code can contain both the network itself and its weight optimiser, which is launched automatically when new data arrives. In most cases, "neural network" means exactly such self-learning networks. Networks trained externally, for example by the tester's optimiser, are toys.

 
Guys, help me out! Did I understand correctly that normalisation of the input data should be done over the network's whole training period? That is, the maximum and minimum values of xi should be taken from the whole period?
 
net:
Guys, help me out! Did I understand correctly that normalisation of the input data should be done over the network's whole training period? That is, the maximum and minimum values of xi should be taken from the whole period?
Yes, over the whole training sample.
 
//+------------------------------------------------------------------+
//|                                            macd-neuro-example.mq5 |
//|                         Copyright 2012, MetaQuotes Software Corp. |
//|                                               http://www.mql5.com |
//+------------------------------------------------------------------+
#property copyright "Copyright 2012, MetaQuotes Software Corp."
#property link      "http://www.mql5.com"
#property version   "1.00"

#define SIZE 1000

#include <Trade\Trade.mqh>           // library for performing trade operations
#include <Trade\PositionInfo.mqh>    // library for obtaining position information
#include <Indicators\TimeSeries.mqh> // CiHigh/CiLow timeseries classes

//--- values of weighting coefficients 
input double w0=0.5;
input double w1=0.5;
input double w2=0.5;
input double w3=0.5;
input double w4=0.5;
input double w5=0.5;
input double w6=0.5;
input double w7=0.5;
input double w8=0.5;
input double w9=0.5;
input double w10=0.5;
input double w11=0.5;
input double w12=0.5;

string            my_symbol;         // symbol the EA works on
double            inputsH[13];       // input signals for the upper-channel neuron
double            inputsL[13];       // input signals for the lower-channel neuron
int               periods[13]={2,3,5,8,13,21,34,55,89,144,233,377,610}; // Fibonacci lookback periods
int               B,Ba;              // current and previously seen bar count (new-bar detection)
double            weight[13];        // array for storing weight coefficients
double            High[SIZE],Low[SIZE]; // buffers for high and low prices

CTrade            m_Trade;           // object for performing trade operations
CPositionInfo     m_Position;        // object for position information
//+------------------------------------------------------------------+
//| Expert initialisation function                                   |
//+------------------------------------------------------------------+
int OnInit()
  {

   weight[0]=w0;
   weight[1]=w1;
   weight[2]=w2;
   weight[3]=w3;
   weight[4]=w4;
   weight[5]=w5;
   weight[6]=w6;
   weight[7]=w7;
   weight[8]=w8;
   weight[9]=w9;
   weight[10]=w10;
   weight[11]=w11;
   weight[12]=w12;

   my_symbol=Symbol();
   B=Bars(my_symbol,0);

//--- return 0, initialisation complete
   return(0);
  }
//+------------------------------------------------------------------+
//| Expert deinitialisation function|
//+------------------------------------------------------------------+
void OnDeinit(const int reason)
  {  }
//+------------------------------------------------------------------+
//| Expert tick function|
//+------------------------------------------------------------------+
void OnTick()
  {

   double Ask = NormalizeDouble(SymbolInfoDouble(_Symbol,SYMBOL_ASK),_Digits); // current ask price
   double Bid = NormalizeDouble(SymbolInfoDouble(_Symbol,SYMBOL_BID),_Digits); // current bid price

   B=Bars(my_symbol,0);
   if(B!=Ba)
     {
      CiHigh hi;
      CiLow li;
      hi.Create(_Symbol,_Period);
      li.Create(_Symbol,_Period);
      hi.GetData(0,SIZE,High);
      li.GetData(0,SIZE,Low);
     }
   Ba=B;

     {
      for(int i=0; i<13; i++)
        {
         int HB = ArrayMaximum(High,SIZE-periods[i],periods[i]);
         int LB = ArrayMinimum(Low, SIZE-periods[i],periods[i]);
         if(Bid>=High[HB]) inputsH[i] =1;
         else inputsH[i]=0;
         if(Bid<=Low[LB]) inputsL[i]=1;
         else inputsL[i]=0;
        }

      double outH=CalculateNeuron(inputsH,weight);
      double outL=CalculateNeuron(inputsL,weight);

      //--- if the lower-channel neuron output is greater than 0
      if(outL>0)
        {
         //--- if there is already a position for this symbol
         if(m_Position.Select(my_symbol))
           {
            //--- and the type of this position is Sell, then close it
            if(m_Position.PositionType()==POSITION_TYPE_SELL) m_Trade.PositionClose(my_symbol);
            //--- and if the type of this position is Buy, then exit.
            if(m_Position.PositionType()==POSITION_TYPE_BUY) return;
           }
         //--- if you got here, there is no position, open it.
         m_Trade.Buy(0.1,my_symbol);
        }
      //--- if the upper-channel neuron output is greater than 0
      if(outH>0)
        {
         //--- if there is already a position for this symbol
         if(m_Position.Select(my_symbol))
           {
            //--- and the type of this position is Buy, then close it
            if(m_Position.PositionType()==POSITION_TYPE_BUY) m_Trade.PositionClose(my_symbol);
            //--- and if the type of this position is Sell, then exit.
            if(m_Position.PositionType()==POSITION_TYPE_SELL) return;
           }
         //--- if you got here, there is no position, open it.
         m_Trade.Sell(0.1,my_symbol);
        }

      if(outH>0.0 || outL>0) Print(outH,"    ",outL);
     }

  }
//+------------------------------------------------------------------+
//| Neuron calculation function                                      |
//+------------------------------------------------------------------+
double CalculateNeuron(double &inputs[],double &w[])
  {
//--- variable for storing the weighted sum of input signals
   double NET=0.0;
//--- loop over the inputs and accumulate the weighted sum
   for(int n=0;n<ArraySize(inputs);n++)
     {
      NET+=inputs[n]*w[n];
     }
//--- pass the weighted sum through the activation function
   return(ActivateNeuron(NET));
  }
//+------------------------------------------------------------------+
//| Neuron activation function                                       |
//+------------------------------------------------------------------+
double ActivateNeuron(double x)
  {
//--- variable for storing the result of the activation function
   double Out;
//--- hyperbolic tangent function
   Out=(exp(x)-exp(-x))/(exp(x)+exp(-x));
//--- return the value of the activation function
   return(Out);
  }
//+------------------------------------------------------------------+

I wrote this EA. Can it be called a neural network? I have my doubts.

It is an EA for trading in a channel.

The algorithm is as follows: extremes are taken over Fibonacci numbers of bars (2, 3, 5, 8, 13, ...). For each buy neuron, for example: if the price is below or equal to the LOW extreme for a given period, return 1, otherwise 0. The rest is as in the NeuronMACD example. Selling is the mirror opposite.

I am waiting for criticism of the code and algorithm.

 
dimeon:

I wrote this EA. Can it be called a neural network? I have my doubts.

It is an EA for trading in a channel.

The algorithm is as follows: extremes are taken over Fibonacci numbers of bars (2, 3, 5, 8, 13, ...). For each buy neuron, for example: if the price is below or equal to the LOW extreme for a given period, return 1, otherwise 0. The rest is as in the NeuronMACD example. Selling is the mirror opposite.

I am waiting for criticism of the code and algorithm.

In your case, the neuron activation function can be thrown out; it is an unnecessary brake.
return(NET);
It's not a neural network, it's just a perceptron. You need at least a couple of perceptrons for a network.
 
her.human:
In your case, the neuron activation function can be thrown out; it is an unnecessary brake.
return(NET);
It's not a neural network, it's just a perceptron. You need at least a couple of perceptrons for a network.
Can you give an example, say adding an MA or another indicator, and show how to join it all into a network?