"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 38

 
gpwr:
Filters are trained without a teacher by presenting 10,000-40,000 randomly chosen sections of history (the same number of iterations). Learning is very fast. Depending on the computer, training 10,000 filters on 20,000 sections of history takes 1-2 minutes on a GPU with 360 CUDA cores, about 1 hour on 4 Intel processors with 16 threads, and 3-4 hours on my laptop with one processor and two threads. Time doesn't really matter here, though. Even if it took me a day or two to train the filters, it is done only once per symbol (EURUSD, USDJPY, etc.). Once the filters are trained, they do not change and are used to filter new prices. The filtering itself is very fast: we compute the sum of products of the prices and the filter coefficients.
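A minimal sketch of that filtering step, assuming the prices and the trained coefficients sit in plain arrays (the function and parameter names are illustrative, not from the project):

//+------------------------------------------------------------------+
//| Apply one trained filter to a price window starting at "shift":  |
//| a plain FIR filter, out = sum of price[i]*coef[i].               |
//+------------------------------------------------------------------+
double ApplyFilter(const double &price[],const double &coef[],int shift)
  {
   double out=0.0;
   int n=ArraySize(coef);
   for(int i=0;i<n;i++)
      out+=price[shift+i]*coef[i];   // sum of products of price and coefficients
   return(out);
  }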

As far as I remember, each section has to be passed through several times before the network can be considered trained,

the question is: how many times does each example have to be presented?

 
Urain:

As far as I remember, each section has to be passed through several times before the network can be considered trained,

the question is: how many times does each example have to be presented?

You don't have to pass the same section of history through multiple times, and you don't even have to cover every part of the history at least once; some parts can be omitted. The filters essentially gather statistics of the quotes from random samples. The randomness of the sampling is the key here: if the whole history is scanned sequentially, the filters will drift toward the statistics at the beginning of the history.
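A sketch of how such a random section could be drawn, assuming the history is indexed as a plain array; MathRand() only returns 0..32767, so two calls are combined to cover a long history:

//+------------------------------------------------------------------+
//| Pick a uniformly random start index for a training window so     |
//| samples cover the whole history instead of its beginning.        |
//+------------------------------------------------------------------+
int RandomSectionStart(int history_len,int window)
  {
   int r=(MathRand()<<15)|MathRand();   // widen MathRand()'s 15-bit range
   return(r%(history_len-window+1));    // any valid window start, uniformly
  }

MathSrand() would be called once at start-up to seed the generator.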
 
gpwr:
You don't have to pass the same section of history through multiple times, and you don't even have to cover every part of the history at least once; some parts can be omitted. The filters essentially gather statistics of the quotes from random samples.
No, you're talking about the filters. I warned you that my question was an aside; I'm asking about NN training algorithms.
 
Urain:
No, you're talking about the filters. I warned you that my question was an aside; I'm asking about NN training algorithms.
Then I really don't understand. The network I am proposing consists of two modules: a module that transforms the data through several layers of filters, and a classification module. The filters in the first module are trained without a teacher on the whole history once and are stored for all subsequent uses of the network. Once the filters have been trained, we train the second module with a teacher, i.e. a price pattern is fed to the input of the first module and the known signals Buy (+1), Sell (-1), Hold (0) are presented at the output of the second module. The second module can be any neural network we know, e.g. a Feed-Forward Network (FFN or MLP), a Support Vector Machine (SVM), or a Radial Basis Function (RBF) network. Training this module takes as long as it would without the first filtering module. As I explained before, in my opinion the second module is not as important as the first one. Quotes should be correctly transformed (filtered) before they are fed to the network. The simplest filtering method is an MA; other indicators can be applied as well, which is what most neural network practitioners already do. I suggest a special "indicator" consisting of several layers of filters, like biological filters, that transforms quotes so that identical but distorted patterns are described by the same code at the output of this "indicator" (the first module of my network). These codes can then be classified in the second module by well-known methods.
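The two-module structure could be sketched like this; a single frozen filter layer and a trivial linear read-out stand in here for the real filter bank and the MLP/SVM/RBF classifier (everything below is a simplified assumption, not the project's code):

//+------------------------------------------------------------------+
//| Module 1: a frozen filter layer maps a price pattern to a code.  |
//| w[] holds ncode filters of ArraySize(pattern) coefficients each. |
//+------------------------------------------------------------------+
void TransformByFilters(const double &pattern[],const double &w[],
                        int ncode,double &code[])
  {
   int n=ArraySize(pattern);
   ArrayResize(code,ncode);
   for(int j=0;j<ncode;j++)
     {
      double s=0.0;
      for(int i=0;i<n;i++)
         s+=w[j*n+i]*pattern[i];        // filter j = dot product with pattern
      code[j]=(2./(1.+exp(-s)))-1.;     // squash with a shifted sigmoid
     }
  }
//+------------------------------------------------------------------+
//| Module 2: classify the code into Buy(+1), Sell(-1) or Hold(0).   |
//+------------------------------------------------------------------+
int ClassifyCode(const double &code[],const double &v[],double threshold)
  {
   double s=0.0;
   for(int i=0;i<ArraySize(code);i++)
      s+=v[i]*code[i];                  // linear read-out
   if(s>threshold)  return(1);          // Buy
   if(s<-threshold) return(-1);         // Sell
   return(0);                           // Hold
  }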
 
Нейрокомпьютерные системы | Учебный курс | НОУ ИНТУИТ
  • www.intuit.ru
The fundamentals of building neurocomputers are presented: a detailed overview and description of the most important methods for training neural networks of various structures, the problems these networks solve, and questions of implementing neural networks.
Files:
Books.zip  14709 kb
 

gpwr:

Then I really don't understand. The network I am proposing consists of two modules: a module that transforms the data through several layers of filters, and a classification module. [...]

If I understand correctly, the filters themselves and their training can be assigned to the preprocessing module.
 
Urain:
If I understand correctly, the filters themselves and their training can be assigned to the preprocessing module.
Yes, the first module is self-trained once, for life.
 
Urain:
Alex, how did you manage to attach 14 MB? Moderator privilege, or has the limit been raised?
It says up to 16 MB. The limit was probably raised.
 
Hooray, lecture 15 covers the fuzzy logic networks I mentioned at the beginning.
 

u is the activator input,

y is an additional gain (steepness) factor.

//+------------------------------------------------------------------+
double sigma0(double u,double y=1.)// [0;1] logistic sigmoid
  {
   return(1./(1.+exp(-y*u)));
  }
//+------------------------------------------------------------------+
double sigma1(double u,double y=1.)// [-1;1] shifted sigmoid
  {
   return((2./(1.+exp(-y*u)))-1.);
  }
//+------------------------------------------------------------------+
double func1(double u,double y=1.)// [-1;1] algebraic sigmoid
  {
   return(y*u/sqrt(1.+y*y*u*u));
  }
//+------------------------------------------------------------------+
double func2(double u,double y=1.)// [-1;1] sine activator (monotone only for |y*u|<=1)
  {
   return(sin(M_PI_2*y*u));
  }
//+------------------------------------------------------------------+
double func3(double u,double y=1.)// [-1;1] arctangent activator
  {
   return(2./M_PI*atan(y*u));
  }
//+------------------------------------------------------------------+
double func4(double u,double y=1.)// cubic Taylor approximation of tanh; stays in [-1;1] only for small |y*u|
  {
   return(y*u-y*y*y/3.*u*u*u);
  }
//+------------------------------------------------------------------+
double line0(double u,double y=1.)// [0;1] linear activator (clamped)
  {
   double v=y*u;
   return((v>1.?1.:v<0.?0.:v));
  }
//+------------------------------------------------------------------+
double line1(double u,double y=1.)// [-1;1] linear activator (clamped)
  {
   double v=y*u;
   return((v>1.?1.:v<-1.?-1.:v));
  }
//+------------------------------------------------------------------+
double Mline(double v)// [-DBL_MAX;DBL_MAX] pass-through clamp
  {
   // DBL_MIN is the smallest positive double, so the lower bound
   // must be -DBL_MAX rather than DBL_MIN
   return((v>DBL_MAX?DBL_MAX:v<-DBL_MAX?-DBL_MAX:v));
  }
//+------------------------------------------------------------------+
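A quick harness for these activators, assuming they live in the same script (OnStart() is only for the demonstration); a larger gain y makes each activator steeper:

void OnStart()
  {
   for(double u=-2.0;u<=2.0;u+=1.0)
      PrintFormat("u=%5.2f  sigma0=%6.3f  sigma0(y=5)=%6.3f  sigma1=%6.3f  func3=%6.3f",
                  u,sigma0(u),sigma0(u,5.0),sigma1(u),func3(u));
  }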