"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 78

 
Better:
What stage is the project at now? Who is the boss?

There are no bosses; you can be one yourself.

PS: I think tomorrow afternoon would be a better time to talk.

 
Better:
What stage is the project at now? Who is the boss?
Isn't Better the same Better who once won the championship with a neural network?
 
shyma:
Isn't Better the same Better who once won the championship with a neural network?
The very same. He had a PAMM account somewhere... look it up on Google.
 
R0MAN:
The very same. He had a PAMM account somewhere... look it up on Google.
Through his profile:-)
 
aharata:
Through his profile:-)
That's the first place I looked; I couldn't find it... :-) Wasn't awake yet... :-)
 

I decided to raise an old thread.

I propose considering a universal model of a neural network.

Opponents are invited to present a type of network that cannot be described by this model!!!

The code is a rough draft, so look at the essence of it.

The proposed implementation is easily adapted for both CPU and GPU. Delay operators are also provided for in the network.

There are three ways of transforming it:

  • leave it as is (all 4 arrays are two-dimensional), suitable for the GPU;
  • out becomes a one-dimensional array, but the two-dimensional bool mask is still used;
  • out becomes a one-dimensional array, and an index array built from the mask is used instead of the mask.

(We spoke earlier about binary masks. Such a mask uses zeros and ones to show whether a connection exists or not. Here a neuron is a horizontal array, and its connections to the other neurons are indicated along its row by the corresponding values of the binary mask. The weights, outputs, and temporary data are stored in the same cells of parallel arrays. Zx are delay operators with a delay of x.)
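A tiny, purely illustrative example (my own numbers, not taken from the figure; mask_row, weg_row and out_row are hypothetical names): one neuron as a row of 5 cells across the parallel arrays. mask marks which connections exist; the weight, the source output and the product sit in the same cell of the other arrays:

bool   mask_row[5] = { true, false, true,  true, false };
double weg_row[5]  = { 0.7,  0.0,   -1.2,  0.3,  0.0   };
double out_row[5]  = { 1.0,  0.5,    0.8,  1.4,  0.2   };
// temp_row[j] = mask_row[j]*weg_row[j]*out_row[j] = { 0.7, 0, -0.96, 0.42, 0 }
// the neuron's activation f(sum(temp_row)) is then written back into its out cells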

Network model (figure):

class CDmem
  {
public:
                     CDmem(void){};
                    ~CDmem(void){};
   double            m[];
  };
//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
class CBmem
  {
public:
                     CBmem(void){};
                    ~CBmem(void){};
   bool              m[];
  };
//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
class CProcessing
  {
public:
                     CProcessing(void){};
                    ~CProcessing(void){};
   virtual void      Processing(int i,CBmem &mask[],CDmem &weg[],CDmem &out[],CDmem &temp[],int I,int J)
     {
      // masked weighted sum: neuron i sees only the outputs its mask row allows
      for(int j=0;j<J;j++)
        {
         temp[i].m[j]=mask[i].m[j]*weg[i].m[j]*out[i].m[j];
        }
      double sum=0;
      for(int j=0;j<J;j++)
        {
         sum+=temp[i].m[j];
        }
      // activation (a sigmoid scaled to (0;2)), broadcast to all of the neuron's out cells
      double outt=2./(1.+exp(-sum));
      for(int j=0;j<J;j++)
         out[i].m[j]=outt;
     };
   virtual void      DelayOperator(int i,CDmem &out[])
     {
      // here we shift from the end toward the beginning, implementing the delay operator
     };
  };
//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
class Unet
  {
   int               cnt_prcss;
   CBmem             mask[];
   CDmem             weg[];
   CDmem             out[];
   CDmem             temp[];
   CProcessing      *prcss[];
   void              Init()
     {
      ArrayResize(mask,5);
      ArrayResize(weg,5);
      ArrayResize(out,5);
      ArrayResize(temp,5);
      for(int i=0;i<5;i++)
        {
         ArrayResize(mask[i].m,19);
         ArrayResize(weg[i].m,19);
         ArrayResize(out[i].m,19);
         ArrayResize(temp[i].m,19);
        }
     };
public:
                     Unet(void){cnt_prcss=0; Init();};
   // register a processing object for the next neuron slot
   void              InitProcessing(CProcessing *p)
     {
      ArrayResize(prcss,cnt_prcss+1);   // grow the pointer array before storing
      prcss[cnt_prcss]=p;
      cnt_prcss++;
     };
                    ~Unet(void)
     {
      for(int i=0;i<cnt_prcss;i++)
         delete prcss[i];
     };
   void              DownloadMask(){};
   void              DownloadWeg(){};
   void              Processing()
     {
      for(int i=0;i<cnt_prcss;i++)
         prcss[i].Processing(i,mask,weg,out,temp,5,19);
     };
   void              DelayOperator()
     {
      for(int i=0;i<cnt_prcss;i++)
         prcss[i].DelayOperator(i,out);
     };
  };
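The DelayOperator body above is left empty in the draft. A minimal sketch of one possible implementation (my guess at the intended behaviour, not the author's code; CDelayProcessing is a hypothetical name): iterate from the end of the row toward the beginning and shift each stored output by one cell, so a delayed cell exposes the value from the previous pass.

class CDelayProcessing : public CProcessing
  {
public:
   virtual void      DelayOperator(int i,CDmem &out[])
     {
      // walk from the end toward the beginning and shift each stored output by one cell
      int n=ArraySize(out[i].m);
      for(int j=n-1;j>0;j--)
         out[i].m[j]=out[i].m[j-1];
     };
  };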
 

That's not a model. Where are the neurons? Where are the connections? Where is the process? Where are the feedback connections?

And another question for you - why make a universal model for all networks?

I'd rather make a universal network for most tasks (heh).

And why are you jumping right into implementation? You don't have an architecture ready in the first place.

 
TheXpert:

That's not a model. Where are the neurons? Where are the connections? Where is the process? Where are the feedback connections?

And another question for you - why make a universal model for all networks?

I'd rather make a universal network for most tasks (heh).

The neurons are the horizontal arrays in the figure; the connections are encoded by the boolean mask.

Ask clarifying questions and I will explain whatever is unclear; you can't describe everything at once.

The architecture follows from the model, and it will look like this.

I'd rather make a universal network for most tasks (heh).

PPS: And I want to check whether this model suits all tasks or not. One head is good, but a chorus is better.

By changing the CProcessing class descendants, you can change the neuron types one by one.

You can additionally add an array of neuron types and specify its own type for each neuron (by selecting from the CProcessing descendants).
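For illustration, a rough sketch of such a descendant (my own example against the interface above; CTanhProcessing is a hypothetical name): only the activation changes, the masked weighted sum stays the same.

class CTanhProcessing : public CProcessing
  {
public:
   virtual void      Processing(int i,CBmem &mask[],CDmem &weg[],CDmem &out[],CDmem &temp[],int I,int J)
     {
      // same masked weighted sum as in the base model
      double sum=0;
      for(int j=0;j<J;j++)
        {
         temp[i].m[j]=mask[i].m[j]*weg[i].m[j]*out[i].m[j];
         sum+=temp[i].m[j];
        }
      // hyperbolic tangent activation instead of the scaled sigmoid
      double outt=(exp(sum)-exp(-sum))/(exp(sum)+exp(-sum));
      for(int j=0;j<J;j++)
         out[i].m[j]=outt;
     };
  };

Registering a different descendant for each neuron slot via InitProcessing then gives each neuron its own type.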

PPPS: The feedback links are indicated in the figure by Zx.

 
Urain:

Then why is your mask again tied to the outputs and not to the neurons? :)

And how do you want to cram the activation function into the GPU?

IMHO, just like last time, you are trying to cram in something that can't be crammed in. But that is just my opinion, so feel free to dismiss it.

I won't bother you anymore, unless it's business.

Ah, the cognitron. What else? The Hopfield network, where the input is the output. Then there are also sparse networks...

 
TheXpert:

Then why is your mask again tied to the outputs and not to the neurons? :)

And how do you want to cram the activation function into the GPU?

IMHO, just like last time, you are trying to cram in something that can't be crammed in. But that is just my opinion, so feel free to dismiss it.

I won't bother you anymore, unless it's business.

Ah, the cognitron. What else? The Hopfield network, where the input is the output. Then there are also sparse networks...

Answering from the bottom up,

the inputs are the outputs.

The GPU version is easy to implement, since all the processing runs in parallel.

Changing the activation or some other part of the processing can be described with dynamically generated strings; MetaDriver has experience with that.

The mask, weights, outputs, and temporary data are all linked in parallel, although they can be unlinked via an index array built from the mask.

In general, the mask defines the topology, and it is stored together with the weights.
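A rough sketch of that unlinking (my own illustration; CImem, BuildIndex and ProcessIndexed are hypothetical names): for each neuron we collect the indices of the cells where its mask row is true, and processing then runs only over the existing connections.

class CImem
  {
public:
                     CImem(void){};
                    ~CImem(void){};
   int               m[];
  };
// build per-neuron index arrays from the bool mask (variant 3: the mask is not needed at run time)
void BuildIndex(CBmem &mask[],CImem &idx[],int I,int J)
  {
   ArrayResize(idx,I);
   for(int i=0;i<I;i++)
     {
      int n=0;
      ArrayResize(idx[i].m,J);
      for(int j=0;j<J;j++)
         if(mask[i].m[j])
            idx[i].m[n++]=j;
      ArrayResize(idx[i].m,n);   // keep only the existing connections
     }
  }
// process one neuron over its existing connections only; out is one-dimensional here,
// and the neuron's weights are stored compactly in the same order as its index array
double ProcessIndexed(CImem &idx,CDmem &weg,double &out[])
  {
   double sum=0;
   for(int k=0;k<ArraySize(idx.m);k++)
      sum+=weg.m[k]*out[idx.m[k]];
   return(2./(1.+exp(-sum)));   // same activation as in the draft model
  }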


The project itself looks like this:

  • XML network storage standard (load/save), although I gravitate towards binary storage (not crucial; people want XML for clarity)
  • GUI for creating a new network (create/save)
  • Programmatic interface for creating a new network (create/save)
  • Model of the universal network (basic)
  • Classes extending the basic network model
  • Training shell (extensible).

The last two points are left to the open-source community.
