Neuron Net
Macros | Functions
Updating Weights Calculation kernel

Describes the process of optimizing weights for the Neuron Base. More...

Macros

#define def_k_UpdateWeightsMomentum   3
 Index of the SGD optimization update-weights kernel (UpdateWeightsMomentum). More...
 
#define def_k_uwm_matrix_w   0
 SGD Weights matrix of size (m+1)*n, where m is the number of neurons in the previous layer and n is the number of neurons in the current layer. More...
 
#define def_k_uwm_matrix_g   1
 SGD Tensor of gradients at the current layer. More...
 
#define def_k_uwm_matrix_i   2
 SGD Inputs tensor. More...
 
#define def_k_uwm_matrix_dw   3
 SGD Matrix of delta weights from the last correction. More...
 
#define def_k_uwm_inputs   4
 SGD Number of inputs. More...
 
#define def_k_uwm_learning_rates   5
 SGD Learning rate. More...
 
#define def_k_uwm_momentum   6
 SGD Momentum multiplier. More...
 
#define def_k_UpdateWeightsAdam   4
 Index of the Adam optimization update-weights kernel (UpdateWeightsAdam). More...
 
#define def_k_uwa_matrix_w   0
 Adam Weights matrix of size (m+1)*n, where m is the number of neurons in the previous layer and n is the number of neurons in the current layer. More...
 
#define def_k_uwa_matrix_g   1
 Adam Tensor of gradients at the current layer. More...
 
#define def_k_uwa_matrix_i   2
 Adam Inputs tensor. More...
 
#define def_k_uwa_matrix_m   3
 Adam Matrix of first momentum. More...
 
#define def_k_uwa_matrix_v   4
 Adam Matrix of second momentum. More...
 
#define def_k_uwa_inputs   5
 Adam Number of inputs. More...
 
#define def_k_uwa_l   6
 Adam Learning rate. More...
 
#define def_k_uwa_b1   7
 Adam First momentum multiplier. More...
 
#define def_k_uwa_b2   8
 Adam Second momentum multiplier. More...
 

Functions

__kernel void UpdateWeightsMomentum (__global double *matrix_w, __global double *matrix_g, __global double *matrix_i, __global double *matrix_dw, int inputs, double learning_rates, double momentum)
 Describes the process of SGD weight optimization for the Neuron Base (CNeuronBaseOCL). More...
 
__kernel void UpdateWeightsAdam (__global double *matrix_w, __global const double *matrix_g, __global const double *matrix_i, __global double *matrix_m, __global double *matrix_v, const int inputs, const double l, const double b1, const double b2)
 Describes the process of Adam weight optimization for the Neuron Base (CNeuronBaseOCL). More...
 
virtual bool CNeuronBaseOCL::updateInputWeights (CNeuronBaseOCL *NeuronOCL)
 

Detailed Description

Describes the process of optimizing weights for the Neuron Base.

A detailed description of the method is available at the link. For Adam optimization, see the link.
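
In the notation of the kernel parameters documented below, the update rules implemented by the two kernels are:

    % SGD with momentum (UpdateWeightsMomentum)
    \Delta w_{ij} = \eta \, g_i \, x_j + \mu \, \Delta w_{ij}^{prev}, \qquad
    w_{ij} \leftarrow w_{ij} + \Delta w_{ij}

    % Adam (UpdateWeightsAdam)
    m_t = \beta_1 m + (1-\beta_1)\,g, \qquad
    v_t = \beta_2 v + (1-\beta_2)\,g^2, \qquad
    w \leftarrow w + \frac{l\,m_t}{\sqrt{v_t}}

Here x_j is the j-th input (x_m = 1 for the bias weight), g = g_i x_j the per-weight gradient, \eta (learning_rates) and l the learning rates, \mu (momentum) the momentum multiplier, and \beta_1, \beta_2 (b1, b2) the Adam momentum multipliers. Note that the Adam kernel, as implemented below, applies no bias correction to m_t and v_t, and replaces \sqrt{v_t} with the divisor 10\,l whenever v_t is not positive.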

Macro Definition Documentation

◆ def_k_UpdateWeightsAdam

#define def_k_UpdateWeightsAdam   4

Index of the Adam optimization update-weights kernel (UpdateWeightsAdam).

Definition at line 129 of file NeuroNet.mqh.

◆ def_k_UpdateWeightsMomentum

#define def_k_UpdateWeightsMomentum   3

Index of the SGD optimization update-weights kernel (UpdateWeightsMomentum).

Definition at line 120 of file NeuroNet.mqh.

◆ def_k_uwa_b1

#define def_k_uwa_b1   7

Adam First momentum multiplier.

Definition at line 137 of file NeuroNet.mqh.

◆ def_k_uwa_b2

#define def_k_uwa_b2   8

Adam Second momentum multiplier.

Definition at line 138 of file NeuroNet.mqh.

◆ def_k_uwa_inputs

#define def_k_uwa_inputs   5

Adam Number of inputs.

Definition at line 135 of file NeuroNet.mqh.

◆ def_k_uwa_l

#define def_k_uwa_l   6

Adam Learning rate.

Definition at line 136 of file NeuroNet.mqh.

◆ def_k_uwa_matrix_g

#define def_k_uwa_matrix_g   1

Adam Tensor of gradients at the current layer.

Definition at line 131 of file NeuroNet.mqh.

◆ def_k_uwa_matrix_i

#define def_k_uwa_matrix_i   2

Adam Inputs tensor.

Definition at line 132 of file NeuroNet.mqh.

◆ def_k_uwa_matrix_m

#define def_k_uwa_matrix_m   3

Adam Matrix of first momentum.

Definition at line 133 of file NeuroNet.mqh.

◆ def_k_uwa_matrix_v

#define def_k_uwa_matrix_v   4

Adam Matrix of second momentum.

Definition at line 134 of file NeuroNet.mqh.

◆ def_k_uwa_matrix_w

#define def_k_uwa_matrix_w   0

Adam Weights matrix of size (m+1)*n, where m is the number of neurons in the previous layer and n is the number of neurons in the current layer.

Definition at line 130 of file NeuroNet.mqh.

◆ def_k_uwm_inputs

#define def_k_uwm_inputs   4

SGD Number of inputs.

Definition at line 125 of file NeuroNet.mqh.

◆ def_k_uwm_learning_rates

#define def_k_uwm_learning_rates   5

SGD Learning rate.

Definition at line 126 of file NeuroNet.mqh.

◆ def_k_uwm_matrix_dw

#define def_k_uwm_matrix_dw   3

SGD Matrix of delta weights from the last correction.

Definition at line 124 of file NeuroNet.mqh.

◆ def_k_uwm_matrix_g

#define def_k_uwm_matrix_g   1

SGD Tensor of gradients at the current layer.

Definition at line 122 of file NeuroNet.mqh.

◆ def_k_uwm_matrix_i

#define def_k_uwm_matrix_i   2

SGD Inputs tensor.

Definition at line 123 of file NeuroNet.mqh.

◆ def_k_uwm_matrix_w

#define def_k_uwm_matrix_w   0

SGD Weights matrix of size (m+1)*n, where m is the number of neurons in the previous layer and n is the number of neurons in the current layer.

Definition at line 121 of file NeuroNet.mqh.

◆ def_k_uwm_momentum

#define def_k_uwm_momentum   6

SGD Momentum multiplier.

Definition at line 127 of file NeuroNet.mqh.

Function Documentation

◆ updateInputWeights()

bool CNeuronBaseOCL::updateInputWeights ( CNeuronBaseOCL * NeuronOCL )
protected virtual

Method for updating weights. Calls one of the kernels UpdateWeightsMomentum() or UpdateWeightsAdam(), depending on the optimization type (ENUM_OPTIMIZATION).

Parameters
NeuronOCL  Pointer to the previous layer.

Reimplemented in CNeuronAttentionOCL, and CNeuronConvOCL.

Definition at line 3266 of file NeuroNet.mqh.
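
A minimal sketch of this dispatch on the MQL5 host side, using the native OpenCL API. The kernel handles k_momentum and k_adam (as returned by CLKernelCreate()), the enum values SGD and ADAM, and the sizes neurons and inputs are assumptions for illustration, not the class's real members:

// Hypothetical dispatch between the two update kernels; binding of all
// kernel arguments with the def_k_* indices is assumed to be done already.
bool UpdateInputWeightsSketch(int k_momentum, int k_adam,
                              ENUM_OPTIMIZATION ot, int neurons, int inputs)
  {
   uint offset[2] = {0, 0};
   uint work[2];
   work[0] = (uint)neurons;              // dimension 0: one work item per neuron
   if(ot == SGD)
     {
      work[1] = (uint)(inputs + 1);      // one work item per weight, including the bias
      return CLExecute(k_momentum, 2, offset, work);
     }
   work[1] = (uint)((inputs + 4) / 4);   // Adam: one work item per group of 4 weights
   return CLExecute(k_adam, 2, offset, work);
  }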

◆ UpdateWeightsAdam()

__kernel void UpdateWeightsAdam ( __global double *  matrix_w,
__global const double *  matrix_g,
__global const double *  matrix_i,
__global double *  matrix_m,
__global double *  matrix_v,
const int  inputs,
const double  l,
const double  b1,
const double  b2 
)

Describes the process of Adam weight optimization for the Neuron Base (CNeuronBaseOCL).

A detailed description is available at the link.

Parameters
[in,out]  matrix_w  Weights matrix (m+1)*n, where m is the number of neurons in the previous layer and n is the number of neurons in the current layer
[in]  matrix_g  Tensor of gradients at the current layer
[in]  matrix_i  Inputs tensor
[in,out]  matrix_m  Matrix of first momentum
[in,out]  matrix_v  Matrix of second momentum
  inputs  Number of inputs
  l  Learning rate
  b1  First momentum multiplier
  b2  Second momentum multiplier

Definition at line 189 of file NeuroNet.cl.

  {
   const int i=get_global_id(0);                     // neuron index in the current layer
   const int j=get_global_id(1);                     // index of a 4-weight group in the neuron's row
   const int wi=i*(inputs+1)+j*4;                    // offset of the group in the weights matrix
   double4 m, v, weight, inp;
   // Vectorized load of up to four weights and momenta; the tail of the row
   // and the bias input (constant 1) are padded into the double4 lanes.
   switch(inputs+1-j*4)
     {
      case 0:
         inp=(double4)(1,0,0,0);
         weight=(double4)(matrix_w[wi],0,0,0);
         m=(double4)(matrix_m[wi],0,0,0);
         v=(double4)(matrix_v[wi],0,0,0);
         break;
      case 1:
         inp=(double4)(matrix_i[j],1,0,0);
         weight=(double4)(matrix_w[wi],matrix_w[wi+1],0,0);
         m=(double4)(matrix_m[wi],matrix_m[wi+1],0,0);
         v=(double4)(matrix_v[wi],matrix_v[wi+1],0,0);
         break;
      case 2:
         inp=(double4)(matrix_i[j],matrix_i[j+1],1,0);
         weight=(double4)(matrix_w[wi],matrix_w[wi+1],matrix_w[wi+2],0);
         m=(double4)(matrix_m[wi],matrix_m[wi+1],matrix_m[wi+2],0);
         v=(double4)(matrix_v[wi],matrix_v[wi+1],matrix_v[wi+2],0);
         break;
      case 3:
         inp=(double4)(matrix_i[j],matrix_i[j+1],matrix_i[j+2],1);
         weight=(double4)(matrix_w[wi],matrix_w[wi+1],matrix_w[wi+2],matrix_w[wi+3]);
         m=(double4)(matrix_m[wi],matrix_m[wi+1],matrix_m[wi+2],matrix_m[wi+3]);
         v=(double4)(matrix_v[wi],matrix_v[wi+1],matrix_v[wi+2],matrix_v[wi+3]);
         break;
      default:
         inp=(double4)(matrix_i[j],matrix_i[j+1],matrix_i[j+2],matrix_i[j+3]);
         weight=(double4)(matrix_w[wi],matrix_w[wi+1],matrix_w[wi+2],matrix_w[wi+3]);
         m=(double4)(matrix_m[wi],matrix_m[wi+1],matrix_m[wi+2],matrix_m[wi+3]);
         v=(double4)(matrix_v[wi],matrix_v[wi+1],matrix_v[wi+2],matrix_v[wi+3]);
         break;
     }
   double4 g=(double4)(matrix_g[i])*inp;             // per-weight gradient g_i*x_j
   double4 mt=b1*m+(1-b1)*g;                         // first momentum estimate
   double4 vt=b2*v+(1-b2)*pow(g,2);                  // second momentum estimate
   double4 delta=l*mt/(vt>0 ? sqrt(vt) : l*10);      // step; fallback divisor when vt is not positive
   // Vectorized write-back of the updated weights and momenta; the case
   // fallthrough below is intentional.
   switch(inputs+1-j*4)
     {
      case 2:
         matrix_w[wi+2]+=delta.s2;
         matrix_m[wi+2]=mt.s2;
         matrix_v[wi+2]=vt.s2;
      case 1:
         matrix_w[wi+1]+=delta.s1;
         matrix_m[wi+1]=mt.s1;
         matrix_v[wi+1]=vt.s1;
      case 0:
         matrix_w[wi]+=delta.s0;
         matrix_m[wi]=mt.s0;
         matrix_v[wi]=vt.s0;
         break;
      default:
         matrix_w[wi]+=delta.s0;
         matrix_m[wi]=mt.s0;
         matrix_v[wi]=vt.s0;
         matrix_w[wi+1]+=delta.s1;
         matrix_m[wi+1]=mt.s1;
         matrix_v[wi+1]=vt.s1;
         matrix_w[wi+2]+=delta.s2;
         matrix_m[wi+2]=mt.s2;
         matrix_v[wi+2]=vt.s2;
         matrix_w[wi+3]+=delta.s3;
         matrix_m[wi+3]=mt.s3;
         matrix_v[wi+3]=vt.s3;
         break;
     }
  };
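
Each work item in dimension 1 processes a group of four weights, which implies a global work size of (inputs+4)/4 in that dimension. A minimal host-side MQL5 sketch of binding the arguments by their def_k_uwa_* indices and enqueuing the kernel; the handles k_adam, w_buf, g_buf, i_buf, m_buf, v_buf and the scalar values are illustrative assumptions:

// Hedged MQL5 sketch using the native OpenCL API.
CLSetKernelArgMem(k_adam, def_k_uwa_matrix_w, w_buf);   // weights
CLSetKernelArgMem(k_adam, def_k_uwa_matrix_g, g_buf);   // gradients
CLSetKernelArgMem(k_adam, def_k_uwa_matrix_i, i_buf);   // inputs
CLSetKernelArgMem(k_adam, def_k_uwa_matrix_m, m_buf);   // first momentum
CLSetKernelArgMem(k_adam, def_k_uwa_matrix_v, v_buf);   // second momentum
CLSetKernelArg(k_adam, def_k_uwa_inputs, inputs);
CLSetKernelArg(k_adam, def_k_uwa_l, 0.001);             // learning rate (example value)
CLSetKernelArg(k_adam, def_k_uwa_b1, 0.9);              // first momentum multiplier
CLSetKernelArg(k_adam, def_k_uwa_b2, 0.999);            // second momentum multiplier
uint offset[2] = {0, 0};
uint work[2];
work[0] = (uint)neurons;                                // one work item per neuron
work[1] = (uint)((inputs + 4) / 4);                     // groups of 4 weights per row
if(!CLExecute(k_adam, 2, offset, work))
   Print("UpdateWeightsAdam failed: ", GetLastError());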

◆ UpdateWeightsMomentum()

__kernel void UpdateWeightsMomentum ( __global double *  matrix_w,
__global double *  matrix_g,
__global double *  matrix_i,
__global double *  matrix_dw,
int  inputs,
double  learning_rates,
double  momentum 
)

Describes the process of SGD weight optimization for the Neuron Base (CNeuronBaseOCL).

A detailed description is available at the link.

Parameters
[in,out]  matrix_w  Weights matrix (m+1)*n, where m is the number of neurons in the previous layer and n is the number of neurons in the current layer
[in]  matrix_g  Tensor of gradients at the current layer
[in]  matrix_i  Inputs tensor
[in,out]  matrix_dw  Matrix of delta weights from the last correction
  inputs  Number of inputs
  learning_rates  Learning rate
  momentum  Momentum multiplier

Definition at line 168 of file NeuroNet.cl.

  {
   int i=get_global_id(0);                           // neuron index in the current layer
   int j=get_global_id(1);                           // weight index within the row; j==inputs is the bias
   int wi=i*(inputs+1)+j;                            // offset in the weights matrix
   // New delta: scaled gradient plus the momentum of the previous delta.
   double delta=learning_rates*matrix_g[i]*(j<inputs ? matrix_i[j] : 1) + momentum*matrix_dw[wi];
   matrix_dw[wi]=delta;                              // store the delta for the next correction
   matrix_w[wi]+=delta;                              // apply the update
  };
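
Here each work item updates a single weight, so dimension 1 is launched with inputs+1 items (the extra item handles the bias weight). The host-side binding mirrors the Adam sketch above; k_momentum, the buffer handles and the scalar values are again illustrative assumptions:

// Hedged MQL5 sketch using the native OpenCL API.
CLSetKernelArgMem(k_momentum, def_k_uwm_matrix_w, w_buf);    // weights
CLSetKernelArgMem(k_momentum, def_k_uwm_matrix_g, g_buf);    // gradients
CLSetKernelArgMem(k_momentum, def_k_uwm_matrix_i, i_buf);    // inputs
CLSetKernelArgMem(k_momentum, def_k_uwm_matrix_dw, dw_buf);  // previous deltas
CLSetKernelArg(k_momentum, def_k_uwm_inputs, inputs);
CLSetKernelArg(k_momentum, def_k_uwm_learning_rates, 0.01);  // learning rate (example value)
CLSetKernelArg(k_momentum, def_k_uwm_momentum, 0.9);         // momentum multiplier
uint offset[2] = {0, 0};
uint work[2];
work[0] = (uint)neurons;                                     // one work item per neuron
work[1] = (uint)(inputs + 1);                                // one work item per weight, including the bias
CLExecute(k_momentum, 2, offset, work);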