Neuron Net
NeuroNet.mqh: Library for creating a neural network for use in MQL5 Expert Advisors.
#include <Arrays\ArrayDouble.mqh>
#include <Arrays\ArrayInt.mqh>
#include <Arrays\ArrayObj.mqh>
#include <OpenCL\OpenCL.mqh>
Classes

class CConnection
    Class of a connection to another neuron.
class CArrayCon
    Array of connections to another neuron.
class CNeuronBase
    Base class of a neuron.
class CNeuron
    Class of a neuron for fully connected layers.
class COpenCLMy
    Class for working with OpenCL.
class CLayer
    Collection of neurons in one layer of the neural network.
class CArrayLayer
    Collection of layers in the neural network.
class CNeuronProof
    Class of a pooling layer.
class CNeuronConv
    Class of a convolution layer.
class CLayerDescription
    Class of a layer description. Used to describe the structure of a neural network from the main program; a usage sketch follows this list.
class CNet
    The main class of the neural network. Contains the basic methods for the functioning of a neural network.
class CNeuronLSTM
    Class of a recurrent LSTM unit.
class CBufferDouble
    Class of an OpenCL data buffer. Used to transfer data from CPU to GPU and back.
class CNeuronBaseOCL
    Base class of a neuron for GPU calculation.
class CNeuronProofOCL
    Class of a pooling layer for GPU calculation.
class CNeuronConvOCL
    Class of a convolution layer for GPU calculation.
class CNeuronAttentionOCL
    Class of a Self-Attention layer for GPU calculation.
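CLayerDescription and CNet are the entry points for the main program: a list of layer descriptions is built and handed to the network. The sketch below illustrates that flow only. The CLayerDescription field names (type, count, activation, optimization) and the CNet constructor taking a CArrayObj of descriptions are assumptions, not confirmed by this reference.

//+------------------------------------------------------------------+
//| Sketch: describing a small fully connected network from the      |
//| main program. Field names and the CNet constructor signature     |
//| are assumptions; the include path may differ in your project.    |
//+------------------------------------------------------------------+
#include <NeuroNet.mqh>

CNet *BuildNet(void)
  {
   CArrayObj *layers=new CArrayObj();
//--- input layer: fully connected neurons, no activation
   CLayerDescription *desc=new CLayerDescription();
   desc.type=defNeuron;              // fully connected neuron (see macros below)
   desc.count=100;                   // number of neurons in the layer
   desc.activation=None;
   desc.optimization=ADAM;
   layers.Add(desc);
//--- hidden layer
   desc=new CLayerDescription();
   desc.type=defNeuron;
   desc.count=50;
   desc.activation=TANH;
   desc.optimization=ADAM;
   layers.Add(desc);
//--- output layer
   desc=new CLayerDescription();
   desc.type=defNeuron;
   desc.count=1;
   desc.activation=SIGMOID;
   desc.optimization=ADAM;
   layers.Add(desc);
//--- the CNet constructor is assumed to accept the array of descriptions
   return new CNet(layers);
  }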
Macros

#define lr 0.001
    Learning rate.
#define momentum 0.5
    Momentum for SGD optimization.
#define b1 0.99
    First momentum multiplier of Adam optimization.
#define b2 0.9999
    Second momentum multiplier of Adam optimization; the corresponding update rule is sketched after this group.
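lr, b1 and b2 are the hyperparameters of the weight-update kernels defined further below. For reference, the standard Adam step they correspond to is sketched here; whether this implementation applies bias correction, and the value of the stabilizing constant epsilon, are not visible from this listing.

$$m_t = b_1\,m_{t-1} + (1-b_1)\,g_t,\qquad v_t = b_2\,v_{t-1} + (1-b_2)\,g_t^2,\qquad w_t = w_{t-1} - \frac{lr\cdot m_t}{\sqrt{v_t}+\epsilon}$$

Here g_t is the gradient of the loss with respect to the weight, m_t and v_t are the first and second momentum accumulators stored per weight.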
#define defArrayConnects 0x7782
    Array of connections.
#define defLayer 0x7787
    Layer of neurons.
#define defArrayLayer 0x7788
    Array of layers.
#define defNet 0x7790
    Neural network.
#define defConnect 0x7781
    Connection.
#define defNeuronBase 0x7783
    Neuron base type.
#define defNeuron 0x7784
    Fully connected neuron.
#define defNeuronConv 0x7785
    Convolution neuron.
#define defNeuronProof 0x7786
    Pooling (Proof) neuron.
#define defNeuronLSTM 0x7791
    LSTM neuron.
#define defBufferDouble 0x7882
    OpenCL data buffer.
#define defNeuronBaseOCL 0x7883
    Base neuron, OpenCL.
#define defNeuronConvOCL 0x7885
    Convolution neuron, OpenCL.
#define defNeuronProofOCL 0x7886
    Pooling (Proof) neuron, OpenCL.
#define defNeuronAttentionOCL 0x7887
    Attention neuron, OpenCL.

#define def_k_FeedForward 0
    Index of the feed-forward kernel (FeedForward); a sketch of binding its arguments follows this group.
#define def_k_ff_matrix_w 0
    Weights matrix of size (m+1)*n, where m is the number of neurons in the layer and n is the number of outputs (neurons in the next layer).
#define def_k_ff_matrix_i 1
    Input tensor.
#define def_k_ff_matrix_o 2
    Output tensor.
#define def_k_ff_inputs 3
    Number of inputs.
#define def_k_ff_activation 4
    Activation type (ENUM_ACTIVATION).
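The def_k_FeedForward constant is the kernel index and the def_k_ff_* constants are its argument indices. The sketch below shows how they would typically be bound with the Standard Library COpenCL methods, which COpenCLMy is assumed to expose; the buffer handles and the work size are placeholders.

//+------------------------------------------------------------------+
//| Sketch: binding the FeedForward kernel arguments by the indices  |
//| above. COpenCLMy is assumed to inherit SetArgumentBuffer,        |
//| SetArgument and Execute from COpenCL (OpenCL.mqh).               |
//+------------------------------------------------------------------+
bool FeedForwardCall(COpenCLMy *ocl,int weights_buf,int inputs_buf,int outputs_buf,
                     int inputs,int activation,uint neurons)
  {
   if(!ocl.SetArgumentBuffer(def_k_FeedForward,def_k_ff_matrix_w,weights_buf))
      return false;
   if(!ocl.SetArgumentBuffer(def_k_FeedForward,def_k_ff_matrix_i,inputs_buf))
      return false;
   if(!ocl.SetArgumentBuffer(def_k_FeedForward,def_k_ff_matrix_o,outputs_buf))
      return false;
   if(!ocl.SetArgument(def_k_FeedForward,def_k_ff_inputs,inputs))
      return false;
   if(!ocl.SetArgument(def_k_FeedForward,def_k_ff_activation,activation))
      return false;
//--- one work item per neuron of the current layer (assumed work split)
   uint offset[]={0};
   uint work[1];
   work[0]=neurons;
   return ocl.Execute(def_k_FeedForward,1,offset,work);
  }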
#define def_k_CalcOutputGradient 1
    Index of the output-gradient calculation kernel (CalcOutputGradient).
#define def_k_cog_matrix_t 0
    Target tensor.
#define def_k_cog_matrix_o 1
    Output tensor.
#define def_k_cog_matrix_ig 2
    Tensor of gradients at the previous layer.
#define def_k_cog_activation 3
    Activation type (ENUM_ACTIVATION).

#define def_k_CalcHiddenGradient 2
    Index of the hidden-gradient calculation kernel (CalcHiddenGradient).
#define def_k_chg_matrix_w 0
    Weights matrix of size (m+1)*n, where m is the number of neurons in the previous layer and n is the number of neurons in the current layer.
#define def_k_chg_matrix_g 1
    Tensor of gradients at the current layer.
#define def_k_chg_matrix_o 2
    Output tensor.
#define def_k_chg_matrix_ig 3
    Tensor of gradients at the previous layer.
#define def_k_chg_outputs 4
    Number of outputs.
#define def_k_chg_activation 5
    Activation type (ENUM_ACTIVATION).

#define def_k_UpdateWeightsMomentum 3
    Index of the SGD weight-update kernel (UpdateWeightsMomentum); the update rule is sketched after this group.
#define def_k_uwm_matrix_w 0
    SGD: weights matrix of size (m+1)*n, where m is the number of neurons in the previous layer and n is the number of neurons in the current layer.
#define def_k_uwm_matrix_g 1
    SGD: tensor of gradients at the current layer.
#define def_k_uwm_matrix_i 2
    SGD: input tensor.
#define def_k_uwm_matrix_dw 3
    SGD: matrix of delta weights from the last correction.
#define def_k_uwm_inputs 4
    SGD: number of inputs.
#define def_k_uwm_learning_rates 5
    SGD: learning rates.
#define def_k_uwm_momentum 6
    SGD: momentum multiplier.
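The argument list above matches the classic SGD-with-momentum update, where the previous correction is kept in def_k_uwm_matrix_dw. As a hedged reference (the exact sign convention and any averaging over inputs inside the kernel are not visible from this listing):

$$\Delta w_t = lr\cdot g\cdot i + momentum\cdot\Delta w_{t-1},\qquad w_t = w_{t-1} + \Delta w_t$$

where g is the gradient at the current layer and i the corresponding input value.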
#define def_k_UpdateWeightsAdam 4
    Index of the Adam weight-update kernel (UpdateWeightsAdam).
#define def_k_uwa_matrix_w 0
    Adam: weights matrix of size (m+1)*n, where m is the number of neurons in the previous layer and n is the number of neurons in the current layer.
#define def_k_uwa_matrix_g 1
    Adam: tensor of gradients at the current layer.
#define def_k_uwa_matrix_i 2
    Adam: input tensor.
#define def_k_uwa_matrix_m 3
    Adam: matrix of the first momentum.
#define def_k_uwa_matrix_v 4
    Adam: matrix of the second momentum.
#define def_k_uwa_inputs 5
    Adam: number of inputs.
#define def_k_uwa_l 6
    Adam: learning rates.
#define def_k_uwa_b1 7
    Adam: first momentum multiplier.
#define def_k_uwa_b2 8
    Adam: second momentum multiplier.

#define def_k_FeedForwardProof 5
    Index of the pooling-neuron feed-forward kernel (FeedForwardProof).
#define def_k_ffp_matrix_i 0
    Input tensor.
#define def_k_ffp_matrix_o 1
    Output tensor.
#define def_k_ffp_inputs 2
    Number of inputs.
#define def_k_ffp_window 3
    Size of the input window.
#define def_k_ffp_step 4
    Step size.

#define def_k_CalcInputGradientProof 6
    Index of the pooling-neuron kernel that transfers the gradient to the previous layer (CalcInputGradientProof).
#define def_k_cigp_matrix_i 0
    Input tensor.
#define def_k_cigp_matrix_g 1
    Tensor of gradients at the current layer.
#define def_k_cigp_matrix_o 2
    Output tensor.
#define def_k_cigp_matrix_ig 3
    Tensor of gradients at the previous layer.
#define def_k_cigp_outputs 4
    Number of outputs.
#define def_k_cigp_window 5
    Size of the input window.
#define def_k_cigp_step 6
    Step size.

#define def_k_FeedForwardConv 7
    Index of the convolution-neuron feed-forward kernel (FeedForwardConv); the output size relation is noted after this group.
#define def_k_ffc_matrix_w 0
    Weights matrix of size (m+1)*n, where m is the input window size and n is the output window size.
#define def_k_ffc_matrix_i 1
    Input tensor.
#define def_k_ffc_matrix_o 2
    Output tensor.
#define def_k_ffc_inputs 3
    Number of inputs.
#define def_k_ffc_step 4
    Step size.
#define def_k_ffc_window_in 5
    Size of the input window.
#define def_k_ffс_window_out 6
    Size of the output window.
#define def_k_ffc_activation 7
    Activation type (ENUM_ACTIVATION).
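FeedForwardConv slides an input window of window_in elements over the input sequence with the given step and produces window_out values per position. Assuming no padding (none is mentioned in this listing), the usual size relation is:

$$n_{pos} = \left\lfloor\frac{inputs - window\_in}{step}\right\rfloor + 1,\qquad outputs = n_{pos}\cdot window\_out$$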
#define def_k_CalcHiddenGradientConv 8
    Index of the convolution-neuron kernel that transfers the gradient to the previous layer (CalcHiddenGradientConv).
#define def_k_chgc_matrix_w 0
    Weights matrix of size (m+1)*n, where m is the input window size and n is the output window size.
#define def_k_chgc_matrix_g 1
    Tensor of gradients at the current layer.
#define def_k_chgc_matrix_o 2
    Output tensor.
#define def_k_chgc_matrix_ig 3
    Tensor of gradients at the previous layer.
#define def_k_chgc_outputs 4
    Number of outputs.
#define def_k_chgc_step 5
    Step size.
#define def_k_chgc_window_in 6
    Size of the input window.
#define def_k_chgc_window_out 7
    Size of the output window.
#define def_k_chgc_activation 8
    Activation type (ENUM_ACTIVATION).

#define def_k_UpdateWeightsConvMomentum 9
    Index of the convolution-neuron SGD weight-update kernel (UpdateWeightsConvMomentum).
#define def_k_uwcm_matrix_w 0
    Weights matrix of size (m+1)*n, where m is the input window size and n is the output window size.
#define def_k_uwcm_matrix_g 1
    Tensor of gradients at the current layer.
#define def_k_uwcm_matrix_i 2
    Input tensor.
#define def_k_uwcm_matrix_dw 3
    Matrix of delta weights from the last correction.
#define def_k_uwcm_inputs 4
    Number of inputs.
#define def_k_uwcm_learning_rates 5
    Learning rates.
#define def_k_uwcm_momentum 6
    Momentum multiplier.
#define def_k_uwcm_window_in 7
    Size of the input window.
#define def_k_uwcm_window_out 8
    Size of the output window.
#define def_k_uwcm_step 9
    Step size.

#define def_k_UpdateWeightsConvAdam 10
    Index of the convolution-neuron Adam weight-update kernel (UpdateWeightsConvAdam).
#define def_k_uwca_matrix_w 0
    Weights matrix of size (m+1)*n, where m is the input window size and n is the output window size.
#define def_k_uwca_matrix_g 1
    Tensor of gradients at the current layer.
#define def_k_uwca_matrix_i 2
    Input tensor.
#define def_k_uwca_matrix_m 3
    Matrix of the first momentum.
#define def_k_uwca_matrix_v 4
    Matrix of the second momentum.
#define def_k_uwca_inputs 5
    Number of inputs.
#define def_k_uwca_l 6
    Learning rates.
#define def_k_uwca_b1 7
    First momentum multiplier.
#define def_k_uwca_b2 8
    Second momentum multiplier.
#define def_k_uwca_window_in 9
    Size of the input window.
#define def_k_uwca_window_out 10
    Size of the output window.
#define def_k_uwca_step 11
    Step size.

#define def_k_AttentionScore 11
    Index of the attention-neuron kernel that calculates the score matrix (AttentionScore); the score formula is sketched after this group.
#define def_k_as_querys 0
    Matrix of Queries.
#define def_k_as_keys 1
    Matrix of Keys.
#define def_k_as_score 2
    Matrix of Scores.
#define def_k_as_dimension 3
    Dimension of a Key.
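The AttentionScore arguments (Queries, Keys, Scores and the Key dimension) correspond to the score matrix of standard scaled dot-product self-attention. Assuming the kernel follows that formulation (the softmax may be applied in this kernel or in a separate step):

$$Score = softmax\!\left(\frac{Q\,K^{T}}{\sqrt{d_{key}}}\right)$$

The AttentionOut kernel below then uses this matrix to weight the Values.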
#define def_k_AttentionOut 12
    Index of the attention-neuron output calculation kernel (AttentionOut).
#define def_k_aout_scores 0
    Matrix of Scores.
#define def_k_aout_values 1
    Matrix of Values.
#define def_k_aout_inputs 2
    Input tensor.
#define def_k_aout_out 3
    Output tensor.

#define def_k_MatrixSum 13
    Index of the kernel that calculates the sum of two matrices with a multiplier (SumMatrix).
#define def_k_sum_matrix1 0
    First matrix.
#define def_k_sum_matrix2 1
    Second matrix.
#define def_k_sum_matrix_out 2
    Output matrix.
#define def_k_sum_dimension 3
    Dimension of the matrices.
#define def_k_sum_multiplyer 4
    Multiplier for the output.

#define def_k_AttentionGradients 14
    Index of the kernel for the attention gradient calculation (AttentionIsideGradients).
#define def_k_ag_querys 0
    Matrix of Queries.
#define def_k_ag_querys_g 1
    Matrix of Queries' gradients.
#define def_k_ag_keys 2
    Matrix of Keys.
#define def_k_ag_keys_g 3
    Matrix of Keys' gradients.
#define def_k_ag_values 4
    Matrix of Values.
#define def_k_ag_values_g 5
    Matrix of Values' gradients.
#define def_k_ag_scores 6
    Matrix of Scores.
#define def_k_ag_gradient 7
    Matrix of gradients from the previous iteration.

#define def_k_Normilize 15
    Index of the matrix normalization kernel (Normalize).
#define def_k_norm_buffer 0
    In/out matrix.
#define def_k_norm_dimension 1
    Dimension of the matrix.

#define def_k_NormilizeWeights 16
    Index of the weights matrix normalization kernel (NormalizeWeights).
Enumerations

enum ENUM_ACTIVATION { None = -1, TANH, SIGMOID, LReLU }
    Enumeration of the activation functions used; a sketch of these formulas follows this list.
enum ENUM_OPTIMIZATION { SGD, ADAM }
    Enumeration of the optimization methods used.
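A minimal sketch of the formulas these ENUM_ACTIVATION values usually denote. The leak coefficient for LReLU and any output scaling applied by the library are assumptions not confirmed by this reference.

//+------------------------------------------------------------------+
//| Sketch: typical meaning of the ENUM_ACTIVATION values.           |
//| The 0.01 leak factor for LReLU is an assumption.                 |
//+------------------------------------------------------------------+
double Activate(const double x,const ENUM_ACTIVATION act)
  {
   switch(act)
     {
      case TANH:    return MathTanh(x);               // hyperbolic tangent, range (-1;1)
      case SIGMOID: return 1.0/(1.0+MathExp(-x));     // logistic sigmoid, range (0;1)
      case LReLU:   return (x>0.0 ? x : 0.01*x);      // leaky ReLU
      default:      return x;                         // None: identity
     }
  }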
Variables

double eta = lr
    Learning rate for SGD optimization.
NeuroNet.mqh: Library for creating a neural network for use in MQL5 Expert Advisors.

Definition in file NeuroNet.mqh.

double eta = lr

    Learning rate for SGD optimization.

    Definition at line 41 of file NeuroNet.mqh.