"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 67

 
TheXpert:

Tell me what you're going to do. Everything is done for the programmer, i.e. an API.

And for the user there is nothing: I usually automate everything, with minimal opportunity for intervention by unqualified hands. In other words, no user interface.



Told you so ((

Don't bother.

 
Mischek:

Told you so ((

Then it's existing neural packages; pick one. Try to look at it from the other side: you can't get a single "magic" button, only a tool for a specific task.

You need to prepare the data correctly, feed it in correctly, train correctly, and evaluate the results correctly.

 

Actually, this belongs in the "Interesting..." thread, but the subject is closer to this one.

Evolution and artificial life

and there's life here http://www.math.com/students/wonders/life/life.html
Evolution and artificial life
  • alt-future.narod.ru
Artificial life (ALife) emerged from artificial intelligence (AI) theory as a separate scientific field in the 1980s, when the first international conference, ALife I, was held (1989, Los Alamos). It was soon followed by the European Conference on Artificial Life and the International Conference on...
 

Gentlemen, those of you who are familiar with the learning algorithms listed below:

List the kinds of passes used during training.

For example, in the case of backpropagation there is first a forward pass of computation, then a backward pass of error propagation.
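For reference, the two passes just described can be sketched like this (illustrative Python, not the engine's actual code; the 1-1-1 topology, sigmoid activations, and learning rate are arbitrary choices for the sketch):

```python
# Minimal backpropagation sketch: forward pass computes activations,
# backward pass propagates the output error back to every weight.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w1, w2):
    """Forward pass: compute the activations layer by layer."""
    h = sigmoid(w1 * x)      # hidden layer
    y = sigmoid(w2 * h)      # output layer
    return h, y

def backward(x, h, y, target, w1, w2, lr=0.5):
    """Backward pass: propagate the error and return adjusted weights."""
    dy = (y - target) * y * (1.0 - y)   # output-layer delta
    dh = dy * w2 * h * (1.0 - h)        # hidden-layer delta
    return w1 - lr * dh * x, w2 - lr * dy * h

w1, w2 = 0.5, -0.3
for _ in range(1000):                   # forward move, then backward move
    h, y = forward(1.0, w1, w2)
    w1, w2 = backward(1.0, h, y, 0.9, w1, w2)
```

After the loop the network's output for the input 1.0 should end up close to the 0.9 target, which is the whole point of the two alternating passes.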

What other kinds of passes occur in the numerous learning algorithms?

P.S. This is needed in order to put the necessary virtual functions into the engine.

P.P.S. Below is a table; whoever knows, write which pass option(s) are used in these algorithms.

 
Paradigm | Learning rule | Architecture | Learning algorithm | Task
Supervised | Error correction | Single- and multilayer perceptron | Perceptron learning algorithms; backpropagation; Adaline and Madaline | Pattern classification; function approximation; prediction, control
Supervised | Boltzmann | Recurrent | Boltzmann learning algorithm | Pattern classification
Supervised | Hebb | Multilayer feedforward | Linear discriminant analysis | Data analysis; pattern classification
Supervised | Competition | Competition | Vector quantization | Within-class categorization; data compression
Supervised | Competition | ART network | ARTMap | Pattern classification
Unsupervised | Error correction | Multilayer feedforward | Sammon projection | Within-class categorization; data analysis
Unsupervised | Hebb | Feedforward or competition | Principal component analysis | Data analysis; data compression
Unsupervised | Hebb | Hopfield network | Associative memory learning | Associative memory
Unsupervised | Competition | Competition | Vector quantization | Categorization; data compression
Unsupervised | Competition | Kohonen SOM | Kohonen SOM | Categorization; data analysis
Unsupervised | Competition | ART networks | ART1, ART2 | Categorization
Hybrid | Error correction and competition | RBF network | RBF learning algorithm | Pattern classification; function approximation; prediction, control
 

Okay, judging by the general silence, I take it the question is too complex.

Let me rephrase it:

For which learning algorithms is a backward pass through the layers not appropriate?

 
Urain:

Okay, judging by the general silence, I take it the question is too complex.

Let me rephrase it:

For which learning algorithms is a backward pass through the layers not appropriate?

Rather, "For which learning algorithms is a backward pass through the layers not necessary?"

One thing I know for sure: a genetic algorithm doesn't need it.

As for the rest, I could be wrong, it happens.
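A quick sketch of why a genetic algorithm gets by without a backward pass (illustrative Python; a stripped-down (1+1) evolutionary loop stands in for a full genetic algorithm, and the tiny net, data set, and mutation scale are all made up for the example):

```python
# Evolutionary training uses only forward passes: fitness is measured on
# the network's output, and nothing is ever propagated back through the
# layers.
import math
import random

random.seed(1)

def forward(x, w):
    """Forward pass only: a tiny 1-1-1 tanh network."""
    return math.tanh(w[1] * math.tanh(w[0] * x))

def fitness(w, data):
    """Sum of squared output errors; lower is better."""
    return sum((forward(x, w) - t) ** 2 for x, t in data)

data = [(0.0, 0.0), (1.0, 0.5), (-1.0, -0.5)]
best = [0.1, 0.1]
for _ in range(500):
    cand = [wi + random.gauss(0.0, 0.2) for wi in best]   # mutation
    if fitness(cand, data) <= fitness(best, data):        # selection
        best = cand
```

Selection keeps a mutant only if its forward-pass fitness is no worse; no derivative through the layers is ever computed.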

 
her.human:

Rather, "For which learning algorithms is a backward pass through the layers not necessary?"

One thing I know for sure: a genetic algorithm doesn't need it.

As for the rest, I could be wrong, it happens.


The point is to give the learning algorithm (the weight adjustment) only a backward pass over the layers: if training is not needed, you call just the forward pass that computes the network; if it is needed, you call the backward pass as well.

I just have one doubt: maybe some algorithm needs both a forward pass of network computation and a forward pass of weight adjustment?

I don't know of any such algorithms myself, but I can't know everything.
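The interface being discussed might look roughly like this (a hypothetical sketch; the class and method names are illustrative, not the engine's actual API). Every network exposes a forward pass; gradient-based trainers additionally use the backward pass, while methods such as a genetic algorithm leave it as a no-op and train through repeated forward passes alone:

```python
# Hypothetical virtual-function layout for the engine idea: Forward()
# is always needed, Backward() is only meaningful for gradient methods.
class Net:
    def Forward(self, x):
        """Forward pass: compute the network's output."""
        raise NotImplementedError

    def Backward(self, error):
        """Backward pass: adjust weights from the output error."""
        raise NotImplementedError

class GradientNet(Net):
    """Trained by a gradient method: implements both passes."""
    def __init__(self, w):
        self.w = w
    def Forward(self, x):
        return self.w * x
    def Backward(self, error):
        self.w -= 0.1 * error          # backward pass adjusts the weight

class EvolutionNet(Net):
    """Trained evolutionarily: the backward pass is a deliberate no-op."""
    def __init__(self, w):
        self.w = w
    def Forward(self, x):
        return self.w * x
    def Backward(self, error):
        pass                           # weights are changed by mutation instead
```

With this split, code that drives training can always call Forward() and conditionally call Backward(), matching the "forward pass always, backward pass only when the algorithm needs it" idea above.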

 

Good afternoon; this is not really on topic.

I have set myself a task: to choose an adaptive time window for the current moment, rather than fixing it in the parameters as, say, 10 bars, and then to run back through the history to determine which cluster the selected window belongs to. Can neural networks handle this, or is it easier to do it some other way? If you don't mind, please recommend a book on networks at a beginner's level.

 

The backward pass is needed to find the partial derivatives of the fitness function with respect to the adaptive arguments, i.e. any gradient method (e.g. BackProp in any of its modifications) requires a backward pass.

Other methods do not require it.
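A small sketch of that point (illustrative Python, a one-weight "network" invented for the example): the partial derivative a backward pass would supply can be checked against a finite-difference estimate built from forward passes alone, which is all a derivative-free method ever uses:

```python
# One weight, y = w * x, squared-error fitness.
def loss(w, x=2.0, target=1.0):
    y = w * x                        # forward pass
    return 0.5 * (y - target) ** 2

def grad(w, x=2.0, target=1.0):
    y = w * x                        # what a backward pass would return:
    return (y - target) * x          # dLoss/dw via the chain rule

w, eps = 0.7, 1e-6
numeric = (loss(w + eps) - loss(w - eps)) / (2.0 * eps)  # forward passes only
```

Here both the analytic derivative grad(0.7) and the finite-difference estimate come out to 0.8; gradient methods get this number cheaply from the backward pass, while derivative-free methods (genetic algorithms and the like) never need it at all.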
