Discussion of article "Neural Networks: From Theory to Practice" - page 9

 

Please explain:

Suppose I trained perceptron1 on a sample from file1. It learnt to predict that same file1 with 100% accuracy.

Then I tested perceptron1 on new data (file2), which it predicted with 95% accuracy.

How can I continue training perceptron1?

Option 1:
I concatenate file1 and file2 into file12, then train a new perceptron2 from scratch by feeding it file12 plus the correct answers.

Option 2:

I manually supply the correct answers for file2 and retrain the existing perceptron1 on it.

Option 1 is self-explanatory. It is just training a new perceptron from scratch.

But how can option 2 be implemented? Is it feasible?

=========

I am currently experimenting in Jupyter with Python and the scikit-learn library. Its perceptron seems to have no method for training on additional new data...

http://scikit-learn.org/dev/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier

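In fact, scikit-learn does expose incremental training: the `partial_fit` method (available on `sklearn.linear_model.Perceptron` and, in recent versions, on `MLPClassifier`) updates the existing weights instead of retraining from scratch as `fit` does. To show the principle in plain Python, here is a toy perceptron whose update step can be called again later on new samples; all names and data below are illustrative, not from the article:

```python
# Toy perceptron demonstrating option 2: continuing training on new data.
# All names and data are illustrative.

def predict(w, b, x):
    s = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else 0

def train_step(w, b, x, target, lr=0.1):
    """One perceptron update; can be called again later on new samples."""
    error = target - predict(w, b, x)
    w = [wi + lr * error * xi for wi, xi in zip(w, x)]
    b = b + lr * error
    return w, b

# Initial training on "file1" (AND-like data).
file1 = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(20):
    for x, t in file1:
        w, b = train_step(w, b, x, t)

# Later: continue training the SAME weights on new "file2" samples,
# without restarting from scratch.
file2 = [([1, 1], 1), ([0, 0], 0)]
for x, t in file2:
    w, b = train_step(w, b, x, t)

print([predict(w, b, x) for x, _ in file1])  # learned AND: [0, 0, 0, 1]
```

The key point is that `train_step` mutates nothing that forces a restart; the weights simply continue from where the first training phase left them, which is exactly what `partial_fit` does in scikit-learn.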
 

The article is superb, probably the only one with a reasonably detailed and understandable presentation.

I would like to ask the author to correct the picture: after all, this example considers not a network but a single perceptron.

We are still waiting for an example of a real neural network, for instance: 2 neurons at the input, 3 in the hidden layer, and 1 at the output.

thank you very much for the article!

Files:
pyb.jpg  1958 kb
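A 2-3-1 network of the kind requested above can be sketched as a plain forward pass. This is only an illustration with arbitrary made-up weights, not the article's code:

```python
import math

# Forward pass for a 2-3-1 network: 2 inputs, 3 hidden neurons, 1 output.
# All weights and inputs here are arbitrary illustrative values.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, b_hidden, w_out, b_out):
    # Each hidden neuron: sigmoid of its weighted input sum plus bias.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    # Output neuron: sigmoid of the weighted sum of hidden activations.
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

w_hidden = [[0.5, -0.3], [0.8, 0.2], [-0.6, 0.9]]  # 3 neurons x 2 inputs
b_hidden = [0.1, -0.2, 0.05]
w_out = [1.2, -0.7, 0.4]                           # 1 neuron x 3 hidden
b_out = 0.0

y = forward([0.25, 0.9], w_hidden, b_hidden, w_out, b_out)
print(0.0 < y < 1.0)  # sigmoid output always lies in (0, 1)
```

Training such a network additionally requires backpropagation to adjust the weights, which is beyond this sketch; the forward pass above only shows how the 2-3-1 structure is wired.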
 
Very good article. I will study it this week and try to implement what is defined in it.
But I have a question: how do I implement more neurons?

Note: I'm still a beginner in programming.


I have some basic questions, and more will come up during the development I am going to attempt. Could I consult you?

 
I'm new to programming and know almost nothing, but I'm studying and adapting your EA for several tests. This task is very interesting.
Now I'm thinking about a self-optimisation system and found an interesting article about it. Would it be possible to implement such a structure in your EA? I don't have the skills for that at the moment.

Would you be interested in developing this work?

https://www.mql5.com/en/articles/2279

Rede neural: Expert Advisor auto-otimizável (Neural network: a self-optimising Expert Advisor)
  • 2016.10.17
  • Jose Miguel Soriano
  • www.mql5.com
Is it possible to create an Expert Advisor that, following the commands in its code, automatically optimises the criteria for opening and closing positions at regular intervals? What happens if we implement in the EA a neural network (a multilayer perceptron) that, as a module, analyses the history and evaluates the strategy? The code can be instructed to re-optimise the neural network monthly (weekly, daily, or hourly), with subsequent processing. In this way it is possible to create a self-optimising Expert Advisor.
 

Changing the steepness of the activation function is completely unnecessary!

See the formula:

double NET = 0.0;
for(int n = 0; n < 10; n++)
  {
   NET += X[n] * W[n];   // weighted sum of the inputs
  }
NET *= 0.4;              // the steepness coefficient in question

During training, the network must find the multipliers W[n]. If an overall factor of 0.4 were favourable, the network would simply learn weights each of which already includes that 0.4. In other words, we have merely factored the common multiplier out of the sum, and its value is determined by minimising the error.

In this implementation you can simply reduce the step size used to select the weights; in more serious neural networks, the necessary coefficients are found automatically.
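The factoring argument above can be checked numerically: multiplying the finished sum by 0.4 is the same as multiplying every weight by 0.4 beforehand (the distributive law). A quick sketch with arbitrary made-up numbers:

```python
# sum(x[n]*w[n]) * 0.4  ==  sum(x[n] * (w[n]*0.4))
# i.e. the common factor can be absorbed into the learned weights.
X = [0.5, -1.2, 2.0, 0.3]   # arbitrary inputs
W = [0.8, 0.1, -0.4, 1.5]   # arbitrary weights

net_scaled_sum = sum(x * w for x, w in zip(X, W)) * 0.4
net_scaled_weights = sum(x * (w * 0.4) for x, w in zip(X, W))

print(abs(net_scaled_sum - net_scaled_weights) < 1e-12)  # True
```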

 

The normalisation is performed incorrectly, and then a coefficient of 0.4 is applied for some reason...

Suppose there is a series of values: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10

The values of this series should be mapped to the interval [0, 1]. Logically, this would give the series: 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.

However, your method produces essentially arbitrary numbers. Suppose the indicator gives us the values 6, 7, 8, 9, 10. Simplifying your formula, we get:

6 → 0
7 → 0.25
8 → 0.5
9 → 0.75
10 → 1

In this series, normalised according to your article's instructions, only the last value is correct.
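The discrepancy above comes entirely from where the minimum and maximum are taken: from the current window [6, 10] or from the fixed full range [0, 10]. A minimal sketch of min-max scaling (all names illustrative):

```python
def normalize(values, lo, hi):
    """Min-max scaling of values to [0, 1] relative to the range [lo, hi]."""
    return [(v - lo) / (hi - lo) for v in values]

window = [6, 7, 8, 9, 10]

# Min/max taken from the window itself: 6 maps to 0 and 10 maps to 1,
# so the absolute level of the values is lost.
print(normalize(window, min(window), max(window)))  # [0.0, 0.25, 0.5, 0.75, 1.0]

# Min/max fixed to the full known range [0, 10]: absolute levels survive.
print(normalize(window, 0, 10))  # [0.6, 0.7, 0.8, 0.9, 1.0]
```

Window-relative scaling is not automatically wrong for an indicator input, but it answers a different question ("where in the recent range is this value?") than scaling against a fixed range does.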

For anyone who has taken a course in linear algebra and can tell a cosine from a tangent, it is hard to see how such a simple task could be botched. The results of your work are purely random!

But I admit, I used this very publication as a starting point. I printed it out, reread it carefully, and made notes with a pen. Then I went to the House of Books and bought Osovsky's "Neural Networks for Information Processing". I read it, got much smarter, and here I am writing...

 
Thank you for this easily understandable introduction to AI for trading on MT5. So the 'weights' are found via optimisation, which Cagatay called curve-fitting. In reality, neural networks (their weights) are trained by feeding in massive amounts of human-labelled training data and marking each outcome as right or wrong, too. Is there an efficient way to do this in MT5?
 
I want to know how to use and buy this smart robot. My WeChat ID is 13552272531; please add me, teacher. I want to do something with it.
 

Why do I only get 365 dollars of profit when backtesting?

 
Nice post. In your code you mention Trade\Trade.mqh and Trade\PositionInfo.mqh; can you provide a download link for those two? Many thanks!