Please explain:
Suppose I trained perceptron1 on a training sample from file1. It learnt to predict that same file1 with 100% accuracy. Then I tested perceptron1 on new data (file2), and it predicted 95% of it correctly.
How can I train perceptron1 further?
Option 1:
I concatenate file1 and file2 into file12 and train a new perceptron2 from scratch on file12 plus the correct answers.
Option 2:
I manually supply the correct answers for file2 and continue training the existing perceptron1 on it.
Option 1 is self-explanatory: it is just training a new perceptron from scratch.
But how do I implement option 2? Is it feasible at all?
=========
I am currently experimenting in Jupyter with Python and the scikit-learn library. There the perceptron does not seem to have a method for training it on new data....
http://scikit-learn.org/dev/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier
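For what it's worth, scikit-learn's `Perceptron` and `MLPClassifier` do expose a `partial_fit` method, which continues training from the current weights instead of starting over. This is essentially option 2. A minimal sketch with made-up toy arrays standing in for file1 and file2 (the data, shapes, and labelling rule here are illustrative assumptions, not from the original post):

```python
import numpy as np
from sklearn.linear_model import Perceptron

# Hypothetical stand-ins for file1 and file2: 4 features,
# label = sign of the first feature.
rng = np.random.default_rng(0)
X1 = rng.normal(size=(100, 4))
y1 = (X1[:, 0] > 0).astype(int)
X2 = rng.normal(size=(50, 4))
y2 = (X2[:, 0] > 0).astype(int)

clf = Perceptron(random_state=0)
clf.fit(X1, y1)            # initial training on file1 (option 1's first half)
clf.partial_fit(X2, y2)    # incremental update on file2 (option 2)

print(clf.score(X2, y2))   # accuracy on the new data after the update
```

On the very first call to `partial_fit` (when `fit` has not been called yet) you must also pass the full list of classes via the `classes=` argument; after that, repeated `partial_fit` calls just keep refining the same weights.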
The article is great — probably the only one with a more or less detailed and understandable presentation.
I would ask the author to correct the picture: in this example we are considering not a network but a single perceptron.
We are still waiting for an example of a real neural network, for instance 2 neurons at the input, 3 in the hidden layer, 1 at the output.
Thank you very much for the article!
Note: I'm still a beginner in programming.
I have some basic questions, and more will come up during the development I am going to attempt. Could I consult you?
Would you be interested in developing this work further?
https://www.mql5.com/en/articles/2279
Changing the steepness of the activation function is completely unnecessary!
See the formula:
During training, the network has to select the weights Wn. If it is advantageous for the network to have an overall factor of 0.4, it will simply select weights Wn each of which already includes that 0.4. In other words, the common multiplier is just factored out of the brackets, and its value is determined by whatever minimises the error.
In this implementation you can simply reduce the step used for selecting the weights. In more serious neural networks the necessary coefficients are found automatically.
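The point above is a simple algebraic identity: a steepness factor k applied inside the activation, f(k · Σ wₙxₙ), gives exactly the same output as the plain activation applied to rescaled weights, f(Σ (k·wₙ)xₙ). A minimal numerical check (the values of x, w, and k here are arbitrary illustrative choices):

```python
import numpy as np

def sigmoid(z):
    # standard logistic activation
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.2, -1.3, 0.7])   # arbitrary inputs
w = np.array([0.5, 0.1, -0.4])   # arbitrary weights
k = 0.4                          # steepness factor from the comment

out_steep = sigmoid(k * np.dot(w, x))   # factor applied inside the activation
out_folded = sigmoid(np.dot(k * w, x))  # same factor absorbed into the weights

print(out_steep, out_folded)  # identical by associativity of multiplication
```

So any fixed steepness coefficient can be absorbed into the weights during training, which is why tuning it separately is redundant.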
Normalisation is performed incorrectly — and then a coefficient of 0.4 for some reason....
Suppose there is a series of values: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
The values from this series should be mapped to the interval [0, 1]. Logically, that would give the series: 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.
However, your methodology produces essentially random numbers. Suppose the indicator yields the values 6, 7, 8, 9, 10. Simplifying your formula:
We get:
6 >> 0
7 >> 0.25
8 >> 0.5
9 >> 0.75
10 >> 1
In this series, normalised according to the instructions in your article, only the last value is correct.
If you have taken a course in linear algebra and can tell a cosine from a tangent, it is incomprehensible how such a simple task can be botched. The results of this approach are effectively random!
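The discrepancy described above is easy to reproduce: min-max scaling against the observed window alone rescales 6..10 to 0..1, whereas scaling against the full known range of the series (0..10) preserves each value's true position. A small sketch (the helper `minmax` is my own illustrative function, not from the article):

```python
def minmax(values, lo, hi):
    # map each value from the range [lo, hi] onto [0, 1]
    return [(v - lo) / (hi - lo) for v in values]

window = [6, 7, 8, 9, 10]

# Scaling against the observed window only, as the comment says the
# article does: the window's minimum is forced to 0.
print(minmax(window, min(window), max(window)))  # [0.0, 0.25, 0.5, 0.75, 1.0]

# Scaling against the full known range 0..10: positions are preserved.
print(minmax(window, 0, 10))                     # [0.6, 0.7, 0.8, 0.9, 1.0]
```

The choice of reference range is the whole issue: window-relative scaling makes the normalised value of, say, 6 depend on which other values happen to be in the window.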
But I admit, I used this very publication as a starting point. I printed it out, reread it carefully, and made notes with a pen. Then I went to the House of Books and bought Osowski's "Neural Networks for Information Processing". I read it, got much smarter, and here I am writing....
Why do I only get $365 of profit when backtesting?