What to feed to the input of the neural network? Your ideas... - page 69

If it weren't for dumdums, I'd be wasting my time on the ML god - Gizlyk - right now :)
Thinking is bad for your body, borrow the experience ))
Also true
The input is not the strength of the signal.
Its strength is given to it by the weights. But the input number itself already(!) carries a power element a priori - its quantitative factor. Earlier I raised the problem of understanding input data.
By feeding a number to the input, we already endow that input with a force value from the start. This is a mistake, because the whole point of our NN is precisely to find the power factor and distribute it among the inputs.
I don't quite understand - are you training it with supervised learning ("with a teacher")?
In the MT5 optimiser.
And I tried supervised training in NeuroPro.
The essence of the theoretical problem comes down to this:
If the input range is 0 to 1, the NN will never find the grail when the losing patterns in the upper range are buried among the numbers of the lower range, because it will have to "choke off" the upper range, and with static weights everything below it also goes under the knife.
As a result, the adder receives a number that is made up of both losing data and working data - 50/50.
And if the input range is -1 to 1 and the grail is somewhere in the middle, the same thing happens: the NN muffles the extremes and the grail is washed away.
But if you create a filtering module, where the number 0.9 "turns" into 0.01, or even into 0, and the number 0.63 into 0.99, and so on - I assume this method is at least better than the standard one, and at best has real potential.
And it is these numbers that should be fed to the NN, which will then build rules for working with input data "cleaned" of noise.
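A minimal sketch of that "filtering module" idea: raw inputs are remapped through a hand-made table of ranges before they ever reach the network, so a raw 0.9 can become 0.01 and a raw 0.63 can become 0.99. The range boundaries and target values below are made-up illustrations, not anything from the original post.

```python
def filter_input(x: float) -> float:
    """Map a raw input in [0, 1] to a 'cleaned' value via fixed ranges."""
    table = [
        (0.0, 0.3, 0.0),   # low range: treat as noise, zero it out
        (0.3, 0.7, 0.99),  # middle range: amplify - the assumed 'grail' zone
        (0.7, 1.0, 0.01),  # upper range: suppress the losing patterns
    ]
    for lo, hi, target in table:
        if lo <= x < hi:
            return target
    return x  # x == 1.0 or out of range: pass through unchanged

print(filter_input(0.9))   # 0.01
print(filter_input(0.63))  # 0.99
```

The point is that the mapping is deliberately non-monotonic: "strong" raw numbers can be demoted and "weak" ones promoted before the adder ever sees them.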
Yeah... I think I see what you're talking about (correct me if I'm wrong). A neuron of a conventional MLP sums the products of its inputs and the corresponding weights, shifts the sum, and then transforms it nonlinearly (the shift is the same for the neuron regardless of the signals coming into it). That is, it cannot shift each of its inputs separately, even linearly. If the task is to correct the input values to the network, you can improve the MLP a bit by adding an extra correction layer (without an activation function) between the input layer and the first hidden layer, whose only job is to linearly correct the values of the input layer. That works out to one extra weight and one extra offset per network input. In fact, only one such additional layer is required, and after it everything proceeds as usual.
Interesting, and not difficult to implement.
Clearly, if it were known in advance how the input values should be converted, you would just convert them straight away (as easy as sending two bytes), but when it is unknown, such an additional layer makes sense.
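The proposed correction layer can be sketched in a few lines of NumPy: one extra weight and one extra offset per network input, applied element-wise with no activation, before the usual MLP forward pass. The layer sizes, initialisation, and names here are illustrative assumptions; in a real setup `corr_w` and `corr_b` would be trained together with the other weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 1

# Correction layer: element-wise scale and shift, one pair per input.
corr_w = np.ones(n_in)      # starts as the identity mapping...
corr_b = np.zeros(n_in)     # ...and would be trained like any other weights

# Ordinary MLP weights.
W1 = rng.normal(size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_out))
b2 = np.zeros(n_out)

def forward(x: np.ndarray) -> np.ndarray:
    x = corr_w * x + corr_b      # per-input linear correction (no activation)
    h = np.tanh(x @ W1 + b1)     # first hidden layer, as in a plain MLP
    return h @ W2 + b2           # linear output

x = rng.normal(size=n_in)
print(forward(x))
```

Because the layer is diagonal (each input touches only its own weight and offset), it adds just `2 * n_in` parameters rather than a full extra weight matrix.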
IMLP - improved MLP
Your problem is easily solved:
Split each feature into the required number of ranges (3, 5, 10, 50...) and feed them as separate features. Each range will then have its coefficients tuned individually.
If your grail is hidden in the upper third, that individual feature will find it.
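The range-splitting suggestion can be sketched as follows: a single feature in [0, 1] becomes N separate inputs, only one of which is non-zero, so each range gets its own individually tuned weights downstream. The choice of N and the equal-width ranges are assumptions for illustration.

```python
def split_into_ranges(x: float, n: int = 5) -> list:
    """Turn one feature into n features; the range containing x carries the value."""
    out = [0.0] * n
    idx = min(int(x * n), n - 1)  # clamp x == 1.0 into the last range
    out[idx] = x
    return out

print(split_into_ranges(0.63, 5))  # [0.0, 0.0, 0.0, 0.63, 0.0]
```

A variant would put a 1.0 in the active slot instead of `x` (pure one-hot binning); keeping the raw value preserves within-range information for the network.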
Yes, this is one kind of filtering.
The easiest way, and also the most efficient: just add an "if" condition.
If N1 < IN < N2, then IN = Filter[i];
I did it this way, with a loop.
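The "if" condition above, written as a loop over several (N1, N2, replacement) bands. The band boundaries and replacement values are placeholders, not the poster's actual settings.

```python
# Each band is (N1, N2, replacement): if N1 < value < N2, substitute it.
bands = [(0.0, 0.2, 0.0), (0.8, 1.0, 0.0)]  # e.g. zero out both extremes

def apply_bands(inputs):
    """Run every input through the band filter; first matching band wins."""
    out = []
    for v in inputs:
        for n1, n2, repl in bands:
            if n1 < v < n2:
                v = repl
                break
        out.append(v)
    return out

print(apply_bands([0.1, 0.5, 0.9]))  # [0.0, 0.5, 0.0]
```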
If the inputs are fed all sorts of channel boundaries, which have a high probability of corrections off the boundaries, the range can be split into two parts: one is ignored (zeroed out), and the other is corrected more intensively.
This is the fastest route to curve-fitting (overfitting) there is. Though no, the Q-table comes first and this method second.
MLP, on the other hand, is a very... very specific tool for forex. I even think it is a destructive one.
Any luck with NN and DL? Anybody here...
---
except for python charts and fitted tests :-) At least an "EA trades on demo and in profit".
---
or is there a feeling that this is a dead-end branch of evolution, and all the output of machine learning and neural nets goes into advertising, spam and "mutual_sending"?
I guess the doubts are unfounded. Don't judge progress by the fake-grail pythonistas, and we'll see.
MLP, on the other hand, is a very... very specific tool for forex. I even think it is destructive.
The way I look at it, I'd say the opposite: it is the most adapted for DEM. Everything else is pure fitting to imaginary labels, which actually mean nothing and are a figment of the imagination of deterministic clustering approaches that have nothing to do with the live market.