Machine learning in trading: theory, models, practice and algo-trading

I have a question for the machine learning experts. If I use one symbol's data for training, another symbol's data for validation and a third symbol's data for testing, is this good practice?
Also, I get the following results from the test data: green cells are very good, yellow cells are good, red cells are average.
Also a question about modifying the training data. I noticed that the model has a hard time finding extrema, in my case values above 60 and below 40.
So I find the samples with values above 60 and below 40 in the training data and add them to the training set again before feeding it into the model. The question is: can I improve the accuracy of the model by enlarging the training data with extra samples that carry information about the extrema?
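A minimal sketch of the oversampling described above, assuming numpy arrays X (features) and y (targets on a 0-100 scale); the array names, threshold arguments and repeat factor are illustrative, not taken from the post:

import numpy as np

def oversample_extremes(X, y, low=40.0, high=60.0, repeats=2):
    # Append extra copies of the samples whose target lies outside [low, high].
    mask = (y < low) | (y > high)              # samples with extreme targets
    X_extra = np.tile(X[mask], (repeats, 1))   # duplicate their features
    y_extra = np.tile(y[mask], repeats)        # and their targets
    return np.concatenate([X, X_extra]), np.concatenate([y, y_extra])

Whether this actually improves accuracy depends on how strongly it distorts the target distribution, so it is worth checking the effect on the validation symbol.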
If you can't tell the instruments apart, then you can. Or force them into that state by subtracting out the difference.
At the moment it does look that way.
However, before I give up on this idea, I'll see what I get from training the model by mixing different instruments (symbols) together and then creating data containing only extreme values.
The practice of using different symbols for training, validation and testing can improve prediction accuracy. An added benefit of this practice is that there is no limit on the amount of data: you can supply as much as you want or need for training or validation.
When testing on a third symbol, you can immediately see whether the model is capable of finding universal patterns rather than latching onto narrow market events specific to a particular symbol.
seq = remove_repeating_values(seq, 5)
As far as I understand, runs of such equal values can reach several tens of elements during a flat market, which in my opinion hinders the training of the model. Models usually work with shuffled samples, not the same value many times in a row.
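The thread does not show the implementation of remove_repeating_values, but a minimal sketch, under the assumption that it caps every run of identical consecutive values at max_run elements, could look like this:

def remove_repeating_values(seq, max_run):
    # Keep at most max_run identical values in a row, drop the rest of the run.
    out = []
    run = 0
    for value in seq:
        run = run + 1 if out and value == out[-1] else 1
        if run <= max_run:
            out.append(value)
    return out

# remove_repeating_values([5]*30 + [6, 7], 5) -> [5, 5, 5, 5, 5, 6, 7]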
Yes, but the large number of identical values makes me question the overall quality of the data.
Example: seq = [5,5,5,5,5, ..., 5] (a long run of identical values) turns into the training pairs [5,5,5,5,5,5,5,5] [5]; [5,5,5,5,5,5,5,5] [5]; [5,5,5,5,5,5,5,5] [5]; ...
I don't see the point of feeding the model such training data, so for now I am sifting out all the data that isn't unique:
import numpy as np

# keep only the unique input rows and the outputs that correspond to them
inputs_unique, indices = np.unique(inputs, axis=0, return_index=True)
outputs_unique = outputs[indices]
I could be wrong, but it seems wrong to me to also feed the model the following training data:
[1,2,3,4,5] [5];
[1,2,3,4,5] [6];
[1,2,3,4,5] [7];
[1,2,3,4,5] [8];
...
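A minimal sketch of how such contradictory samples (identical input rows mapped to different targets) could be detected; the helper name is illustrative, and inputs/outputs follow the snippet above:

from collections import defaultdict

def find_conflicting_inputs(inputs, outputs):
    # Group targets by input row and report the rows that have more than one.
    targets_by_input = defaultdict(set)
    for row, target in zip(inputs, outputs):
        targets_by_input[tuple(row)].add(float(target))
    return {row: targets
            for row, targets in targets_by_input.items()
            if len(targets) > 1}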
Hello everyone. I am trying to train the Expert Advisors taken from the large series of articles about neural networks on this site, and I get the impression that they are not trainable. I tried asking the author questions under the articles, but unfortunately he practically never answers them...(
Accordingly, a question to the forum members: please tell me how long a neural network needs to be trained before it starts to give some (non-random) result?
I tried all the EAs from article 27 up to the latest one, and the result is always the same: random. I trained for 300 to 1000 epochs, as the author indicates. Where the Expert Advisor is driven by iterations instead, I ran from 100 000 to 20 000 000 iterations, in 2-3 passes, and the result is still random.
How long should the training run? And what size of training sample is sufficient (if it is created in advance)?
PS: I have read the basic material on neural networks on Google and am generally familiar with them. Everyone writes that after about 100-200 epochs there should already be a result (on images, digits, classification tasks).
Do you get no result even on the training sample?
That series of articles is not a ready-made, out-of-the-box solution: nobody will give away the most valuable thing in machine learning, the predictors. So before trying the methods proposed there, you need to develop a set of predictors that can potentially describe price behaviour.