Machine learning in trading: theory, models, practice and algo-trading - page 3606

We have a great calendar in MT5, don't we?
I don't know, I haven't used it. And for what period is the data there?
I think it is important to collect information not only about facts, but also about expectations - you need to collect text from news, traders' reactions to them in forums or wherever they communicate, as well as the opinion of bank analysts.
It is possible that traders are actually trading ahead of the news, and in fact they are already selling currency to those who need it - for example, to importers.
On train and validation, of course. The test set has no labels. What data are you passing to the function? Just some beginner's questions.
I am asking because I am surprised that everything works so well for you, judging by the pictures and descriptions. So I think either I am doing something wrong, or there is a mistake on my side or yours...
I will keep your terminology in mind. Of course, the model itself becomes simpler in theory, as there are fewer contradictions, but whether it is better or not on new data, outside of training, I can't tell yet, as I don't observe enough stable clusters - I don't know why that is.
Do I remember correctly that you mark up every bar? I just don't do that. I thought about it, but I am bothered by the fact that adjacent bars get the same marks while sometimes containing inclusions of the opposite mark. So when the market is flat, 30 bars are marked at one level, but when the market is fast only 2-3 bars are marked; the value of these observations is the same, yet in the sample the first kind will outweigh the second. Do you have any ideas how to deal with this correctly?
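One possible way to handle this imbalance is to weight each bar by the inverse length of the run of identical marks it belongs to, so a 30-bar flat cluster and a 3-bar fast cluster contribute roughly equally to training. A rough sketch, assuming the marks sit in a numpy array (the names here are only for illustration):

import numpy as np

def run_length_weights(labels):
    # Weight each observation by 1 / length of the run of identical labels it sits in.
    labels = np.asarray(labels)
    starts = np.flatnonzero(np.r_[True, labels[1:] != labels[:-1]])  # first index of each run
    ends = np.r_[starts[1:], labels.size]                            # one past the last index of each run
    weights = np.empty(labels.size, dtype=float)
    for s, e in zip(starts, ends):
        weights[s:e] = 1.0 / (e - s)  # a 30-bar run gets 1/30, a 3-bar run gets 1/3
    return weights

# example: 5 flat bars marked 1, then 2 fast bars marked 0
w = run_length_weights([1, 1, 1, 1, 1, 0, 0])
# w -> [0.2, 0.2, 0.2, 0.2, 0.2, 0.5, 0.5]; pass it as sample_weight when fitting the model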
What period is the data for?
Perhaps the CalendarValueHistory() function will help in answering this question.
Yeah, collecting expectations - news texts, traders' reactions on forums and the opinions of bank analysts - wouldn't hurt either.
Shall we train the model with auto-learning via the OpenAI API?
Experiments also show good curves.
Can you provide statistics on what percentage of, let's say, 1000 attempts will create a strategy with a positive outcome, taking into account a reasonable spread?
Ideally, of course, statistics on clusters of these 1000 attempts would be better.
How many examples are there in one cluster with a biased probability, as a percentage of its class?
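For reference, the first of these statistics could be computed like this once each attempt has been backtested; a rough sketch, assuming every attempt is summarised by its list of trade results in pips before costs (all names are illustrative):

def share_profitable(attempts, spread_pips=2.0):
    # attempts: list of lists, each inner list holding one strategy's trade results in pips.
    # Returns the fraction of attempts that stay positive after paying the spread on every trade.
    positive = 0
    for trades in attempts:
        net = sum(trades) - spread_pips * len(trades)
        if net > 0:
            positive += 1
    return positive / len(attempts)

# example with 3 hypothetical attempts and a 2-pip spread
print(share_profitable([[10, -5, 7], [1, 1, -1], [20, -30, 15]], spread_pips=2.0))  # -> 0.333...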
You won't find such information anywhere else, because these are my intellectual su... products :)
Thanks for sharing your achievements!
Markup on each bar gives more data that can be filtered and/or corrected.
I think that to calculate the statistics we should take these clusters into account and treat each cluster as one example, because if a repeated signal passes over such a cluster, the model will count 10 signals instead of 1 when it is built. This is also how it looks from the economic point of view: instead of 1 lot you would not open 10, one on each bar... or would you?
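A rough sketch of that counting convention, assuming the per-bar signals are in a plain list where 0 means no signal (the names and encoding are only for illustration): consecutive identical signals are collapsed into one cluster and counted once.

from itertools import groupby

def count_signal_clusters(signals):
    # Collapse runs of identical consecutive signals and count each run once,
    # so 10 identical marks on a flat segment count as 1 signal, not 10.
    runs = [(key, sum(1 for _ in grp)) for key, grp in groupby(signals)]
    clusters = [key for key, length in runs if key != 0]  # 0 = no signal
    return len(clusters), runs

# 0 = no trade, 1 = buy, -1 = sell
n_clusters, runs = count_signal_clusters([0, 1, 1, 1, 1, 0, -1, -1, 0, 1])
# n_clusters -> 3 (a 4-bar buy run, a 2-bar sell run and a 1-bar buy run)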
Tell us what this auto-learning with the OpenAI API is and what result you get.
I haven't done such measurements over the 1000 attempts, because you have to spend a lot of time on them to get reliable statistics. Some of the algorithms are running on real accounts and are in profit.
Here are the statistics on 35000 hourly bars.
Half of them have an increment of less than 4 pips, and 75% have less than 9 pips, in four-digit quotes.
If we take into account that the stop should be smaller than the profit, i.e. we need marks with an increment of at least 15 pips, then it is impossible to build anything on the hourly bars.
On H2 something barely scrapes through.
On H3 there is at least something to discuss relative to the spread.
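Percentile figures like these can be reproduced from any series of H1 closes; a rough sketch, assuming the increment is measured close-to-close, a four-digit quote where 1 pip = 0.0001, and a CSV with a 'close' column (the file and column names are only illustrative):

import pandas as pd

closes = pd.read_csv("eurusd_h1.csv")["close"]   # hypothetical file with H1 closing prices

pip = 0.0001                                      # four-digit quote
incr = closes.diff().abs().dropna() / pip         # absolute bar-to-bar increment in pips

print("bars:", len(incr))
print("median increment, pips:", incr.quantile(0.50))
print("75th percentile, pips:", incr.quantile(0.75))
print("share of bars with increment >= 15 pips:", (incr >= 15).mean())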