Machine learning in trading: theory, models, practice and algo-trading - page 3616

So?
I couldn't get NumPy running from RStudio. This Python, with its environments, is a curse, not a language... I fiddled with it for an hour and then sat down to watch cartoons :)
Just do it in R, I gave you the link. By eye it counts the same as my Python version.
Mistral rewrites code just fine. A little slow sometimes, of course.
You need to be sure the code works the same way, so it has to be done in Python. But God knows I tried ))))
What quotes do you train on?
I did a little research in a neighbouring thread; I'll post it here too, I think it's useful.
Forum on trading, automated trading systems and testing trading strategies
Will a good strategy work on randomly generated data?
Aleksey Vyazmikin, 2024.09.10 19:30
I was wondering: are MQ quotes similar to real DC quotes? I had RC RF at hand (this is not an advertisement), so I built a sample on EURUSD hourly bars in the same way I described before, and at the same time converted a custom chart so it would be at hand. I trained the models, 10 of them for each setting.
This is how the Accuracy spread turned out:
And here is the Precision metric for class "1": of the EURUSD hours the model labelled as coming from MQ, the share that actually came from MQ.
In theory, if the samples are the same, training should land around 0.5 on these metrics: the model has nothing to separate, so however it assigns "1" and "0", both Accuracy and Precision should hover near 0.5.
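A minimal sketch of this kind of two-sample classifier test, assuming two CSV files of EURUSD H1 bars (the file names and bar-shape features here are placeholders, not the actual feature set from the attached script):

# Two-sample classifier test: label MQ bars "1" and DC bars "0" and see
# whether a model can tell them apart. If the two quote streams are
# statistically the same, Accuracy and Precision should sit near 0.5.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score

def make_features(df):
    # simple bar-shape features from OHLC; placeholders for the real feature set
    out = pd.DataFrame()
    out["ret"] = np.log(df["close"] / df["close"].shift(1))
    out["range"] = (df["high"] - df["low"]) / df["close"]
    out["body"] = (df["close"] - df["open"]) / df["close"]
    return out.dropna()

mq = make_features(pd.read_csv("eurusd_h1_mq.csv"))  # hypothetical file names
dc = make_features(pd.read_csv("eurusd_h1_dc.csv"))

X = pd.concat([mq, dc], ignore_index=True)
y = np.r_[np.ones(len(mq)), np.zeros(len(dc))]       # 1 = MQ, 0 = DC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
pred = model.predict(X_te)
print("Accuracy :", accuracy_score(y_te, pred))      # ~0.5 => indistinguishable
print("Precision:", precision_score(y_te, pred))     # share of "MQ" calls that are right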
So no real learning should occur, and yet here it runs briskly. Strange!
The outliers on the exam sample are interesting; small, but noteworthy. Looking at the positive deviation, these are the settings of Test_CB_Setup_48_000000005: excluded the ATR, iDelta, Volume and OHLC feature types, plus iVIDyA, iBWMFI and iChaikin.
Interesting, but unclear what the training latches onto; it is hard to justify logically :)
But one thing is clear: something is wrong here. Apparently there are differences in the charts, so I decided to write a script to compare the two quote streams.
Below are the results of these calculations as graphs; the calculations for M1, H1 and D1 are shown in sequence.
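The script's step list did not survive here, but one plausible version of such a comparison, purely as an assumption on my part, is a Kolmogorov-Smirnov test on log-returns per timeframe (file names are again placeholders):

# Compare return distributions of the two quote streams per timeframe.
# A small p-value means the distributions differ.
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

for tf in ["M1", "H1", "D1"]:                        # hypothetical file naming
    a = pd.read_csv(f"eurusd_{tf}_mq.csv")["close"]
    b = pd.read_csv(f"eurusd_{tf}_dc.csv")["close"]
    ra = np.diff(np.log(a.to_numpy()))
    rb = np.diff(np.log(b.to_numpy()))
    stat, p = ks_2samp(ra, rb)
    print(f"{tf}: KS statistic = {stat:.4f}, p-value = {p:.4f}")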
So, I suppose the question that remains is: which quotes are random and which are real?
If there are regularities, where should they be? Are the regularities different in different brokerage centres?
Hmmm... what do you think?
P.S. Maybe I made a mistake in the code; it is attached for audit and private use.

The trick there is that if you apply this feature, you'll immediately see an improvement on new data after training, unless the dataset is pure random. That is, train and test will look more similar, with less overtraining.
Here you write "less overtraining", and THERE you write "never overtrains". Where is the truth? The function seems to be the same.
Obviously, it is somewhere in between. In the first case, overtraining is meant in relation to the validation sample; in the second, in the context of the test sample. We are, after all, discussing non-stationary markets.
In the first case, overtraining really never happens, because training uses a piece of the marked, corrected sample and the train/val errors are equalised. In the second case there can be variation, depending on the length of the training sample, the presence of patterns, and the combination of matched clusters.
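A toy illustration of that distinction, on synthetic data with a regime shift (CatBoost is an assumption here, since the setups above look like CatBoost configs): selecting the best iteration against the validation slice keeps train/val errors close, while a chronologically later test slice can still degrade.

# Train/val vs. test overtraining on a non-stationary series: the val slice
# shares the training regime, the test slice does not.
import numpy as np
from catboost import CatBoostClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
y[2000:] = (X[2000:, 0] - 0.5 * X[2000:, 1] > 0).astype(int)  # regime shift in the "future"

X_tr, y_tr = X[:1500], y[:1500]          # train
X_va, y_va = X[1500:2000], y[1500:2000]  # validation, same regime as train
X_te, y_te = X[2000:], y[2000:]          # test, new regime

model = CatBoostClassifier(iterations=500, use_best_model=True, verbose=False)
model.fit(X_tr, y_tr, eval_set=(X_va, y_va))  # keeps the iteration that is best on val
for name, Xs, ys in [("train", X_tr, y_tr), ("val", X_va, y_va), ("test", X_te, y_te)]:
    print(name, round(accuracy_score(ys, model.predict(Xs)), 3))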
It is possible to collect statistics on the clusters from the test before training, but then it would be a kind of peeking.
You can estimate the level of overtraining using random_state: if, at different seed values, all the OOS equity lines are rising, the model has trained successfully.
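A sketch of that seed check on toy data (the returns, labels and the long/flat rule are all illustrative, not from the thread):

# Retrain with different random_state values and require every OOS
# equity line to end up rising.
import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 5))
r = 0.01 * (X[:, 0] + rng.normal(scale=0.5, size=2000))  # toy bar returns
y = (r > 0).astype(int)                                  # label: bar direction
X_tr, y_tr = X[:1500], y[:1500]
X_oos, r_oos = X[1500:], r[1500:]

curves = []
for seed in range(5):
    m = CatBoostClassifier(iterations=300, random_state=seed, verbose=False)
    m.fit(X_tr, y_tr)
    signal = m.predict(X_oos).astype(float)              # 1 = long, 0 = flat
    curves.append(np.cumsum(signal * r_oos))

print("all OOS lines rising:", all(c[-1] > 0 for c in curves))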
Unfortunately, nothing helps; I checked all your options, and the OOS is empty. That leaves only fundamental analysis, evaluating news sentiment with ChatGPT.
Hopefully they will add a new feature to automatically download news from the platform.