Machine learning in trading: theory, models, practice and algo-trading - page 3670

On flat markets the puzzle is solved and the result is already close to perfect. Sometimes one wonders how it is even possible.
The claims that this can only be obtained from tick-level inefficiencies have been refuted. No, it is possible on opening prices.
H.I. Now I am more interested in a TS (trading system) built via LLMs.
What is the average per trade, in points?
Where is it written in the tester?
Then what is the average in money, and what lot per trade?
Lot 0.01: $2.50 per trade at TP 200; $5.06 at TP 500.
On flat markets it works on purely ML principles, with thresholds and all the rest; calibration can be done. Well, I have tried labelling on the breakout of the maximum/minimum of previous values, which slightly improves the result. Maybe a similar filter needs to be added so that the model stays out of the flat sections. And the first model should be trained only on selected cases. We just need a slightly different approach.
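For illustration only, here is one possible reading of that breakout markup as a sketch (the function name and the look-back parameter are mine, not from the post), assuming oldest-first arrays:

```mql5
//--- Sketch: label bar i as +1 if its close breaks above the highest high
//--- of the previous 'lookback' bars, -1 if it breaks below the lowest low,
//--- and 0 otherwise (inside the previous range, i.e. "flat").
int BreakoutLabel(const double &high[], const double &low[], const double &close[],
                  const int i, const int lookback)
  {
   if(i < lookback)
      return 0;                                  // not enough history yet
   double hh = high[i - 1];
   double ll = low[i - 1];
   for(int k = 2; k <= lookback; k++)
     {
      hh = MathMax(hh, high[i - k]);
      ll = MathMin(ll, low[i - k]);
     }
   if(close[i] > hh) return  1;                  // breakout above previous highs
   if(close[i] < ll) return -1;                  // breakout below previous lows
   return 0;
  }

void OnStart()
  {
   double high[]  = {1.10, 1.11, 1.12, 1.16};
   double low[]   = {1.08, 1.09, 1.10, 1.13};
   double close[] = {1.09, 1.10, 1.11, 1.15};
   Print("label of last bar = ", BreakoutLabel(high, low, close, 3, 3)); // 1: broke above 1.12
  }
```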
Are there standard ways to change the indexing order of vectors?
If I want the zero bar to be last instead of first.
"This will copy the data so that the oldest element in time is placed at the beginning of the matrix/vector. There are 3 variations of the method."
Machine learning (ML) has revolutionized the world of trading by enabling data-driven decision-making and improving trading efficiency. Traders and firms leverage ML to predict market movements, optimize portfolios, and execute strategies with minimal human intervention. This article explores the theoretical foundations, popular models, practical applications, and the role of ML in algorithmic trading.
Ok.
If someone solves it, or at least comes close to the right solution (that is, the topic stays alive), then I will:
post the correct solution, i.e. the algorithm for generating the dataset;
explain why a number of other "Predictor Estimation and Selection" algorithms failed;
post my own method, which robustly and sensitively solves similar problems; I will give the theory and post the code in R.
This is done for mutual enrichment of "understanding" of machine learning tasks.
And let's look at one more theoretical point.
So:
A forest: each tree votes with one leaf; each leaf has its own number of samples/rows and an average value, which is that leaf's prediction.
1) The forest prediction is the average of votes of all trees, i.e.
(sum of predictions of all activated leaves) / (number of trees).
2) There is a variant: sample-weighted voting. Leaves with a larger number of samples influence the forest result more.
I don't know how it is done in packages, but I did it as follows:
(sum over the activated leaves of each leaf's prediction * its number of samples) / (total number of samples in all activated leaves).
Thus, the influence of a leaf with prediction 0.9 and 10 examples will be less than that of a leaf with prediction 0.7 and 20 examples.
In the 1st case, averaging by the number of trees, the prediction = (0.9 + 0.7)/2 = 0.8.
In the 2nd case, weighted by the number of examples, the prediction = (0.9*10 + 0.7*20)/(10 + 20) ≈ 0.77. That is, the leaf with more examples pulled the prediction in its direction. This is probably a good thing.
But there can also be a case like this: 0.9 with 10 examples and 0.5 with 100 examples (a leaf into which random noise was collected).
In the 1st case, by the number of trees, the prediction = (0.9 + 0.5)/2 = 0.7.
In the 2nd case, weighted by the number of examples, the prediction = (0.9*10 + 0.5*100)/(10 + 100) ≈ 0.54. The prediction is close to that of the random leaf, and this is rather bad.
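For clarity, the same two aggregation variants as a small sketch (plain MQL5; the function names are mine), reproducing the second worked example:

```mql5
//--- Variant 1: plain average of the activated leaves' predictions.
double ForestMeanByTrees(const double &pred[])
  {
   double s = 0.0;
   int    n = ArraySize(pred);
   for(int i = 0; i < n; i++)
      s += pred[i];
   return (n > 0) ? s / n : 0.0;
  }

//--- Variant 2: average weighted by the number of samples in each leaf.
double ForestMeanBySamples(const double &pred[], const int &samples[])
  {
   double s = 0.0, w = 0.0;
   int    n = ArraySize(pred);
   for(int i = 0; i < n; i++)
     {
      s += pred[i] * samples[i];
      w += samples[i];
     }
   return (w > 0.0) ? s / w : 0.0;
  }

void OnStart()
  {
   double pred[]    = {0.9, 0.5};
   int    samples[] = {10, 100};
   PrintFormat("by trees:   %.3f", ForestMeanByTrees(pred));            // 0.700
   PrintFormat("by samples: %.3f", ForestMeanBySamples(pred, samples)); // 0.536
  }
```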
In my experiments, judging by the balance line, I did not see either of these variants significantly improve the results. And on the examples considered, sometimes the 1st variant is better and sometimes the 2nd.
What does everyone think? Which variant is more correct and better in your opinion? Maybe there are observations from practice?
Imho, it depends on how the trees are built (when splits are stopped). Splitting can be stopped simply on the basis of leaf size.
In any case, a more or less meaningful theoretical answer can be obtained with the help of Monte Carlo; a slightly more practical one, probably, with cross-validation.
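A rough sketch of what such a Monte Carlo check might look like (the leaf-size range, the noise model and the trial count are assumptions for illustration, not a claim about how any package builds its leaves):

```mql5
//--- Monte Carlo comparison of the two aggregation variants.
//--- Assumed model: each leaf prediction is the mean of n noisy observations
//--- of a true value mu, so its error shrinks roughly as 1/sqrt(n).
double Noise()                                    // uniform noise in [-1, 1]
  {
   return 2.0 * MathRand() / 32767.0 - 1.0;
  }

void OnStart()
  {
   MathSrand(42);
   const int    trials = 10000;
   const int    trees  = 100;
   const double sigma  = 1.0;
   double err1 = 0.0, err2 = 0.0;                 // accumulated squared errors

   for(int t = 0; t < trials; t++)
     {
      double mu = MathRand() / 32767.0;           // "true" value for this trial
      double sum1 = 0.0, sum2 = 0.0, wsum = 0.0;
      for(int k = 0; k < trees; k++)
        {
         int    n    = 5 + MathRand() % 96;       // leaf size in [5, 100]
         double leaf = mu + sigma * Noise() / MathSqrt(n);
         sum1 += leaf;                            // variant 1: average by trees
         sum2 += leaf * n;                        // variant 2: weighted by samples
         wsum += n;
        }
      double p1 = sum1 / trees;
      double p2 = sum2 / wsum;
      err1 += (p1 - mu) * (p1 - mu);
      err2 += (p2 - mu) * (p2 - mu);
     }
   PrintFormat("MSE, averaged by trees:   %.6g", err1 / trials);
   PrintFormat("MSE, weighted by samples: %.6g", err2 / trials);
  }
```

Under this particular noise model, weighting by the number of samples is effectively inverse-variance weighting, so variant 2 should come out ahead; on real leaves, where a large leaf can also mean a heterogeneous mix of cases, that advantage is not guaranteed, which matches the mixed results reported above.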