Machine learning in trading: theory, models, practice and algo-trading - page 1220
https://developers.google.com/machine-learning/crash-course/
This is Google's Python course, and the lab environment (Colab) already has TensorFlow built in, plus a free Tesla K80 GPU and even TPUs (tensor processing units) designed specifically for TensorFlow.
The funny thing is that you don't need to install anything and can experiment to your heart's content even from a weak laptop.
Alyosha, I'll tell you a terrible secret, don't tell anyone)))
You were just given a couple of methods to aggregate precision and recall.
F-measure: the harmonic mean of precision and recall.
R-precision: the point at which precision and recall balance out. There are others.
I know what the F-measure is, but it's semi-automatic: you have to set the β coefficient yourself, which encodes your preference between the two, not an overall grade. F1 is meaningless to me because of its symmetry: precision/recall of 30/50 and 50/30 score the same (R*P is commutative), and to me those are not equivalent.
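The symmetry complaint above is easy to show numerically: F1 scores precision/recall of 0.3/0.5 and 0.5/0.3 identically, while an F-beta with β ≠ 1 breaks the tie. A minimal sketch (the numbers are illustrative, not from the thread):

```python
# F-beta: weighted harmonic mean of precision (p) and recall (r).
# beta > 1 weights recall more heavily, beta < 1 weights precision.
def f_beta(p, r, beta=1.0):
    b2 = beta * beta
    return (1 + b2) * p * r / (b2 * p + r)

# F1 cannot tell these two situations apart:
print(f_beta(0.3, 0.5))           # 0.375
print(f_beta(0.5, 0.3))           # 0.375

# F2 (recall-weighted) breaks the symmetry:
print(f_beta(0.3, 0.5, beta=2))   # ~0.441
print(f_beta(0.5, 0.3, beta=2))   # ~0.326
```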
It's not only Alexey who said this; in my opinion it's an obvious way to check strategies, not only ML-based ones but even simple indicator systems: if the system learns just as "well" on a random walk as on the real price series, then of course it's overfit. For example, for minute forex bars, prediction accuracy for the next 5-30 minutes should not exceed 55-57%; if it's over 60%, everything is clearly worth double-checking, unless of course you run ultra-HFT and data-mine all the data in the world that can be obtained for money, as well as with violence and blackmail.
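The random-walk check described above can be sketched as follows. Assumptions of mine, not the poster's code: NumPy is available, lagged log-returns serve as features, and an ordinary least-squares linear model stands in for whatever model you actually use. On a pure random walk, out-of-sample direction accuracy should hover near 50%; a model that scores much higher here is fitting noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def direction_accuracy(prices, n_lags=5):
    """Fit a linear model on lagged log-returns (first half of the data),
    then score next-bar direction accuracy on the second half."""
    r = np.diff(np.log(prices))
    # Each row: n_lags consecutive returns; target: sign of the next one.
    X = np.column_stack([r[i:len(r) - n_lags + i] for i in range(n_lags)])
    X = np.column_stack([X, np.ones(len(X))])  # intercept column
    y = r[n_lags:]
    split = len(X) // 2
    coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
    pred_up = X[split:] @ coef > 0
    return float(np.mean(pred_up == (y[split:] > 0)))

# Synthetic random walk: any apparent "edge" here is a fit to noise.
walk = np.exp(np.cumsum(rng.normal(0.0, 0.01, 5000)))
print(direction_accuracy(walk))  # expect ~0.5: there is nothing to learn
```

Run the same pipeline on your real price series; if the accuracy there barely differs from the random-walk run, the thread's conclusion applies.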
I got higher and it didn't work out. It seemed to me that the spread, swap and commission would eat up those 5%. So I gave up.
Or is it actually possible to earn from it?
They keep talking nonsense. Accuracy around 50% is random, and there is no useful signal there; that's fitting to noise, or underfitting. Accuracy should be higher by at least 0.2-0.3.
Even if you look at the distribution of predicted probabilities (which we did with the other Alexey, although he didn't fully understand it), at such accuracy everything will cluster around 0.5 (in the classification case), i.e. the predicted probability will always be near 0.5, and useful signals, i.e. examples classified with near-100% confidence, will be completely absent from the set.
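The "everything clusters around 0.5" symptom can be checked directly by counting how many predictions ever leave the middle of the probability range. A minimal sketch with made-up model outputs (the `margin` cutoff and the arrays are my illustrative assumptions):

```python
import numpy as np

def confident_fraction(probs, margin=0.4):
    """Fraction of predictions whose probability lies further than
    `margin` from 0.5, i.e. genuinely confident calls."""
    probs = np.asarray(probs, dtype=float)
    return float(np.mean(np.abs(probs - 0.5) > margin))

# A no-signal model emits probabilities hugging 0.5:
noise_model = np.random.default_rng(1).normal(0.5, 0.03, 10_000).clip(0, 1)
print(confident_fraction(noise_model))  # 0.0 -- no confident signals at all

# A model with real signal shows a visible confident tail:
print(confident_fraction([0.05, 0.5, 0.95, 0.98, 0.5]))  # 0.6
```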
It's all upside down )) I'm telling you, they're really misleading people here, and old Sanych noted it correctly:
For example, for minute forex bars, prediction accuracy for the next 5-30 minutes should not exceed 55-57%; if it's over 60%, everything is clearly worth double-checking, unless of course you run ultra-HFT and data-mine all the data in the world that can be obtained for money, as well as with violence and blackmail.
40% is already very good. I tested it once: at around 30% the profit was more or less there. The profit margin was somewhere around 25%.
What 100% accuracy?))) Whoever told you that must have been banging peas against the wall for a long time. 53-55% is all there is; the stories about 70-90% are bullshit, how many times has it been repeated...
Why bother building a model with 50% accuracy? )) It's easier to trade randomly; the result will be the same.
And that's exactly what you're all doing.
3-5% above random is nothing, within the margin of error; the models are built on a different principle. Not 50 but 55%; take the Numerai data for comparison and see what accuracy and logloss they get.
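Accuracy and logloss, both mentioned above, measure different things: accuracy only counts hard up/down calls, while logloss also punishes confident mistakes. A minimal sketch on made-up predictions (the arrays are illustrative assumptions, not Numerai data):

```python
import math

def accuracy(y_true, p):
    """Fraction of correct hard calls at the 0.5 threshold."""
    return sum((pi > 0.5) == yi for yi, pi in zip(y_true, p)) / len(y_true)

def log_loss(y_true, p, eps=1e-15):
    """Mean negative log-likelihood; large when confident and wrong."""
    total = 0.0
    for yi, pi in zip(y_true, p):
        pi = min(max(pi, eps), 1 - eps)  # avoid log(0)
        total -= yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
    return total / len(y_true)

y = [1, 0, 1, 1]
cautious = [0.60, 0.40, 0.60, 0.40]   # mild probabilities, one miss
reckless = [0.99, 0.01, 0.99, 0.01]   # same calls, extreme confidence
print(accuracy(y, cautious), accuracy(y, reckless))   # 0.75 0.75
print(log_loss(y, cautious) < log_loss(y, reckless))  # True
```

Same accuracy, very different logloss: the overconfident miss dominates the second score, which is why comparisons like Numerai's report both numbers.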
What do you think, is everyone fooling themselves, or are they all just losing their deposits? It's the market...
Okay, from 55% up it might still work, if several control samples show the same result; otherwise it's random. But it's never the same across several samples.
Let's not argue theoretically; try the Numerai data if you don't believe me. Their forecasts are about as adequate as it gets in practice, but their model is longer-term. Intraday is predicted better, but no more than 60% (and that with a lot of purchased data).
We are our own Numerai )) OK, OK, maybe on trillions of data points it's relevant, but I don't train on such large datasets.
There is another approach: a probability threshold is set, and when the model starts to degrade over time there are fewer and fewer trades; eventually everything hangs around 0.5 and there are no signals. But for that the model must be trained with good accuracy from the very start.
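The threshold approach described above can be sketched as a filter over model probabilities: trade only when the prediction is far enough from 0.5, so a degrading model that drifts toward 0.5 stops trading on its own. The threshold value and the probability arrays below are my illustrative assumptions, not the poster's settings:

```python
def signals(probs, threshold=0.65):
    """Map each predicted probability to +1 (long), -1 (short),
    or 0 (no trade) using a symmetric confidence threshold."""
    out = []
    for p in probs:
        if p >= threshold:
            out.append(1)
        elif p <= 1 - threshold:
            out.append(-1)
        else:
            out.append(0)   # too close to 0.5: stay out of the market
    return out

# Fresh model: confident probabilities -> plenty of trades.
print(signals([0.8, 0.2, 0.7, 0.9]))      # [1, -1, 1, 1]
# Degraded model: everything hugs 0.5 -> no trades at all.
print(signals([0.52, 0.48, 0.55, 0.50]))  # [0, 0, 0, 0]
```

The shrinking trade count is then itself the degradation alarm, which is the point of the approach.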
Do you use the pp levels in any way in your predictors?