Machine learning in trading: theory, models, practice and algo-trading - page 3172

Thanks, I'll try MathRand increments.
Should there be a loss on the OOS on a random walk (SB)? I don't think there should be, by the very definition of a random walk.
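For anyone wanting to reproduce that null benchmark: below is a minimal sketch in Python of the same idea (MathRand() is the MQL5 random-number function; the function name and parameter values here are my own illustrations, not anything from the posts). Cumulative random increments give a driftless random walk that a TS can then be run on; any TS that reliably "earns" on such a series is by construction fitting noise.

```python
# Minimal sketch: a synthetic random-walk (SB) price series built from
# random increments, the same idea as using MathRand() increments in MQL5.
# All names and parameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=42)

def random_walk_prices(n_bars=10_000, start=1.0000, step=0.0001):
    """Cumulative sum of +/-step increments: a driftless random walk."""
    increments = rng.choice([-step, step], size=n_bars)
    return start + np.cumsum(increments)

prices = random_walk_prices()
print(prices[:5])
```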
I usually try to "shake" the task a bit: slightly change all the available parameters (and metaparameters) and watch how the result changes. Sometimes things become a bit clearer.
Thanks. Usually what stops me is being too lazy to dig deeper. Superficial "wiggling" I do practise, of course.
Take one overfitted coin on new data: it will behave like a random walk. Add a few more (as many as the TS has parameters, say), sum the errors, and you will get sharp losses, sometimes a random walk, and sometimes the opposite. Some of the coins were tied to a trend, which then changed; others to small fluctuations. The first group started predicting in the wrong direction all the time, and the second was predicting badly anyway, because it had been fitted to noise. The negative effects added up, and there were no compensating coins left.
That statement amounts to saying that if you sum random-walk series, you will see sharp drops. Well, a single random walk has such drops by itself; there is no need to sum anything.
I may be wrong, but that is how I see it.
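This is easy to check numerically. A minimal sketch in Python (sizes and seed are arbitrary assumptions): summing N independent random walks gives another random walk, and its drawdowns simply scale by about sqrt(N); no new structure appears.

```python
# Minimal sketch: summing several independent random walks (SB) gives
# another random walk; its swings grow ~sqrt(N), nothing new appears.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_walks = 10_000, 16

walks = rng.standard_normal((n_walks, n_steps)).cumsum(axis=1)
combined = walks.sum(axis=0)

def max_drawdown(path):
    """Largest peak-to-trough drop along the path."""
    return (np.maximum.accumulate(path) - path).max()

print("mean single-walk max drawdown:",
      np.mean([max_drawdown(w) for w in walks]))
print("combined-walk max drawdown:   ", max_drawdown(combined))
print("sqrt(N) scaling factor:       ", np.sqrt(n_walks))
```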
The OOS on the left is also a fit, only a second-order one, so to speak.
Imagine you have only 1,000 variants of a TS in total.
Your steps 1 and 2:
1) You start optimising/searching for a good TS; this is the train data (fitting/searching/optimisation).
Let's say you have found 300 variants where the TS makes money...
2) Now, out of these 300 variants, you look for a TS that will also pass the OOS, i.e. the test data. Say you have found 10 TSs that earn both on the train and on the test (OOS).
So what is step 2?
It is a continuation of the same fitting; your search (fitting/searching/optimisation) has just become a little deeper, or more complex, because now you have not one optimisation condition (pass the train) but two (pass the train + pass the test).
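This selection effect can be simulated directly. Below is a minimal sketch in Python under the assumption that all 1,000 TS variants trade pure noise; the pass counts it produces (~500, then ~250) are artefacts of this toy setup rather than the 300 and 10 from the post, but the conclusion is the same: survivors of both filters still average roughly zero on genuinely new data.

```python
# Minimal sketch of the two-stage fitting argument: 1000 random "TS
# variants" traded on pure noise. Passing train, then also passing test
# (OOS), is just a deeper fit; survivors have no edge on fresh data.
import numpy as np

rng = np.random.default_rng(1)
n_variants, n_bars = 1000, 500

# Each variant's per-bar P&L is pure noise: no variant has a real edge.
train = rng.standard_normal((n_variants, n_bars))
test  = rng.standard_normal((n_variants, n_bars))
fresh = rng.standard_normal((n_variants, n_bars))  # true forward period

profitable_train = train.sum(axis=1) > 0               # step 1: ~500 pass
pass_both = profitable_train & (test.sum(axis=1) > 0)  # step 2: ~250 pass

print("passed train:        ", profitable_train.sum())
print("passed train + test: ", pass_both.sum())
# Average forward P&L of the "double-validated" survivors is still ~0.
print("survivors' mean P&L on fresh data:",
      fresh[pass_both].sum(axis=1).mean().round(2))
```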
I don't practise this kind of self-deception. I do it differently.
I've given you an explanation. Maybe it takes time to understand.
We are probably just talking about different things, or there is a terminological conflict.
Above, for example, OOS is understood as a forward-testing section. The term is the same, but the approaches are different.
Forum on trading, automated trading systems and testing trading strategies
Machine learning in trading: theory, models, practice and algo-trading
Maxim Dmitrievsky, 2023.08.17 06:33 AM
Take one overfitted coin on new data: it will behave like a random walk. [...]
This explanation gives an example that on a random walk you can find a situation where the "right" result is obtained on the right-hand side: not necessarily a sharp loss, but any result at all. For example, a sharp profit.
But that is just some "luck" in the choice of the sample interval on the random walk.
All this is, of course, pure theory.
Forum on trading, automated trading systems and testing trading strategies
Machine learning in trading: theory, models, practice and algo-trading
fxsaber, 2023.08.17 06:27 PM
Thanks, I'll try MathRand increments. I'll have to get to the charts and take a look.
fxsaber #:
Any combination (addition, etc.) of several SBs is a SB.
Absolutely true when several SBs are added with fixed weights. Fancier combinations can produce something more complicated, mainly because of fluctuating volatility.
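A minimal sketch of that distinction in Python (the weighting rules are my own illustrative assumptions): with fixed weights the combined increments remain i.i.d., i.e. still a plain random walk, while a weight that reacts to the observed past keeps the expected increment at zero but introduces volatility structure.

```python
# Minimal sketch: a fixed-weight sum of random walks is itself a random
# walk (i.i.d. increments), while weights that depend on the observed
# past give a martingale with fluctuating volatility - no longer plain SB.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
inc = rng.standard_normal((2, n))        # increments of two SBs

fixed = 0.7 * inc[0] + 0.3 * inc[1]      # fixed weights: still i.i.d.

# Adaptive weight: lean on walk 0 after a big recent move of walk 0.
w = np.abs(np.concatenate([[0.0], inc[0][:-1]]))  # known before step t
adaptive = w * inc[0] + (1 - np.minimum(w, 1)) * inc[1]

for name, x in [("fixed", fixed), ("adaptive", adaptive)]:
    # Autocorrelation of squared increments reveals volatility structure.
    sq = x**2 - (x**2).mean()
    ac = (sq[:-1] * sq[1:]).mean() / sq.var()
    print(f"{name:8s} mean increment {x.mean():+.4f}, "
          f"sq-increment autocorr {ac:+.4f}")
```

The positive autocorrelation of squared increments in the adaptive case is the fingerprint of those "volatility fluctuations": the process is still a martingale, but not an SB.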
fxsaber #:
Any TS on a SB is a SB.
This is only partially true, and only when all trades have roughly the same volumes, stops and take-profits.
Mathematically speaking, "any TS on a SB is a martingale" (not to be confused with the martingale betting system). For example, the "poker"-shaped equity curve obtained on a SB by sitting out drawdowns, averaging down, etc. is also a martingale, but not a SB.
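To illustrate that point, here is a minimal Python sketch of a "sit out the drawdown until a fixed profit target" rule on a random walk (the target, step size and horizon are arbitrary assumptions): the win rate looks impressive and the equity climbs in small steps, yet the mean final result stays at zero, which is exactly the martingale property, while the rare unfinished trades carry deep losses.

```python
# Minimal sketch: "hold until +target or until time runs out" on a
# random walk. Mean outcome ~0 (martingale), but the distribution is
# the classic 'poker' shape: many small wins, rare deep losses.
# Parameter values are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_bars, target = 20_000, 1_000, 1.0

final = np.empty(n_paths)
wins = 0
for i in range(n_paths):
    path = rng.choice([-0.1, 0.1], size=n_bars).cumsum()
    reached = path >= target
    if reached.any():
        final[i] = target        # exit at the profit target
        wins += 1
    else:
        final[i] = path[-1]      # forced exit at whatever the drawdown is

print("win rate:          ", wins / n_paths)          # high, roughly 0.75
print("mean final equity: ", round(final.mean(), 3))  # ~0: martingale
print("worst final equity:", round(final.min(), 2))   # deep tail loss
```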
The OOS should always be on the RIGHT.
If the OOS is on the LEFT, it is impossible to guarantee that the TS is NOT overfitted and does NOT look ahead. These are the first major issues that must be settled when testing a TS BEFORE anything else.
Which of the two do you have? It makes no difference whether it is one of them or both. You need to test correctly, and basta: OOS on the right.
And it is better to forget about the tester and instead form the files for testing as follows.
Highly categorical statements, without a shadow of doubt. I once made a post specifically about OOS placement.
It's not the first time I've run into dislike of the tester. I don't understand why the number cruncher gets no love.
We have two files.
The first file is split randomly into three parts: training, testing and validation. Train on the (random) training sample, then check on the random testing and validation samples; all of these are DIFFERENT pieces of the first file. Compare the results. If they are roughly equal, check on the second file, which is in natural time order. If the results are roughly equal there too, we get the main conclusion: our TS is NOT overfitted and does NOT look ahead. Only with this conclusion in hand does it make sense to talk about anything else: accuracy, profitability and so on, all of which is SECONDARY.
I note that there are virtually no other ways to test for look-ahead and overfitting.
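As I understand the described procedure, it can be sketched roughly as follows in Python with scikit-learn (the file names, the "label" column and the model are my own placeholder assumptions, not anything prescribed above):

```python
# Sketch of the two-file check: random train/test/validation splits of
# file 1, then a final check on file 2 kept in natural time order.
# File names, column layout and the model are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df1 = pd.read_csv("file1.csv")   # randomly split into three pieces
df2 = pd.read_csv("file2.csv")   # natural time order, never split
X1, y1 = df1.drop(columns="label"), df1["label"]
X2, y2 = df2.drop(columns="label"), df2["label"]

# Random split of file 1 into train / test / validation pieces.
X_tr, X_rest, y_tr, y_rest = train_test_split(X1, y1, test_size=0.4,
                                              random_state=0)
X_te, X_va, y_te, y_va = train_test_split(X_rest, y_rest, test_size=0.5,
                                          random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
scores = {"train": model.score(X_tr, y_tr),
          "test": model.score(X_te, y_te),
          "validation": model.score(X_va, y_va),
          "file2 (natural order)": model.score(X2, y2)}
print(scores)
# Roughly equal scores on all four support the "not overfitted, not
# looking ahead" conclusion; a sharp drop on file2 is the warning sign.
```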
I don't quite see how look-ahead can arise during optimisation.
On the methodology: I don't understand the need for the train/test/exam split. Claiming, even after the most favourable statistical study, that the TS is NOT overfitted seems far too presumptuous to me.
The most I can squeeze into a conclusion is: "most likely, the TS found some pattern that was present for some time before and after the training interval. At the same time, there is no guarantee that this pattern has not already broken down."