Discussion of article "Probability theory and mathematical statistics with examples (part I): Fundamentals and elementary theory" - page 5
I would love to read an article describing your approach.
The problem with constant and piecewise-constant models is that we cannot avoid using them. In fact, any Expert Advisor that is optimised or over-optimised is an application of this very approach. Only the freedom and creative flight of manual trading lets us escape these models.
It is simple: the re-optimisation step should be much shorter than the average trade duration.
It would be logical to make such frequent re-optimisation an internal part of the Expert Advisor. This leads to a new EA with new parameters governing the algorithm that re-optimises the old parameters, and those new parameters will in turn have to be occasionally optimised and re-optimised.
Interesting article. You may find it interesting in your future articles to cover mutual information and probability: "The pointwise mutual information can be understood as a scaled conditional probability. The pointwise mutual information represents a quantified measure for how much more or less likely we are to see the two events co-occur, given their individual probabilities, and relative to the case where the two are completely independent." https://eranraviv.com/understanding-pointwise-mutual-information-in-statistics/
And permutation entropy: https://www.aptech.com/blog/permutation-entropy/
https://github.com/danhammer/info-theory/wiki/permutation-entropy
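As a concrete illustration of the quoted definition, here is a minimal sketch of pointwise mutual information for a pair of events (the probabilities are made-up numbers, not market data):

```python
import math

def pmi(p_xy, p_x, p_y):
    """Pointwise mutual information in bits: log2(p(x,y) / (p(x)*p(y)))."""
    return math.log2(p_xy / (p_x * p_y))

# If the two events were independent, p(x,y) would equal p(x)*p(y),
# so the PMI is (up to float rounding) zero.
print(pmi(0.06, 0.2, 0.3))
# If they co-occur twice as often as independence predicts, PMI is 1 bit.
print(pmi(0.12, 0.2, 0.3))
```

Positive PMI means the events attract each other, negative PMI means they repel, zero means independence.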
Let's put it in numbers:
The average trade duration is 1 hour, so re-optimisation must complete in under half an hour, and the smallest available input data is the M1 timeframe.
So at each re-optimisation we have fewer than 30 new M1 bars, and we do not yet know the result of the first open trade, yet we must make a decision (a forecast): which trade should we open next, or should we close the current one?
In my opinion, once all the tinsel is stripped away, this approach coincides with trading on an indicator with a calculation period under 30 on the M1 timeframe. There is no real optimisation here, because the result of the previous trade is unknown (in control-theory terms, there is no feedback).
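The arithmetic above can be written out explicitly (all numbers are taken from the example; this is a toy timing check, not an Expert Advisor):

```python
# Timing constraint from the example: re-optimisation must run well
# within the average trade duration.
avg_trade_minutes = 60                    # average trade lasts 1 hour
reopt_minutes = avg_trade_minutes // 2    # re-optimise at half that interval
bar_minutes = 1                           # smallest timeframe available: M1

# Fresh data accumulated between two re-optimisations.
new_bars_per_reopt = reopt_minutes // bar_minutes
print(new_bars_per_reopt)  # prints 30

# The open trade (1 h) outlives the re-optimisation interval (30 min),
# so its outcome is still unknown when the next decision is due:
# the optimiser has no feedback from the previous trade.
assert avg_trade_minutes > reopt_minutes
```

This makes the "no feedback" point explicit: each decision is made on at most 30 new M1 bars, before the previous trade has resolved.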
Well, replace "over-optimisation" with "model recalculation".
The essence will not change: we will always be dealing with an Expert Advisor with a certain set of parameters that we occasionally change. If we try to make these parameters somehow time-dependent, we simply end up with a new system with new parameters that determine how the old parameters depend on time.
Thank you. I will write a little about information theory for discrete distributions; it is an important manifestation of probabilistic dependence.
But an exposition of permutation entropy would be too complicated for this series of articles.
Not on time, but on changes in the properties of the market we want to exploit.
To a first approximation, the reaction rate should match the rate of the events the system is reacting to.
If the reaction rate is higher, good. If it is lower, that is bad: we do not have time to react.
In other words, "occasionally" varies and depends on the specific system.
Yes, this is called "reducing decision latency".
The decision to close the current trade and the decision to keep it open carry equal weight.
Decisions should be made often enough that the trading system does not suffer from latency.
For example, my systems are recalculated on every tick, because a price change of a couple of points is critical for them.
OK, I think there is something in this; I need to think about it.