Read about OLAP again. Maybe I'm not grasping it well, but there is no search for an optimum there.
What is the point of merely plotting such cross-sections of the data?
Libraries: BestInterval
fxsaber, 2018.10.12 16:24
So, if I understood correctly, the generalisation unfortunately has nothing to do with the library.
But a generalisation is possible, of course.
I've thought and thought, but I can't come up with anything that could be filtered as effectively as time.
This "something" should not be part of the strategy, and it should directly influence market behaviour.
The order book? Other data feeds?
And another question: is there any sense in building hypercubes? It seems to me each dimension should filter well individually too.
Parameters, symbols, days of the week (they seem to be "time" as well, but I haven't seen them used in BestInterval), holding duration (also "time", but relative rather than absolute), lot sizes, the number of trades with a given outcome in recent history, and so on.
A hypercube allows the data to be evaluated in different combinations - it gives multi-parametric analysis. By the same logic one could ask why we optimise an Expert Advisor over all parameters at once rather than one at a time - that would be cheaper, and some people are in fact forced to do it when the number of parameters is too high.
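Roughly what I have in mind, on the simplest two dimensions (a sketch with arbitrary names and a stubbed data source - this is not BestInterval code or my OLAP code):

// A 2-D "hypercube" of profit over (weekday, hour); extra dimensions just
// add indices. A one-dimensional slice is a sum over all other dimensions.
void OnStart()
{
   double cube[7][24];                 // [weekday][hour] -> accumulated profit
   ZeroMemory(cube);

   // ... for each historical trade: cube[weekday][hour] += trade profit ...

   // Slice by hour: collapse the weekday dimension.
   for(int h = 0; h < 24; h++)
   {
      double sum = 0.0;
      for(int d = 0; d < 7; d++)
         sum += cube[d][h];
      PrintFormat("hour %02d: %.2f", h, sum);
   }
}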
Well, in this sense you can treat the hypercube as intermediate meta-data in which it is easy to find an optimum in the BestInterval sense - no? It's enough to "bind" every hypercube entry to some value to be maximised (such as profit), and then filtering the hypercube (the way BestInterval filters time) yields the best values for the other parameters, not just time.
Imagine the BestInterval code staying exactly the same, but with some other numbers fed into it instead of time - the program doesn't care - it will still find the best "interval".
I was talking about the possibility of replacing time. I don't understand why the table should be called a hypercube.
BestInterval itself only maximises profit over the filter. The point is not the filter but the search for profit by successive rejection.
As for the filter in the form of opening time, it is very convenient for TSes based on pending orders. It can be used in the Tester without any problems.
If we consider other filters, market orders are required there, because the filter value usually cannot be predicted for a future moment.
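In code terms I mean roughly the following (a simplified sketch of the idea, not the actual library code - the names are mine):

// Sketch of "profit maximisation by successive rejection": trades are
// labelled with a discrete bucket (an hour here, but it could be any other
// label), and the worst bucket is rejected while rejecting it helps.
void OnStart()
{
   double profit[24];                 // total profit of the trades in each bucket
   bool   enabled[24];
   ZeroMemory(profit);
   for(int i = 0; i < 24; i++)
      enabled[i] = true;

   // ... fill profit[] from the trading history ...

   while(true)
   {
      int worst = -1;
      for(int i = 0; i < 24; i++)
         if(enabled[i] && (worst < 0 || profit[i] < profit[worst]))
            worst = i;
      if(worst < 0 || profit[worst] >= 0.0)
         break;                       // the remaining buckets all help the total
      enabled[worst] = false;         // reject the worst bucket
   }

   double total = 0.0;
   for(int i = 0; i < 24; i++)
      if(enabled[i])
         total += profit[i];
   Print("profit after filtering: ", DoubleToString(total, 2));
}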
When there are many "spatial" dimensions we get a hypercube; you could just as well call it a multidimensional array. A table is its particular, simplest case. And of course any algorithm of "finding profit by successive rejection" can be bolted onto it.
I don't understand the point about predicting the filter in advance. There is actual data, we processed it and received recommendations - where do predictions come into it?
In the context of the trading history, the hypercube is a table. Its section is another table that we make by running through the columns of the initial one (a sketch follows this post).
The only thing left is to bolt it on: building the table is maybe ~5% of the library, the rest is the "bolting on" and the applying.
Prediction is precisely about the "applying". If, for example, it turns out that a position must not be opened when RSI > 90, such a condition cannot be traded through pending orders, because it is unpredictable.
And when it comes to applying the filter, one cannot do without a virtual trading environment. So the "bolting on" is a somewhat different task.
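The "section" above, in code form (a sketch; the row structure is arbitrary):

// A section of the trade table: run through the rows of the initial table
// and aggregate one column (profit) by another (the hour label).
struct TradeRow
{
   int    hour;    // value of the dimension we section by
   double profit;
};

void Section(const TradeRow &rows[], double &by_hour[])
{
   ArrayResize(by_hour, 24);
   ArrayInitialize(by_hour, 0.0);
   for(int i = 0; i < ArraySize(rows); i++)
      by_hour[rows[i].hour] += rows[i].profit;
}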
There is some terminological confusion throughout. I don't think it makes sense to cling to words - we seem to mean the same thing but "studied at different special schools". For me a table is always flat ;-) - it is formed by intersecting the hypercube with some hyperplane; you could call the whole thing a multidimensional table, but since a shorter term has been introduced, I prefer it. About "prediction" it is the same: after the explanation the original meaning became clear, but the chosen wording still seems off to me, purely imho.
I built OLAP for multi-factor analysis (the graphs above, broken down by periods and other parameters, are imho interesting in their own right), and I did not tackle the problem of automatic optimisation directly on top of the hypercube. Ideally, the cube could be shoved into the standard optimiser and used in the math calculations mode.
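Something like this, if anyone wants to try (a sketch for the tester's "Math calculations" mode; CubeProfit() is a hypothetical stand-in for reading the pre-built cube, e.g. from a file):

input int WeekDay = 0;   // 0..6, enumerated by the optimiser
input int Hour    = 0;   // 0..23

// Hypothetical: look up the pre-built cube cell for this parameter combination.
double CubeProfit(const int d, const int h)
{
   // ... load the cube prepared in advance and return cube[d][h] ...
   return 0.0; // stub
}

// In the "Math calculations" mode the tester calls only OnTester();
// its return value is the criterion the optimiser maximises.
double OnTester()
{
   return CubeProfit(WeekDay, Hour);
}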
In general, we have strayed a little: the idea (as it turns out, not only mine but yours too) was that the best "intervals" can be searched for not along one dimension (intraday time segments) but along a bunch of different ones.
As for terminology, we understand multidimensional spaces and planes identically, 100%. Likewise that a table is a flat NxM matrix.
Any slice of your hypercube is obtained in the form of a flat table - that is what this is about.
Shoving the cube into the standard optimiser in the math calculations mode is closer to machine learning and has nothing to do with BestInterval. In BestInterval the "matlab mode" is launched for every pass of the optimiser.
The value lies precisely in its running on every optimisation pass. That way the parameters responsible for the time range (or any other filter) of trading can be thrown out of the original TS completely, before optimisation.
Because of this, the number of passes drops by orders of magnitude in a full search, and genetics wanders less.
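For a sense of scale (the numbers are just my illustration): if the TS itself had StartHour and EndHour inputs, a full search would multiply every remaining parameter combination by up to 24 * 24 = 576 time variants; when the best interval is computed inside each pass instead, that factor disappears completely.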