Libraries: BestInterval - page 18

 
Igor Makanu:

then it's a task for a neural network

A neural network has nothing to do with this, even hypothetically.

 
fxsaber:

A neural network has nothing to do with this, even hypothetically.

If there is data and a known result of evaluating that data, but no formula for deriving the result can be found, that is a task for a neural network. The network will also correctly discount inputs that are irrelevant to the result: going back to the "multiplication table", instead of 2x2 = 4, teach it 2x2 plus a new input rand() = 4. The network will still learn, i.e. it will not use the third rand() input when computing the output.


The main problem with neural networks is users' belief that they can compute something from data that was missing in the training sample.
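The point about the irrelevant rand() input can be illustrated even without a full network: any least-squares learner assigns a near-zero weight to an input that carries no information about the target. A minimal sketch in Python (NumPy only; the data and the feature choice are illustrative, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: two meaningful inputs a, b and one irrelevant rand() input.
a = rng.uniform(1, 10, 1000)
b = rng.uniform(1, 10, 1000)
noise = rng.uniform(0, 1, 1000)   # the extra rand() input
y = a * b                          # target: the "multiplication table"

# Fit a linear model on [a*b, noise, 1] (useful signal plus the junk input).
X = np.column_stack([a * b, noise, np.ones_like(a)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(coef)  # weight on the rand() input comes out ~0: it is ignored
```

The same effect holds for a trained network, just with more machinery; the linear fit makes it visible in a few lines.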

 
Igor Makanu:

if there is data and a known result of evaluating that data, but no formula for deriving the result can be found

There is no result. That is precisely the difficulty of formalisation.

There is a simpler classical analogue of a problem with no result: finding the optimal Optimisation criterion for a TS (trading system).

Many different variants have been proposed here on the forum, all based on personal perceptions and intuition. So a neural network is no helper here.

The BestInterval problem suffers from the same ailment.

 
fxsaber:

With a large number of trades and discarded intervals, an interesting object of study emerges.

We need to figure out how to identify good chunks of trading from this data. A good chunk should combine a decent number of trades, serious growth per unit of time, etc. I have not managed to formalise this yet.


The task is roughly as follows: given a large number of discarded intervals, determine the probably-systematic chunks and compute a custom Optimisation criterion on them alone.

Solving this problem would reveal much more of the TS's potential.
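Since the criterion is explicitly not yet formalised, the following is only a naive starting point, sketched in Python: a hypothetical filter that keeps chunks with enough trades and enough growth per unit of time, and a custom criterion computed on the kept chunks alone. The names and thresholds (Chunk, MIN_TRADES, MIN_RATE) are illustrative assumptions, not part of BestInterval:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    trades: int      # number of trades in the interval
    profit: float    # total profit over the interval
    hours: float     # interval length in hours

# Hypothetical filter: a chunk is "probably systematic" if it has
# enough trades and enough growth per unit of time. Both thresholds
# are invented for illustration.
MIN_TRADES = 20
MIN_RATE = 0.5   # minimum profit per hour

def systematic(chunks):
    """Keep only chunks that pass both filters."""
    return [c for c in chunks
            if c.trades >= MIN_TRADES and c.profit / c.hours >= MIN_RATE]

def criterion(chunks):
    """Custom Optimisation criterion computed only on the kept chunks:
    total profit per hour of systematic trading."""
    kept = systematic(chunks)
    if not kept:
        return 0.0
    return sum(c.profit for c in kept) / sum(c.hours for c in kept)

data = [Chunk(30, 100.0, 10.0),   # passes both filters
        Chunk(5, 50.0, 1.0),      # too few trades
        Chunk(40, 2.0, 20.0)]     # growth per hour too low
print(criterion(data))  # only the first chunk contributes
```

Whether such simple thresholds capture "good chunks" at all is exactly the open question in the post above.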

I don't see the point of such a large number of cut intervals at all.

On the contrary, I would rather round them (for example, to 1- or 5-minute boundaries) and limit their number to just a few.

Everything else looks like curve fitting, which will be of no use.

 
fxsaber:

There is no result. That is precisely the difficulty of formalisation.

I did suggest preparing the result "by hand": what is good and what is bad is not formalisable at all.

 

It might make sense for some strategies to go deeper, inside the hour (cutting the same chunks out of every hour, just as you now cut the same chunks out of every day).

And it definitely makes sense to go in the opposite direction and look for different intervals for different days of the week. The number of intervals can then be increased so that each day gets 2-3 of them.

 
In my rare but regular interval studies I have coarsened intervals up to an hour and distinguished Mondays, midweek and Fridays. I never got a stable time dependence with any Expert Advisor. Everything turned out to be curve fitting, even when sampling over several years.
 
Andrey Khatimlianskii:

I don't see the point of such a large number of cut intervals at all.

The cut intervals should be compared with the symbol's volatility: if it is the volatility maxima/minima that get cut out, the cuts make sense; if there are no volatility spikes in them, then yes, it is curve fitting.
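A rough way to run that comparison, sketched in Python; the per-hour volatility numbers and the top-quartile spike threshold are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-hour volatility of a symbol (e.g., average H1 range in
# points) and the set of hours cut out by the interval optimisation.
volatility = rng.uniform(5, 10, 24)
volatility[[8, 14, 15]] = [25.0, 30.0, 28.0]   # news-hour spikes
cut_hours = {8, 14, 15}

# Hours whose volatility counts as a spike (top quartile here; the
# threshold choice is an assumption, not a rule).
threshold = np.quantile(volatility, 0.75)
spike_hours = {h for h in range(24) if volatility[h] >= threshold}

# Fraction of the cut intervals that land on volatility spikes.
overlap = len(cut_hours & spike_hours) / len(cut_hours)
print(f"cut intervals hitting volatility spikes: {overlap:.0%}")
```

A high overlap supports the cuts being meaningful; a low overlap hints that the optimiser is just fitting noise.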

 
Andrey Khatimlianskii:

I don't see the point of such a large number of cut intervals at all.

I didn't see it until now either. Cutting a lot and presenting the result as-is turned out to be the purest curve fitting. But if you treat the large number of cuts as an intermediate stage in identifying 2-3 good intervals, things look rather different.

 
Igor Makanu:

I did suggest preparing the result "by hand": what is good and what is bad is not formalisable at all

If I can't do it with my head, I can't do it with my hands.