Floating market parameters - page 7

 
Valeriy Yastremskiy:

100 bars makes no sense. 120-132 makes more sense to me: 10 years, 2 years, a quarter, 3 weeks, the working hours of a week)

There is something about zooming out))) Haven't found the truth yet. The homespun truth is not to go against the older TF like that, but maybe there is something)

I took 100 off the top of my head; there are still many things to decide: the number of bars, the model parameters, the parameter to optimize, the number of values for calculating the optimization parameter, etc.

 
Maxim Romanov:
It will not work with bars (candlesticks); time sampling adds a random component, we should use other methods of sampling.

I have the following idea for determining the optimal initial number of bars. First, we decide on the type of pattern for which we are preparing the initial data. We start the calculation with the selected pattern and the minimum amount of input data, and determine the model's error relative to the actual data. Then we increase the amount of source data by one unit, repeat the error calculation, and save the result. The amount of initial data that gives the minimum error is taken as the optimal amount at the moment. We repeat this search every time a new bar appears. I don't know another way. What do you think? I intend to open a dedicated thread soon to discuss this problem.
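A minimal sketch of that search in Python (all names are my own, and a straight-line fit stands in for "the selected pattern"; nothing here comes from the thread itself):

```python
import numpy as np

def best_window(prices, min_len=10, max_len=200):
    """Return the window length whose model error is smallest,
    scanning from min_len up by one unit each step, as the post describes."""
    best_n, best_err = min_len, float("inf")
    for n in range(min_len, min(max_len, len(prices)) + 1):
        window = prices[-n:]                   # last n bars of source data
        x = np.arange(n)
        coeffs = np.polyfit(x, window, deg=1)  # toy "pattern": linear fit
        fitted = np.polyval(coeffs, x)
        err = np.mean((window - fitted) ** 2)  # model error vs. actual data
        if err < best_err:
            best_n, best_err = n, err
    return best_n, best_err

# Re-run on every new bar, as the post suggests; here on a synthetic series:
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=500))       # random-walk stand-in for price
n, err = best_window(prices)
```

In practice the error would be computed for the actual pattern model, not a line, and the scan repeated on each new bar.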

 
Maxim Romanov:
Bars (candlesticks) will not work, time sampling introduces a random component, we should use other sampling methods.

I was looking at this in terms of DSP and Kotelnikov's theorem; it turns out that you need to take the ticks, smooth them, and only then split them into timeframes. Otherwise we get aliasing, the appearance of nonexistent frequencies. On the other hand, smoothing will add lag.

In general, we should try preprocessing the price: filtering, renko and equivolume charts. We should also try to break currency pairs down into individual currencies.
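The aliasing risk can be shown with a toy sketch (my own construction, with parameters I chose for illustration): decimating a fast oscillation without smoothing it first creates a phantom low frequency, while block-averaging before decimation strongly attenuates it:

```python
import numpy as np

fs = 1000                                # "tick" rate, Hz
t = np.arange(0, 1, 1 / fs)
ticks = np.sin(2 * np.pi * 480 * t)      # fast 480 Hz component

step = 10                                # decimate to 100 Hz (Nyquist = 50 Hz)
naive = ticks[::step]                    # no smoothing: 480 Hz aliases to 20 Hz
smoothed = ticks.reshape(-1, step).mean(axis=1)  # average each block first

# The naive series still swings at near-full amplitude (a phantom 20 Hz wave
# that does not exist in the source); the smoothed one is strongly attenuated.
```

The block average is a crude low-pass filter; it trades the phantom frequency for lag, which is exactly the trade-off mentioned above.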

 
Yousufkhodja Sultonov:

I have the following idea for determining the optimal initial number of bars. First, we decide on the type of pattern for which we are preparing the initial data. We start the calculation with the selected pattern and the minimum amount of input data, and determine the model's error relative to the actual data. Then we increase the amount of source data by one unit, repeat the error calculation, and save the result. The amount of initial data that gives the minimum error is taken as the optimal amount at the moment. We repeat this search every time a new bar appears. I don't know another way. What do you think? I intend to open a dedicated thread soon to discuss this problem.

Approach it from another perspective: initially gather enough bars for a reliable result, see which mathematical model fits best with the fewest parameters/polynomials, and only then reduce the number of bars.

 
Rorschach:

I was looking at this in terms of DSP and Kotelnikov's theorem; it turns out that you need to take the ticks, smooth them, and only then split them into timeframes. Otherwise we get aliasing, the appearance of nonexistent frequencies. On the other hand, smoothing will add lag.

In general, we should try preprocessing the price: filtering, renko and equivolume charts. We should also try to break currency pairs down into individual currencies.

Smoothing ticks over some scale step is an interesting but costly task. There is also an option, and it seems logical, to look for cycles driven by real external factors that are tied to the time of their influence, i.e. a working schedule.

There are too many parameters to optimize, and whether an optimal solution exists depends on the choice. If the parameters are chosen incorrectly, there may be no solution.

 
Valeriy Yastremskiy:

Smoothing ticks over some scale step is an interesting but costly task. There is also an option, and it seems logical, to look for cycles driven by real external factors that are tied to the time of their influence, i.e. a working schedule.

There are too many parameters to optimize, and whether an optimal solution exists depends on the choice. If the parameters are chosen incorrectly, there may be no solution.

There are no cycles; this has been tested.

I ran the model through the optimizer and did not like that the parameters change drastically on every bar; I would like more stability.

 
Rorschach:

There are no cycles; this has been tested.

I ran the model through the optimizer and did not like that the parameters change drastically on every bar; I would like more stability.

Not in their pure form. There are similar repetitions of price movements). In processes resembling a random walk there is no parameter stability by definition; if there is stability, it is a different process).

 
Valeriy Yastremskiy:

Not in their pure form. There are similar repetitions of price movements). In processes resembling a random walk there is no parameter stability by definition; if there is stability, it is a different process).

Then that is what we need: recognition, clustering.

 
Rorschach:

I was looking at this in terms of DSP and Kotelnikov's theorem; it turns out that you need to take the ticks, smooth them, and only then split them into timeframes. Otherwise we get aliasing, the appearance of nonexistent frequencies. On the other hand, smoothing will add lag.

In general, we should try preprocessing the price: filtering, renko and equivolume charts. We should also try to break currency pairs down into individual currencies.

We need a sampling method that does not introduce a random component. Time has no meaning for the market: an hourly candle may contain any number of trades and an arbitrary trading turnover. The price is driven by money, trades, and the reallocation of funds. To understand why candlesticks are not suitable and what is needed, we can build a simple model: take a sine, sample it at a random sampling rate, and get a random-looking plot at the output. That is, the process is known and simple, and we broke it. Is it now possible to reconstruct the original signal from a large enough sample? Probably somehow, but I don't know how.

Ticks are better, but not perfect either. The main price driver is the executed trade and its volume; from a tick alone we do not know the volume or how many trades have passed.
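The sine model from the post can be sketched directly (the parameter choices are mine, for illustration only): sampling a clean 1 Hz sine at random exponential intervals and then laying the samples out on an even "bar index" axis destroys its periodicity, while regular sampling keeps consecutive samples strongly correlated:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
gaps = rng.exponential(scale=0.7, size=n)    # random time between "bars"
times = np.cumsum(gaps)
signal = np.sin(2 * np.pi * 1.0 * times)     # the simple, known process

# Laid out against the bar index (0, 1, 2, ...) instead of true time, the
# series looks random: consecutive samples are nearly uncorrelated.
lag1 = np.corrcoef(signal[:-1], signal[1:])[0, 1]

# The same sine sampled at a fixed interval stays highly autocorrelated.
regular = np.sin(2 * np.pi * np.arange(n) * 0.1)
lag1_reg = np.corrcoef(regular[:-1], regular[1:])[0, 1]
```

The known, simple process really is "broken" by the irregular sampling: nothing in the index-ordered series reveals the underlying sine without the true timestamps.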

 
Rorschach:

Then that is what we need: recognition, clustering.

Yes.

Maxim Romanov:

We need a sampling method that does not introduce a random component. Time has no meaning for the market: an hourly candle may contain any number of trades and an arbitrary trading turnover. The price is driven by money, trades, and the reallocation of funds. To understand why candlesticks are not suitable and what is needed, we can build a simple model: take a sine, sample it at a random sampling rate, and get a random-looking plot at the output. That is, the process is known and simple, and we broke it. Is it now possible to reconstruct the original signal from a large enough sample? Probably somehow, but I don't know how.

Ticks are better, but not perfect either. The main price driver is the executed trade and its volume; from a tick alone we do not know the volume or how many trades have passed.

Kantorovich's lectures on econometrics give an overview of the subject in the first lecture. The conclusion is that, when estimating economic parameters over time, it is impossible to predict when a mathematical model that was accurate enough over the historical period will break down.
