Machine learning in trading: theory, models, practice and algo-trading - page 2042

 
Aleksey Vyazmikin:


By the way, have you seen a generator that randomly outputs numbers from an array without repeats? That is exactly what I need.

In the generator you need a condition that checks whether the next random number has already occurred; incidentally, the quality of the generator will be visible at once.
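A draw-without-repeats generator does not actually need a "was it already drawn?" rejection loop: shuffling the array once up front (Fisher-Yates) gives every element exactly once in random order. A minimal Python sketch (function and variable names are mine, not from the thread):

```python
import random

def no_repeat_generator(values, seed=None):
    """Yield each element of `values` exactly once, in random order.

    Shuffling up front (Fisher-Yates) avoids re-drawing until an
    unused index is found.
    """
    pool = list(values)   # copy, so the caller's array is untouched
    rng = random.Random(seed)
    rng.shuffle(pool)     # O(n) Fisher-Yates shuffle
    yield from pool

# Usage: draw all 10 numbers, no repeats
drawn = list(no_repeat_generator(range(10), seed=42))
```

Exhaustion is the built-in repeat check: once the pool is consumed, the generator simply stops.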

 
Aleksey Vyazmikin:



With more trees it turned out quite good - accuracy on the test sample is above 60%.

It turns out that entry timing, stops and exit are all intertwined, which is logical - if the deal has already been open for a long time, the stops are not knocked out, probably because they are large...

Shall I attach the models?

Yes, attach them.

I expected that there would be a dependence on the day of the week of entry, the hour of entry, SL and TP.

We should run the system by exit time in the tester and see what happens. Although exit time and duration are indirect parameters that become known only when the SL or TP is triggered. We will have to brute-force.

 

Again.

To predict a time series, according to Kolmogorov, 2 things are necessary:

1. expectation = const

2. the ACF is not 0.

"White noise", "coin", etc. junk are not predictable in principle, because their ACF=0.

We are very lucky that ACF of market time series increments is not equal to 0. Thus, the increments can be predicted.

But we have no luck with the 1st condition, "expectation = const". The mean value of the increments in any sample on standard TFs "floats" very strongly relative to 0.

Conclusion: the time series requires preprocessing (thinning) aimed at minimizing the variance of the expectation relative to 0. Then the chance of making a profitable Expert Advisor based on a forecast of the next increment (not the price!) increases manifold.

That's it.
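The two conditions above can be checked empirically on any price series: compute the increments, then their mean and lag-1 autocorrelation. A minimal Python sketch (the sample series in the usage line is a synthetic illustration, not market data):

```python
import statistics

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence."""
    m = statistics.fmean(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

def check_predictability(prices):
    """Check the two conditions on the increments of a price series:
    1) the mean of the increments stays near a constant (here: near 0),
    2) the lag-1 autocorrelation of the increments is not 0.
    """
    incr = [b - a for a, b in zip(prices, prices[1:])]
    return statistics.fmean(incr), lag1_autocorr(incr)

# Usage: an oscillating series has mean increment 0 and strongly
# negative (hence nonzero) autocorrelation of increments
mean_incr, acf1 = check_predictability([0, 1, 0, 1, 0])
```

For white noise the second value would hover around 0; a value clearly away from 0 is what makes the increments forecastable at all.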

Forecasting Time Series Using Exponential Smoothing
  • www.mql5.com
Currently a large number of different forecasting methods are known that rely only on the analysis of past values of a time sequence, i.e. methods using the principles accepted in technical analysis. The main tool of these methods is an extrapolation scheme, where the properties of the sequence identified on...
 
Alexander_K:

Again.

To predict a time series, according to Kolmogorov, 2 things are necessary:

1. expectation = const

2. the ACF is not 0.

"White noise", "coin", etc. junk are not predictable in principle, because their ACF=0.

We are very lucky that ACF of market time series increments is not equal to 0. Thus, the increments can be predicted.

But we have no luck with the 1st condition, "expectation = const". The mean value of the increments in any sample on standard TFs "floats" very strongly relative to 0.

Conclusion: the time series requires preprocessing (thinning) aimed at minimizing the variance of the expectation relative to 0. Then the chance of making a profitable Expert Advisor based on a forecast of the next increment (not the price!) increases manifold.

That's all.

Apparently these reflections are better discussed in another thread. ML and NNs are a bit far from the topic of strategy logic; the practice today is to try to train on packaged history and evaluate the result on real data.

I do not agree with the condition of a static expectation. If a mathematical model describes the series with less error, then the series is stable, and the mathematical expectation is not always static or larger than necessary. It may be smaller, yet the mathematical model may still describe it with minimal error.

 
Valeriy Yastremskiy:

Apparently these reflections are better discussed in another thread. ML and NNs are a bit far from the topic of strategy logic; the practice today is to try to train on packaged history and evaluate the result on real data.

I do not agree with the condition of a static expectation. If a mathematical model describes the series with less error, then the series is stable, and the mathematical expectation is not always static or larger than necessary. It may be smaller, yet the mathematical model may still describe it with minimal error.

I rarely appear here and do not want to discuss anything.

I tell it like it is: no super-engineered neural network can handle the standard data set of standard TFs (M1, H1, ...). It is an axiom.

Only preprocessing of the time-series increments may give the Path to the Grail. Amen.

 
Alexander_K:

I rarely come here, and I don't want to discuss anything.

I tell it like it is: no super-engineered neural network can cope with the standard data set of standard TFs (M1, H1, ...). It is an axiom.

Only preprocessing of the time-series increments may give the Path to the Grail. Amen.

It can cope, but the accuracy will be 60-70%. On H, H4 and D timeframes that is enough.
 
Alexander Alekseyevich:
It can cope, but the accuracy will be 60-70%. On H, H4 and D timeframes that is enough.

Hmmm... I will take a look. I have not worked with timeframes higher than M15 yet...

 
Rorschach:

Yes, attach it.

I expected that there would be a dependence on the day of the week of entry, the hour of entry, SL and TP.

We need to run the system by exit time in the tester and see what happens. Although exit time and duration are indirect parameters that become known only when the SL or TP is triggered. We will have to brute-force.

So the experiment essentially confirmed the rule "Cut losses and let profits flow".

The model is attached.

Files:
result_4.zip  64 kb
 
Alexander_K:

Hmm... I'll have a look. I haven't worked with TF higher than M15 yet...

What is the sense of forecasting on small timeframes? I have made a forecast one bar ahead with 60-70% accuracy; the accuracy on small timeframes is the same, but the spread kills all the profit.
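The "spread kills the profit" point can be made concrete with simple break-even arithmetic: with hit rate p, average per-bar move m and round-trip cost s, the expected value of a symmetric one-bar bet is (2p - 1)·m - s, so the break-even accuracy is p = (1 + s/m) / 2. A sketch with purely illustrative numbers (not from the thread):

```python
def breakeven_accuracy(avg_move, spread):
    """Minimum hit rate for a symmetric one-bar bet to break even.

    EV per trade = p*m - (1-p)*m - s = (2p - 1)*m - s,
    which is zero at p = (1 + s/m) / 2.
    """
    return (1 + spread / avg_move) / 2

# Illustrative: on M1 the average bar move may be comparable to the
# spread; on D1 it dwarfs it.
p_m1 = breakeven_accuracy(avg_move=1.0, spread=0.5)   # needs 75% hit rate
p_d1 = breakeven_accuracy(avg_move=10.0, spread=0.5)  # needs 52.5% hit rate
```

On these numbers a 60-70% forecast loses money on the small timeframe but is comfortably profitable on the large one, which matches the argument above.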
 
Aleksey Vyazmikin:

So the experiment essentially confirmed the "cut the losses and let the profits flow" rule.

Model attached.

Not bad,

the losses are slightly smaller
