Machine learning in trading: theory, models, practice and algo-trading - page 1631

 
mytarmailS:

mikha! will you answer my question on the last page or not? did I understand your statement correctly?

Well, look: if you really build increments from the target, that is already a normalization of sorts, lying, of course, in a range of real values, which you can then map further to the 0:1 range.

If we are talking about forecasting, then the target should be shifted back by one lag, so that today we are thinking about tomorrow. Again, you can just take the sign of the slope and map it to the 0:1 range, but then you only know the direction, not the depth, of the forecast, which also demands extra resources from the network. In general, ZigZag (if that is what it is) is not a great target: for classification it makes no sense at all, and for forecasting it is better to take a different one, the one built from the last value.
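Roughly what I mean, as a minimal pandas sketch (the data and column names here are placeholders of mine, not anyone's actual pipeline):

import pandas as pd

# Illustrative close prices; in practice this is the incoming series.
df = pd.DataFrame({"close": [100.0, 100.5, 100.2, 101.0, 100.8]})

# Increments of the target: already a normalization of sorts, in real values.
df["increment"] = df["close"].diff()

# Shift back by one lag so that today's row points at tomorrow's move.
df["target_next"] = df["increment"].shift(-1)

# Direction only: the sign of the slope mapped into the 0:1 range (1 = up, 0 = down/flat).
df["target_01"] = (df["target_next"] > 0).astype(int)

print(df)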

Based on my own work, my target indeed has no value for the most recent point either. That is, we do not know the outcome of the current signal, because it is still in progress. That is why I step back 1 (at minimum) or 2 extra signals to see how the model behaves in general. Although my test period comes before the training one, I still step back 2 signals, no more.

 
Evgeny Dyuka:

This can't be, it seems you are not on the subject at all.

What do you mean it can't be? I recently switched from 15 minutes to 5 minutes, and I feel great there. I don't want to change the instrument, because Si is the most liquid, and I simply don't want to spend anything on experiments. Only trading, only hardcore.
 
Today is a good day for me professionally. For the first time I was able to run my TS in the tester and got results similar to those on history. I thought MT5 could not handle my math, but once I wrote everything correctly it calculates and even trades very fast. The indicators might still make occasional mistakes, and I'll try to track that down, but overall this is a great event for the Si instrument, because I've managed to add one more algotrader to it :-)
 
mytarmailS:

mikha! are you going to answer my question on the last page or not? did i get your article right?

Again, you are normalizing with a library function. Try doing it with simple math through subtraction, or here, the formula from Reshetov: double x0 = 2.0 * (v0 + minimum) / maximum - 1.0. Then there will be no confusion and no issues with the inverse transformation. I don't know how that function does its normalization....
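A small Python sketch of what I mean (the first function is the formula exactly as quoted; the second and third are the conventional min-max rescale to [-1, 1] and its inverse, in case that is the intended form; the names are mine):

def normalize_as_quoted(v0: float, minimum: float, maximum: float) -> float:
    # The formula exactly as written above.
    return 2.0 * (v0 + minimum) / maximum - 1.0

def normalize_minmax(v0: float, minimum: float, maximum: float) -> float:
    # Conventional min-max rescale into [-1, 1].
    return 2.0 * (v0 - minimum) / (maximum - minimum) - 1.0

def denormalize_minmax(x0: float, minimum: float, maximum: float) -> float:
    # Inverse transformation for the min-max variant, so nothing is lost.
    return (x0 + 1.0) * (maximum - minimum) / 2.0 + minimum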
 
Mihail Marchukajtes:
Again, you are normalizing with a library function. Try doing it with simple math through subtraction, or here, the formula from Reshetov: double x0 = 2.0 * (v0 + minimum) / maximum - 1.0. Then there will be no confusion and no issues with the inverse transformation. I don't know how that function does its normalization....

What are you talking about? Are you drunk or what? :) I was asking you about your article and your algorithm of actions.

I'll duplicate it

I skimmed through our mikha's article) For the second time now; the first time I read it was long ago and understood nothing back then... I will try to summarize his approach briefly; if I got something wrong, let him correct me...


1) From the price we pick out points that are (mathematically/logically) the same (clusters); for mikha this is the signal of the Sequential trading system, but in fact it can be anything: an MA crossover, a black candle at 10 a.m., and so on. He actually says himself that it can be anything.

In plain language: we simply, subjectively, reduce the dimensionality of the data and the degrees of freedom for the MLA (machine learning algorithm). And that is probably right.

2) Next mikha takes the "market context" into account (all those reports, open interest) and considers the signal from the Sequential system within the "market context" (sorry for the pun), and trains a network for each such combination..

In plain language: the "market context" is essentially a cluster too, and here again it can be anything. When one cluster (point 1) is nested inside another cluster (point 2), we reduce the dimensionality even further, by orders of magnitude. And then we train the MLA on the resulting compressed data. And that is probably right too.

3) We train MLA1 as a classifier with three classes: "buy", "sell", "don't know" (mikha trained "buy" and "sell" separately, but I do not see the point).

4) On "new data 1" we look at MLA1's recognition error and create new labels within the classes, like "buy/missed", "buy/guessed", or "sell/don't know".

5) We train a second MLA2 on the data and the answers of the first MLA1.

6) On "new data 2" we look at MLA2's recognition error.

In plain language: with the second MLA2 we predict whether MLA1 will guess its class correctly (see the sketch after this post).

Am I right, mikha?? Look, I fit your whole article into 10 lines))
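If steps 3-6 above are right, here is a minimal sketch of that two-stage idea in Python (the data, features and model choice are purely illustrative placeholders, not mikha's actual setup):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))             # placeholder features
y = rng.integers(0, 3, size=1000)          # 0 = buy, 1 = sell, 2 = don't know

# Step 3: MLA1 classifies into the three base classes.
half = len(X) // 2
mla1 = RandomForestClassifier(n_estimators=100, random_state=0)
mla1.fit(X[:half], y[:half])

# Step 4: on "new data 1" record whether MLA1 guessed right or wrong.
pred1 = mla1.predict(X[half:])
correct = (pred1 == y[half:]).astype(int)  # new label: 1 = guessed, 0 = missed

# Step 5: MLA2 is trained on the features plus MLA1's answer; its target is
# whether MLA1 was correct.
X2 = np.column_stack([X[half:], pred1])
mla2 = RandomForestClassifier(n_estimators=100, random_state=0)
mla2.fit(X2[: len(X2) // 2], correct[: len(X2) // 2])

# Step 6: on "new data 2" check how well MLA2 predicts MLA1's mistakes.
print(mla2.score(X2[len(X2) // 2 :], correct[len(X2) // 2 :]))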
 
I do not know how it is with forecasting. I could not try it in due time because of Max, but for classification one of the last transformations is scaling and centering of the data within a fixed smoothing window. I tried doing it over the whole range of incoming data, but that is not very good. So... I have a smoothing window from 10 to 19, and out of those I pick the one that gives the maximum number of features (yuck, you swore again) for the target. That is the window I work with at the moment...
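For what it is worth, a minimal pandas sketch of centering and scaling inside a fixed rolling window rather than over the whole range (the series and window length are illustrative):

import numpy as np
import pandas as pd

# Illustrative price series; in practice this is the incoming data.
price = pd.Series(np.cumsum(np.random.default_rng(1).normal(size=200)))

window = 15  # somewhere in the 10..19 range mentioned above

rolling_mean = price.rolling(window).mean()
rolling_std = price.rolling(window).std()

# Centering and scaling within the fixed window.
normalized = (price - rolling_mean) / rolling_std
print(normalized.tail())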
 
Evgeny Dyuka:

This can't be, it seems you are not on the subject at all.

There's a squirrel in there :)

 

It feels so wrong when you call me mikha, and with a lowercase letter :-(

Not everything here is right. The article is outdated as far as the context goes. In any case, I was not able to get improvements from it. I tried to multiply the input data by the changing context and thus weave it into the inputs, but I could not get a qualitative improvement; then again, I also never finished the work in that direction. So, given the chance, I would resume the experiments with context.

Let's take it in order.

1) Right, we mathematically reduce the sample without changing the time interval. Or, on the contrary, by means of simple conditions we throw out the unnecessary parts and can thus increase the time interval, provided the quality of the trained network stays satisfactory.

2) The market context has only nine states. Hence we can build nine models, each of which is trained and works in its own specific context period. But here another problem surfaces: data obsolescence. That is, if a model was working three days ago during the day and you switch it on today, it will not take into account what happened yesterday, and that matters for the market. This is where the Achilles' heel of using context shows up, which is why I tried to weave it into the input data through multiplication, but off-hand it did not help here either. Still, I would like to tinker with it some more.
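A minimal sketch of the one-model-per-context idea, assuming the context is just an integer state from 0 to 8 (the features, labels and model choice are placeholders, not the actual setup):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(900, 4))           # placeholder features
y = rng.integers(0, 2, size=900)        # placeholder buy/sell labels
context = rng.integers(0, 9, size=900)  # nine market-context states

# Train a separate model for each of the nine context states.
models = {}
for state in range(9):
    mask = context == state
    models[state] = LogisticRegression().fit(X[mask], y[mask])

# At prediction time, dispatch to the model of the current context state.
current_state = 4
print(models[current_state].predict(X[:1]))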

3) Here you got things mixed up, let me explain. The Sequential has two signals, buy and sell, and these signals are independent of each other. If we take ONLY the buy signals from the basic strategy, we get a more stable basic strategy where the buy signals are now dependent on each other. That is, we cannot get two signals a few bars apart, unlike with the full Sequential, where the appearance of a buy signal does not depend on the appearance of a sell signal.

Regarding training in Reshetov's optimizer (I hope nobody thinks I am promoting anything there), I took a lot of ideas from it. It trains two polynomials, that is, two networks: when both say yes it is YES, when both say no it is NO, and when the answers differ it is unknown. The sample is divided into two parts, train and test, where the train part for one polynomial is the test part for the other and vice versa. Cross training. But in the end, if we take the test part from one network and the test part from the other, together they make up our full training set.
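A minimal sketch of that committee-of-two idea with swapped train/test halves, assuming a plain yes/no label (the data and models are placeholders, not Reshetov's actual polynomials):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 4))      # placeholder features
y = rng.integers(0, 2, size=400)   # placeholder yes/no labels

# Split in two parts; each model trains on one half, the other half is its test.
half = len(X) // 2
model_a = LogisticRegression().fit(X[:half], y[:half])   # trains on part 1
model_b = LogisticRegression().fit(X[half:], y[half:])   # trains on part 2

def committee(x):
    # YES/NO only when both models agree, otherwise "unknown".
    a, b = model_a.predict(x)[0], model_b.predict(x)[0]
    if a == b:
        return "YES" if a == 1 else "NO"
    return "unknown"

print(committee(X[:1]))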

4) 5) 6) Indeed, I tried to make multilevel models, where the first level takes the inputs, the second level takes the outputs of the first, and so on. I remember Maxim even gave this method its scientific name, but I do not do it now, because it is too much hassle; the first level alone is enough for me now. I think this approach is suited for higher-level tasks. Say my TS currently lives for about 1 week; with this approach I think its lifetime could be extended, but not significantly. That is, the next levels try to prevent the errors of the lower levels. If I understand you correctly.

And yes, I am drinking a little. I got a couple of beers to keep the conversation going. Where the hell is Trickster????? I have thought up something for him... :-)

 
Mihail Marchukajtes:

It feels so wrong when you call me mikha, and with a lowercase letter :-(

no offense)

 
Evgeny Dyuka:

That's impossible, it looks like you're not in the loop.

Damn, what young people these days are like..... Fine, listen carefully, I will tell you for the penultimate time (the last time will be a video). It is because of people like you that I want to record a video, so that I do not have to interrupt myself every time to explain the essence of things. There is a cause-and-effect model of price formation. Not temporal, but cause-and-effect. So the reasons for price changes are the following factors.

First, the options traders form the market's expectation by warping the volatility smile. Then, in accordance with this expectation or contrary to it (the options traders can be mistaken), trading volume with its delta appears. The volume indicates the number of participants, the delta indicates the direction of the traded volume, plus there is open interest. Only then does the price change in accordance with the traded volume, and only after that do the values of the indicators change, the ones that you are ALL trying to use for price forecasting. That is, you are trying to predict the cause from its effect. Well, who among us is not in the know here????

All of this data exists for Si, but is it there for bitcoin? So don't pi...di.... I do not have enough nerves for you. Learn the basics, gentlemen..... And that is why my approach works, unlike yours. Yours may be workable too, but if it is not based on the model described above, the probability that your results are random is high. Any questions?
