Machine learning in trading: theory, models, practice and algo-trading - page 1515

 
Biqvi:

The picture is just an illustration of phase 1 and 2.

Yes, I understand that it is just a picture, but to program similar set-ups I personally need a clearer idea of what it looks like. Does it take the height of the bars into account? What limits should be set? The total height of the set-up... and other clarifications may be needed.

 
Biqvi:

The picture is just an illustration of phase 1 and 2.

Are the colors of the candles all correct there?

Formulating a query for this "set-up" seems to be no problem.
 

I want to teach the neural network to see it (more precisely, the part up to set-up 1) in order to solve two problems:

1) extract from it what it saw, to understand what it "latched onto" in the set-up, and through that better understand what I see myself and which characteristics of the chart are important;

2) hand the trading over to it, or (as a minimal option) have it ring a bell.

My question to the pros: please advise whether the problem is posed correctly and where to go in order to solve it.

The picture is just a picture, to make clear what I call a set-up.

 
Biqvi:

1) Extract from it what it saw, to understand what it "latched onto" in the set-up, and through that better understand what I see myself and which characteristics of the chart are important.

A neural network will not give you an understanding of why it made a particular decision. But a single decision tree can be rewritten as a chain of conditional statements like if(height2>10){ if(delta<50){ ... }}. To do this, build a forest consisting of a single tree. If there are many trees, e.g. 100, you would have to average the outputs of 100 such if(){if(){...}} chains by hand, which would be difficult. But a forest usually gives a better solution precisely by averaging many trees.
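The idea of hand-transcribed trees and forest averaging can be sketched in a few lines of Python. The feature names height2 and delta and all thresholds are purely illustrative, taken from the if-example above, not a real trained model:

```python
# Sketch: a single decision tree rewritten as nested if-statements,
# and a "forest" as the average of several such hand-written trees.
# Feature names and thresholds are toy assumptions.

def tree_1(height2, delta):
    # One "trained" tree, transcribed by hand as conditionals.
    if height2 > 10:
        if delta < 50:
            return 1.0   # vote: "pattern continues"
        return 0.0
    return 0.0

def tree_2(height2, delta):
    # A second hand-written tree with a different split.
    if delta < 40:
        return 1.0
    return 0.0

def forest_predict(height2, delta, trees):
    # A random forest averages the votes of its trees; with 100 trees
    # this averaging is exactly what would be hard to reproduce by hand.
    votes = [t(height2, delta) for t in trees]
    return sum(votes) / len(votes)

p = forest_predict(12, 30, [tree_1, tree_2])
print(p)  # 1.0: both toy trees vote "yes"
```

With only two trees this is readable; the point is that the readability is lost as soon as the forest grows.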

Most likely, after spending a year, you will find that pattern 2 follows pattern 1 about 50% of the time; the other 50% of the time the exact opposite happens (a double top followed by a fall, and many other variations). The lack of successful signals among the regulars of this thread confirms it.
A human sees more than those 6 bars of pattern 1. If you can do that, it is better to trade manually than to spend a year.

 
Biqvi:

I want to teach the neural network to see it (more precisely, the part up to set-up 1) in order to solve two problems:

1) extract from it what it saw, to understand what it "latched onto" in the set-up, and through that better understand what I see myself and which characteristics of the chart are important;

2) hand the trading over to it, or (as a minimal option) have it ring a bell.

My question to the pros: please advise whether the problem is posed correctly and where to go in order to solve it.

Just a picture, only a picture, to make clear what I call the set-up.

One of the simplest options is to build a training sample from eight input bars with a color (direction) ratio of 2:1:2:1:2 and one output (target), on whose prediction the model is trained.

This approach will, of course, reduce the efficiency of the pattern, but the probability of the pattern being observed and recognized will be higher, and that is what matters at the start: to understand whether there are any fish in it.

If this variant suits you, I can do it. With neural networks it will be murky, but with decision trees it should work fine, with readable logic.
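As a rough sketch of how such a sample could be assembled (the (open, close) bar representation and the ±1 direction coding are my assumptions, not a fixed specification):

```python
# Minimal sketch: encode each of 8 bars by its direction (+1 up / -1 down)
# and label the sample by the direction of the following bar, giving one
# row of a training set. The (open, close) tuples are toy data.

def bar_direction(open_, close):
    # +1 for an "up" (bullish) bar, -1 for a "down" (bearish) bar.
    return 1 if close > open_ else -1

def make_sample(bars):
    # bars: list of 9 (open, close) tuples; the first 8 are the features,
    # the 9th bar's direction is the target to predict.
    features = [bar_direction(o, c) for o, c in bars[:8]]
    target = bar_direction(*bars[8])
    return features, target

# Toy bars following the 2:1:2:1:2 color pattern, plus one target bar.
bars = [(1, 2), (2, 3), (3, 2), (2, 3), (3, 4),
        (4, 3), (3, 4), (4, 5), (5, 6)]
x, y = make_sample(bars)
print(x, y)  # [1, 1, -1, 1, 1, -1, 1, 1] 1
```

Sliding this over a price history and keeping only the windows whose first eight directions match the 2:1:2:1:2 pattern would give the training set described above.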

 
Andrey Dik:

That's cool. It's a little noisy, though...

Yeah, you can change the settings there, too. I'm too lazy.

Most likely it will turn out to be nonsense in the end, like any prediction.

 
Biqvi:

I want to teach the neural network to see it (more precisely, the part up to set-up 1) in order to solve two problems:

P.S. I am calm about the fact that the solution may take a year or two, and even calmer about collaborating with the pros.

A neural network (MLP) and other classifiers (random forest, SVM, kNN, etc.) are needed to search automatically for such patterns and for far less trivial ones. For your problem a simple convolution (a sliding scalar product) will do; it can be programmed from scratch in an hour, and with a ready-made toolkit in minutes. You do not need a year.

But I must disappoint you in advance: the probability of success is close to zero, because all such simple structures are found without difficulty by automata. And if you managed to trade profitably by hand, it means that besides the pattern you relied on a number of auxiliary conditions which are probably "obvious" to you but nevertheless significantly affect the result. Remember the tale of the "axe soup"? It is the same with many candlestick formations among manual traders: the pattern looks simple, but before trading it the trader checks all the news and all the markets, listens to the gossip, and only then trades (or does not trade) his simple pattern )))

 
Maxim Dmitrievsky:

No one has ever really understood HMMs, only at the level of blindly copying libraries? Can't you rewrite it in MQL? The market is already full of that kind of neural network junk.

This is the basics of the basics, by the way.

Perhaps you have not read articles by other authors, apart from this thread and your own articles? Where do such sweeping conclusions come from?

Hidden Markov models in their pure form are not applicable to our case. In our time series, a state change does not occur at every time step. A state lasts for several (many or few) steps, and the probability of a state change varies with each step. Such Markov models are called semi-Markov models. In one of my articles I used such models for smoothing the predicted states of a target; that is, unlikely sequences were filtered out using an HSMM. Some "academics" here were shocked at the mention that an HSMM could be used to smooth a nominal sequence. It happens.
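To illustrate the smoothing idea only, with a plain HMM rather than an HSMM and with toy probabilities I made up: Viterbi decoding over a noisy predicted state sequence suppresses isolated one-step flips that a "sticky" transition matrix makes unlikely. An HSMM would additionally model the state duration explicitly.

```python
import math

def viterbi(obs, states, log_trans, log_emit, log_init):
    # Standard Viterbi: most likely hidden state path given observations.
    v = [{s: log_init[s] + log_emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: v[-1][p] + log_trans[p][s])
            row[s] = v[-1][best] + log_trans[best][s] + log_emit[s][o]
            ptr[s] = best
        v.append(row)
        back.append(ptr)
    path = [max(states, key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

L = math.log
states = ("up", "down")
# Sticky transitions: a state rarely flips, so isolated flips get smoothed.
log_trans = {"up": {"up": L(0.9), "down": L(0.1)},
             "down": {"up": L(0.1), "down": L(0.9)}}
# The classifier's noisy labels mostly agree with the true state.
log_emit = {"up": {"up": L(0.8), "down": L(0.2)},
            "down": {"up": L(0.2), "down": L(0.8)}}
log_init = {"up": L(0.5), "down": L(0.5)}

noisy = ["up", "up", "down", "up", "up", "down", "down", "down"]
print(viterbi(noisy, states, log_trans, log_emit, log_init))
# ['up', 'up', 'up', 'up', 'up', 'down', 'down', 'down']
```

The isolated "down" at step 2 is smoothed away, while the sustained run of "down" at the end is kept, which is exactly the "sifting out unlikely sequences" effect described above.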

To help you write an article on this topic and carry the math over to HSMMs, I am attaching the literature. I have worked through it thoroughly. Download it from Dropbox at the link (~46 MB). The packages in R: mhsmm, SemiMarkov, markovchain, HiddenMarkov, hmm.discnp, HMMmlselect are just the ones I checked at a glance.

Good luck in this hopeless cause (I mean the translation to MQL).

 
Vladimir Perervenko:

Perhaps you have not read articles by other authors, apart from this thread and your own articles? Where do such sweeping conclusions come from?

Hidden Markov models in their pure form are not applicable to our case. In our time series, a state change does not occur at every time step. A state lasts for several (many or few) steps, and the probability of a state change varies with each step. Such Markov models are called semi-Markov models. In one of my articles I used such models for smoothing the predicted states of a target; that is, unlikely sequences were filtered out using an HSMM. Some "academics" here were shocked at the mention that an HSMM could be used to smooth a nominal sequence. It happens.

To help you write an article on this topic and carry the math over to HSMMs, I am attaching the literature. I have worked through it thoroughly. Download it from Dropbox at the link (~46 MB). The packages in R: mhsmm, SemiMarkov, markovchain, HiddenMarkov, hmm.discnp, HMMmlselect are just the ones I checked at a glance.

Good luck in this hopeless cause (I mean the translation to MQL).

Thank you, I have already rewritten everything, but for discrete tasks.

I have not figured out how to do it for continuous ones.

I was offered a variant of filtering rather than smoothing of the time series by Mitramiles; he gave me his dataset and asked why nothing works in the sliding window. I asked: why not learn the math of HMMs, rather than just use the packages, so you can understand why it does not work? That is all I asked.

 
Maxim Dmitrievsky:

Thank you, I have already rewritten everything, but for discrete tasks.

I have not figured out how to do it for continuous ones.

I was offered a variant of filtering rather than smoothing of the time series by Mitramiles; he gave me his dataset and asked why nothing works in the sliding window. I asked: why not learn the math of HMMs, rather than just use the packages, so you can understand why it does not work? That is all I asked.

I simply had not seen the previous posts, so I took it personally.

Read the literature, it will help you a lot. There is plenty of it in Russian.

Good luck again.
