Machine learning in trading: theory, models, practice and algo-trading - page 1786

 
mytarmailS:

The article is the bomb, I didn't understand anything, but I read it with my mouth open... Thank you!


I even had an idea: if a lot of random rules still converge to some common structure, just by different paths, then we can draw an analogy with the Random Forest algorithm - its creators ran a lot of tests and found that the order in which the rules or splits are formed doesn't really matter, purely random choices almost always give similar results...


So I thought: what if we take, say, a 5-minute or hourly chart and treat it as one big pattern - "BP" - and inside it generate a sample using sliding windows of different sizes (with normalization by scale, of course).

Then train a forest on this sample, i.e. generate a bunch of random rules inside the BP.

Then rescale the BP to the sample's scale and predict the BP with the internal rules that were generated earlier...

Take fractality into account and check whether all this mutual nesting works.

Interesting...
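For illustration, a minimal sketch of that sliding-window idea in Python with scikit-learn; the window sizes, the min-max normalization, and the next-bar-direction target are assumptions of the sketch, not something fixed in the post.

```python
# Sketch: treat one chart segment as a "big pattern" (BP), cut it into sliding
# windows of several sizes, normalize each window by its own scale, and train
# a Random Forest (a bunch of random rules) on the result.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def windows_from_bp(prices, sizes=(16, 32, 64), out_len=16):
    """Cut sliding windows of different sizes, resample each to out_len points
    and normalize to [0, 1] so that only the shape of the pattern remains."""
    X, y = [], []
    for w in sizes:
        for i in range(len(prices) - w - 1):
            win = prices[i:i + w]
            # resample the window to a fixed length (scale normalization)
            win = np.interp(np.linspace(0, w - 1, out_len), np.arange(w), win)
            lo, hi = win.min(), win.max()
            if hi == lo:
                continue
            X.append((win - lo) / (hi - lo))                  # normalize by amplitude
            y.append(int(prices[i + w] > prices[i + w - 1]))  # next bar up or down
    return np.array(X), np.array(y)

prices = np.cumsum(np.random.randn(2000))                     # stand-in for the BP chart
X, y = windows_from_bp(prices)
model = RandomForestClassifier(n_estimators=200, min_samples_leaf=20)
model.fit(X, y)
print("in-sample accuracy:", model.score(X, y))
```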

Don't trust theories built on assumptions. Similarity between the results of simple rules and the laws of physics is not a proof, only an assumption.

You can try it, only the rules are random only to us - in fact they are not random. It will be interesting to see the result)))

 
Valeriy Yastremskiy:

Don't trust theories built on assumptions. Similarity between the results of simple rules and the laws of physics is not a proof, only an assumption.

You can try it, only the rules are random only to us - in fact they are not random. It will be interesting to see the result)))

It didn't work(

For some reason the forest won't train, and I tried recognition by the method of analogues, but that didn't work either!

 
mytarmailS:

It didn't work(

For some reason the forest won't train, and I tried recognition by the method of analogues, but that didn't work either(

It's not supposed to work. It's a random search over random rules.))
 

Before I took up ML, I spent about half a year fine-tuning an EA, testing it on history and improving its performance, finding the visual patterns that led to losses - in other words, manually fitting it to history. That was in 2017; at the end of that year I launched the EA and ran it until around February 2018.

By the law of meanness - or, as I concluded at the time, because of the fitting - the balance immediately went negative, so the project was deemed a disappointment and closed.

At the end of 2018 I tried to revisit the project after seeing an interesting result for the year that was ending.

The losses started again, the EA was removed, and after monitoring its results in the tester for a while in 2019 I was convinced I had made the right decision to abandon the project.

Yesterday I decided to check the old tree-leaf models and the EA, which had last been tuned (and only lightly) in 2018.

I honestly could not believe my eyes - the result is very good!

So the following theses and questions arise:

1. Why did the Expert Advisor created by this manual method turn out more stable than the one created with ML?

2. Unfavorable trading periods exist - 2019 was purely flat for Si.

3. Will it start losing as soon as I launch the EA?

4. By what metrics can one classify the global trading period best suited to a particular TS/ML model?

5. How do you sit through a year of hovering near zero, or even losing, while waiting for a suitable trading period?

 
Aleksey Vyazmikin:


4. By what metrics can one classify the global trading period best suited to a particular TS/ML model?

5. How do you sit through a year of hovering near zero, or even losing, while waiting for a suitable trading period?

The main question. Though it seems it's not just one - it comes down to a single decision: to give up or not)))). Apparently the indicators should come both from averages over history and from the current moment. And something needs to be formed from ZZ. Correlations don't appeal to me - I think they lag and are too averaged. In general, I'm still mulling it over.

It would be good to look at the Pearson coefficients of the ZZ extrema across all TFs, the number of trends or of minimum/maximum extrema, the average width of volatility, and the average price speed. As for data at the current moment, nothing comes to mind except the increments.

Some people here take a simple route: they take a lot of simple TS and try to use them optimally - by straightforward training with the best result.

There is no service yet, we use all instruments from 70 to 20 years))))
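For what it's worth, a rough sketch of the window features listed above (ZZ leg counts, average volatility width, average price speed); the threshold-based ZigZag, the reversal percentage and the reduction of the Pearson idea to a simple price-vs-time trendiness proxy are assumptions of the sketch.

```python
# Sketch: candidate "market characteristic" features for one price window.
import numpy as np

def zigzag_extrema(prices, pct=0.5):
    """Very simple ZigZag: mark an extremum once price reverses by pct percent."""
    ext, last_idx, direction = [], 0, 0
    for i in range(1, len(prices)):
        move = (prices[i] - prices[last_idx]) / prices[last_idx] * 100
        if direction >= 0 and move <= -pct:
            ext.append(last_idx); last_idx, direction = i, -1
        elif direction <= 0 and move >= pct:
            ext.append(last_idx); last_idx, direction = i, 1
        elif (direction >= 0 and prices[i] > prices[last_idx]) or \
             (direction <= 0 and prices[i] < prices[last_idx]):
            last_idx = i                                  # current leg keeps extending
    ext.append(last_idx)
    return np.array(ext)

def window_features(prices):
    ext = zigzag_extrema(prices)
    legs = np.abs(np.diff(prices[ext]))                   # sizes of the ZZ legs
    t = np.arange(len(prices))
    return {
        "zz_legs": len(legs),                             # number of trends in the window
        "zz_mean_leg": legs.mean() if len(legs) else 0.0, # average leg size
        "vol_width": prices.max() - prices.min(),         # width of volatility (proxy)
        "speed": np.abs(np.diff(prices)).mean(),          # average price speed
        "trendiness": np.corrcoef(t, prices)[0, 1],       # Pearson proxy for trend strength
    }

prices = 100 + np.cumsum(np.random.randn(500))            # stand-in for one window of prices
print(window_features(prices))
```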

 
Valeriy Yastremskiy:

...

It would be good to look at the Pearson coefficients of the ZZ extrema across all TFs, the number of trends or of minimum/maximum extrema, the average width of volatility, and the average price speed. As for data at the current moment, nothing comes to mind except the increments.

...

I did something similar: I split the ZZ segments into 3 groups by length, and yes, it is a good indicator of the success of my TS, but it can only describe the past - what to do with the present is the mystery.
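A tiny sketch of such a 3-group split, assuming (the post does not say) that the group boundaries are the terciles of the ZZ leg lengths.

```python
# Sketch: split ZigZag leg lengths into 3 groups (short / medium / long)
# by terciles and measure how each group is represented in a period.
import numpy as np

leg_lengths = np.abs(np.random.randn(300)) * 50      # stand-in for ZZ leg sizes
bounds = np.quantile(leg_lengths, [1 / 3, 2 / 3])    # tercile boundaries
groups = np.digitize(leg_lengths, bounds)            # 0 = short, 1 = medium, 2 = long
print("share per group:", np.bincount(groups, minlength=3) / len(groups))
```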

 
Aleksey Vyazmikin:

I did something similar: I split the ZZ segments into 3 groups by length, and yes, it is a good indicator of the success of my TS, but it can only describe the past - what to do with the present is the mystery.

3 groups is not enough. And we somehow need to understand / define the logic of the data across all TFs. In a good case, all indicators are determined on history and used as a criterion for decision making; then if the new data repeats the history, everything is fine, and if not, it really is new data. If more than 30% of the data is new, then something is wrong with the data - there is simply too little of it, or it is not meaningful. Or it's apophenia and there is no connection at all.

The increments have to be measured and compared with the data)))) Apart from increments I'd like something else, but everything that gets invented is a derivative of the increments. Of course, there is still volume, but I don't know how to approach it.
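A minimal sketch of that "30% new data" check, reduced here (an assumption of the sketch) to the share of new observations falling outside the historical 1st-99th percentile range of each feature.

```python
# Sketch: how much of the new data falls outside the range seen in history?
import numpy as np

history = np.random.randn(5000, 4)                    # historical feature matrix
new = np.random.randn(250, 4) * 1.5                   # recent feature matrix
lo, hi = np.percentile(history, [1, 99], axis=0)      # per-feature historical range
outside = ((new < lo) | (new > hi)).any(axis=1)       # rows with any feature out of range
share_new = outside.mean()
print(f"share of 'new' observations: {share_new:.1%}")
if share_new > 0.30:                                  # the 30% threshold from the post
    print("too much unseen data - either not enough history or no real connection")
```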

 
Valeriy Yastremskiy:

3 groups is not enough. And we somehow need to understand / define the logic of the data across all TFs. In a good case, all indicators are determined on history and used as a criterion for decision making; then if the new data repeats the history, everything is fine, and if not, it really is new data. If more than 30% of the data is new, then something is wrong with the data - there is simply too little of it, or it is not meaningful. Or it's apophenia and there is no connection at all.

The increments have to be measured and compared with the data)))) Apart from increments I'd like something else, but everything that gets invented is a derivative of the increments. Of course, there is still volume, but I don't know how to approach it.

I don't use increments in their raw form - in fact, only relative normalized values.

It makes no sense to mix predictors for the model's own performance with predictors for determining how favorable conditions are for a particular model. I think one model should determine favorability and another should implement the TS itself. Then there remains the question of labeling for training on such favorable conditions, and for that we need to define a threshold at which the TS works effectively. This could be some set of indicators, for example the balance of errors and profit growth, or perhaps some other metrics. Accordingly, the classification window should probably be a week for minute data, or at least a day.
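A sketch of that labeling step, assuming (an illustrative choice, not the poster's) that a week counts as favorable when the TS's weekly profit factor exceeds 1.2; the trade-log columns are hypothetical.

```python
# Sketch: mark weeks as favorable (1) / unfavorable (0) for a TS from its
# per-trade results, using the weekly profit factor as the criterion.
import numpy as np
import pandas as pd

# hypothetical trade log of the base TS: close time and profit of each trade
trades = pd.DataFrame({
    "time": pd.date_range("2019-01-01", periods=400, freq="6h"),
    "profit": np.random.randn(400) * 10,
})

weekly = trades.set_index("time").resample("W")["profit"]
gross_win = weekly.apply(lambda p: p[p > 0].sum())
gross_loss = weekly.apply(lambda p: -p[p <= 0].sum())
profit_factor = gross_win / gross_loss.replace(0, np.nan)

y = (profit_factor > 1.2).astype(int)   # target "Y": 1 = favorable week
print(y.head())
```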

 
Aleksey Vyazmikin:

But it can only describe the past - what to do with the present is the mystery.

It's no mystery.

1) Determine the favorable and unfavorable periods for the TS, i.e. create the target "Y" in the usual binary form Y = 0000111100000

2) Create variables that will reflect the "market characteristics" - honest and unbiased ones. DSP, in particular spectral analysis, will help here.

From DSP we know that a signal of any complexity can be described as a sum of sinusoids, and a sinusoid has only three parameters - amplitude, frequency, and phase. This sum of sinusoids, or rather its parameters, can be taken as the market characteristic, and it will be objective.


If it's difficult for you, you can prepare the data for me - the price and the "Y" for classification - and I will put together some code and check whether it is possible to recognize favorable conditions for trading or not, since this topic interests me too.
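For illustration, a minimal sketch of that spectral decomposition with a plain FFT; the window length and keeping only the 5 strongest components are assumptions of the sketch.

```python
# Sketch: describe a price window as a sum of sinusoids and keep the
# amplitude, frequency and phase of the strongest components as features.
import numpy as np

def spectral_features(prices, n_components=5):
    x = prices - prices.mean()                   # remove the constant component
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0)       # cycles per bar
    amp = np.abs(spectrum) * 2 / len(x)
    phase = np.angle(spectrum)
    top = np.argsort(amp)[::-1][:n_components]   # indices of the strongest sinusoids
    return np.column_stack([amp[top], freqs[top], phase[top]]).ravel()

window = np.cumsum(np.random.randn(256))         # stand-in for one price window
print(spectral_features(window))                 # 5 x (amplitude, frequency, phase)
```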

 

Only how do we compute Y? Based on profit alone is probably not the best option - the entry point matters... After all, the profit comes from a good entry point, not from the whole interval between entry and exit.

It turns out we only need the system's entry points and the market parameters at those moments...

It turns out that the ML algorithm will receive an entry signal from the TS and decide whether to open a position or not.
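A sketch of that filter: a second classifier sees the market features at each entry signal of the base TS and decides whether the trade is worth taking; the features, outcomes and the 0.55 threshold here are purely illustrative.

```python
# Sketch: a second ("meta") model that receives the entry signals of the base TS
# and decides whether a position should actually be opened.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

n_signals = 1000
X = np.random.randn(n_signals, 8)             # market features at each entry signal
trade_profit = np.random.randn(n_signals)     # hypothetical outcome of each signal
y = (trade_profit > 0).astype(int)            # 1 = the entry was worth taking

split = int(n_signals * 0.7)
meta = GradientBoostingClassifier().fit(X[:split], y[:split])

# at run time: the TS says "enter", the meta-model confirms or rejects
take_trade = meta.predict_proba(X[split:])[:, 1] > 0.55   # illustrative threshold
print("signals accepted:", take_trade.mean())
```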


It's scary to think about, but this is exactly what our Micha kept going on about))
