Machine learning in trading: theory, models, practice and algo-trading - page 1965

 
Maxim Dmitrievsky:
Just run it and trade; it learns as it goes.

What is in the trading logic? Is it rolling training on a 1.5-month window? How often does it retrain, or does that depend on something? On what timeframe?

 
Rorschach:

I suggest testing on this data; there is definitely a pattern there, and it is clear what to aim for.

P.S. Remove the .txt from the file name.

Net: 56.58% correct answers, expectation 2.63

Forest: 55.89% correct answers, expectation 2.36

Forest on cumulative increments: 55.89% correct answers, expectation 2.36; identical results.

Ranks: 20, on 5-digit quotes. The spread is not taken into account. On average the results are the same, but the net takes two minutes to train while the forest takes less than a second.

Looks like I can't do without a session of black magic, although I haven't tried the zigzag yet.

 
Rorschach:

Net: 56.58% correct answers, expectation 2.63

Forest: 55.89% correct answers, expectation 2.36

Ranks: 20, on 5-digit quotes. The spread is not taken into account. On average the results are the same, but the net takes two minutes to train while the forest takes less than a second.

Looks like I can't do without a session of black magic, although I haven't tried the zigzag yet.

Why do you spend time training without the spread? Just for pretty charts? Do something real.
 
elibrarius:
Why do you spend time training without the spread? Just for pretty charts? Do something real.
Subtract 5 pips and you get the result with the spread. Without the spread you can see that the net has found something, and that the idea should be developed further, not thrown away.
 
elibrarius:

You can't. That's exactly why they are random: each tree takes a random subset of columns for training, and averaging over them then gives good results.
You can try setting the fraction of columns to 1, so that all columns take part in building a tree rather than a random 50% of them. All trees will then be identical, so also set the forest to a single tree. In total: one forest with one tree trained to depth 6, another to depth 7.
If you need more than 2 different trees, remove some columns from the set yourself and train additional forests on the remaining columns.

Addendum: the fraction of rows used in training should also be set to 1, i.e. all of them, so that training is identical every time. That removes everything random from the random forest.

If there is a fixed split rule, with no randomness, that is probably how it will work. Would you like to give it a try? I don't know my way around forests :(
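A minimal sketch of the recipe above in Python with scikit-learn (the thread uses alglib; n_estimators, max_features and bootstrap are scikit-learn's names for the tree count and the column/row fractions, and the mapping to alglib's coefficients is an assumption):

# A "random" forest with the randomness removed: all columns, all rows,
# a single tree, trained once at depth 6 and once at depth 7.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

for depth in (6, 7):
    forest = RandomForestClassifier(
        n_estimators=1,     # one tree: without randomness every tree is a copy
        max_features=None,  # fraction of columns = 1 (every column at every split)
        bootstrap=False,    # fraction of rows = 1 (the full sample, no resampling)
        max_depth=depth,
    )
    forest.fit(X, y)
    print(depth, forest.score(X, y))

Ties between equally good splits aside, nothing random is left, so two such one-tree "forests" stand in for the depth-6 and depth-7 trees discussed above.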

 
Maxim Dmitrievsky:
15 timeframes, signals on every bar; that is the only way the signals change. It is not trained beforehand: it starts from a blank slate and trades right away, i.e. it needs no pre-training at all. It is retrained after each trade and keeps a memory of previous entries. Recurrent links can be added. Everything is in the manual, I just need to understand it. I'll be working on it soon; I want to write an analogue in TensorFlow.
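The thread never pins down how that memory works (the question below goes unanswered), but one common pattern that fits "retrained after each trade, keeps the memory of previous entries" is a sliding buffer of past trades that the model is refit on. A hedged sketch only, with the buffer size, features and model all illustrative assumptions rather than the actual implementation:

from collections import deque
import numpy as np
from sklearn.linear_model import LogisticRegression

memory = deque(maxlen=500)  # "memory of previous entries": a sliding window of past trades
model = LogisticRegression(max_iter=1000)

def on_trade_closed(entry_features, won):
    # After every trade: remember it, then retrain on the whole remembered window.
    memory.append((np.asarray(entry_features), int(won)))
    X = np.array([f for f, _ in memory])
    y = np.array([r for _, r in memory])
    if len(set(y)) > 1:     # need both outcomes at least once before fitting
        model.fit(X, y)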

How is this memory implemented? Can you explain it in simple terms?

 
Aleksey Vyazmikin:

If there is a fixed split rule, with no randomness, that is probably how it will work. Would you like to give it a try? I don't know my way around forests :(

I checked, and it is so, at least in the alglib forest. Only the rows and columns are randomized; if you set both coefficients to 1, all the trees come out identical, so one tree is enough and there is no point wasting time computing copies of it. Other packages may randomize something else...

I don't want to try it. A tree of depth 6 or 7 is enough for me; a tree of depth 6.5 [an analogy to your idea] isn't that interesting. And I'm lazy, of course.

 
mytarmailS:

How is this memory implemented? Can you explain it in simple terms?

I don't understand it myself.

 
elibrarius:

I checked, and it is so, at least in the alglib forest. Only the rows and columns are randomized; if you set both coefficients to 1, all the trees come out identical, so one tree is enough and there is no point wasting time computing copies of it. Other packages may randomize something else...

I don't want to try it. A tree of depth 6 or 7 is enough for me; a tree of depth 6.5 [an analogy to your idea] isn't that interesting. And I'm lazy, of course.

Got it. I just see the penultimate split as a subspace on which to build a mini-model and study it. Of course the splits need to be smart, perhaps chosen from the statistics of the whole sample rather than the subsample. There should probably be no more than 3-5 splits before this process begins. The point of the idea is to reduce the impact of one split's random statistical advantage over the alternative splits.
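A rough sketch of that idea as described: a shallow tree (3-5 splits) defines the subspaces, and a separate mini-model is fit inside each leaf. The depth of 3, the logistic-regression leaf model and the fallback are illustrative assumptions, not a specification:

# Shallow tree picks the subspaces; a mini-model is trained inside each leaf.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)  # 3-5 splits deep, no more
leaves = tree.apply(X)                                # leaf index of every sample
leaf_models = {}
for leaf in np.unique(leaves):
    mask = leaves == leaf
    if len(np.unique(y[mask])) > 1:                   # need both classes to fit
        leaf_models[leaf] = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])

def predict(x):
    x = x.reshape(1, -1)
    m = leaf_models.get(tree.apply(x)[0])
    # Single-class leaves fall back to the tree's own prediction.
    return m.predict(x)[0] if m is not None else tree.predict(x)[0]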

 
mytarmailS:

How is this memory implemented? Can you explain it in simple terms?

Switch to Python and I'll give you examples you can use.

I don't see the point in discussing it on this forum, since RL is not an entry-level topic.
