Machine learning in trading: theory, models, practice and algo-trading - page 2621

 
mytarmailS #:
For Python there is a package, PonyGE2, but I do it in R, so I can't say what it's like or how it works.
I mixed the names up.
Grammatical evolution or symbolic regression, either works.
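
For reference, a minimal symbolic regression sketch in Python. It uses gplearn's SymbolicRegressor rather than PonyGE2 (the package named above), and synthetic lagged returns, so the data, lag count and hyperparameters are purely illustrative, not the poster's setup.

```python
# A minimal sketch, assuming gplearn as a stand-in for symbolic regression
# on lagged returns. Everything here is illustrative, not the poster's setup.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(size=2000))           # synthetic random-walk "price"
ret = np.diff(price)

n_lags = 10                                        # look at the last 10 bars
X = np.column_stack([ret[i:len(ret) - n_lags + i] for i in range(n_lags)])
y = ret[n_lags:]                                   # next-bar return as the target

sr = SymbolicRegressor(
    population_size=500,
    generations=20,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.001,                   # penalize overly long formulas
    random_state=0,
)
sr.fit(X, y)
print(sr._program)                                 # the evolved symbolic expression
```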
 
Valeriy Yastremskiy #:
A sequence of events/rules is effective, but each rule has its own dimensionality, and a long sequence runs into the curse of dimensionality.
The nice thing about this approach is that you are in control...
Set a condition that a rule must occur at least 200 times, for example, and there is no curse of dimensionality.
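
A minimal sketch of such a minimum-support filter: generated candidate rules are kept only if they fire at least 200 times on the history. The rule functions and the synthetic price data are hypothetical placeholders, not the poster's rules.

```python
# A minimal sketch of the "rule must fire at least 200 times" filter.
# The candidate rules and the price data are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)
close = np.cumsum(rng.normal(size=5000)) + 100.0
open_ = close + rng.normal(scale=0.5, size=close.size)

candidate_rules = {
    "bull_bar": lambda o, c: c > o,                 # close above open
    "big_drop": lambda o, c: (c - o) < -1.5,        # rarely fires
    "doji_ish": lambda o, c: np.abs(c - o) < 0.05,
}

MIN_SUPPORT = 200                                   # required number of occurrences

kept = {}
for name, rule in candidate_rules.items():
    fires = rule(open_, close)                      # boolean mask over the history
    if fires.sum() >= MIN_SUPPORT:                  # keep only well-supported rules
        kept[name] = fires
    print(f"{name}: fires {int(fires.sum())} times, kept={fires.sum() >= MIN_SUPPORT}")
```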
 
mytarmailS #:
I mixed the names up.
Grammatical evolution or symbolic regression, either works.
Symbolic regression, yes.
 
In the bias-variance trade-off, symbolic regression looks strongly shifted towards higher variance. That is certainly not a reason to abandon it, but it could cause trouble, given how close price is to a random walk.
 
Aleksey Nikolayev #:
In the bias-variance trade-off, symbolic regression looks strongly shifted towards higher variance. That is certainly not a reason to abandon it, but it could cause trouble, given how close price is to a random walk.

It's just a framework for generating rules; there is nothing in my proposal about price, approximation, or regression...

 
mytarmailS #:

No matter how many models there are, if they only look at the last 10 candles, it's useless, even if it's GPT-3 with all its guts.

You've got a generator, but you don't have the power...

My five cents: during training, the weight of non-repeating neurons (bars) gets diluted. The influential weight stays with the most frequently confirmed ones. So, with a fixed number of bars, only the significant ones keep weight. A sort of pattern, in effect.

 
Dmytryi Voitukhov #:

My five cents: during training, the weight of non-repeating neurons (bars) gets diluted. The influential weight stays with the most frequently confirmed ones. So, with a fixed number of bars, only the significant ones keep weight. A sort of pattern, in effect.

It's 3 a.m., what are you doing, Dmytryi? )
 
Dmytryi Voitukhov #:

My five cents: during training, the weight of non-repeating neurons (bars) gets diluted. The influential weight stays with the most frequently confirmed ones. So, with a fixed number of bars, only the significant ones keep weight. A sort of pattern, in effect.

It's similar with a tree: the top 5-10 splits out of, say, 100 features/bars will pick a few significant ones and won't use the rest. If you grow the tree all the way down, the last splits (and the features/bars they use) change the overall result only slightly. In other words, the result is roughly the same as with a neural network, it just computes faster.
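
A minimal sketch of that claim, assuming scikit-learn and synthetic data: a depth-limited tree over 100 "bar" features ends up putting almost all of its importance on the few features the target actually depends on, while the rest go unused.

```python
# A minimal sketch: a depth-limited tree over ~100 bar features concentrates
# its importance on a handful of them. Synthetic, illustrative data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
n_samples, n_bars = 5000, 100
X = rng.normal(size=(n_samples, n_bars))            # 100 "bars" as features
# the target truly depends on only 3 of the 100 bars, plus noise
y = 2.0 * X[:, 0] - 1.5 * X[:, 7] + X[:, 42] + rng.normal(scale=0.5, size=n_samples)

tree = DecisionTreeRegressor(max_depth=4, random_state=0)   # only the top splits
tree.fit(X, y)

importance = tree.feature_importances_
top = np.argsort(importance)[::-1][:10]
for idx in top:
    print(f"bar {idx:3d}: importance {importance[idx]:.3f}")
# nearly all importance lands on bars 0, 7 and 42; the other ~97 stay unused
```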
 
What if a person traded manually and showed ML which trades are good and which are bad?
 
BillionerClub #:
What if a person traded manually and showed ML which trades are good and which are bad?

It's a good idea, but I think what matters here is:

- accumulating a lot of statistics;

- that the person trades one thing (one system);

- that the person stays objective and trades systematically.


In that case, I think, you will get a good markup, and you can therefore extract real benefit from it.
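
A minimal sketch of how such a manual markup might be fed to ML, assuming scikit-learn: one row per trade the person took, with their "good / bad" verdict as the label. The trade features, the label column and the random data are hypothetical placeholders.

```python
# A minimal sketch of training on a trader's manual "good / bad" markup.
# Feature names, the label column and the data are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1000
trades = pd.DataFrame({
    "rsi_at_entry": rng.uniform(10, 90, n),
    "atr_at_entry": rng.uniform(0.5, 3.0, n),
    "hour_of_day":  rng.integers(0, 24, n),
    "good_trade":   rng.integers(0, 2, n),          # the human's markup: 1 = good, 0 = bad
})

X = trades.drop(columns="good_trade")
y = trades["good_trade"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
# labels are random here, so expect ~50% accuracy; the point is the pipeline
print("accuracy on held-out trades:", clf.score(X_test, y_test))
```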
