Machine learning in trading: theory, models, practice and algo-trading - page 245

 
Andrey Dik:
I say: simplify and generalize; you say: complicate and add detail.

This is in fact how it is very often done: information criteria are used, for example Akaike's (AIC).

The point is this.

The complexity of the model is penalized. As a result of optimization, you choose a model that is not the most accurate, but simpler.
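As a minimal sketch of how an information criterion penalizes complexity (the parameter counts and log-likelihoods below are made up for illustration): AIC = 2k - 2*ln(L), and the model with the lower score is chosen.

```python
def aic(k, log_likelihood):
    """Akaike information criterion: 2*k - 2*ln(L).
    k = number of free parameters; lower AIC is better."""
    return 2 * k - 2 * log_likelihood

# A complex model fits slightly better (higher log-likelihood)
# but pays a heavy penalty for its 60 parameters.
aic_simple = aic(k=5, log_likelihood=-120.0)    # 250.0
aic_complex = aic(k=60, log_likelihood=-115.0)  # 350.0

best = "simple" if aic_simple < aic_complex else "complex"
print(best)  # the less accurate but simpler model wins
```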

And now with an example.

We take a sample of more than 5000 bars of EURUSD on H1 and build a random forest with the maximum number of trees set to 500. Looking at a plot of the fitting error against the number of trees, it turns out that the error changes very little after about 100 trees. Moreover, enlarging the sample does not increase the number of trees needed. And the trees are the coveted patterns, i.e. EURUSD on H1 contains about 100 patterns. Yet the model is almost always overfitted.
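The plateau described here is what simple averaging predicts. In this toy numpy sketch each "tree" is replaced by an independent noisy predictor (an illustration of the effect, not real forest code): the ensemble error drops steeply at first and barely moves past a certain ensemble size.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 0.0        # the value every "tree" is trying to predict
n_trials = 2000    # repetitions to estimate the error reliably

def ensemble_error(n_trees):
    # each "tree" = truth + independent unit-variance noise;
    # the ensemble prediction is the average over the trees
    preds = rng.normal(truth, 1.0, size=(n_trials, n_trees)).mean(axis=1)
    return float(np.mean((preds - truth) ** 2))

errors = {b: ensemble_error(b) for b in (1, 10, 100, 500)}
# error shrinks roughly as 1/B: a big drop from 1 to 100 trees,
# almost no change from 100 to 500
```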

If we coarsen the model by radically reducing the maximum number of trees, to 10 for example, it can help to fight overfitting.

 
SanSanych Fomenko:


If you coarsen the model by radically reducing the maximum number of trees, to 10 for example, it may help to combat overfitting.

With a forest, overfitting comes not from the number of trees but from their unlimited depth (model complexity); the number of trees reduces variance, while overfitting is bias.
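The variance/bias distinction can be sketched with a toy numpy model in which every "tree" carries the same systematic bias (a stand-in for an over-simple learner; an illustration, not real forest code): averaging more trees removes the variance part of the error, but the squared-bias floor remains.

```python
import numpy as np

rng = np.random.default_rng(1)
truth, bias = 0.0, 0.5   # every "tree" is systematically off by +0.5
n_trials = 4000

def mse(n_trees):
    # each "tree" = truth + bias + unit-variance noise, then averaged
    preds = rng.normal(truth + bias, 1.0, size=(n_trials, n_trees)).mean(axis=1)
    return float(np.mean((preds - truth) ** 2))

m1, m1000 = mse(1), mse(1000)
# m1 ~ bias^2 + variance = 0.25 + 1.0; m1000 ~ bias^2 = 0.25:
# more trees killed the variance but not the bias
```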
 
The problem is:
With the forest, overfitting comes not from the number of trees but from their unlimited depth (model complexity); the number of trees reduces variance, while overfitting is bias.
it all comes down to a 50/50 coin ... a waste of time.
 

What I'm wondering is.

1. What does neural network training in algotrading come down to?

2. What parameters of market dynamics are used in training?

3. Besides recognizing patterns in history, can the network "learn" the specific character of a parameter's change in value at the current moment and react to it?

(for example, a sharp wave-like rise or a smooth fall).

 

I suspect the network does not recognize the character of the current change in the parameter's value... It seems unlikely.

And this looks promising for prediction: it would be enough to remember the transitions between these kinds of changes in order to predict them later.

 
Itum:
it all comes down to a 50/50 coin ... a waste of time.

No, there is a small edge. To put it simply: with ML you can extract much more than by any other means, including "intuition". There are patterns in the market, and ML is the most effective way to extract them.

 
Vizard_:

With some difficulty I broke through 0.69 yesterday (0.68999). That's it, I've had my fun.


Cool!

However, you must agree that 0.69 is a wall that the standard methods can only scratch. I don't think it comes down to the depth of the neural net or to the number of trees in the forest; those who are below 0.6 know some tricks that we don't, and that is actually very motivating not to relax))

 
toxic:

Cool!

However, agree that 0.69

What is that number? Where can I see it?
 
SanSanych Fomenko:
What is this figure? Where can I see it?
logloss on numer.ai
 

As someone only very indirectly related to ML, I try not to interfere in the discussion, but I will allow myself an opinion from my own vantage point.

A typical multilayer neural network consists of neurons. A neuron's activation function is a monotonic function growing from -1 to 1 (or from 0 to 1, it doesn't matter): at the minimum input value it gives the minimum result, and at the maximum input value the maximum result.
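For concreteness, the classic logistic sigmoid behaves exactly this way; a minimal sketch:

```python
import math

def sigmoid(x):
    # monotonic activation squashing the whole real line into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(-10.0))  # ~0.000045 -> near the minimum output
print(sigmoid(0.0))    # 0.5
print(sigmoid(10.0))   # ~0.999955 -> near the maximum output
```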

We train a neural network with N inputs on some region (an input cloud) in N-dimensional space. The network has no idea what to do with data lying outside this cloud, yet we feed it such data and expect a meaningful result.


Konow Retag:

3. Besides recognizing patterns in history, can the network "recognize" the specific character of a parameter's change in value at the current moment and react to it?

(for example a sharp wave-like rise, or a smooth descent).

I suspect the network does not recognize the character of the current change in the parameter's value... It seems unlikely.

And this looks promising for prediction. After all, it would be enough to remember the transitions of these changes to be able to anticipate them later.

Suppose there is an abnormal situation in the market and the price spikes. Or an NN trained on human faces receives an image of multicolored glasses as input. The neurons become overexcited and their outputs land far out on the left or right tail of the sigmoid. As a result, the NN output is some unpredictable but very strong signal.

If such outliers make up even 1% of the training sample, the backprop algorithm keeps yanking all the weights around: the NN never manages to "learn" anything from them, while its performance on the 99% of "normal" data degrades.
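The saturation is easy to see numerically: a logistic neuron pushed far into a tail of the sigmoid is pinned near 0 or 1, and its derivative, the factor that scales backprop weight updates, all but vanishes. A minimal sketch:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # d/dx sigmoid(x) = s * (1 - s); backprop multiplies updates by this
    s = sigmoid(x)
    return s * (1.0 - s)

g_normal = sigmoid_grad(0.5)    # ~0.235: the neuron can still learn
g_outlier = sigmoid_grad(20.0)  # ~2e-9: the neuron is saturated
print(sigmoid(20.0))            # ~1.0 -> a strong but meaningless output
```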

It would be much better to teach the NN to defend itself against "abnormal" inputs rather than "locking" it into impossible requirements. And here, IMHO, we have two options.

1. Introduce one or more layers of neurons with a Gaussian (bell-curve) activation function instead of a sigmoid. Such a neuron produces a result close to 1 within a limited range of input values and close to 0 on the rest of the number line.

2. Change the interpretation of the NN outputs. Instead of the usual binary interpretation (0 = no trade signal, 1 = trade signal), the following is suggested: 0 = no signal, 0.5 = signal, 1.0 = no signal. Under an unexpected price movement the NN output will be driven toward one of the extremes, 0 or 1, and so will not produce a false signal.
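A minimal sketch of both options (function names and thresholds are hypothetical, for illustration only):

```python
import math

# Option 1: Gaussian activation -- close to 1 inside a limited input
# range, close to 0 everywhere else on the number line.
def gaussian_activation(x, mu=0.0, sigma=1.0):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# Option 2: reinterpret the output -- only values near 0.5 count as a
# trade signal; saturated outputs near 0 or 1 (overexcited neurons on
# abnormal input) are treated as "no signal".
def trade_signal(nn_output, band=0.1):
    return abs(nn_output - 0.5) <= band

print(gaussian_activation(0.2))  # ~0.98   (input inside the range)
print(gaussian_activation(6.0))  # ~1.5e-8 (outlier -> output near 0)
print(trade_signal(0.52))        # True  -- a normal, confident signal
print(trade_signal(0.97))        # False -- saturated output is ignored
```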

Dear experts, is there any mention of such things in the MO literature and do you find it useful? Comments are welcome.
