Machine learning in trading: theory, models, practice and algo-trading - page 868

 
elibrarius:
A question on the subject.
It is said that an NS (neural network) can reproduce any indicator internally.
Has anyone tried an experiment: can an NS reproduce, say, an MA? Or MACD, or a digital filter?

Probably not a problem. There are plenty of examples in many NS packages of building such things.

But if you build a TC (trading system) this way, what it does inside remains a mystery. All you get are the coefficients.

 
The problem with this thread is that people here are not stupid: everyone has experience and a formed opinion that no one wants to change. And when you hear something that contradicts your ideas, you don't even give yourself a chance to think that it might be right. That's the problem. Of course there should be interest in the idea, etc. But to say it is wrong without running any tests... In general, we are all too smart, which is why we are poor. And some of us are greedy too :-)
 
Yuriy Asaulenko:

Probably not a problem. There are plenty of examples in many NS packages of building such things.

But if you build a TC this way, what it does inside remains a mystery. All you get are the coefficients.

The packages usually ship with irises, medical datasets, etc.; I don't recall anything with market data.
Although an MA is too simple: for MA10, set v=1 on 10 inputs, v=0 on the rest, and then pick k=10.
Digital filters work the same way, except v will not be 1 but other values according to the filter formula. And in theory the NS can find not one of the standard filters but a unique one that best fits the market.
I.e., one neuron is essentially a digital filter.
Several neurons make it possible to get interactions of several filters (deltas, sums). If you need second-order interactions (delta of a delta), you have to add one more hidden layer.

We can't get the product of two filters from two neurons in the output neuron - it only adds. But it can be computed in a separate neuron, just with different v and k.
All in all, this is a new way for me to look at an NS: as a set of digital filters.
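The claim that one linear neuron is a digital filter can be sketched numerically. A minimal illustration (the function name and the price values are mine, purely illustrative): with equal weights summing to 1, a dot product over the last 10 prices is exactly SMA10, and changing the weights gives any other FIR filter.

```python
import numpy as np

# A single linear neuron: y = dot(w, x) + b.
def neuron(x, w, b=0.0):
    return np.dot(w, x) + b

prices = np.array([1.0, 1.2, 1.1, 1.3, 1.4, 1.2, 1.5, 1.6, 1.4, 1.7])

# Equal weights 1/10 over the last 10 prices -> a 10-period simple MA.
w_sma10 = np.full(10, 1.0 / 10.0)
sma10 = neuron(prices, w_sma10)
assert np.isclose(sma10, prices.mean())

# Any other FIR digital filter is the same neuron with different weights,
# e.g. a linearly weighted MA (WMA10):
w_wma10 = np.arange(1, 11, dtype=float)
w_wma10 /= w_wma10.sum()
wma10 = neuron(prices, w_wma10)
```

Training then amounts to letting the network pick the weights itself instead of fixing them by an indicator formula.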

 
elibrarius:
The packages usually ship with irises, medical datasets, etc.; I don't recall anything with market data.
Although an MA is too simple: for MA10, set v=1 on 10 inputs, v=0 on the rest, and then pick k=10.
Digital filters work the same way, except v will not be 1 but other values according to the filter formula. And in theory the NS can find not one of the standard filters but a unique one that best fits the market.
I.e., one neuron is essentially a digital filter.
Several neurons make it possible to get interactions of several filters (deltas, sums). If you need second-order interactions (delta of a delta), you have to add one more hidden layer.

I wrote something similar earlier. Add 2 layers to the NS and it will build any indicator-predictor for you by itself. And you don't have to bother with it.

 
Yuriy Asaulenko:

I wrote something similar earlier. Add 2 layers to the NS and it will build any indicator-predictor for you by itself. And you don't have to bother with it.

Then it turns out that predictor selection is an unnecessary task that may even get in the way, if the inputs are bar data.
Selection is needed when we blindly feed in a lot of standard and non-standard indicators (e.g., MA, CCI, RSI with different periods), rather than a time series.
From those indicators we should discard the unsuitable ones. An NS with the time series at its input will select the indicators, with the right coefficients, automatically.
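Feeding the network a bare time series instead of hand-picked indicators just means building a matrix of lagged prices. A minimal sketch (the helper name `make_lags` and the toy series are mine): each row holds the last N raw prices, so a linear layer over a row can realize any N-tap MA or filter on its own.

```python
import numpy as np

# Turn a raw price series into a matrix of lagged inputs, so the
# network sees the bare time series instead of hand-picked indicators.
def make_lags(series, n_lags):
    series = np.asarray(series, dtype=float)
    rows = len(series) - n_lags
    X = np.column_stack([series[i:i + rows] for i in range(n_lags)])
    y = series[n_lags:]          # next value as a naive target
    return X, y

prices = np.linspace(1.0, 2.0, 50)   # toy series
X, y = make_lags(prices, n_lags=10)
# X[j] = prices[j : j+10], y[j] = prices[j+10]; a weighted sum over a
# row is exactly the one-neuron digital filter discussed above.
```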
 
elibrarius:
Then it turns out that predictor selection is an unnecessary task that may even get in the way, if the inputs are bar data.
Selection is needed when we blindly feed in a lot of standard and non-standard indicators, rather than a time series.
From those we should discard the unsuitable ones. An NS with the time series at its input will select the indicators, with the right coefficients, automatically.

The input of the NS is a normalized time series. Say, an NS structure of 15-20-15-10-5-1 already does a decent job.

To determine longs and shorts, two NSs are needed.
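The 15-20-15-10-5-1 structure can be sketched as a plain forward pass, assuming it means 15 inputs, hidden layers of 20, 15, 10 and 5 neurons, and one regression output (the random weights here are only to show the shapes; a real net would be trained):

```python
import numpy as np

# Forward pass of a 15-20-15-10-5-1 network with tanh hidden layers
# and a linear regression output. Weights are random, illustrative only.
sizes = [15, 20, 15, 10, 5, 1]
rng = np.random.default_rng(0)
weights = [rng.standard_normal((a, b)) * 0.1
           for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(x @ W + b)               # tanh hidden layers
    return x @ weights[-1] + biases[-1]      # linear output neuron

x = rng.standard_normal(15)   # one normalized 15-lag input vector
y = forward(x)                # single regression output
# For longs and shorts, two independent networks of this shape are trained.
```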

 
elibrarius:
Then it turns out that predictor selection is an unnecessary task that may even get in the way, if the inputs are bar data.
Selection is needed when we blindly feed in a lot of standard and non-standard indicators, rather than a time series.
From those we should discard the unsuitable ones. An NS with the time series at its input will select the indicators, with the right coefficients, automatically.

I have fed the time series (bare prices) to the input.

I have fed in indicators and increments (including ones with exponential periods, etc.).

There is no difference. But there is a difference when you feed in cosines of differences, tangents of differences, and hyperbolic cos and tan. Why that is so, I don't know, but the TC's performance improves somewhat.
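The transforms described here are cheap to build as features. A minimal sketch (the price values are mine, illustrative): take first differences of price, then apply cos, tan, cosh and tanh to them.

```python
import numpy as np

# Nonlinear transforms of price increments as input features:
# cos, tan, cosh and tanh of the bar-to-bar differences.
prices = np.array([1.10, 1.12, 1.11, 1.15, 1.13, 1.16])
d = np.diff(prices)                  # increments (one per bar transition)

features = np.column_stack([
    np.cos(d),
    np.tan(d),
    np.cosh(d),
    np.tanh(d),
])
# One row per increment, four transformed columns; these rows would be
# fed to the model in place of (or alongside) the raw increments.
```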

 
Yuriy Asaulenko:

Mm-hmm. The input of the NS is a normalized time series. Say, an NS structure of 15-20-15-10-5-1 already does a decent job.

To determine longs and shorts, two NSs are needed.

I have also arrived at 2 NSs in practice, and it's the only way I use them. If I use 3 classes (buy, wait, sell), the middle class starts to dominate very quickly, especially if the output neuron is a sigmoid or tanh.
But with regression... in theory one output neuron is enough.
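With a single regression output, the three trading classes can still be recovered by thresholding with a "wait" dead zone. A sketch of that idea (the function name and the 0.3 threshold are mine, purely illustrative):

```python
# Map one regression output to buy/wait/sell with a dead zone around zero.
# The threshold is illustrative; in practice it would be tuned.
def to_signal(y, thresh=0.3):
    if y > thresh:
        return "buy"
    if y < -thresh:
        return "sell"
    return "wait"

outputs = [0.8, 0.1, -0.5, -0.2]             # example net outputs
signals = [to_signal(y) for y in outputs]    # ['buy', 'wait', 'sell', 'wait']
```

Widening or narrowing the dead zone directly controls how often the "wait" class fires, which is the imbalance complained about above.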

 
Maxim Dmitrievsky:

I have fed the time series (bare prices) to the input.

I have fed in indicators and increments (including ones with exponential periods, etc.).

There is no difference. But there is a difference when you feed in cosines of differences, tangents of differences, and hyperbolic cos and tan. Why that is so, I don't know, but the TC's performance improves somewhat.

Then 3-4 hidden layers are needed to build the analogue from bare prices: 1 layer for indicators + 1 layer for deltas + 1 layer for cosines and tangents. Have you tried that?
 
elibrarius:
Then 3-4 hidden layers are needed to build the analogue from bare prices: 1 layer for indicators + 1 layer for deltas + 1 layer for cosines and tangents. Have you tried that?

No, so far I just use random forests (an ensemble of any number of models, on different features); the output is the average.

It all works very quickly, though.
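The forest approach described here, assuming a standard random forest, can be sketched with scikit-learn (the synthetic data and parameter values are mine, illustrative only):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# An ensemble of trees, each fitted on a random feature subset;
# the prediction is the average over all trees.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 8))                        # 8 toy features
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.standard_normal(300)

forest = RandomForestRegressor(
    n_estimators=100,        # number of models in the ensemble
    max_features="sqrt",     # each split considers a random feature subset
    random_state=0,
).fit(X, y)

pred = forest.predict(X[:5])   # average of the 100 trees' predictions
```

Training is fast because each tree is shallow relative to a deep net and the trees fit independently.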
