Machine learning in trading: theory, models, practice and algo-trading - page 2301

 

There are features that, oddly enough, worsen generalization ability (I'm speaking about CatBoost in particular; it probably applies to other models as well). It seems strange, because you are just adding new features, yet the model's error becomes larger than it was without them

For example, I trained on several moving averages, then removed a few, and the accuracy became higher

 
mytarmailS:

No, one layer is primitive; it's just a single multiplication by the weights

That's your theory.

Not mine.

 
Maxim Dmitrievsky:

There are features that, oddly enough, worsen generalization ability (I'm speaking about CatBoost in particular; it probably applies to other models as well). It seems strange, because you are just adding new features, yet the model's error becomes larger than it was without them

For example, I trained on several moving averages, then removed a few, and the accuracy became higher

I described this effect a long time ago

https://www.mql5.com/ru/blogs/post/725189

I detected it by completely retraining the model.

It is noise that interferes with the work.

More on evaluating predictors
  • www.mql5.com
I try to evaluate the importance of the predictors of a trained forest by removing one of them and training the forest again. Then I subtract the error of the forest with the removed predictor from the error of the full forest. If the error
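The remove-and-retrain scheme from the blog post can be sketched in a few lines of numpy. This is a toy illustration, not the blog's code: a least-squares model stands in for the forest, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends on features 0 and 1; feature 2 is pure noise.
n = 1000
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=n)

def fit_mse(X, y):
    # Least-squares fit; in-sample MSE stands in for the forest's error.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((X @ w - y) ** 2)

full_err = fit_mse(X, y)

# Remove-and-retrain importance: retrain without feature i,
# importance = error(model without i) - error(full model).
importance = np.array([
    fit_mse(np.delete(X, i, axis=1), y) - full_err
    for i in range(X.shape[1])
])
print(importance)  # feature 0 largest, then feature 1, noise feature near 0
```

Features whose importance comes out near zero (or negative, out of sample) are the candidates for removal described above.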
 
elibrarius:

I described this effect a long time ago

https://www.mql5.com/ru/blogs/post/725189

I detected it by completely retraining the model.

It is noise that interferes with the work.

Yes, but here you can see how the features interact. Too bad it's tied to a specific ML framework

because the importance can be underestimated due to multicollinearity

Of course, fiddling with it by hand when there are a lot of features is not comme il faut
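A small numpy illustration of how multicollinearity hides importance under a remove-and-retrain scheme (synthetic data, least-squares model as a stand-in for any learner): two nearly identical copies of the informative feature make each copy look useless on its own.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)
# Features 0 and 1 are almost identical (strongly collinear) copies of the
# same informative signal; feature 2 is pure noise.
X = np.column_stack([x, x + 0.01 * rng.normal(size=n), rng.normal(size=n)])
y = 2.0 * x + 0.1 * rng.normal(size=n)

def fit_mse(X, y):
    # Least-squares fit; in-sample MSE as the model's error.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((X @ w - y) ** 2)

full_err = fit_mse(X, y)
# Remove-and-retrain importance per feature.
loo = np.array([fit_mse(np.delete(X, i, axis=1), y) - full_err
                for i in range(X.shape[1])])
# Dropping either collinear copy barely changes the error: its twin
# compensates, so both copies look unimportant individually.
joint = fit_mse(X[:, [2]], y) - full_err  # drop both copies at once
print(loo, joint)
```

Here `loo[0]` and `loo[1]` come out near zero even though dropping both copies together (`joint`) costs a lot of accuracy, which is exactly the underestimation mentioned above.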
 
mytarmailS:

No, one layer is primitive; it's just a single multiplication by the weights

That's your theory.

Here, I found it: Cybenko's theorem (the universal approximation theorem).

The formula presented, y = x1/x2, is continuous and only two-dimensional.


https://www.mql5.com/ru/code/9002

Recommendations:

  • A network with three layers (numLayers=3: one input, one hidden, and one output) is sufficient in the vast majority of cases. According to Cybenko's theorem, a network with one hidden layer can approximate any continuous multidimensional function to any desired degree of accuracy. A network with two hidden layers can approximate any discrete multidimensional function.
Price forecasting using neural networks
  • www.mql5.com
An indicator that uses neural networks to forecast the next few open prices. The network is trained by backpropagation. Training runs automatically; the result is a self-trained network and a self-learning indicator.
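Cybenko's statement can be made concrete with a constructive toy (a ReLU variant of the theorem; the classic proof uses sigmoid units): a single hidden layer that interpolates a continuous function at a grid of knots. This is a synthetic sketch, not the indicator's code.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Target: a continuous one-dimensional function on [0, 1].
f = lambda x: x ** 2

# Build a one-hidden-layer ReLU network that interpolates f at n+1 knots.
# Hidden unit i computes relu(x - t_i); output weights encode the change
# of slope at each knot, so the network is the piecewise-linear interpolant.
n = 20
knots = np.linspace(0.0, 1.0, n + 1)
slopes = np.diff(f(knots)) / np.diff(knots)             # slope per segment
out_w = np.concatenate([[slopes[0]], np.diff(slopes)])  # slope changes
bias = f(knots[0])

def net(x):
    # hidden layer: one ReLU per knot; output: weighted sum plus bias
    h = relu(x[:, None] - knots[:-1][None, :])
    return bias + h @ out_w

xs = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(net(xs) - f(xs)))
print(err)  # small; shrinks as the number of hidden units grows
```

Adding more hidden units (finer knots) drives the error to zero, which is the "any desired degree of accuracy" part of the theorem; no second hidden layer is needed for a continuous target.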
 
elibrarius:

The formula presented, y = x1/x2, is continuous and only two-dimensional.

Is it discrete or continuous?

 
mytarmailS:

Is it discrete or continuous?

It's continuous. Does it have any discontinuities or holes? Did you look at the picture with the examples?


 
elibrarius:

Continuous. Does it have any discontinuities or holes? Did you look at the picture with the examples?

Yes....

A continuous function is a function that changes without instantaneous "jumps" (called discontinuities), that is, one whose small changes in the argument result in small changes in the value of the function. The graph of a continuous function is a continuous line.

 
mytarmailS:

Yes....

A continuous function is a function that changes without instantaneous "jumps" (called discontinuities), that is, one whose small changes in the argument result in small changes in the value of the function. The graph of a continuous function is a continuous line.

At what point does y = x1/x2 break?
 
elibrarius:
At what point does y = x1/x2 break?

x2 = 0 (division by zero: the function is undefined there)
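A quick numeric check of that discontinuity (synthetic points, pure numpy):

```python
import numpy as np

x1 = 1.0
# Approach x2 = 0 from each side: y = x1/x2 diverges with opposite signs,
# so no single value at x2 = 0 can make the function continuous there.
left = x1 / np.array([-1e-1, -1e-3, -1e-6])   # x2 -> 0 from below
right = x1 / np.array([1e-1, 1e-3, 1e-6])     # x2 -> 0 from above
print(left[-1], right[-1])
```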
