Machine learning in trading: theory, models, practice and algo-trading - page 1604

 
Dump it
 
Dmitry:

Oh, so we are talking about the difference in network performance in training and in the test?

There's bound to be some loss of performance; you can't avoid it.

There are two tests: an internal one, where part of the dataset is set aside for validation (usually 0.2), and an external one, where you simply take a chunk the network has never seen. The results on the external test are the real market; if they are not, it means there is a mistake somewhere.
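A minimal sketch of that split, on synthetic data (the dataset, sizes, and variable names here are purely illustrative): carve off the external holdout first, then take 0.2 of the remainder for the internal validation split.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical dataset: 1000 samples, 10 features (illustrative only).
X = rng.normal(size=(1000, 10))
y = rng.normal(size=1000)

# Shuffle once, then cut three non-overlapping pieces:
#   train         - what the network learns from
#   internal test - the usual 0.2 validation split
#   external test - a chunk the network never sees during training
idx = rng.permutation(len(X))
n_ext = int(0.2 * len(X))             # external holdout, set aside first
n_val = int(0.2 * (len(X) - n_ext))   # 0.2 of the remainder for validation

ext_idx = idx[:n_ext]
val_idx = idx[n_ext:n_ext + n_val]
tr_idx = idx[n_ext + n_val:]

X_train, y_train = X[tr_idx], y[tr_idx]
X_val, y_val = X[val_idx], y[val_idx]
X_ext, y_ext = X[ext_idx], y[ext_idx]
```

For market data you would normally take the external chunk as the most recent time period rather than a random shuffle, otherwise future bars leak into training.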
 
Evgeny Dyuka:
There are two tests: an internal one, where part of the dataset is set aside for validation (usually 0.2), and an external one, where you simply take a chunk the network has never seen. The results on the external test are the real market; if they are not, it means there is a mistake somewhere.

I have to disappoint you, but in fact your "test to check" is part of the training sample.
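A minimal numeric illustration of this point (synthetic data, nothing to do with either poster's actual models): once the internal split is used to choose among candidate models, it effectively joins the training sample, and its score stops estimating out-of-sample performance. Here the labels are pure coin flips, so every candidate's true accuracy is exactly 0.5, yet the winner chosen on the internal split looks well above chance there.

```python
import numpy as np

rng = np.random.default_rng(0)

# Labels with no learnable signal at all (coin flips), so any model's
# true accuracy is exactly 0.5.
y_val = rng.integers(0, 2, size=200)   # the "test to check" split
y_ext = rng.integers(0, 2, size=200)   # truly unseen external chunk

# 200 hypothetical candidate models / hyperparameter settings,
# each just guessing at random.
n_candidates = 200
val_preds = rng.integers(0, 2, size=(n_candidates, 200))
ext_preds = rng.integers(0, 2, size=(n_candidates, 200))

val_acc = (val_preds == y_val).mean(axis=1)
best = int(np.argmax(val_acc))          # pick the winner on the internal split

best_val_acc = val_acc[best]            # looks well above chance here...
best_ext_acc = (ext_preds[best] == y_ext).mean()  # ...but is chance outside
```

The inflated `best_val_acc` is pure selection bias; only the external chunk, which played no part in choosing the model, gives an honest estimate.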

 
Dimitri:
Dump it
I'm not going to dump it. But I'm ready to save someone who is now banging his head against the wall trying to solve a specific problem. I went through this myself; the information I can give may save a month of wandering in the dark.
 
Evgeny Dyuka:
I'm not going to dump it. But I'm ready to save someone who is now banging his head against the wall trying to solve a specific problem. I went through this myself; the information I can give may save a month of wandering in the dark.

Okay, spill your "information".

 
Dimitri:

Okay, spill your "information".

The general reasoning for the public is in the Telegram channel; there you can trace the history of the work. Here I would rather be more specific.
 
Evgeny Dyuka:
The general reasoning for the public is in the Telegram channel; there you can trace the history of the work. Here I would rather be more specific.

All right, spill the "subject information".

 

So far everything works very fast. I'll connect it to the database and see later.

 
Evgeny Dyuka:
There are two tests: an internal one, where part of the dataset is set aside for validation (usually 0.2), and an external one, where you simply take a chunk the network has never seen. The results on the external test are the real market; if they are not, it means there is a mistake somewhere.

Evgeny, good day, and thank you, at least for being a practitioner and not another windbag, of which there are 95%... What you do (the test on a "third" sample) is called, in GMDH terms (Group Method of Data Handling), the "criterion of predictive ability": http://www.machinelearning.ru/wiki/index.php?title=%D0%9C%D0%93%D0%A3%D0%90#.D0.9A.D1.80.D0.B8.D1.82.D0.B5.D1.80.D0.B8.D0.B9_absolute_noise-immune

Let me remind you that the first publications on GMDH date back to around 1960, so "your know-how" idea of a test on a "third" sample is already 60 years old )))

But I'll note that the approach never gets old, so I strongly recommend reading the works of A.G. Ivakhnenko...

For example, GMDH regression simply puts the regression of the modern random forest algorithm, and all sorts of boostings, to shame...

Now about the links in Telegram... I found nothing but signals there, but it would be interesting to read about your approach and your way of thinking. Dmitry was right that you should publish here, even if he put it in an openly boorish form...

 
mytarmailS:

Evgeny, good day, and thank you, at least for being a practitioner and not another windbag, of which there are 95%... What you do (the test on a "third" sample) is called, in GMDH terms (Group Method of Data Handling), the "criterion of predictive ability": http://www.machinelearning.ru/wiki/index.php?title=%D0%9C%D0%93%D0%A3%D0%90#.D0.9A.D1.80.D0.B8.D1.82.D0.B5.D1.80.D0.B8.D0.B9_absolute_noise-immune

Let me remind you that the first publications on GMDH date back to around 1960, so "your know-how" idea of a test on a "third" sample is already 60 years old )))

But I'll note that the approach never gets old, so I strongly recommend reading the works of A.G. Ivakhnenko...

For example, GMDH regression simply puts the regression of the modern random forest algorithm, and all sorts of boostings, to shame...

Now about the links in Telegram... I found nothing but signals there, but it would be interesting to read about your approach and your way of thinking. Dmitry was right that you should publish here, even if he put it in an openly boorish form...

JPrediction simply uses Ivakhnenko's Group Method of Data Handling; Yuri Reshetov said so more than once... The method itself is heavy on machine hours, because it shakes the data out very thoroughly, but it doesn't require large samples to fit the current realities.

Whoever doesn't believe it can check for himself :-)
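For anyone who wants to see the idea rather than chase the 1960s papers: below is a minimal, loosely GMDH-style sketch in numpy. It is not JPrediction's actual code and is a simplification of Ivakhnenko's method; the toy target `y = x0*x1 + 0.5*x2`, the pairwise-quadratic candidate form, and all names are invented for the illustration. The core point it demonstrates is the external criterion: candidates are fit on one part of the data but ranked on a separate selection part, and each layer feeds the best candidates' outputs forward as new features.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_pair(xi, xj, y):
    # Least-squares fit of the candidate model  y ~ a + b*xi + c*xj + d*xi*xj
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_pair(xi, xj, coef):
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj])
    return A @ coef

def gmdh_layer(Xtr, ytr, Xsel, ysel, keep=4):
    """One GMDH-style layer: fit every pairwise candidate on the training
    part, rank candidates by MSE on the *selection* part (the external
    criterion), and pass the best `keep` outputs on as new features."""
    n = Xtr.shape[1]
    cands = []
    for i in range(n):
        for j in range(i + 1, n):
            coef = fit_pair(Xtr[:, i], Xtr[:, j], ytr)
            mse = np.mean((predict_pair(Xsel[:, i], Xsel[:, j], coef) - ysel) ** 2)
            cands.append((mse, i, j, coef))
    cands.sort(key=lambda c: c[0])
    best = cands[:keep]
    new_tr = np.column_stack(
        [predict_pair(Xtr[:, i], Xtr[:, j], c) for _, i, j, c in best])
    new_sel = np.column_stack(
        [predict_pair(Xsel[:, i], Xsel[:, j], c) for _, i, j, c in best])
    return new_tr, new_sel, best[0][0]

# Toy target with hidden structure: y = x0*x1 + 0.5*x2, five candidate inputs.
Xtr, Xsel = rng.normal(size=(1000, 5)), rng.normal(size=(1000, 5))
ytr = Xtr[:, 0] * Xtr[:, 1] + 0.5 * Xtr[:, 2]
ysel = Xsel[:, 0] * Xsel[:, 1] + 0.5 * Xsel[:, 2]

Xtr, Xsel, err1 = gmdh_layer(Xtr, ytr, Xsel, ysel)  # first layer
Xtr, Xsel, err2 = gmdh_layer(Xtr, ytr, Xsel, ysel)  # second layer refines
```

The first layer can capture `x0*x1` or `0.5*x2` but not both in one pairwise model; the second layer combines those partial models, so the selection-set error drops sharply. In a full GMDH you would keep adding layers until the external criterion stops improving.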
