Machine learning in trading: theory, models, practice and algo-trading - page 1050

 
Alexander_K:

Let me remind you that Aleshenka and Koldun (who seem to be the only ones here having any success trading with neural networks)

Do you have any proof?

Alexander_K:

They spend a lot of time on preparing the input data.

That's pretty standard; even a newcomer to ML will tell you "garbage in, garbage out".

Alexander_K:

I deliberately provoke them into responding with my posts :))) But keep that a secret...

))) funny

 
mytarmailS:

Do you have any proof?

They almost always delete their posts; you can only communicate with them online, in real time.

 
Alexander_K:

They almost always delete their posts; you can only communicate with them online, in real time.

Ahh, that makes sense then...

 
Alexander_K:

They almost always delete their posts; you can only communicate with them online, in real time.

They delete their posts so as not to embarrass themselves) and you keep the myths about them alive

 
Maxim Dmitrievsky:

They delete their posts so as not to embarrass themselves) and you keep the myths about them alive

Maybe so, Max - no arguments :))

 
mytarmailS:

And Reshetov? Well, yes, he once said he was familiar with GMDH.

The very idea of enumerating predictors by building models, and then building models of models of increasing complexity, is, in my opinion, very sound.

But maybe the mistake is searching over predictors rather than over the trading system's decisions in the environment?

Building models on top of models, stacking or whatever else, is imho already too much. Because if they are really trained on garbage, it won't help; you get improvements of fractions of a percent, which mean nothing.

There is no mistake; there are simply no regularities )

By the way, while fiddling with Reshetov's software, either via the name of a subdirectory in the program's libraries or somewhere like that, I came across

http://www.gmdh.net/gmdh.htm

So it's one and the same thing: the libs there are in Java, and so is his program.

And beyond that he simply has a tandem of two classifiers, an SVM and an MLP, which are iteratively trained on transformed features. That's why it all takes so long to run.

The GMDH method for socio-economic forecasting, mathematical modeling, statistical data analysis, analytical evaluation of systems, and programming.
  • Grigory Ivakhnenko
  • www.gmdh.net
The Group Method of Data Handling is used in a wide variety of fields for data analysis and knowledge discovery, forecasting and systems modeling, optimization and pattern recognition. Inductive GMDH algorithms give a unique opportunity to automatically find interdependencies in data, select the optimal structure of a model or network, and...
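
A rough sketch of what such a tandem could look like in Python (this is only a reconstruction of the description above, not Reshetov's actual code; the feedback scheme, the toy data, and all parameters are assumptions):

```python
# Hypothetical sketch of a "tandem" of two classifiers (SVM + MLP)
# iteratively trained on transformed features. NOT Reshetov's actual
# algorithm: the feedback scheme here is an assumption for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                      # toy feature matrix
y = (X[:, 0] * X[:, 1] + X[:, 2] > 0).astype(int)  # toy labels
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

Z_tr, Z_va = X_tr, X_va
for it in range(3):  # a few tandem iterations
    # the SVM is fit on the current (transformed) features; its probability
    # output is fed back in as an extra feature for the next stage
    svm = SVC(probability=True).fit(Z_tr, y_tr)
    Z_tr = np.column_stack([X_tr, svm.predict_proba(Z_tr)[:, 1]])
    Z_va = np.column_stack([X_va, svm.predict_proba(Z_va)[:, 1]])
    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(Z_tr, y_tr)
    print(f"iter {it}: validation accuracy {mlp.score(Z_va, y_va):.3f}")
```

The repeated SVC(probability=True) fits (Platt scaling runs an internal cross-validation) would also be consistent with why such a scheme is slow.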
 
Maxim Dmitrievsky:

Building models on top of models, stacking or whatever else, is imho already too much. Because if they are really trained on garbage, it won't help; you get improvements of fractions of a percent, which mean nothing.

"Not make something more complex out of something more primitive"? That is the principle of nature itself: we developed from a sperm cell (or from a thought, if you look even further back)). And look how far you've grown)

So there is nothing wrong with making the model more complex. Besides, every complication is checked against an external and an internal criterion, i.e. the error is measured both inside and outside the sample; if the error starts to grow as the model gets more complex, the algorithm stops... I haven't used it yet, but the method appeals to me very much.
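
A minimal sketch of that stopping rule (using the polynomial degree as the "complexity" knob is my own illustrative assumption; GMDH proper grows the model structure itself):

```python
# Sketch of the external-criterion idea: grow model complexity, measure
# the error inside (train) and outside (validation) the sample, and stop
# as soon as the outside error starts to grow.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 3))
y = X[:, 0] ** 2 - X[:, 1] * X[:, 2] + rng.normal(scale=0.5, size=300)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=1)

best_outside = np.inf
for degree in range(1, 8):                    # increasing complexity
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_tr, y_tr)
    inside = mean_squared_error(y_tr, model.predict(X_tr))   # internal criterion
    outside = mean_squared_error(y_va, model.predict(X_va))  # external criterion
    print(f"degree {degree}: inside {inside:.3f}, outside {outside:.3f}")
    if outside > best_outside:                # outside error grew -> stop
        break
    best_outside = outside
```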

 
mytarmailS:

"Not make something more complex out of something more primitive"? That is the principle of nature itself: we developed from a sperm cell)) or from a thought, if you look even further back...)

So there is nothing wrong with making the model more complex. Besides, every complication is checked against an external and an internal criterion, i.e. the error is measured both inside and outside the sample... In short, I haven't tried it yet, but the method appeals to me very much.

We take an ordinary feature matrix; at each iteration we add a new composite feature built from all the features and retrain; then we replace that feature with a more complex one via the Kolmogorov-Gabor polynomial and retrain, some five times over... until the error drops.
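
A sketch of that loop (my reading of the procedure; real GMDH builds pairwise Kolmogorov-Gabor terms and keeps several survivors per layer, so this is simplified):

```python
# Simplified GMDH-style loop: at each step, build candidate composite
# features from pairs of existing features (a quadratic, Kolmogorov-Gabor
# style term), keep the one that most improves the out-of-sample error of
# a linear model, and stop when nothing improves any more.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 5))
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(scale=0.3, size=400)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=2)

def val_error(F_tr, F_va):
    m = LinearRegression().fit(F_tr, y_tr)
    return mean_squared_error(y_va, m.predict(F_va))

err = val_error(X_tr, X_va)
for layer in range(10):
    best = None
    for i, j in combinations(range(X_tr.shape[1]), 2):
        c_tr = (X_tr[:, i] * X_tr[:, j]).reshape(-1, 1)  # candidate feature
        c_va = (X_va[:, i] * X_va[:, j]).reshape(-1, 1)
        e = val_error(np.hstack([X_tr, c_tr]), np.hstack([X_va, c_va]))
        if e < err and (best is None or e < best[0]):
            best = (e, c_tr, c_va)
    if best is None:                                     # error stopped dropping
        break
    err, c_tr, c_va = best
    X_tr, X_va = np.hstack([X_tr, c_tr]), np.hstack([X_va, c_va])
    print(f"layer {layer}: validation MSE {err:.4f}")
```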

But in practice that will hardly ever happen on noisy data.

If the error is still bad, we take all these polynomial features and use them to create new features :) But then you need a very fast neural network or a linear model, otherwise you'll be waiting a year.

Or, even simpler, take a kernel SVM or a deep NN and get the same result (simply by adding layers to a neural network you can get exactly the same result as with the transformed features), i.e. there are no miracles here.
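
That equivalence is easy to check on a toy example, at least for the polynomial case (an illustration, not a market test; the two scores should come out close rather than identical):

```python
# Toy check: an SVM with a polynomial kernel vs. an SVM on explicitly
# constructed polynomial features. The kernel trick computes the same
# expansion implicitly, so the results should be close (not bit-identical,
# since the explicit features are scaled slightly differently).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 4))
y = (X[:, 0] * X[:, 1] - X[:, 2] ** 2 > 0).astype(int)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=3)

kernel_svm = SVC(kernel="poly", degree=2, coef0=1.0).fit(X_tr, y_tr)

poly = PolynomialFeatures(degree=2).fit(X_tr)
explicit_svm = SVC(kernel="linear").fit(poly.transform(X_tr), y_tr)

print("kernel trick:     ", kernel_svm.score(X_va, y_va))
print("explicit features:", explicit_svm.score(poly.transform(X_va), y_va))
```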

It says there that GMDH was the first analogue of a deep NN.

 
Maxim Dmitrievsky:

We take an ordinary feature matrix; at each iteration we add a new composite feature built from all the features and retrain; then we replace that feature with a more complex one via the Kolmogorov-Gabor polynomial and retrain, some five times over... until the error drops.

But in practice that will hardly ever happen on noisy data.

If the error is still bad, we take all these polynomial features and use them to create new features :) But then you need a very fast neural network or a linear model, otherwise you'll be waiting a year.

Or, even simpler, take a kernel SVM or a deep NN and get the same result (simply by adding layers to a neural network you can get exactly the same result as with the transformed features), i.e. there are no miracles here.

It says there that GMDH was the first analogue of a deep NN.

I don't know whether that's true or not)) I'll just add that the trader with the super robot who used GMDH took not polynomials but Fourier series (harmonics), and as we know, Fourier spectral analysis is supposedly not applicable to financial markets because it is designed for periodic functions. Yet it worked for him, and how) So I don't know; you have to try everything.
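
If one wanted to try harmonics as the basis instead of polynomials, the candidate terms could simply be sine/cosine features in a linear model; a toy sketch (the frequency grid and everything else here are assumptions, not the trader's actual method):

```python
# Toy sketch: harmonic (Fourier-style) terms as features for a linear
# model instead of polynomial terms. The frequency grid is arbitrary;
# nothing here reproduces the trader's actual system.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 500)                  # "time" axis
y = np.sin(2.1 * t) + 0.4 * np.cos(5.3 * t) + rng.normal(scale=0.2, size=t.size)

freqs = np.arange(0.5, 8.0, 0.5)                 # candidate harmonics
F = np.column_stack([f(w * t) for w in freqs for f in (np.sin, np.cos)])

model = LinearRegression().fit(F[:400], y[:400])            # fit on the "past"
print("out-of-sample R^2:", model.score(F[400:], y[400:]))  # the "future" part
```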

 
mytarmailS:

Yet it worked for him, and how)

Is there a continuation of the story?

From my observations, if a trading system shows nothing but positive results, a steady loss will follow; I'm talking about TSs with a fixed lot and stop-losses.
