Machine learning in trading: theory, models, practice and algo-trading - page 2829

 
СанСаныч Фоменко #:

We use gradient descent algorithms, which, generally speaking, are not specific to neural networks and have a very long history behind them. Google it instead of asking childish questions, and find out how gradient descent overcomes the various kinds of traps set by local extrema. People have been working on exactly this for years.

You were advised to test learning/optimisation on a few representative test functions; that is good practice.

If you think neural networks handle it perfectly, you are probably wrong.
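
For illustration only, here is a minimal sketch of what "test the optimiser on representative functions" might look like in Python with SciPy. Rastrigin is a standard multimodal benchmark riddled with local minima; a multi-start gradient-based search (L-BFGS-B) is compared with a population-based global method (differential evolution). The dimension, restart count and seeds are arbitrary illustrative choices, not anyone's published protocol.

```python
# A minimal, self-contained sketch (Python/SciPy) of testing optimisers on a
# representative benchmark function. Rastrigin is a standard multimodal test
# function with many local minima; its global minimum is 0 at the origin.
import numpy as np
from scipy.optimize import minimize, differential_evolution

def rastrigin(x):
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

dim = 5
bounds = [(-5.12, 5.12)] * dim
rng = np.random.default_rng(1)

# Multi-start local search with a gradient-based method: each run can easily
# get trapped in one of the many local minima.
best_local = min(
    minimize(rastrigin, rng.uniform(-5.12, 5.12, dim),
             method="L-BFGS-B", bounds=bounds).fun
    for _ in range(20)
)

# A population-based global method on the same problem, for comparison.
best_global = differential_evolution(rastrigin, bounds, seed=1).fun

print("best of 20 L-BFGS-B restarts :", best_local)
print("differential evolution       :", best_global)
```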

 
Maxim Dmitrievsky #:

You were advised to test learning/optimisation on a few representative test functions; that is good practice.

If you think neural networks handle it perfectly, you are probably wrong.

Yeah... frankly speaking, I am surprised that not everyone takes such simple truths on board: everything should be tested, and belief in dogmas voiced by someone else means stagnation and degradation.

And gradient descent and backprop are such ancient and crude methods that it is strange anyone can still take them seriously.

By the way, the genetic algorithm is far from the most robust algorithm these days. It should be included in the comparison table just to make it clear that there is always room for improvement.

 
Andrey Dik #:

Yeah... frankly speaking, I am surprised that not everyone takes such simple truths on board: everything should be tested, and belief in dogmas voiced by someone else means stagnation and degradation.

And gradient descent and backprop are such ancient and crude methods that it is strange anyone can still take them seriously.

The perception over there is peculiar: at the level of a cargo cult and belief in the divine R that brings the gifts of civilisation.

 

On the one hand, learning is a special case of optimisation; on the other hand, it has some peculiarities of its own.

1) Optimisation in ML usually means, at its core, optimisation in an infinite-dimensional function space. Sometimes this is explicit (gradient boosting, for example), sometimes implicit. It gives models incredible flexibility, but the clarity and simplicity of finite-dimensional optimisation are lost. For example, an extremum on a finite-dimensional subspace may well turn out to be a saddle point on a subspace of higher dimension (which can be arbitrarily high).

2) A finite set of well-behaved loss functions is used. This avoids the problems that arise from the first point, but if you want to customise the loss function, it is either impossible or very difficult, as the sketch below illustrates.
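
As a sketch of the "very difficult" case, here is what plugging in a custom loss can look like, assuming XGBoost's Python API, which accepts a user-supplied objective returning the per-sample gradient and hessian. The pseudo-Huber loss, the toy data and all parameter values are only examples; the point is that you have to derive and supply the first and second derivatives yourself.

```python
# A minimal sketch (assuming XGBoost's Python API) of plugging in a custom
# loss: the library expects an objective that returns the per-sample gradient
# and hessian, derived by hand. Pseudo-Huber is used here purely as an
# example of a "non-standard" loss.
import numpy as np
import xgboost as xgb

def pseudo_huber(preds, dtrain, delta=1.0):
    """Gradient and hessian of the pseudo-Huber loss w.r.t. the prediction."""
    r = preds - dtrain.get_label()
    scale = 1.0 + (r / delta) ** 2
    grad = r / np.sqrt(scale)       # first derivative
    hess = 1.0 / scale ** 1.5       # second derivative
    return grad, hess

# Toy regression data, only to make the sketch runnable.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X @ np.array([0.5, -1.0, 0.0, 2.0, 0.3]) + 0.1 * rng.standard_normal(200)
dtrain = xgb.DMatrix(X, label=y)

booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                    num_boost_round=50, obj=pseudo_huber)
```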

 
Aleksey Nikolayev #:

On the one hand, learning...

Alexei, do you know anything about optimising a noisy function?
 
There is exhaustive search, and there is optimisation. Optimisation exists to cut the time needed to find a good solution, so it is always a compromise. You can optimise with plain stochastic gradient descent and get a better result than with Adam, but you sacrifice time, and you have to choose. For some tasks accuracy matters more than speed, for example when you are trying to increase the expected payoff of a trading system.
Here it is interesting to look at the results visually.
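
To make that trade-off easy to check, here is a minimal NumPy sketch that runs hand-written SGD and Adam on the same noisy quadratic objective. Which one ends up more accurate depends entirely on the step sizes and iteration budget; the constants below are arbitrary illustrative values, not a claim about which method wins in general.

```python
# A minimal NumPy sketch of the speed/accuracy trade-off: hand-written SGD and
# Adam minimise the same noisy quadratic f(x) = 0.5*||x||^2, whose gradient is
# observed with Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x, sigma=0.5):
    """True gradient x plus Gaussian noise, imitating a noisy objective."""
    return x + sigma * rng.standard_normal(x.shape)

def run_sgd(steps=2000, lr=0.01):
    x = np.full(10, 5.0)
    for _ in range(steps):
        x -= lr * noisy_grad(x)
    return np.linalg.norm(x)                      # distance to the optimum at 0

def run_adam(steps=2000, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    x = np.full(10, 5.0)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = noisy_grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)                 # bias-corrected moments
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return np.linalg.norm(x)

print("SGD  final error:", run_sgd())
print("Adam final error:", run_adam())
```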
 
Andrey Dik #:

that's terrible.

The horror is that a person comes to articles about optimisation knowing less than 20% of the topic, so he does not recognise common knowledge and is surprised by it...

The horror is that people with even less qualification have this pus poured into their heads and take it for knowledge from gurus, and the output is a pack of intellectual cripples...

And all kinds of non-gurus happily go along with it, because they are already crippled themselves and pour out the same pus, calling the products of their egos articles...

That is the real horror!!!

 
You ought to wipe your snot before making pathetic speeches, you trade-school dropout.

It does not look good.
 
mytarmailS #:

The horror is that a person comes to articles about optimisation knowing less than 20% of the topic, so he does not recognise common knowledge and is surprised by it...

The horror is that people with even less qualification have this pus poured into their heads and take it for knowledge from gurus, and the output is a pack of intellectual cripples...

You'd better keep quiet; you would look much smarter, or at least better mannered.
 
all my snot is already on you, you brainless pantywaist )) ahahh