Machine learning in trading: theory, models, practice and algo-trading - page 2829

We use gradient descent algorithms, which in general are not specific to neural networks and have a very long history. Google it instead of asking childish questions, and learn how gradient descent escapes the various traps of local extrema. People have been working on exactly this for years.
You have been asked to test learning/optimisation on a few representative test functions; this is good practice.
If you think neural networks handle this perfectly, you are probably wrong.
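As an aside, here is a minimal sketch (mine, not from the thread) of what such a test could look like: plain gradient descent on the Rastrigin function, a standard multimodal test function, usually stalls in a local minimum, while even a crude remedy like random restarts already helps. The choice of function, step size, and restart count are arbitrary illustrative assumptions.

```python
import numpy as np

def rastrigin(x):
    # Classic multimodal test function: global minimum 0 at x = 0,
    # surrounded by a regular grid of local minima.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def rastrigin_grad(x):
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

def gradient_descent(x0, lr=0.001, steps=2000):
    x = x0.copy()
    for _ in range(steps):
        x -= lr * rastrigin_grad(x)
    return x

rng = np.random.default_rng(0)
x0 = rng.uniform(-5.12, 5.12, size=5)

x_gd = gradient_descent(x0)
print("plain gradient descent:", rastrigin(x_gd))   # usually stuck in a local minimum

# Cheap remedy: random restarts, keep the best of several runs.
best = min((gradient_descent(rng.uniform(-5.12, 5.12, size=5)) for _ in range(50)),
           key=rastrigin)
print("best of 50 restarts:   ", rastrigin(best))
```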
Yeah... frankly speaking, I am surprised that not everyone has taken in these simple truths: everything should be tested, and belief in dogmas voiced by someone else means stagnation and degradation.
And gradient descent and backprop are such ancient and feeble methods that it is strange that anyone still takes them seriously.
By the way, genetic algorithms are not the most robust methods nowadays either. They should be included in the comparison table just to make it clear that there is always room for improvement.
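For reference, since the post treats genetic algorithms as one row of such a comparison table, here is a minimal sketch of a simple genetic algorithm on the same Rastrigin test function. The population size, truncation selection, uniform crossover, and Gaussian mutation settings are arbitrary illustrative choices, not a recommendation.

```python
import numpy as np

def rastrigin(x):
    # Vectorised over a population: x has shape (pop_size, dim).
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def simple_ga(dim=5, pop_size=60, generations=200, sigma=0.3, elite=10, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.12, 5.12, size=(pop_size, dim))
    for _ in range(generations):
        fitness = rastrigin(pop)
        parents = pop[np.argsort(fitness)[:elite]]           # truncation selection
        idx = rng.integers(0, elite, size=(pop_size, 2))      # two parents per child
        mask = rng.random((pop_size, dim)) < 0.5              # uniform crossover
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        pop = children + rng.normal(0, sigma, size=children.shape)  # Gaussian mutation
    best = pop[np.argmin(rastrigin(pop))]
    return best, rastrigin(best[None, :])[0]

best, value = simple_ga()
print("GA best value:", value)
```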
The perception there is peculiar, at the level of a cargo cult and belief in the divine R bringing the gifts of civilisation.
On the one hand, learning is a special case of optimisation; on the other hand, there are some peculiarities.
1) Optimisation in ML usually implies, inherently, optimisation in an infinite-dimensional function space. Sometimes this is explicit (gradient boosting, for example), but it can also be implicit. This gives models incredible flexibility, but the clarity and simplicity of finite-dimensional optimisation is lost. For example, any extremum on a finite-dimensional subspace may well turn out to be a saddle point on a subspace of higher dimensionality (which can be arbitrarily high).
2) A small set of well-behaved loss functions is used, which avoids the problems arising from the first point. But if you want to customise the loss function, it will be either impossible or very difficult.
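To make point 1 concrete, here is a minimal sketch (my own, not from the post) of the explicit case: a gradient-boosting loop fits each new tree to the negative gradient of the loss with respect to the current predictions, i.e. it performs gradient descent in function space, with the loss pluggable. The Huber gradient here stands in for any custom loss; the hyperparameters and toy data are arbitrary.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def huber_grad(y, f, delta=1.0):
    # Gradient of the Huber loss w.r.t. the prediction f -- a stand-in
    # for whatever custom, well-behaved loss you want to plug in.
    r = f - y
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def gradient_boost(X, y, loss_grad, n_trees=200, lr=0.1, depth=2):
    F = np.full(len(y), y.mean())          # start from a constant model
    trees = []
    for _ in range(n_trees):
        residual = -loss_grad(y, F)        # pseudo-residuals = -dL/dF
        tree = DecisionTreeRegressor(max_depth=depth).fit(X, residual)
        F += lr * tree.predict(X)          # step along the fitted direction in function space
        trees.append(tree)
    return trees, F

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
trees, F = gradient_boost(X, y, huber_grad)
print("mean |error| on the training data:", np.mean(np.abs(F - y)))
```

Libraries such as XGBoost and LightGBM expose the same idea through custom objective hooks (you supply the gradient and Hessian), which is usually where the "very difficult" part of point 2 shows up in practice.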
On the one hand, learning...
that's terrible.
The horror is that a person comes to articles about optimisation without knowing even 20% of the topic, so he does not recognise common knowledge and is surprised by it...
The horror is that people with even less qualification pour this pus into their heads, taking it for knowledge from gurus, and the output is a pack of intellectual cripples...
And all kinds of non-gurus happily agree, because they are already crippled and pour out pus themselves, calling their egos articles...
That's the real horror!!!