Discussion of article "Probability theory and mathematical statistics with examples (part I): Fundamentals and elementary theory" - page 8
Sixth-generation mathematician, way ahead of him.
At an everyday level (as I understand it), he writes about the cyclicality/clustering of volatility and the importance of higher timeframes.
Interesting that he tried to calculate this mathematically, not just an abstract description.
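Volatility clustering can indeed be checked numerically. A minimal Python sketch, using a synthetic GARCH(1,1)-style series with made-up parameters (not calibrated to any market): raw returns are nearly uncorrelated, while their absolute values show persistent autocorrelation, which is what "clustering" means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate returns with volatility clustering via a GARCH(1,1)-style recursion.
# omega, alpha, beta are illustrative values, not fitted to any real market.
n = 5000
omega, alpha, beta = 0.05, 0.1, 0.85
sigma2 = np.empty(n)
r = np.empty(n)
sigma2[0] = omega / (1 - alpha - beta)   # unconditional variance as the start
r[0] = rng.normal(0, np.sqrt(sigma2[0]))
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = rng.normal(0, np.sqrt(sigma2[t]))

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# Raw returns: close to zero autocorrelation.
print(autocorr(r, 1))
# Absolute returns: clearly positive autocorrelation (volatility clustering).
print(autocorr(np.abs(r), 1))
```

The same check on real price data would use log returns of a closing-price series instead of the simulated `r`.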
So what prevents them (Ataman, Ilyinsky) from writing an introduction in plain language, so that you don't have to reread every line 10 times?
There were hints that the first of them was quite successful at managing very large sums of money. So he obviously wrote all this just for his own pleasure, which is usually lost when you have to spell everything out. And he may not have had enough free time.
Ilyinsky covers very advanced but fairly standard financial mathematics, which takes at least two years of a maths degree to understand. How do you cram that into a couple of lectures? I don't know; I would probably limit myself to a demonstration of the yuima package for R.
I liked the way Abbakumov explains it.
For basic mathematical statistics Python is fine, but for more advanced things R is better.
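As an illustration of what such basic statistics look like in Python, here is Welch's two-sample t-statistic computed by hand with numpy on synthetic data (in practice `scipy.stats.ttest_ind` with `equal_var=False` would also return the p-value):

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, 500)   # synthetic sample 1
b = rng.normal(0.3, 1.0, 500)   # synthetic sample 2, mean shifted by 0.3

# Welch's t-statistic: does not assume equal variances.
ma, mb = a.mean(), b.mean()
va, vb = a.var(ddof=1), b.var(ddof=1)
t = (ma - mb) / np.sqrt(va / len(a) + vb / len(b))

# Noticeably negative, since b's mean is higher than a's.
print(round(t, 3))
```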
This package has a nice GUI.
That's basically what I had in mind. As it says on their title page: "No coding required", "Suitable for beginners, students".
An interesting passage, maybe relevant to the topic of trading. "Value" is probably a mistranslation; in this context "cost" fits better.
Many random functions have a remarkable property: the probability that the eigenvalues of the Hessian matrix are all positive increases as we approach regions of low cost. In our coin-flipping analogy, this means the probability of flipping heads n times in a row is higher if we are at a critical point with low cost. It also means that local minima with low cost are much more likely than those with high cost. Critical points with high cost are much more likely to be saddle points, and critical points with very high cost are much more likely to be local maxima. This holds for many classes of random functions. And for neural networks? Baldi and Hornik (1989) proved theoretically that small autoencoders without nonlinearities have global minima and saddle points, but no local minima with cost higher than the global minimum.
Something about deep learning problems here.
Cost is a common name for the value of the function being optimised.
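The classification of critical points by Hessian eigenvalues from the quoted passage can be demonstrated on toy functions. A sketch with hypothetical example functions (not from the article), using a numerical Hessian:

```python
import numpy as np

def hessian(f, p, eps=1e-5):
    """Numerical Hessian of f at point p via central differences."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            H[i, j] = (f(p + e_i + e_j) - f(p + e_i - e_j)
                       - f(p - e_i + e_j) + f(p - e_i - e_j)) / (4 * eps ** 2)
    return H

def classify(f, p):
    """Label a critical point by the signs of the Hessian eigenvalues."""
    w = np.linalg.eigvalsh(hessian(f, p))
    if np.all(w > 0):
        return "local minimum"
    if np.all(w < 0):
        return "local maximum"
    return "saddle point"

saddle = lambda x: x[0] ** 2 - x[1] ** 2   # saddle at the origin
bowl   = lambda x: x[0] ** 2 + x[1] ** 2   # minimum at the origin
print(classify(saddle, [0.0, 0.0]))
print(classify(bowl,   [0.0, 0.0]))
```

The claim in the quote is statistical: as the cost at a critical point drops, all-positive eigenvalue sign patterns (minima) become more likely; mixed signs (saddles) dominate at high cost.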
I'm working on it. Just very slowly. )