Discussion of article "Probability theory and mathematical statistics with examples (part I): Fundamentals and elementary theory" - page 8

 
Rorschach:

A sixth-generation mathematician, way ahead of him.

At an everyday level (as I understand it), he writes about the cyclicality/clustering of volatility and the importance of higher timeframes.

It's interesting that he tried to work this out mathematically rather than just describing it abstractly.
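Volatility clustering is easy to see numerically. A minimal R sketch, using a simulated GARCH(1,1) series with made-up parameters (not anyone's actual model), shows the usual diagnostic: squared returns stay autocorrelated long after the raw returns look like noise.

set.seed(1)
n <- 5000
omega <- 1e-5; alpha <- 0.1; beta <- 0.85   # hypothetical GARCH(1,1) parameters
r  <- numeric(n)                            # returns
s2 <- numeric(n)                            # conditional variance
s2[1] <- omega / (1 - alpha - beta)         # start at the unconditional variance
r[1]  <- sqrt(s2[1]) * rnorm(1)
for (t in 2:n) {
  s2[t] <- omega + alpha * r[t - 1]^2 + beta * s2[t - 1]
  r[t]  <- sqrt(s2[t]) * rnorm(1)
}
acf(r,   lag.max = 20)  # raw returns: essentially no autocorrelation
acf(r^2, lag.max = 20)  # squared returns: slowly decaying positive ACF = clustering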

What prevents them (Ataman, Ilyinsky) from writing an introduction in plain language, so that you don't have to reread every line ten times?

There were hints that the first of them was quite successful at managing very large sums of money. So he obviously wrote all this purely for his own pleasure, which tends to evaporate once you have to spell everything out. And he may simply not have had enough free time.

Ilyinsky talks about very advanced, but quite standard, financial mathematics, which takes at least two years of a mathematics degree to understand. How do you cram that into a couple of lectures? I don't know; I would probably limit myself to a demonstration of the yuima package for R.

The YUIMA Project
  • yuimaproject.com
The YUIMA Software performs various central statistical analyses such as quasi maximum likelihood estimation, adaptive Bayes estimation, structural change point analysis, hypotheses testing, asynchronous covariance estimation, lead-lag estimation, LASSO model selection, and so...
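For reference, a minimal sketch of the typical yuima workflow: simulate an SDE, then recover its parameters by quasi maximum likelihood. The geometric Brownian motion model and all parameter values here are just an illustration.

library(yuima)

# Model: dX_t = mu*X_t dt + sigma*X_t dW_t (geometric Brownian motion)
mod  <- setModel(drift = "mu*x", diffusion = "sigma*x")
samp <- setSampling(Terminal = 1, n = 1000)
yui  <- setYuima(model = mod, sampling = samp)

# Simulate one path with known parameters, then estimate them back by QMLE
set.seed(123)
sim <- simulate(yui, xinit = 1, true.parameter = list(mu = 0.1, sigma = 0.2))
fit <- qmle(sim, start = list(mu = 0.05, sigma = 0.3),
            lower = list(mu = -1, sigma = 0.01),
            upper = list(mu = 1, sigma = 1))
summary(fit)  # sigma is recovered well from one path; mu needs a long horizon

The fancier items from the list above (change-point analysis, asynchronous covariance, lead-lag) follow the same pattern with a different estimation call.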
 
Aleksey Nikolayev:

There were hints that the first of them was quite successful at managing very large sums of money. So he obviously wrote all this purely for his own pleasure, which tends to evaporate once you have to spell everything out. And he may simply not have had enough free time.

Ilyinsky talks about very advanced, but quite standard, financial mathematics, which takes at least two years of a mathematics degree to understand. How do you cram that into a couple of lectures? I don't know; I would probably limit myself to a demonstration of the yuima package for R.

I like the way Abbakumov explains things.

 
Rorschach:

I like the way Abbakumov explains things.

For basic mathematical statistics Python is not bad, but for more advanced things R is better.
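For the basic end, either language is a one-liner affair; a toy R example (the sample here is made up, purely for illustration):

library(MASS)               # for fitdistr()
x <- rnorm(200, mean = 0.1) # toy sample
t.test(x, mu = 0)           # one-sample t-test
fitdistr(x, "normal")       # maximum likelihood fit of a distribution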

 
Aleksey Nikolayev:


Ilyinsky talks about very advanced, but quite standard, financial mathematics, which takes at least two years of a mathematics degree to understand. How do you cram that into a couple of lectures? I don't know; I would probably limit myself to a demonstration of the yuima package for R.

This package has a nice GUI.

 
Vladimir Perervenko:

This package has a nice GUI.

That's basically what I had in mind. As it says on their home page: "No coding required", "Suitable for beginners, students".
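If anyone wants to try it: assuming the GUI in question is the yuimaGUI companion package from CRAN, it launches as a Shiny app in the browser:

install.packages("yuimaGUI")  # GUI companion to yuima
library(yuimaGUI)
yuimaGUI()                    # opens the point-and-click interface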

 

An interesting passage; maybe it has something to do with trading. "Cost" is probably a mistranslation here; "value" fits the context better.

Many random functions have a surprising property: the probability that the eigenvalues of the Hessian matrix are positive increases as we approach regions of low cost. In our coin-flip analogy, this means that the probability of flipping heads n times in a row is higher if we are at a critical point with low cost. It also means that local minima with low cost are much more likely than those with high cost. Critical points with high cost are much more likely to be saddle points. And critical points with very high cost are much more likely to be local maxima. This is true for many classes of random functions. What about neural networks? Baldi and Hornik (1989) proved theoretically that small autoencoders without nonlinearities have global minima and saddle points, but no local minima with cost higher than that of the global minimum.

 
Rorschach:

An interesting passage; maybe it has something to do with trading. "Cost" is probably a mistranslation here; "value" fits the context better.

Many random functions have a surprising property: the probability that the eigenvalues of the Hessian matrix are positive increases as we approach regions of low cost. In our coin-flip analogy, this means that the probability of flipping heads n times in a row is higher if we are at a critical point with low cost. It also means that local minima with low cost are much more likely than those with high cost. Critical points with high cost are much more likely to be saddle points. And critical points with very high cost are much more likely to be local maxima. This is true for many classes of random functions. What about neural networks? Baldi and Hornik (1989) proved theoretically that small autoencoders without nonlinearities have global minima and saddle points, but no local minima with cost higher than that of the global minimum.

This is about deep learning problems.

Cost is a common name for the value of the function being optimised.
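To make the terminology concrete: the sign pattern of the Hessian's eigenvalues is exactly what classifies a critical point of the cost function, and in the book's coin analogy each eigenvalue's sign is one flip, so a minimum needs all n "heads". A base-R sketch with toy quadratics:

classify <- function(H) {                # H = Hessian at a critical point
  ev <- eigen(H, symmetric = TRUE)$values
  if (all(ev > 0)) "local minimum"
  else if (all(ev < 0)) "local maximum"
  else "saddle point"
}
classify(matrix(c(2, 0, 0, 2), 2))   # f = x^2 + y^2  -> local minimum
classify(matrix(c(2, 0, 0, -2), 2))  # f = x^2 - y^2  -> saddle point
classify(matrix(c(-2, 0, 0, -2), 2)) # f = -x^2 - y^2 -> local maximum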

 
It turned out to be a good article. I'm waiting for the next part.
 
SASAN PARVIZ:
It turned out to be a good article. I'm waiting for the next part.

I'm working on it. Just very slowly. )

 
Aleksey Nikolayev:

I'm working on it. Just very slowly. )

Thank you again.