Machine learning in trading: theory, models, practice and algo-trading - page 634

I don't know, I don't do this kind of data-satanism; maybe more knowledgeable satanists will answer :D
On the subject of econometrics. The discipline appeared comparatively recently; I learned about it around 2006, when I was sent a book in English and translated it, though you know what a pain it is to read a book run through a 2006-era machine translator. That book was aimed specifically at stock speculators. The guy who gave it to me managed a fairly large fund at the time and lived in Europe, in Paris I think, but that's not the point. What surprised me now is that when I typed the query into Google I got plenty of links, but only one in Russian. Do we refuse to accept this discipline in principle?
V. P. Nosko
Econometrics
is in my library.
No... that one was aimed specifically at stock speculators; I'll try to find the other one now...
Man, econometrics is one science, not several. :)
I can't upload it, it doesn't fit the format restrictions. I think you can download it anyway...
Analysis of Financial Time Series, 2005
http://www.lcs.poli.usp.br/~ablima/livros/Analysis%20of%20financial%20time%20series%20Tsay.pdf
This one?
Tsay, University of Chicago.
Even in this ancient, ten-year-old book there are models that have never been used here and are unlikely to be :D What is there to say about progress... we would need to set up a research institute and hold conferences.
I suggest Australia
I selected only those inputs whose entropy is negative and close to zero. With enviable consistency, training began converging to the same model parameters.
Surprisingly, at some point, when one more value is added, the entropy abruptly becomes negative. What could this be related to?
If we assume that a positive value is a measure of uncertainty and a negative one is a measure of order, then we should select the network readings with the minimal entropy value. However, I believe that a value too deep in the negative zone is not good either. Hence two variants: either choose the network with the smallest entropy, or the one with entropy closest to zero...
I am waiting for comments on this post, and most importantly, an explanation of why this may happen. Hypotheses, theories, etc. would be appreciated. Thanks!
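One plausible explanation for the abrupt sign flip, assuming the entropy in question is a *differential* entropy estimate over continuous network outputs: unlike discrete Shannon entropy, differential entropy has no lower bound at zero and goes negative as soon as the distribution becomes tightly concentrated. A minimal sketch using a Gaussian fit (the function name and samples are mine, purely illustrative):

```python
import math

def gaussian_diff_entropy(xs):
    """Differential entropy (in nats) of a Gaussian fitted to xs.

    H = 0.5 * ln(2*pi*e*sigma^2). This turns NEGATIVE once the sample
    standard deviation drops below 1/sqrt(2*pi*e) (about 0.242), so
    adding a single value that tightens the cluster can flip the
    estimate into the negative zone abruptly.
    """
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return 0.5 * math.log(2 * math.pi * math.e * var)

# a wide spread of outputs -> positive entropy
print(gaussian_diff_entropy([-2.0, -1.0, 0.0, 1.0, 2.0]))    # > 0
# a tightly clustered spread -> negative entropy
print(gaussian_diff_entropy([0.10, 0.11, 0.12, 0.11, 0.10])) # < 0
```

If this is what the library is computing, "entropy suddenly negative" just means the output distribution collapsed below that variance threshold, not that anything broke.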
Mikhail, dubious as ever, nevertheless stubbornly moves toward his goal. :))))
Once again: negentropy is a measure of ordering, a measure of structure complexity.
If you want to know everything about the future at a certain point in time, i.e. to predict, you have to reduce the process to a Markovian one. Make it so that the negentropy → 0.
If you cannot reduce a non-Markovian process to a Markovian one by introducing pseudo-states, then you just have to watch the value of the negentropy and work only while it → 0. As soon as it starts to increase, you stop making forecasts, because you have hit a very complex structure with "memory".
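The "work only while negentropy → 0" rule above can be sketched as a gate. A standard way to approximate negentropy from data is the moment-based formula used in FastICA, J ≈ (1/12)·E[z³]² + (1/48)·kurt(z)², which is zero for a Gaussian and grows as structure appears. The function names and the 0.05 threshold are my assumptions, not anything from the thread:

```python
def negentropy_approx(xs):
    """Moment-based negentropy approximation (FastICA-style):
    J ~= (1/12) E[z^3]^2 + (1/48) kurt(z)^2 on standardized data.
    J = 0 for a Gaussian; it grows as the distribution departs from
    Gaussian, i.e. as hidden structure / "memory" appears."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    std = var ** 0.5
    z = [(x - mean) / std for x in xs]
    skew_term = sum(v ** 3 for v in z) / n
    kurt = sum(v ** 4 for v in z) / n - 3.0
    return skew_term ** 2 / 12 + kurt ** 2 / 48

def may_forecast(window, threshold=0.05):
    """Gate: forecast only while negentropy stays near zero
    (threshold is an arbitrary illustrative choice)."""
    return negentropy_approx(window) < threshold
```

On an evenly spread window `may_forecast` stays True; a window dominated by one spike pushes the approximation up and switches the gate off, which matches the "stop forecasting when it starts to increase" rule.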
Again, a question. There are 8 NS (neural network) models. On the current signal, the entropies of the NS outputs are:
5.875787568 -5.702601649 5.066989592 9.377441857 7.41065367 1.401022575 4.579082852 5.119647925
Which one should I choose? The red one, because its entropy is negative, or the blue one, whose entropy is closer to zero? I will say that these two models look in different directions, but time will show which was right... In the end, one or the other will win. Any thoughts?
With your help, maybe we'll figure it out :-) So, if I understood correctly, we should select those inputs whose entropy lies near zero on either side. Is that right?