Bayesian regression - Has anyone made an EA using this algorithm? - page 15

 
Also... you can work this out in your head. The Yusuf curve is fitted on a sloping segment (almost a straight one; its convexity is negligible), yet the forecast it ultimately gives is for horizontal movement. Think about it! It follows that the methodology is not applicable a second time; it is a one-off of this kind.
 
Yousufkhodja Sultonov:
Apparently, the market does not much care about the forecast per se, especially in the short term. In the long term, the forecast bears modest fruit in the form of 10-12% per annum, which many are not happy with.

What difference does it make whether it is long-term or short-term? Just switch the timeframe.

10-12%, given the amount of risk involved, is not interesting at all.

 
Dmitry Fedoseev:
Also... you can work this out in your head. The Yusuf curve is fitted on a sloping segment (almost a straight one; its convexity is negligible), yet the forecast it ultimately gives is for horizontal movement. Think about it! It follows that the methodology is not applicable a second time; it is a one-off of this kind.

What's more, enter all the data and the forecast for 2015 still does not change. Take a look:


 
Yousufkhodja Sultonov:
... and in the case of (18), you don't have to do anything; it will adjust itself in the best possible way. You simply don't have the courage to admit that a model better than (18), in every sense, has not yet been invented.

What does the Nobel committee say about (18)? Or don't they have the courage to admit it?

 
Dmitry Fedoseev:

What difference does it make whether it is long-term or short-term? Just switch the timeframe.

10-12%, given the amount of risk involved, is not interesting at all.

The risks are much lower, as the profit factor is in the region of 3 to 6.
 
Yury Reshetov:

What does the Nobel committee say about (18)?

Yura, they have no time for it; they will come to their senses in about 100 years. Unfortunately, no one takes it seriously or studies it. Posterity, however, should appreciate it.
 
Dmitry Fedoseev:

And what could be confused with what here?

What likelihood?

The likelihood:

a) of the model's coefficients

b) of the model itself

— under the assumption that the coefficients are distributed in a certain way, e.g. coefficient #1 has a mean of 0.5 and a st. dev. of 0.1. This assumption is superimposed on the coefficient estimates, which is where it differs from OLS. There is the notion of ridge regression, where restrictions are imposed on the possible values of the coefficients; as I understand it, this is of the same kind.
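The remark above — that ridge regression imposes restrictions on the coefficient values, and that this is of the same kind as putting a distributional assumption on the coefficients — can be sketched numerically. This is a hypothetical illustration, not code from the thread: the ridge penalty is what a zero-mean Gaussian prior on the coefficients produces (the MAP estimate), and it shrinks the estimates relative to plain OLS. All data and numbers below are invented for the demo.

```python
import numpy as np

# Invented data: 100 observations, 3 predictors, known true coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_b = np.array([0.5, -1.0, 2.0])
y = X @ true_b + rng.normal(scale=0.5, size=100)

def ols(X, y):
    # Ordinary least squares: solve (X'X) b = X'y.
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, lam):
    # Ridge = MAP under a N(0, tau^2) prior on each coefficient,
    # with lam playing the role of sigma^2 / tau^2.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

b_ols = ols(X, y)
b_ridge = ridge(X, y, lam=10.0)

# The prior/penalty pulls the ridge estimates toward zero,
# so their overall norm is strictly smaller than the OLS norm.
```

The larger `lam` (i.e. the tighter the prior around zero), the stronger the shrinkage; with `lam=0` the two estimates coincide.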

And the normality of the errors, well, it has to be there. There is generalised linear regression, which I know nothing about; somehow all those assumptions are bypassed there.

UPD: when estimating the t-statistic for a coefficient's value, a sigma estimate based on the model residuals is used. If the distribution of the residuals is strongly skewed, not symmetric (ideally it should be normal), then the significance of the coefficient is no longer valid. In other words, the model parameters cannot be trusted. That is why the errors are assumed to be normally distributed.
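A minimal sketch of that point, with made-up data (nothing here comes from the thread): the coefficient's standard error is built from the residual variance, so the t-statistic and the p-value derived from it implicitly lean on the residuals being well-behaved.

```python
import numpy as np
from math import erfc, sqrt

# Invented data: y = 0.5 * x + noise.
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), x])       # intercept + slope
b, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimates of the coefficients

resid = y - X @ b
dof = n - X.shape[1]
sigma2 = resid @ resid / dof               # residual variance estimate

# Standard errors: sqrt of the diagonal of sigma^2 * (X'X)^{-1}.
cov_b = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov_b))
t_stats = b / se                           # t-statistics for H0: coefficient == 0

# Two-sided p-value via the normal approximation (fine at this dof).
p_approx = erfc(abs(t_stats[1]) / sqrt(2.0))
```

If the residuals were strongly skewed, `sigma2` would still be computable, but the t-distribution reasoning behind `p_approx` would no longer hold — which is exactly the caveat in the UPD above.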

 
Alexey Burnakov:

The likelihoods:

1. a) of the model's coefficients

b) of the model itself

2. under the assumption that the coefficients are distributed in a certain way, e.g. coefficient #1 has a mean of 0.5 and a st. dev. of 0.1. This assumption is superimposed on the coefficient estimates, which is where it differs from OLS. There is the notion of ridge regression, where restrictions are imposed on the possible values of the coefficients; as I understand it, this is of the same kind.

3. And the normality of the errors, well, it has to be there. There is generalised linear regression, which I know nothing about; somehow all those assumptions are bypassed there.

4. UPD: when estimating the t-statistic for a coefficient's value, a sigma estimate based on the model residuals is used. If the distribution of the residuals is strongly skewed, not symmetric (ideally it should be normal), then the significance of the coefficient is no longer valid. In other words, the model parameters cannot be trusted. That is why the errors are assumed to be normally distributed.

1. So we get "maximising the likelihood of the model's coefficients" or "maximising the likelihood of the model". Is that what it says there?

2. What do the coefficients and their distribution have to do with it? Why compute the mean of the coefficients at all?

3. What makes you think the errors are normal? Symmetry of the distribution is sufficient. It will only affect the sensitivity at the beginning of trends.

4. Do you really think in these categories and really understand what you are writing about?

 
Yousufkhodja Sultonov:
Yura, they have no time for it; they will come to their senses in about 100 years. Unfortunately, no one takes it seriously or studies it. Posterity, however, should appreciate it.
There is nothing there to take seriously. In fact, the problem is solved at the level of a term paper by a 4th-year student in some automation-related department.
 
Dmitry Fedoseev:

1. So we get "maximising the likelihood of the model's coefficients" or "maximising the likelihood of the model". Is that what it says there?

2. What do the coefficients and their distribution have to do with it? Why compute the mean of the coefficients at all?

3. What makes you think the errors are normal? Symmetry of the distribution is sufficient. It will only affect the sensitivity at the beginning of trends.

4. Do you really think in these categories and really understand what you are writing about?

1. The likelihood is maximal at... (then come the long formulas). We can say that we obtain the minimum of the mean squared residuals, or we can say that we have maximised the likelihood.

2. Perhaps there is something you don't understand. What is the coefficient b1? It is the mathematical expectation of the sample values of coefficient b1, which follows a t-distribution when the population parameters of b1 are unknown. Linear regression (ordinary least squares) gives an estimate of E(b) and of sigma(b), the standard error of coefficient b1. What you see in a model's output are precisely these estimates. Then there is an assessment of how significantly E(b) differs from 0: the t-statistic and its associated probability.

3. I can't say anything about trends. Symmetry is important, that's a fact. The sigma of the residuals is also important. The kurtosis coefficient is also important.

4. I have been reading a lot about regression lately, so I do understand what I wrote above. I report regression results to my clients, so I have to understand something. Although I prefer non-parametric methods.
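Point 1 above — that with Gaussian errors, maximising the likelihood and minimising the squared residuals pick out the same coefficient — can be checked numerically instead of via the long formulas. Everything below is an illustrative sketch with invented data, not anyone's trading code.

```python
import numpy as np

# Invented data: y = 2 * x + standard normal noise.
rng = np.random.default_rng(2)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)

# Candidate slope values on a fine grid.
grid = np.linspace(0.0, 4.0, 4001)

def sse(b):
    # Sum of squared residuals for slope b.
    return np.sum((y - b * x) ** 2)

def loglik(b, sigma=1.0):
    # Gaussian log-likelihood (up to a constant) for slope b.
    r = y - b * x
    return -0.5 * np.sum(r ** 2) / sigma ** 2 - len(y) * np.log(sigma)

b_sse = grid[np.argmin([sse(b) for b in grid])]   # least-squares pick
b_mle = grid[np.argmax([loglik(b) for b in grid])]  # maximum-likelihood pick
```

Since the log-likelihood is just a negative constant times the sum of squared residuals (for fixed sigma), the two grid searches land on exactly the same slope.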
