Machine learning in trading: theory, models, practice and algo-trading - page 741

 
Mihail Marchukajtes:

It's just that your questions are at a beginner's level.....

Well, here we go, I'm talking about myself again... A loss, the first one in two weeks... But I'm not discouraged and keep working on the trading system.

Of course, I have no experience, which is why I'm curious and ask about the things I don't understand.

 
Aleksey Vyazmikin:

Of course, I have no experience, which is why I'm curious and ask about the things I don't understand.

Try to figure out neural network and machine learning technology on your own. Then we'll talk...

 
Mihail Marchukajtes:

Try to figure out neural network and machine learning technology on your own. Then we'll talk...

I could use a teacher and mentor in this matter...

 
toxic:

Because the variation is small: if training on 30 observations and testing on 30 gives 90% accuracy, you can take the chance if there is no other choice. But the market is >95% noise, so you need thousands of times more data points to get a prediction at least comparable in magnitude with the error.


PS: the central limit theorem is the foundation of statistics and of its offspring, ML; it is like F = ma in mechanics. Your disrespect for it is unwarranted...

Where have you seen the central limit theorem hold for non-stationary random variables?

 

Can't find work? Multiply your time by your power!
(A collection of universal tips.)


;)))))
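
For context on the sample-size claim quoted above (90% accuracy on 30 observations, and needing "thousands of times more points"), here is a rough back-of-the-envelope sketch using a simple binomial approximation. The 2% "edge" figure is an illustrative assumption, not anything stated in the thread:

```python
import math

def accuracy_std_error(p: float, n: int) -> float:
    """Binomial standard error of an accuracy estimate p measured on n observations."""
    return math.sqrt(p * (1.0 - p) / n)

# 90% accuracy measured on only 30 test observations:
print(accuracy_std_error(0.90, 30))   # ~0.055, i.e. roughly +/- 5.5% per std error

# If the true edge over a coin flip is tiny (illustrative: 52% vs 50%),
# how many observations before two std errors fit inside that edge?
edge = 0.02
p = 0.52
n = 1
while 2 * accuracy_std_error(p, n) > edge:
    n += 1
print(n)  # ~2500 observations, orders of magnitude more than 30
```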

 
toxic:

Here's another heresy about the "non-stationarity problem"...

Returns are stationary and almost Gaussian if you standardize them by volatility, and that is all we need; the price itself, which is non-stationary, does not enter the calculations.

Study GARCH and stop posting heresy here about the "non-stationarity problem" of returns. There are hundreds of GARCH models trying to account for the nuances of non-stationarity in returns, and none of those people have that kind of aplomb.
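
As a minimal sketch of what "returns standardized by volatility" looks like in practice, assuming the Python arch package and a synthetic price series (illustrative only, not anyone's actual pipeline in this thread):

```python
# pip install arch pandas numpy
import numpy as np
import pandas as pd
from arch import arch_model

# Synthetic log-price series, just for illustration
rng = np.random.default_rng(0)
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000))))

# Log returns, scaled to percent (the scale the arch package prefers)
returns = 100 * np.log(price).diff().dropna()

# Fit a plain GARCH(1,1) with constant mean and normal errors
model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="normal")
result = model.fit(disp="off")

# Standardized residuals = returns divided by the conditional volatility;
# this is the (approximately) stationary, near-Gaussian series being discussed
std_returns = result.std_resid.dropna()
print(std_returns.describe())
print("excess kurtosis:", std_returns.kurtosis())
```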

 
SanSanych Fomenko:

...

Why?

Because in the theoretical development of a drug, a great deal of effort goes into justifying the effect of the drug on the disease.

The only thing that makes us different is that we pile everything together. Look at this thread: 99% is about perceptrons and almost nothing about data mining.

And where have you seen drug developers here? Only consumers: you've stopped taking random forests and now you drink ARCH/GARCH. Patients, though...

 

I tried to study it, even started over several times, but each time I ran into a wall of impenetrable statistical and econometric terminology and never got through it.

But I still took away something important. ARIMA and GARCH spend a lot of effort modeling the internal states of a time series, from which the price is then derived. That is, there are dozens of global processes going on in the world, and the price is some combination of them. So instead of modeling the time series itself, it is better to try to model these hidden processes and their interaction, in order to obtain the time series we need.

GARCH and ARIMA have some built-in ideas about these hidden processes (seasonality, trend, etc.), but they are limited by the formulas put into those models decades ago. We can try to create our own indicators to describe these internal market states, with fewer limitations than in GARCH. But it is also easy to get this wrong; it is a very difficult task.
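
As a very rough illustration of the "model the hidden processes with your own indicators" idea, here is a sketch that derives a few hand-made state features (rolling volatility, trend slope, simple seasonality) from a price series; the specific features are hypothetical, not anyone's recommendation in this thread:

```python
import numpy as np
import pandas as pd

def hidden_state_features(price: pd.Series, vol_win: int = 20, trend_win: int = 50) -> pd.DataFrame:
    """Hand-made proxies for latent market 'states' (illustrative only)."""
    ret = np.log(price).diff()
    feats = pd.DataFrame(index=price.index)
    # Volatility state: rolling standard deviation of log returns
    feats["volatility"] = ret.rolling(vol_win).std()
    # Trend state: slope of a rolling linear fit of log price
    t = np.arange(trend_win)
    feats["trend_slope"] = np.log(price).rolling(trend_win).apply(
        lambda w: np.polyfit(t, w, 1)[0], raw=True
    )
    # Crude seasonality state: day of week (only meaningful for a DatetimeIndex)
    if isinstance(price.index, pd.DatetimeIndex):
        feats["day_of_week"] = price.index.dayofweek
    return feats

# Example with a synthetic daily series
rng = np.random.default_rng(0)
idx = pd.date_range("2018-01-01", periods=500, freq="D")
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))), index=idx)
print(hidden_state_features(price).tail())
```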

 
It does not:


Returns are stationary and almost Gaussian if you standardize them by volatility, and that is all we need; the price itself, which is non-stationary, does not enter the calculations.

Do you standardize by volatility on history, or as each new tick arrives? It is clear that by shifting, say, a moving average back by half its period and subtracting it from the base quotes you can get an almost Gaussian residual. But to know what is happening with volatility at the most interesting place, the right edge, you would need the future half of the MA period. And where do you get that?
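
A minimal sketch of the look-ahead issue being raised: the centered (half-period-shifted) moving average gives a nice residual on history but is undefined at the right edge, while the causal one is always available. The window length and data are illustrative:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
price = pd.Series(100 + np.cumsum(rng.normal(0, 1, 300)))
period = 20

# Causal MA: uses only past data, available on every bar including the last one
causal_ma = price.rolling(period).mean()

# Centered MA: equivalent to shifting the causal MA back by half a period;
# it needs future bars, so it is undefined near the right edge
centered_ma = price.rolling(period, center=True).mean()

residual_causal = price - causal_ma
residual_centered = price - centered_ma

print("NaNs at the right edge of the centered residual:",
      residual_centered.tail(period).isna().sum())      # > 0: those bars need the future
print("last causal residual:", residual_causal.iloc[-1])  # always defined
```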


 
Dr. Trader:

I tried to study it, even started over several times, but each time I ran into a wall of impenetrable statistical and econometric terminology and never got through it.

But I still took away something important. ARIMA and GARCH spend a lot of effort modeling the internal states of a time series, from which the price is then derived. That is, there are dozens of global processes going on in the world, and the price is some combination of them. So instead of modeling the time series itself, it is better to try to model these hidden processes and their interaction, in order to obtain the time series we need.

GARCH and ARIMA have some built-in ideas about these hidden processes (seasonality, trend, etc.), but they are limited by the formulas put into those models decades ago. We can try to create our own indicators to describe these internal market states, with fewer limitations than in GARCH. But it is also easy to get this wrong; it is a very difficult task.

GARCH and ML are not competitors; they complement each other completely, and that is what I am doing now: combining the old ML part, which handles the trend, with GARCH for determining entry points. I have written before that I have an EA that gave me the amount of money I needed over a year of trading. It consisted of both RF and adaptive moving averages (tweaked Juriks). But that pair did not solve the non-stationarity problem.

Globally, I distinguish between two kinds of models:

  • Models that take into account the statistical characteristics of the time series: GARCH. This is an extremely well-developed field, essentially the mainstream along with cointegration, with a huge number of publications. As an indication of the level: various GARCH models have been studied on every stock in the S&P 500 index, i.e. 500 stocks. I am not aware of anything comparable in ML.
  • Classification models that, like the old TA, mechanically look for patterns.

Everyone in this thread clings to ML for some reason. On what basis? Classification rests on some relationship between the target variable and its predictors.

Well, first of all, any discussion of that relationship instantly gets bogged down here, as happened with mutual information.

Second, who has proven that, even if the predictors do influence the target variable, this influence will not change over time? Based on a real trading Expert Advisor, I have already written many times that out of 27 pre-selected predictors a subset is re-selected on every bar, leaving 5 to 15 of them, and this list constantly changes within the 27. In other words, the strength of the predictors' influence on the target variable changes over time, and quite quickly.
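
To make that per-bar re-selection idea concrete, here is a rough sketch on synthetic data; the 27 predictors, the window length and the importance threshold are all assumptions for illustration, not the author's actual EA:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n, n_pred = 2000, 27
X = pd.DataFrame(rng.normal(size=(n, n_pred)), columns=[f"p{i}" for i in range(n_pred)])
# Synthetic target that depends on a slowly drifting subset of predictors
y = (X["p0"] * np.sin(np.arange(n) / 200) + X["p1"] + rng.normal(0, 1, n) > 0).astype(int)

window = 300
selected_per_step = []
for end in range(window, n, 100):            # re-select every 100 bars
    Xw, yw = X.iloc[end - window:end], y.iloc[end - window:end]
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xw, yw)
    importances = pd.Series(rf.feature_importances_, index=X.columns)
    selected = importances[importances > importances.mean()].index.tolist()
    selected_per_step.append(selected)

# How many predictors survive at each step, and how the list drifts over time
print([len(s) for s in selected_per_step])
print("overlap of first and last selections:",
      len(set(selected_per_step[0]) & set(selected_per_step[-1])))
```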


Therefore the idea of the Expert Advisor is as follows:

  • use classification to predict the future direction of price movement on the higher-timeframe bar
  • then use GARCH on a pseudo-stationary time series to determine the appropriate entry direction (see the sketch below)
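
Purely as an illustration of how such a two-stage structure could be wired together (the features, classifier and volatility filter here are hypothetical placeholders, not the Expert Advisor described above), a minimal Python sketch:

```python
import numpy as np
import pandas as pd
from arch import arch_model
from sklearn.ensemble import RandomForestClassifier

def two_stage_signal(price: pd.Series) -> int:
    """Stage 1: classify the next-bar direction.
       Stage 2: confirm the entry with a GARCH volatility forecast on returns."""
    returns = 100 * np.log(price).diff().dropna()

    # --- Stage 1: direction classifier on simple lagged-return features (illustrative)
    feats = pd.concat({f"lag{k}": returns.shift(k) for k in range(1, 6)}, axis=1).dropna()
    target = (returns.shift(-1).reindex(feats.index) > 0).astype(int)
    X_train, y_train = feats.iloc[:-1], target.iloc[:-1]   # last row has no future bar
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    direction = 1 if clf.predict(feats.iloc[[-1]])[0] == 1 else -1

    # --- Stage 2: GARCH(1,1) forecast of next-bar volatility as an entry filter
    garch = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
    next_var = garch.forecast(horizon=1).variance.iloc[-1, 0]
    calm_enough = np.sqrt(next_var) < returns.std()   # arbitrary illustrative threshold

    return direction if calm_enough else 0            # 0 = stay out of the market

# Usage with a synthetic series
rng = np.random.default_rng(3)
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1500))))
print(two_stage_signal(price))
```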