Machine learning in trading: theory, models, practice and algo-trading - page 2473

 
Alexander Ivanov #:

Good afternoon!

A bright mind said: a neural network is a neural network, but the fundamentals break everything.

So are we wasting our time here, or not?

Please explain it to me, an ignoramus.

There is technical analysis, which works better or worse but works with any fundamentals; a neural network does technical analysis, and therefore it can predict when the fundamentals don't have much effect.
 
Vasiliy Sokolov #:

Econometrics is a great science. But it doesn't say anything and it doesn't predict anything; it states facts. For example, you can build a Bayesian classifier, torture it long and hard, and then econometrically and scientifically conclude that the price is a martingale and that the best strategy would be to buy and sell simultaneously.

If the aim was to determine whether the price is a martingale, you can stop there. If not, examine other factors.
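As a minimal sketch of the kind of check being described (purely synthetic random-walk data stands in for real prices; none of this is Vasiliy's actual code), one can test whether price increments are serially uncorrelated, which is a necessary property of a martingale-like price:

import numpy as np
from math import erf, sqrt

def lag1_autocorr_test(returns):
    # Under the martingale-difference hypothesis the lag-1 autocorrelation
    # of returns is approximately N(0, 1/n), so |rho| * sqrt(n) ~ N(0, 1).
    r = np.asarray(returns, dtype=float)
    r = r - r.mean()
    n = len(r)
    rho = np.dot(r[:-1], r[1:]) / np.dot(r, r)
    z = abs(rho) * sqrt(n)
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))  # two-sided p-value
    return rho, p

# Toy usage: a simulated random walk as a stand-in for close prices.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 5000))
rho, p = lag1_autocorr_test(np.diff(prices))
print(f"lag-1 autocorrelation = {rho:.4f}, p-value = {p:.3f}")
# A large p-value means the increments look uncorrelated, i.e. consistent
# with a martingale; it does not prove the price is one.

This only states a fact about the sample, in the same spirit as the post above: failing to reject the hypothesis is not a trading edge.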

 
Alexander Ivanov #:

Good afternoon!

A bright mind said: a neural network is a neural network, but the fundamentals break everything.

So are we wasting our time here, or not?

Please explain it to me, an ignoramus.

The fundamentals don't break anything, and neither do neural networks, nor the order book, nor econometrics, etc.

It's the market balance that breaks everything.

And nothing else.

 
Mikhail Mishanin #:

What are you looking for? It's not at all clear from your articles; tell me.

I simply record a video and upload it to YouTube; it's easier to show things clearly than to write tons of text. More precisely, if I say that I have already found one of the possible solutions, many people will like that. As for the rest of the articles, there is more theory there, which I hope someone will find useful. Unfortunately, I can't do it all by myself: I have a lot of ideas and a lot of knowledge, but not enough strength to implement them, so I have managed to build only one solution, and it still has to be tested in dynamics, for which I don't have the resources... At the moment the only thing I lack is computing power; everything else that is needed is there. In short, we need servers, and the more of them and the more powerful, the better. Here is a video:

https://youtu.be/NLA0u172oTw

 
Alexander Ivanov #:

Good afternoon!

A bright mind said: a neural network is a neural network, but the fundamentals break everything.

So are we wasting our time here, or not?

Please explain it to me, an ignoramus.

These days it is teams that earn; that is, the lone trader loses at the level of business organization.

Everyone should do their own thing.

On long horizons a trader can still trade alone, but the closer you get to the noise level, the more you need an organized team. And it makes no sense for a single trader to earn everything and share with nobody.

You need an adequate risk manager,

adequate traders,

adequate quants,

and the people who handle the fund's clients.


Maybe there are still lone traders in unorganized markets, but at the moment they are all inefficient and disappear quickly.

 
Evgeny Dyuka #:
There is technical analysis, which works better or worse but works with any fundamentals; a neural network does technical analysis, and therefore it can predict when the fundamentals don't have much effect.
+1

Time management is not cancelled by any automation... once every 2 weeks (when the FOMC meets) - don't rely on TA, only FA (and some prerequisites in the DB)... by the way, it meets (making decisions on the current state of its BP, Balance of Payments) and speaks at different times... so what is heard and understood at the moment of the speech doesn't matter much, and in any case the retail trader (as the least informed) only understands it "after the fact"... but the nuances of timing (who enters the market when and for what purpose) should be understood at least in general terms...

 
I see, thanks for the answers :))
 
Evgeniy Ilin #:

I simply record a video and upload it to YouTube; it's easier to show things clearly than to write tons of text.

The problem is that on new data, models of this type immediately stop working, and neither cross-validation nor anything else helps them...

And it doesn't matter which basis functions you use:

polynomial approximation (like yours), or

a filter cascade, or

an automatically generated filter cascade (a multilayer neural network), or

a cascade of approximations built from linear regressions (MGUA/GMDH), or

approximation by ordinary harmonics, etc.

In essence it does not matter which basis functions are used for the approximation; roughly speaking, it is all the same. The problem is elsewhere:

either the data is wrong, or we are teaching the wrong thing, since all methods give the same result...
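A toy illustration of the point above, using synthetic data and made-up helper names (this is not anyone's actual model): fit the same random-walk "price" with a polynomial basis and with a harmonic basis, and both look fine in-sample while degrading the same way on the unseen forward segment:

import numpy as np

rng = np.random.default_rng(1)
price = np.cumsum(rng.normal(0, 1, 1200))          # synthetic "price"
train, forward = price[:1000], price[1000:]        # in-sample vs unseen data
t_train = np.arange(1000)
t_fwd = np.arange(1000, 1200)

def poly_basis(t, degree=5):
    # Columns 1, t, t^2, ..., t^degree (t rescaled to keep numbers small).
    ts = t / 1000.0
    return np.column_stack([ts ** k for k in range(degree + 1)])

def harmonic_basis(t, n_harm=5, period=1000.0):
    # Constant term plus sin/cos pairs of increasing frequency.
    cols = [np.ones_like(t, dtype=float)]
    for k in range(1, n_harm + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

def fit_and_score(basis_fn):
    X_tr, X_fw = basis_fn(t_train), basis_fn(t_fwd)
    coef, *_ = np.linalg.lstsq(X_tr, train, rcond=None)   # least-squares fit
    rmse = lambda X, y: float(np.sqrt(np.mean((X @ coef - y) ** 2)))
    return rmse(X_tr, train), rmse(X_fw, forward)

for name, fn in [("polynomial", poly_basis), ("harmonic", harmonic_basis)]:
    in_s, out_s = fit_and_score(fn)
    print(f"{name:10s} in-sample RMSE = {in_s:6.2f}  forward RMSE = {out_s:6.2f}")

Whatever the basis, the coefficients are tuned to the history, so only the forward error tells the real story.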

 
JeeyCi #:
+1

Time management is not cancelled by any automation... once every 2 weeks (when the FOMC meets) - don't rely on TA, only FA (and some prerequisites in the DB)... by the way, it meets (making decisions on the current state of its BP, Balance of Payments) and speaks at different times... so what is heard and understood at the moment of the speech doesn't matter much, and in any case the retail trader (as the least informed) only understands it "after the fact"... but the nuances of timing (who enters the market when and for what purpose) should be understood at least in general terms...

Do.... you.... trade.... yourself....?

The FOMC.... meets.... not.... twice.... a.... month, but.... 8.... times.... a.... year....

 
mytarmailS #:

The fact that you made a full automaton is cool, but the problem is that on new data, models of this type immediately stop working, and neither cross-validation nor anything else helps them.

And it doesn't matter which basis functions you use:

polynomial approximation (like yours), or

a filter cascade, or

an automatically generated filter cascade (a multilayer neural network), or

a cascade of approximations built from linear regressions (MGUA/GMDH), or

approximation by ordinary harmonics, etc.

In essence it does not matter which basis functions are used for the approximation; roughly speaking, it is all the same. The problem is elsewhere:

either the data is wrong, or we are teaching the wrong thing, since all methods give the same result...

There is some truth in this, but I have checked my model; the main thing is to know which forward period is being calculated. The problem is overfitting. To avoid overfitting you have to maximize the ratio of the amount of analyzed data to the final set of criteria; in other words, there is data compression. For example, you can analyze data on a parabolic chart, take several thousand points and reduce everything to three coefficients, A*X^2 + B*X + C. Where the quality of data compression is higher, that is where the forward results are. Overfitting can be controlled by introducing correct scalar indicators of its quality that take this data compression into account. In my case it is done in a simpler way: we take a fixed number of coefficients and make the sample size as large as possible; it is less efficient, but it works.
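A minimal sketch of the compression idea described above (synthetic data and illustrative function names, not the author's actual code): several thousand points are reduced to the three coefficients of A*X^2 + B*X + C, and the ratio of points to fitted coefficients acts as a rough guard against overfitting:

import numpy as np

def fit_parabola(x, y):
    # Reduce the whole sample to three coefficients A, B, C of A*x^2 + B*x + C.
    A, B, C = np.polyfit(x, y, deg=2)
    return A, B, C

def compression_ratio(n_points, n_coeffs):
    # The higher this ratio, the more data stands behind each fitted parameter.
    return n_points / n_coeffs

# Toy data: a noisy parabola sampled at several thousand points.
rng = np.random.default_rng(42)
x = np.linspace(-10, 10, 5000)
y = 0.5 * x**2 - 2.0 * x + 3.0 + rng.normal(0, 2.0, x.size)

A, B, C = fit_parabola(x, y)
print(f"A={A:.3f} B={B:.3f} C={C:.3f}  "
      f"compression = {compression_ratio(x.size, 3):.0f} points per coefficient")

Fixing the number of coefficients and growing the sample, as described above, simply pushes this ratio as high as the data allows.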
