Machine learning in trading: theory, models, practice and algo-trading - page 415

 
Mihail Marchukajtes:

Well, since you insist, I'll tell you one idea about preparing data for processing. It really is very difficult to train a model with a high level of generalization on a long enough section, because the market is a living organism and so on. The longer the training period, the worse the model performs, but the longer it lasts. The task: build a long-lived model. Here is a second method, for those who use a committee of two networks.

We have three states: "Yes", "No", and "Don't know" (when the two nets point in different directions).

We train the network on the whole section, in our case 452 entries. The network learned this set at 55-60%; let's assume the "Don't know" responses on the training sample were 50%, i.e. 226 signals the network could not learn. Okay, now we build a new model ONLY on the "Don't know" states, that is, we try to build a model on exactly those quasi-states that confused the first model. The result is about the same: out of 226, only half will be recognized, the rest get the "Don't know" state, and then we build the model again. The result is 113, then 56, then 28, then 14. On the 14 records that none of the previous models recognizes, the jPrediction Optimizer will usually reach 100% generalizability.

As a result, we have a "Pattern System" that recognizes the entire market over a three-month window.

Here is another way, besides "Context of the Day", to break the market into subspaces and train on each one, obtaining exactly such a "Pattern System". Here's an example....
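The cascade described above (train on everything, then retrain only on the cases the previous models could not learn) can be sketched roughly like this. This is a hypothetical illustration, not the jPrediction code; the classifier, the 0.6 confidence threshold, and all function names are my assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_cascade(X, y, max_levels=6, threshold=0.6):
    """Train a chain of models: each new model is fit ONLY on the samples
    the previous models classified with low confidence ("Don't know")."""
    models = []
    idx = np.arange(len(X))  # indices of still-unexplained samples
    for _ in range(max_levels):
        if len(idx) < 10 or len(np.unique(y[idx])) < 2:
            break  # too few samples (or one class) left to fit another level
        model = LogisticRegression().fit(X[idx], y[idx])
        confidence = model.predict_proba(X[idx]).max(axis=1)
        models.append(model)
        idx = idx[confidence < threshold]  # keep only the "Don't know" cases
    return models

def predict_cascade(models, x, threshold=0.6):
    """Ask each model in turn; return its vote once it is confident.
    None means the whole cascade says "Don't know"."""
    for model in models:
        proba = model.predict_proba(x.reshape(1, -1))[0]
        if proba.max() >= threshold:
            return model.classes_[np.argmax(proba)]
    return None
```

Each level only sees the residue of the previous ones, so the count of unexplained samples shrinks roughly as in the post (226, 113, 56, ...), and the final system answers "Yes", "No", or "Don't know".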

An interesting approach. I'll have to experiment... Has it been tested yet, at least on the demo? It would be interesting to see the signal.

This is all out of sample since 05.29, on the 15-minute timeframe. It's already the third week. If it does not keep gaining, then in principle the approach is worthless, but I believe....... :-)

In my opinion it would be acceptable to retrain the NS once a week, on weekends. So I think it can be used. You should not set yourself the task of trading profitably for 3 weeks, 3 months, or a year without retraining; retraining once a week, your NS will always be adjusted to the latest market patterns.

 
elibrarius:

That's an interesting approach. I'll have to experiment... Have you started it up yet, at least on the demo? It would be interesting to see the signal.

In my opinion, it would be acceptable to retrain the NS once a week, on weekends. So I think we can use it... You should not set yourself the task of trading profitably for 3 weeks, 3 months, or a year without retraining; retraining once a week, your NS will always be adjusted to the latest market patterns.

I have made one for my tester; the last run, with many trades, is the work of three models.

I have made some scripts that work according to the EA's algorithm; I will run them manually when I get a signal, and then we will see...

 
When the LASSO fails???
  • insightr
  • www.r-bloggers.com
The LASSO has two important uses, the first is forecasting and the second is variable selection. We are going to talk about the second. The variable selection objective is to recover the correct set of variables that generate the data or at least the best approximation given the candidate variables. The LASSO has attracted a lot of attention...
 
Mihail Marchukajtes:

Well, since you insist, I'll tell you one idea about preparing data for processing. It really is very difficult to train a model with a high level of generalization on a long enough section, because the market is a living organism and so on. The longer the training period, the worse the model performs, but the longer it lasts. The task: build a long-lived model. Here is a second method, for those who use a committee of two networks.

We have three states: "Yes", "No", and "Don't know" (when the two nets point in different directions).

We train the network on the whole section, in our case 452 entries. The network learned this set at 55-60%; let's assume the "Don't know" responses on the training sample were 50%, i.e. 226 signals the network could not learn. Okay, now we build a new model ONLY on the "Don't know" states, that is, we try to build a model on exactly those quasi-states that confused the first model. The result is about the same: out of 226, only half will be recognized, the rest get the "Don't know" state, and then we build the model again. The result is 113, then 56, then 28, then 14. On the 14 records that none of the previous models recognizes, the jPrediction Optimizer will usually reach 100% generalizability.

As a result, we have a "Pattern System" that recognizes the entire market over a three-month window.

Here is another way, besides "Context of the Day", to break the market into subspaces and train on each one, obtaining exactly such a "Pattern System". Here's an example....

------------------------------------------------------------

This method is called "boosting". Boosting is a procedure of sequentially composing machine learning algorithms, where each successive algorithm tries to compensate for the shortcomings of the composition of all previous ones. Boosting is a greedy algorithm for constructing a composition of algorithms.

The most famous recent application is XGBoost.
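The "each model corrects the previous ones" idea can be shown with a toy from-scratch gradient booster on regression stumps. This is a didactic sketch only, not XGBoost's actual algorithm (no regularization, no second-order terms); all names here are my own.

```python
import numpy as np

def fit_stump(X, residual):
    """Find the single (feature, threshold) split that best fits the residual."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            pred = np.where(left, residual[left].mean(), residual[~left].mean())
            err = ((residual - pred) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, residual[left].mean(), residual[~left].mean())
    return best[1:]  # (feature, threshold, left_value, right_value)

def boost(X, y, n_rounds=50, lr=0.1):
    """Sequentially fit each stump to the residuals of the current ensemble."""
    base = y.mean()
    pred = np.full(len(y), base)
    stumps = []
    for _ in range(n_rounds):
        j, t, lv, rv = fit_stump(X, y - pred)  # new model targets the leftover error
        pred += lr * np.where(X[:, j] <= t, lv, rv)
        stumps.append((j, t, lv, rv))
    return base, stumps

def boost_predict(base, stumps, X, lr=0.1):
    pred = np.full(len(X), base)
    for j, t, lv, rv in stumps:
        pred += lr * np.where(X[:, j] <= t, lv, rv)
    return pred
```

Note the difference from the cascade above: boosting refits on the *residual error* of the whole ensemble, while the cascade refits only on the *samples* the earlier models could not recognize.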

Good luck

 
SanSanych Fomenko:
Predictor selection with LASSO

Yes, with this algorithm you do not need to remove highly correlated variables beforehand. It handles them well on its own.

I used it a long time ago, when I was interested in regression.

Good luck

 

Dr. Trader:

Yes, not great; with results like these, you're left with the paradigm "the main thing is not winning, but taking part". I actually blew my account, probably out of shame, not entirely consciously. Even Wizard, who gave us hope, dropped out along the way; he could not withstand the pressure of the competition. Perhaps it is worth admitting that there are specialists much better than us (hundreds, thousands of times better), who studied at Harvard and use supercomputers.

 
I'mnot:

There are specialists much better than us (hundreds, thousands of times better), who studied at Harvard and use supercomputers.

Not by thousands... judging by the prize for first place, the champion is only 1000 / 2.09 = 478.47 times better than me.

 
Dr. Trader:

Well, not by thousands... judging by the prize for first place, the champion is only 1000 / 2.09 = 478.47 times better than me.

By a single payout, 478.47 times; but by accumulated earnings there are guys at $10,000, which is more than 3000 times better than you. The only hope is Wizard, but I think he is ashamed to discuss this unpleasant topic.

 

Boring...... :-( You train and train, you choose models, you work, so to speak. But what do you do when the model is built and all that remains is to run it.......?

 
Mihail Marchukajtes:

Boring...... :-( You train and train, you choose models, you work, so to speak. But what do you do when the model is built and all that remains is to run it.......?

That rarely happens; it's better to go somewhere on vacation until your brain protests and starts demanding to write code again, for example Thailand, Indonesia or the Maldives.
