Machine learning in trading: theory, models, practice and algo-trading - page 45

 
Andrey Dik:

Okay, that's a very good trading performance on history! Congratulations.

Has anyone seen an openly distributed or commercially sold Forex trading system whose forward test has been yielding profit over five years? I could simply post such a product here or in the Code Base as proof. I'm not interested in flat curves over a couple of years at hundreds of percent from the Marketplace, since they are all hard-fitted. I do my own thing and avoid fitting. And enough improvements can still be made to this system to improve the FS by a factor of one and a half or two.

And anyway, this thread is about machine learning, which implies creating untrained products; counting percentages is another matter.

I will try another method of training the machine, and I think it can still be improved.
 
Alexey Burnakov:
How do you make several predictors from the ranges of one predictor? I don't understand.

Oh, it's very simple) clustering...

1) We take each predictor and cluster it into, say, 50 clusters. Moreover, the clustering can and should be done in two ways: 1) clustering "as is", i.e. by the numerical values of the predictor, and 2) clustering the normalized predictor, so that it is clustered as an image. Together this gives us something like human vision: we know not only the numerical "real" values of the predictor, but also its image - the curves and slopes.

2) We create a table where the columns are the clusters: 50 clusters ---> 50 columns ---> 50 predictors. We check their importance with some algorithm and see that out of the 50 predictors only 1-5 are important; we keep those.

3) We take the next predictor, cluster it, and repeat steps 1 and 2.

Theoretically, such selection inside a predictor should improve recognition quality by orders of magnitude...

But there are some disadvantages:

1) expensive calculations;

2) if each predictor is split up one by one and its contents are evaluated separately from the contents of the other predictors, it becomes impossible to evaluate the correlation between predictors; this has to be solved somehow.
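
Roughly, in R, that scheme could be sketched like this. kmeans for the clustering, randomForest for the importance check, the toy data, the 50-cluster count and the top-5 cut-off are all illustrative assumptions, not anything fixed in the post:

```
# Rough sketch: cluster one predictor, turn the cluster labels into 50 binary
# columns, and keep only the clusters that an importance check likes.
library(randomForest)

set.seed(42)
n <- 2000
predictor <- cumsum(rnorm(n))                            # toy predictor series
target    <- factor(ifelse(rnorm(n) > 0, "up", "down"))  # toy class label

k <- 50

# Type 1: cluster the raw numerical values of the predictor
cl_raw <- kmeans(predictor, centers = k, iter.max = 100)$cluster

# Type 2 ("as an image"): cluster a normalised rolling window of the predictor,
# so that similar curves/slopes fall into the same cluster regardless of level
win      <- embed(predictor, 10)
win_norm <- t(apply(win, 1, function(w) (w - mean(w)) / sd(w)))
cl_img   <- kmeans(win_norm, centers = k, iter.max = 100)$cluster

# 50 clusters ---> 50 columns ---> 50 binary predictors
dummies <- sapply(seq_len(k), function(i) as.integer(cl_raw == i))
colnames(dummies) <- paste0("cluster_", seq_len(k))

# Importance check: keep only the handful of clusters that actually matter
rf   <- randomForest(x = as.data.frame(dummies), y = target, importance = TRUE)
imp  <- importance(rf, type = 2)                         # mean decrease in Gini
keep <- names(sort(imp[, 1], decreasing = TRUE))[1:5]
keep                                                     # the 1-5 clusters worth keeping
```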

 
mytarmailS:

Oh, it's very simple) clustering...

1) We take each predictor and cluster it into, say, 50 clusters. Moreover, the clustering can and should be done in two ways: 1) clustering "as is", i.e. by the numerical values of the predictor, and 2) clustering the normalized predictor, so that it is clustered as an image. Together this gives us something like human vision: we know not only the numerical "real" values of the predictor, but also its image - the curves and slopes.

2) We create a table where the columns are the clusters: 50 clusters ---> 50 columns ---> 50 predictors. We check their importance with some algorithm and see that out of the 50 predictors only 1-5 are important; we keep those.

3) We take the next predictor, cluster it, and repeat steps 1 and 2.

Theoretically, such selection inside a predictor should improve recognition quality by orders of magnitude...

But there are some disadvantages:

1) expensive calculations;

2) if each predictor is split up one by one and its contents are evaluated separately from the contents of the other predictors, it becomes impossible to evaluate the correlation between predictors; this has to be solved somehow.

You can try it that way. In general, there is such a method: you build a scatter plot of the output against the predictor. Ideally there will be a clear dependence. But if on some segment (usually in the tails) the dependence is blurred, those observations are excluded.
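
As a toy illustration of that method in R (the data and the 10%/90% quantile cut-offs are arbitrary choices for the example):

```
# Scatter plot of output vs. predictor, then exclude the observations in the
# tails where the dependence is blurred.
set.seed(1)
x <- rnorm(1000)
y <- 2 * x + rnorm(1000, sd = ifelse(abs(x) > 1.3, 4, 0.5))  # noisy tails on purpose

plot(x, y, pch = 16, cex = 0.5, main = "output vs. predictor")

lo <- quantile(x, 0.10)
hi <- quantile(x, 0.90)
keep <- x > lo & x < hi
points(x[keep], y[keep], col = "blue", pch = 16, cex = 0.5)

cor(x, y)              # dependence over all observations
cor(x[keep], y[keep])  # noticeably stronger once the blurred tails are dropped
```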
 
Alexey Burnakov:
You can try it that way. In general, there is such a method: you build a scatter plot of the output against the predictor. Ideally there will be a clear dependence. But if on some segment (usually in the tails) the dependence is blurred, those observations are excluded.

What is this method called?

Is there one in R?

How do we solve problem no. 2?

I would like to discuss it; it could be very effective.
 
mytarmailS:

What is this method called?

Is there one in R?

How do we solve problem no. 2?

I would like to discuss it; it really could be very effective.
2. The solution is this: the variable is brought to a discrete form. Suppose there are 50 levels. Then we create 49 new variables and encode the levels in them. Then we apply, for example, a linear regression and look at the importance.
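
A small R sketch of that encoding, assuming cut() for the discretisation and a plain lm() for the importance check (the toy data and the quantile breaks are illustrative):

```
# Discretise the variable into 50 levels, dummy-code them (the first level is the
# reference, hence 49 columns), then run a linear regression and see which levels matter.
set.seed(7)
x <- rnorm(5000)
y <- sin(2 * x) + rnorm(5000, sd = 0.3)     # toy non-linear dependence on x

# 50 levels with roughly equal counts (quantile breaks avoid empty levels)
x_level <- cut(x, breaks = quantile(x, probs = seq(0, 1, length.out = 51)),
               include.lowest = TRUE)
nlevels(x_level)                            # 50

dummies <- model.matrix(~ x_level)[, -1]    # 50 levels -> 49 dummy variables
dim(dummies)                                # 5000 x 49

fit   <- lm(y ~ ., data = data.frame(y = y, dummies))
coefs <- summary(fit)$coefficients
# Levels whose coefficients are clearly non-zero mark the informative ranges of x
head(coefs[order(coefs[, "Pr(>|t|)"]), ], 10)
```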
 
Alexey Burnakov:

By the way, is anyone interested in this or not? I don't get it. Do you need a trained robot that passes validation over 5 years with a profit?

One like that.

I'm back from vacation. I can prepare the files and post them, and whoever needs them can adapt it for themselves.

I'm interested in how you created the robot, point by point, if it's not too difficult...

1) You selected the features according to your method

2) You built the model

And that's all?

 
mytarmailS:

I'm interested in how you created the robot, point by point, if it's not too difficult...

1) You selected the features according to your method

2) You built the model

And that's all?

This is a general scheme that works all the time.

I cull the features by importance after a GBM run, and I try different numbers of selected features. The machine is trained with GBM, and I have tried different fitness functions. Cross-validation is used, and its parameters also vary. And there are some more nuances.

In general, my result shows that more complex is not always better. On EURUSD the model uses only 5 predictors and only two cross-validation folds.
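
For reference, a rough sketch of that kind of GBM-plus-culling loop in R with the gbm package; every concrete setting below (formula, shrinkage, number of trees, the top-5 cut-off) is a placeholder, since the post gives no actual parameters:

```
# Fit GBM on all candidate predictors, rank them by relative influence,
# keep the top few and refit with two cross-validation folds.
library(gbm)

set.seed(123)
n <- 3000
d <- data.frame(matrix(rnorm(n * 20), ncol = 20))             # 20 candidate predictors
d$y <- as.integer(d$X1 + 0.5 * d$X2 - d$X3 + rnorm(n) > 0)    # toy binary target

fit_all <- gbm(y ~ ., data = d, distribution = "bernoulli",
               n.trees = 500, interaction.depth = 3, shrinkage = 0.05,
               cv.folds = 2, verbose = FALSE)

best_iter <- gbm.perf(fit_all, method = "cv", plot.it = FALSE)
infl      <- summary(fit_all, n.trees = best_iter, plotit = FALSE)  # relative influence

top5 <- head(as.character(infl$var), 5)                       # cull: keep the strongest
fit_small <- gbm(reformulate(top5, response = "y"), data = d,
                 distribution = "bernoulli", n.trees = 500,
                 interaction.depth = 3, shrinkage = 0.05,
                 cv.folds = 2, verbose = FALSE)
gbm.perf(fit_small, method = "cv", plot.it = FALSE)           # best iteration after culling
```

In practice the number of kept predictors and the cross-validation settings would be varied, as described above.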

 
A very interesting neural network: http://gekkoquant.com/2016/05/08/evolving-neural-networks-through-augmenting-topologies-part-3-of-4/ Do you think it is possible to make it trade by itself and learn from its mistakes? And if so, how? I invite discussion.
Evolving Neural Networks through Augmenting Topologies – Part 3 of 4
  • 2016.05.09
  • GekkoQuant
  • gekkoquant.com
This part of the NEAT tutorial will show how to use the RNeat package (not yet on CRAN) to solve the classic pole balance problem. The simulation requires the implementation of 5 functions: processInitialStateFunc – This specifies the initial state of the system, for the pole balance problem the state is the cart location, cart velocity, cart...
 
mytarmailS:
A very interesting neural network: http://gekkoquant.com/2016/05/08/evolving-neural-networks-through-augmenting-topologies-part-3-of-4/ Do you think it is possible to make it trade by itself and learn from its mistakes? And if so, how? I invite discussion.

If the developer says that the network can replace the reinforcement learning algorithm, that's promising.

Experimentation is needed. But it's an interesting topic.

 
Vladimir Perervenko:

If the developer says that the network can replace the reinforcement learning algorithm, that's promising.

Experimentation is needed. But it's an interesting topic.

I agree, it's interesting... But almost nothing is really clear to me, from the ideology down to the code itself; there is a lot of it, and many of the operators I don't even know.

If someone could explain it all, even with elementary examples of how to use it in trading, it would be a good experience for inexperienced people like me.
