Machine learning in trading: theory, models, practice and algo-trading - page 1243

 
Maxim Dmitrievsky:

well, train 14, test 40

Well, in general the accuracy on the test ranges from 57 to 61% on random 80/20 train/test splits; such a spread on such a small dataset is quite logical.
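Just to show the effect: a minimal Python sketch of how much test accuracy jumps around over repeated random 80/20 splits of a small dataset. The data and the model below are synthetic placeholders (make_classification, a generic forest), not the actual dataset or model discussed here.

# Sketch: spread of test accuracy over repeated random 80/20 splits
# of a small (synthetic) dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import ShuffleSplit, cross_val_score

# A few hundred rows to mimic a small dataset.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
splits = ShuffleSplit(n_splits=30, test_size=0.2, random_state=0)
scores = cross_val_score(model, X, y, cv=splits, scoring="accuracy")

print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f} "
      f"(min {scores.min():.3f}, max {scores.max():.3f})")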

 
forexman77:

Yes, yes. You can pull a beauty out of a toad by the ears, too)

Frankly, I do not understand what the problem and the difficulty of working in a channel is. Perhaps you trade on timeframes of an hour and above? Then there are no channels there, only the apparent reflection of an apparent moon.

P.S. Speaking of ML: if we assume that noise prevails in forex, then the formulation of the ML problem should be different).

 
Grail:

Well, in general the accuracy on the test ranges from 57 to 61% on random 80/20 train/test splits; such a spread on such a small dataset is quite logical.

Thanks, I get it... I'll look at more models in Python, and maybe I'll make a bigger dataset and check.

 

The dataset is 10 times larger.

catbust:

Alglib:

2018.12.27 11:44:10.475 Core 2 2018.12.26 23:59:59 0.10990

2018.12.27 11:44:10.475 Core 2 2018.12.26 23:59:59 0.49840

What I've noticed is that alglib overfits heavily in one pass, but on the test it shows errors roughly similar to boosting. Boosting smooths both errors out nicely, and if you let it run long enough the train curve shoots off into space too while the test error is left dangling. It's just that boosting is easier to control, and there is no way to stop the branching early in alglib.

Well, this is basically random anyway; it won't make money of course, and boosting won't save it.
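For what it's worth, the behaviour described above (train error keeps falling while the test error is left dangling) is easy to watch and cut off outside alglib. A rough Python sketch on synthetic data, using scikit-learn's GradientBoostingClassifier as a stand-in for the boosting library discussed in this thread: track both errors at every boosting iteration and keep the iteration where the test error bottoms out.

# Sketch: per-iteration train/test error of a boosted model and a simple
# "stop where the test error is lowest" rule. Synthetic data, stand-in model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=30, n_informative=8,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

gb = GradientBoostingClassifier(n_estimators=500, learning_rate=0.05,
                                max_depth=3, random_state=1)
gb.fit(X_tr, y_tr)

# Error after each boosting stage: train keeps falling, test flattens or rises.
train_err = [1 - np.mean(p == y_tr) for p in gb.staged_predict(X_tr)]
test_err = [1 - np.mean(p == y_te) for p in gb.staged_predict(X_te)]

best = int(np.argmin(test_err))
print(f"best iteration: {best}, "
      f"train error {train_err[best]:.3f}, test error {test_err[best]:.3f}")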



Maxim Dmitrievsky:

and there is no means to stop branching trees early in alglib

The code is open, you can tweak it. Or you can write your own tree based on the basic one from Alglib.

Wizard wrote something of his own... neither a forest nor a network, but some unknown animal))
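To illustrate what "stop branching early" amounts to in practice: most tree libraries expose depth and leaf-size limits that do exactly that. A minimal sketch with scikit-learn's RandomForestClassifier on synthetic data; this is not Alglib's API, just the same idea in another library.

# Sketch: unrestricted trees memorise the train set; depth / leaf-size limits
# are the usual way to stop branching early. Synthetic data for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=6,
                           random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)

for params in ({}, {"max_depth": 5, "min_samples_leaf": 20}):
    rf = RandomForestClassifier(n_estimators=100, random_state=2, **params)
    rf.fit(X_tr, y_tr)
    print(params or "no limits",
          f"train {rf.score(X_tr, y_tr):.3f}",
          f"test {rf.score(X_te, y_te):.3f}")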
 
elibrarius:

The code is open, you can tweak it. Or you can write your own tree based on the basic one from Alglib.

Wizard wrote something of his own... neither a forest nor a network, but some unknown animal))

Ehhhhh....

Once again, Wizard just knows how to prepare the input data for the neural network. Input data! And from there, it makes no difference what to use.

Here's an example.

I remember there was a Doc here. Now he's gone... The smartest, most efficient guy. Do you think he was disappointed and gave it all up? I don't think so.

I corresponded with him a lot in private messages. He has done billions of studies on receiving and processing tick quotes. He took different sources of quotes, thinned them one way, thinned them another, even bent them...

The problem is that he is not in my mailing list or on the forum.

I think he finally found a way to prepare the inputs for the neural network, just like Koldun - and now he is free to do as he pleases, to sit here on the forum or not.

But, he did a tremendous job - I'm a witness to that.

 
Well, I didn't come here for five months either, because I had a good part-time job. Now I'm free, I'm back here... Maybe Doc will come back. Maybe he found something permanent.
 
Maxim Dmitrievsky:

The dataset is 10 times larger.

catbust:

Alglib:

2018.12.27 11:44:10.475 Core 2 2018.12.26 23:59:59 0.10990

2018.12.27 11:44:10.475 Core 2 2018.12.26 23:59:59 0.49840

What I've noticed is that alglib overfits heavily in one pass, but on the test it shows errors roughly similar to boosting. Boosting smooths both errors out nicely, and if you let it run long enough the train curve shoots off into space too while the test error is left dangling. It's just that boosting is easier to control, and there is no way to stop the branching early in alglib.

Well, this is basically random anyway; it won't make money of course, and boosting won't save it.


Well, yes, 52-53% - not exactly random, but it won't make money either.
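A side note on the 52-53% figure: whether that is distinguishable from a coin flip at all depends on the size of the test set. A quick back-of-the-envelope check in Python; the sample count below is a made-up placeholder, not a number from this thread.

# Sketch: one-sided binomial check of whether k correct answers out of n
# could plausibly come from guessing (p = 0.5). n is a placeholder here.
from scipy.stats import binom

n = 2000            # hypothetical number of test samples
acc = 0.53          # observed accuracy
k = int(n * acc)    # number of correct predictions

p_value = binom.sf(k - 1, n, 0.5)   # P(at least k correct by pure chance)
print(f"{k}/{n} correct, p-value vs. coin flip: {p_value:.4f}")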

 
Alexander_K2:

Ehhhhh....

Once again, Wizard just knows how to prepare the input data for the neural network. Input data! And from there, it makes no difference what to use.

Here's an example.

I remember there was a Doc here. Now he's gone... The smartest, most efficient guy. Do you think he was disappointed and gave it all up? I don't think so.

I corresponded with him a lot in private messages. He has done billions of studies on receiving and processing tick quotes. He took different sources of quotes, thinned them one way, thinned them another, even bent them...

The problem is that he is not in my mailing list or on the forum.

I think he finally found a way to prepare the inputs for the neural network, just like Koldun - and now he is free to do as he pleases, to sit here on the forum or not.

But, he's done a tremendous job - I'm a witness to that.

I can confirm that. Doc seems to have blown through all the bitcoins he got from Numerai and took offense at the market; apparently he "got hooked".

 
Alexander_K2:

Ehhhhh....

Once again, Wizard just knows how to prepare the input data for the neural network. Input data! And from there, it makes no difference what to use.

Here's an example.

I remember there was a Doc here. Now he's gone... The smartest, most efficient guy. Do you think he was disappointed and gave it all up? I don't think so.

I corresponded with him a lot in private messages. He has done billions of studies on receiving and processing tick quotes. He took different sources of quotes, thinned them one way, thinned them another, even bent them...

The problem is that he is not in my mailing list or on the forum.

I think he finally found a way to prepare the inputs for the neural network, just like Koldun - and now he is free to do as he pleases, to sit here on the forum or not.

But, he has done a tremendous job - I'm a witness to that.

Do you understand what you're saying? He has done billions of studies on receiving and processing tick quotes. He took different sources of quotes, thinned them, didn't thin them, bent them every which way... Ring any bells? That's right - the Monkey and the Spectacles: it turns them this way and that, now presses them to the top of its head, now strings them on its tail, now sniffs them, now licks them. A fine advertisement for your gurus.)

And as for gurus, in the foreseeable past we had only one - SanSanych, and even that for lack of a better one. He at least dragged a lot of people into R, and that is already something.

As for the very topic of ML, its existence here for many years shows the complete futility of the approach of ML as a trading system. By studying an elephant's droppings you cannot say much about the elephant itself, much less predict it. And quotes are exactly such droppings, nothing more.

At the moment, indicator-logic systems show quite real results at lower cost. So perhaps ML should be used inside such systems. Say, forests and trees are ready-made trainable logic for such systems, rather than bothering to write all that logic yourself. In general, ML as an applied component of existing systems is quite a workable topic.
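A rough Python sketch of that "ML inside an indicator system" idea: a forest trained on ready-made indicator values classifies whether price rises over the next few bars, so its output can serve as a filter on top of an existing rule-based system. Everything below - the synthetic price series, the three indicator features, the 10-bar label - is a placeholder for illustration, not a working strategy.

# Sketch: a forest over indicator features as a trainable filter for an
# existing indicator-logic system. Synthetic data, illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
price = pd.Series(np.cumsum(rng.normal(size=5000)) + 100.0)

# Indicator features a rule-based system would already be computing.
feats = pd.DataFrame({
    "sma_diff": price - price.rolling(20).mean(),
    "momentum": price.diff(10),
    "volatility": price.diff().rolling(20).std(),
})

# Label: does price rise over the next 10 bars? (NaN at the tail is dropped.)
future = price.shift(-10)
target = (future > price).astype(float).where(future.notna())

data = pd.concat([feats, target.rename("up")], axis=1).dropna()
split = int(len(data) * 0.7)
train, test = data.iloc[:split], data.iloc[split:]

rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=50, random_state=0)
rf.fit(train[feats.columns], train["up"])
print("test accuracy:", round(rf.score(test[feats.columns], test["up"]), 3))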
