Machine learning in trading: theory, models, practice and algo-trading - page 2052

 
Maxim Dmitrievsky:

Not an idiot, that's for sure) The test is simple, apparently outdated

If it's like in Idiocracy, it will also turn out to be difficult))

 
Rorschach:

If it's like in Idiocracy, it will also turn out to be difficult))

Quite. I can think of plenty of acquaintances, and local participants, who wouldn't pass

 
Aleksey Vyazmikin:

If it gives a prediction one bar or more ahead, that's not enough to make a decision; it's not much better than an MA. What interests us, for profit, are the deviations from this notional MA.

However much work you put in, that's what you'll get.

I poked at the RNN with the tester this morning - it overfits on increments. Good on the training set, bad in the tester.

What the hell do you need it for if the results are no better than a forest? ) CatBoost, by the way, can show a more interesting picture. In general it's a cool thing, but it still doesn't work.
 

Neural network porn... I'm about to upload part 2 with conclusions, about RNN (GRU)

Part 2. A proper test at the end of the video. Before that it just refused to train properly.


And a short recap of how roughly the same thing works with CatBoost, where the result is better:


 
Maxim Dmitrievsky:

Neural network porn... I'm about to upload part 2 with conclusions, about RNN (GRU)

Part 2. A proper test at the end of the video. Before that it just refused to train properly.


And a short recap of how roughly the same thing works with CatBoost, where the result is better:


You don't use Numba for acceleration?

 
Rorschach:

You don't use Numba for acceleration?

The loops themselves run fast without it, so I don't need it so far.

Where possible everything is vectorized, and that part is fast.

P.S. An obvious flaw: you can't train on logloss or cross-entropy alone, you need to bolt on at least an accuracy metric. I only realized this now. That's most likely why the results aren't great.

Did your invitation come from ODS? Maybe there are other ways in, I'll have to ask
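The point about logloss vs. accuracy can be shown with a small sketch (plain Python, hypothetical predictions): a model can have a better logloss yet a worse accuracy, so optimizing cross-entropy alone doesn't guarantee more correct classifications.

```python
import math

def logloss(y_true, p_pred):
    # average binary cross-entropy of predicted probabilities
    eps = 1e-15
    return -sum(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
                for y, p in zip(y_true, p_pred)) / len(y_true)

def accuracy(y_true, p_pred, threshold=0.5):
    # fraction of samples classified correctly at the given threshold
    return sum((p > threshold) == bool(y) for y, p in zip(y_true, p_pred)) / len(y_true)

y = [1, 1, 0, 0]
model_a = [0.55, 0.55, 0.45, 0.45]  # barely right on every sample
model_b = [0.95, 0.45, 0.05, 0.05]  # very confident, but one mistake

print(logloss(y, model_a), accuracy(y, model_a))  # ~0.598, 1.0
print(logloss(y, model_b), accuracy(y, model_b))  # ~0.238, 0.75
```

Model B wins on logloss but loses on accuracy, which is why tracking accuracy alongside the training loss matters.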
 
Maxim Dmitrievsky:

The loops themselves run fast without it, so I don't need it so far.

Where possible everything is vectorized, and that part is fast.

P.S. An obvious flaw: you can't train on logloss or cross-entropy alone, you need to bolt on at least an accuracy metric. I only realized this now. That's most likely why the results aren't great.

Did your invitation come from ODS? Maybe there are other ways in, I'll have to ask

Sometimes NumPy arrays take longer to compute than Python lists. I've also noticed that wrapping the code in a function gives a speedup.

Haven't gotten around to it yet, no real rush; I don't know when my next run at neural nets will be.
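The function-wrapping effect is real in CPython: inside a function, name lookups are local (fast, array-indexed) rather than global (dict-based), so the same loop often runs noticeably faster. A minimal sketch for comparing the two on your own machine (names are illustrative):

```python
import timeit

def summed(n=100_000):
    # local-variable loop: CPython resolves 'total' and 'i' via fast LOAD_FAST
    total = 0
    for i in range(n):
        total += i
    return total

# the same loop at module level uses dict-based global lookups;
# timing both shows the gap from wrapping code in a function
t_func = timeit.timeit(summed, number=20)
t_global = timeit.timeit(
    "total = 0\nfor i in range(100_000):\n    total += i", number=20)

print(f"in function: {t_func:.4f}s, at top level: {t_global:.4f}s")
```

The absolute numbers depend on the machine, but the function version is usually the faster of the two.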

 
Rorschach:

Sometimes NumPy arrays take longer to compute than Python lists. I've also noticed that wrapping the code in a function gives a speedup.

Haven't gotten around to it yet, no real rush; I don't know when my next run at neural nets will be.

That's strange. You must be computing element by element instead of vector by vector.

NumPy should fly like C++, with a small overhead.
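A quick sketch of the difference (assuming NumPy is installed): per-element indexing on an ndarray boxes every value into a Python float, which is why it can be slower than a plain list, while a single vectorized expression pushes the whole loop into compiled C.

```python
import numpy as np

a = np.random.rand(100_000)

def elementwise_square(arr):
    # Python-level loop: each arr[i] access creates a boxed float, very slow
    out = np.empty_like(arr)
    for i in range(arr.shape[0]):
        out[i] = arr[i] * arr[i]
    return out

def vectorized_square(arr):
    # one ufunc call: the loop runs in C with a small fixed overhead
    return arr * arr

# both paths produce the same result; only the speed differs
print(np.allclose(elementwise_square(a[:1000]), vectorized_square(a[:1000])))
```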
 
Maxim Dmitrievsky:

That's strange. You must be computing element by element instead of vector by vector.

NumPy should fly like C++, with a small overhead.

Yes, when I have to do it in a loop.

I measured something like a 500x speedup with Numba, but that's not exact. You have to put the code into a function and put @njit on it. @vectorize runs at @njit speed, so there's no point fiddling with it unless you're computing on the GPU. But the GPU is even more hassle: arrays can't be declared inside the function, and the code should preferably have no loops, otherwise it takes very long.
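The pattern described above, as a minimal sketch (assuming Numba is installed; the fallback keeps it runnable without it): move the hot loop into a function and decorate it with @njit.

```python
try:
    from numba import njit  # JIT-compiles the decorated function to machine code
except ImportError:
    def njit(func):  # fallback: run as plain Python if Numba is absent
        return func

@njit
def weighted_sum(n):
    # explicit scalar loop over simple numeric types:
    # exactly the kind of code @njit accelerates well
    total = 0.0
    for i in range(n):
        total += i * 0.5
    return total

print(weighted_sum(10))  # 0.5 * (0 + 1 + ... + 9) = 22.5
```

The first call pays a one-time compilation cost; subsequent calls run at compiled speed, which is where speedups of that magnitude come from.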
 

And this is how CatBoost trained on the same data (in 5 seconds):

52: learn: 0.7964708 test: 0.7848837 best: 0.7860866 (27) total: 604ms remaining: 5.09s

Source dataset:

Trained model (the second half of the trades is the test sample):


Not always, of course; it depends on the sampling (which is random, i.e. it needs oversampling). Sometimes it looks like this:

34: learn: 0.5985972 test: 0.5915832 best: 0.5927856 (9) total: 437ms remaining: 5.81s


