Machine learning in trading: theory, models, practice and algo-trading - page 1329

 
Yuriy Asaulenko:

You can do the same thing without a teacher (unsupervised). I don't see any difference here.

Imagine a bunch of neurons learning to solve a problem that two or three if statements can handle... The NN's brain is just stuffed with this rubbish instead of thinking about beautiful things....))

I get it: it's called a priori knowledge, built-in expertise, but you don't cross-check it against a model because you're 100% sure.

I don't have any knowledge about a random process except that it's random, plus a couple of other minor beliefs.
 
Maxim Dmitrievsky:

I get it: it's called a priori knowledge, built-in expertise, but you don't cross-check it against a model because you're 100% sure.

Yes, that's exactly right. It's practically axiomatic; what is there to verify? We know part of the solution a priori; it's not for nothing that we meditate on the market).

 
Maxim Dmitrievsky:

Again, we are talking about different approaches

You have supervised training, because you lay down the priors from the start; I have no teacher.

I remember. Different approaches, of course. Once again, I don't see any contraindications in this (supervised) approach. Everything is feasible, given the desire, if it appears, of course.

Unless you have an unsupervised RNN; there everything is more complicated, and I simply don't know, I haven't used one. By the way, what do you use? We may have talked about it already, but I'm still digging into the topic...

 
Yuriy Asaulenko:

I remember. Different approaches, of course. Once again, I don't see any contraindications in this (supervised) approach. Everything is feasible, given the desire, if it appears, of course.

Unless you have an unsupervised RNN; there everything is more complicated, and I simply don't know, I haven't used one. By the way, what do you use? We may have talked about it already, but I'm still digging into the topic...

Plenty of things, just haven't got to RNNs yet )) I'll do that later.

There are articles on the basics, but of course I've already gone beyond them.

 
Maxim Dmitrievsky:

Plenty of things, just haven't got to RNNs yet )) I'll do that later.

There are articles on the basics, but of course I've already gone beyond them.

Like standing at a crossroads: go right and... etc. TensorFlow has very good functionality, but they say it's very cumbersome. I've only read the docs so far. Have you used it?

 
Yuriy Asaulenko:

Like standing at a crossroads: go right and... etc. TensorFlow has very good functionality, but they say it's very cumbersome. I've only read the docs so far. Have you used it?

Heavy in what sense? tf is low-level; Keras is layered on top of it. Use tf.keras and everything gets easier.

I've looked at various examples, but I haven't done any development with it yet.

Version 2 is on its way; it's already available on the site and simplifies model creation.
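
A minimal sketch of what a tf.keras model definition looks like; the layer sizes, input width and dummy data below are illustrative assumptions, not anything posted in the thread:

import numpy as np
import tensorflow as tf

# Minimal tf.keras MLP sketch; all shapes and sizes are placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. a buy/sell probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data, just to show the call signature.
X = np.random.randn(1000, 10).astype("float32")
y = (np.random.rand(1000) > 0.5).astype("float32")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)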
 
Maxim Dmitrievsky:

tf is low-level; Keras is layered on top of it. Use tf.keras and everything gets easier.

I've looked at various examples, but I haven't done any development with it yet.

In terms of speed. I'm thinking maybe I'll settle on scikit-learn for now, who knows. The MLPs there aren't bad.

 
Yuriy Asaulenko:

In terms of speed. I'm thinking maybe I'll settle on scikit-learn for now, who knows. The MLPs there aren't bad.

I don't know, I don't think so.

There are plenty of packages; I try to study only the most popular, actively developed ones.

sklearn is a kind of hodgepodge of everything.

tf is more of a constructor for building your own architectures.
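
A quick sketch of the scikit-learn MLP mentioned above; the hyperparameters and dummy data are illustrative assumptions, not a recommendation from the thread:

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.randn(1000, 10)
y = (np.random.rand(1000) > 0.5).astype(int)

# Scaling matters a lot for MLPs (see the normalization discussion below).
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
clf.fit(X, y)
print(clf.predict_proba(X[:3]))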

 
Yuriy Asaulenko:

NNs really dislike scaling problems. Train one on a price range of 100-120, the price moves outside that range, and that's it, abort. I simply divide everything price-related by the price itself, subtract one, and then use coefficients to drive the variables into the desired dynamic range.

So in both cases the data has to be preprocessed into an acceptable metric. I use my own ATR from the higher timeframe and the price's position within it. I get a kind of domino with notches at the levels; the price is assigned a Fibonacci level number.
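
A rough sketch of the two preprocessing ideas above: the divide-by-price normalization and an ATR-relative price position snapped to a Fibonacci level. The scale coefficient, the Fibonacci grid and the function shapes are my assumptions; neither author posted actual code:

import numpy as np

def normalize_price(prices, scale=10.0):
    # Divide by the starting price, subtract one, then stretch
    # into the desired dynamic range (scale is an assumed coefficient).
    return (prices / prices[0] - 1.0) * scale

FIB = np.array([0.0, 0.236, 0.382, 0.5, 0.618, 0.786, 1.0])

def fib_level(price, low, atr):
    # Price position inside one higher-timeframe ATR above `low`,
    # snapped to the index of the nearest Fibonacci level.
    pos = np.clip((price - low) / atr, 0.0, 1.0)
    return int(np.abs(FIB - pos).argmin())

prices = np.array([100.0, 101.5, 99.8, 104.2, 102.0])
print(normalize_price(prices))              # normalized, range-scaled series
print(fib_level(102.0, low=99.8, atr=5.0))  # pos=0.44, nearest level 0.382 -> index 2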

 

Finished processing the models with seeds 201 through 401; everything else is unchanged.

Table with balance score results
Table with metric scores
Table with the number of models meeting the selection criterion on the independent sample
Table with the number of models meeting the selection criterion on all three samples
Graphs of the models (mostly GIFs) at 30%, 40%, 50% and 60%

The trend looks largely unchanged across all indicators; below are tables of the deltas (before vs. after) for comparing the changes.

For the metric indicators the difference is minimal overall.

From the data collected, we can conclude that the trend has generally continued.

It seems to me that the models manage to catch some obvious regularity that recurs frequently and shows up at different sample sizes (at least this piece is always in the window), and the model exploits that regularity.

My own conclusion is that it's quite possible to allocate anywhere from 30% to 70% of the full sample to the validation section when hunting for interesting patterns, but the optimum still seems to be 30%.
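
For what it's worth, a sketch of how such a sweep over the validation share might look on a chronological split; evaluate() is a placeholder for whatever training-plus-selection criterion is actually used, which the post does not spell out:

import numpy as np

def evaluate(train, valid):
    # Placeholder: train a model on `train`, score it on `valid`.
    return np.random.rand()

data = np.arange(10_000)  # stand-in for a chronologically ordered sample
for share in (0.3, 0.4, 0.5, 0.6, 0.7):
    split = int(len(data) * (1.0 - share))
    train, valid = data[:split], data[split:]
    print(f"validation share {share:.0%}: score {evaluate(train, valid):.3f}")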