Machine learning in trading: theory, models, practice and algo-trading - page 429

 
Maxim Dmitrievsky:

Forgot to put (c) :))
The fiddler is not needed (c).
I came to this conclusion the other day. Take my word for it, it's a reasonable one. ML error = 0.1 (not the TS).
It's hard to write from a cell phone.)
PS: see S. Haykin and C. Bishop.
 
Vladimir Perervenko:

No, they're not. Below is an explanation (from an article I can't seem to finish :( ).

Introduction

Today both approaches are actively used in practice. Comparative experiments [ ] on the two approaches do not reveal significant advantages of one over the other, but there is one difference nevertheless: neural networks with pre-training require far fewer training examples and computational resources for almost equal results. In some fields this is a very important advantage.

Good luck

Well, that's sorted out :) Yes, the training speed is especially nice (plus the quality of the models). I will experiment later with the parameters from your articles, once I finish implementing my own ideas. The topic is very interesting and sometimes profitable, as with my modest model yesterday, for example :) (the lot is small because it is still being tested).

For now it overfits very strongly and does not work over a long interval without retraining, but on 2-3 month intervals it trains (fits?) almost perfectly, and with good probability it keeps working for a week after training, so I simply retrain every week. Honestly, I had never gotten such curves in the tester (on close prices, not ticks) before I got acquainted with ML. It works efficiently on almost all currency pairs and indices (I haven't tried exchange-traded instruments yet because I have little contract history and don't want to bother with splicing), i.e. it allows building low-risk portfolios.

The main task at the moment is to increase stability on test samples by adding non-linear interrelations, which is a non-trivial task, but solvable to a certain extent (or so it seems).
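A minimal sketch of the weekly-retraining scheme described above, under stated assumptions: the thread does not show the actual model or features, so `X`, `y` and the random forest here are stand-ins, and the window sizes are illustrative H1 bar counts, not the author's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

TRAIN_BARS = 2 * 30 * 24  # ~2 months of H1 bars (assumed window)
TEST_BARS = 7 * 24        # ~1 week of H1 bars (assumed step)

def walk_forward(X, y):
    """Train on a sliding ~2-month window, evaluate on the next week, repeat."""
    scores = []
    start = 0
    while start + TRAIN_BARS + TEST_BARS <= len(X):
        tr = slice(start, start + TRAIN_BARS)
        te = slice(start + TRAIN_BARS, start + TRAIN_BARS + TEST_BARS)
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(X[tr], y[tr])
        scores.append(model.score(X[te], y[te]))
        start += TEST_BARS  # slide forward by one week and retrain
    return np.array(scores)
```

The point of the sliding evaluation is that each week's score is measured strictly out of sample, which is exactly the "works a week after training" claim above.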


 
Yuriy Asaulenko:
The fiddler is not needed (c).
I came to this conclusion the other day. Take my word for it, it's a reasonable one. ML error = 0.1 (not the TS).
It's hard to write from a cell phone.)
PS: see S. Haykin and C. Bishop.

I'll read it later, I need to remember that.

 
Maxim Dmitrievsky:

Well, that's sorted out :) Yes, the training speed is especially nice (plus the quality of the models). I will experiment later with the parameters from your articles, once I finish implementing my own ideas. The topic is very interesting and sometimes profitable, as with my modest model yesterday, for example :) (the lot is small because it is still being tested).

For now it overfits very strongly and does not work over a long interval without retraining, but on 2-3 month intervals it trains (fits?) almost perfectly, and with good probability it keeps working for a week after training, so I simply retrain every week. Honestly, I had never gotten such curves in the tester (on close prices, not ticks) before I got acquainted with ML. It works efficiently on almost all currency pairs and indices (I haven't tried exchange-traded instruments yet because I have little contract history and don't want to bother with splicing), i.e. it allows building low-risk portfolios.

The main task at the moment is to increase stability on test samples by adding non-linear interrelations, which is a non-trivial task, but solvable to a certain extent (or so it seems).


Optimization often finds very good results... but that's not so important.
You seem to have been running Reshetov's RNN with TrendLinearReg on a real account - does it still work, or did the idea not pan out?
 
SanSanych Fomenko:

Lately I have returned to the GARCH models I was previously familiar with. What surprised me greatly, after several years of fascination with machine learning, is the huge number of publications on applying GARCH to financial time series, including currencies.


Is there something similar for deep networks?

I don't do regression. I just watch the news in this area. My favorite of recent developments is the prophet package.

Deep nets are designed for classification.

Good luck
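For concreteness, here is a minimal sketch of fitting a GARCH(1,1) to currency returns with the Python `arch` package - one possible tool for the kind of work the GARCH publications above describe, not anything endorsed in the thread. `close` is an assumed pandas Series of close prices.

```python
import numpy as np
from arch import arch_model

# Percent log-returns: GARCH models the conditional variance of returns,
# not of prices themselves.
returns = 100 * np.log(close).diff().dropna()

model = arch_model(returns, mean="Constant", vol="Garch", p=1, q=1)
res = model.fit(disp="off")
print(res.summary())

# 5-step-ahead conditional variance forecast
forecast = res.forecast(horizon=5)
print(forecast.variance.iloc[-1])
```

Note this is a volatility (variance) model, which is why it sits closer to regression than to the classification setup discussed elsewhere in the thread.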

 
elibrarius:
Optimization often finds very good results... but that's not so important.
You seem to have been running Reshetov's RNN with TrendLinearReg on a real account - does it still work, or did the idea not pan out?

I changed the predictors; at first I wanted to throw them away... then I thought no, such a cow I need myself for now) The ML framework based on Reshetov stayed, everything else was redone. I added an MLP - didn't like it, it takes too long to train; now I'll add a random forest + a few more ideas that are still taking shape... I.e. in general I want a committee, or to have one NN train another, something like that - I always want something original.

But the regression angle is good on its own, both as an input and as an output, especially if taken on logarithmic charts... it's a good predictor.
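A rough sketch of the "committee" idea from the post above, with scikit-learn models as stand-ins (the actual framework is Reshetov-based and written in MQL, so everything here is illustrative): soft voting simply averages the members' predicted class probabilities.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Three dissimilar classifiers; disagreement between them is what makes
# a committee worth having in the first place.
committee = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                              random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average predicted probabilities, not hard votes
)
# Usage: committee.fit(X_train, y_train); committee.predict_proba(X_test)
```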

 
Maxim Dmitrievsky:
I changed the predictors; at first I wanted to throw them away... then I thought no, such a cow I need myself for now) The ML framework based on Reshetov stayed, everything else was redone. I added an MLP - didn't like it, it takes too long to train; now I'll add a random forest + a few more ideas that are still taking shape.
There are not many inputs in the Reshetov network... 3-6; if you feed the same number of inputs to the MLP, it should also train quickly.
 
elibrarius:
There are not many inputs in the Reshetov network... 3-6; if you feed the same number of inputs to the MLP, it should also train quickly.


Yes, but with the MLP there is a problem with the outputs... whereas Reshetov's one is tuned to probabilities based on oscillator extremes, i.e. it is enough to detrend the market correctly, do a few transformations, and feed it a stationary series.

PLUS, this ALGLIB MLP trains differently each time on the same set: run it once and it shows one thing, run it a second time and it shows another, and over several iterations in a loop (5-7) it gives different values - I don't know how to work with that. That's why I started adding more inputs (up to 15), and it became slow to train. I used softmax. I also tried ensembles - they took a long time. And in the end, the experiments in Azure Machine Learning show that RF almost always gives a smaller error than the other simple ML models, while MLP as a rule gives the biggest error... Maybe I just don't know how to cook it, but it looks like it really is worse and slower, and I found confirmation of that here from San Sanych.
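The run-to-run spread described above comes from random weight initialization. A minimal sketch of one common workaround - averaging predicted probabilities over several seeds - using scikit-learn's MLPClassifier as a stand-in for the ALGLIB MLP:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def averaged_proba(X_train, y_train, X_test, n_seeds=7):
    """Train the same MLP under several random seeds and average
    the predicted probabilities, smoothing out init-dependent runs."""
    probas = []
    for seed in range(n_seeds):
        mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                            random_state=seed)
        mlp.fit(X_train, y_train)
        probas.append(mlp.predict_proba(X_test))
    return np.mean(probas, axis=0)
```

The 5-7 iterations mentioned in the post map directly onto `n_seeds` here; fixing a single seed instead makes a run reproducible but doesn't reduce the variance itself.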

That is, if we are choosing among simple classifiers, the obvious choice is RF; next come DNN and the other latest fashions, RNN and LSTM. We go from simple to complex :)

 
Maxim Dmitrievsky:

But the regression angle is good on its own as an input and an output, especially if taken on logarithmic charts... a good predictor.

What are the regression angle and logarithmic charts?

 
elibrarius:

What are the regression angle and logarithmic charts?

TrendLinearReg shows the slope angle of the regression line over a given number of bars - it is good both as a predictor and as a target, replacing the zigzag. I.e. it effectively removes the noise component of the quotes (in my opinion).

And logarithmic charts means taking not the raw price chart but the logarithm of the prices.
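A minimal Python sketch of the predictor just described: the slope of a straight line fitted to the last `period` log prices. TrendLinearReg itself is an MQL indicator, so this only reproduces the underlying calculation; the indicator's angle scaling may differ.

```python
import numpy as np

def regression_slope(close, period=50):
    """Rolling slope of a linear regression over the last `period` log prices."""
    logp = np.log(np.asarray(close, dtype=float))
    x = np.arange(period)
    slopes = np.full(len(logp), np.nan)
    for i in range(period, len(logp) + 1):
        # polyfit of degree 1 returns [slope, intercept] of the fitted line
        slopes[i - 1] = np.polyfit(x, logp[i - period : i], 1)[0]
    return slopes  # an "angle" would be np.arctan of this, up to scaling
```

Using log prices makes the slope comparable across instruments with very different price levels, which is presumably why the poster prefers logarithmic charts.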

And in general, the same GARCH, FIGARCH and ARIMA are all regression analysis; nothing more interesting has been invented yet, so if people use them, then one way or another we should use them too.
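In the same "it is all regression" spirit, a minimal ARIMA fit on log prices with statsmodels, shown only to make the comparison with the GARCH sketch earlier concrete; `close` is again an assumed pandas Series of close prices.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

logp = np.log(close)
# order=(1, 1, 1): AR(1) on the first difference with an MA(1) term,
# i.e. a regression on the series' own past values and past errors.
res = ARIMA(logp, order=(1, 1, 1)).fit()
print(res.summary())
print(res.forecast(steps=5))  # 5-step-ahead point forecast
```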