Machine learning in trading: theory, models, practice and algo-trading - page 428

 
Maxim Dmitrievsky:

I see - I think no one here has actually compared them in practice :) I'll look for the information myself, so that I don't end up fooled if it turns out that deep learning gives no advantage over forests. And since an MLP is a component of it, it may well be that it doesn't...

By the way, deep learning is anything with more than two layers; an MLP with two hidden layers is also deep learning. I was referring to the deep nets that Vladimir described in the article linked above.

TOTALLY WRONG. Where did you get this information?

Although they write that the predictors are the most important thing, because the models perform roughly the same... but that's theory; in practice it turns out that the choice of model is also very important - for example, the trade-off between speed and quality, because training an NN usually takes a long time...

A DNN is very fast - tested.

I want a native solution, without any third-party software or a direct connection from MT5 to an R server - native is better. You only need to rewrite the C++ neural network you need in MQL once, and that's it.

How did you test it? It works for me.

Oh, I forgot to add my opinion.

IMHO based on practice

Good luck

 
Vladimir Perervenko:

Deep learning (also known as deep structured learning or hierarchical learning) is the application to learning tasks of artificial neural networks (ANNs) that contain more than one hidden layer. Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised.


About deep learning with autoencoders: yes, it's fast, but I haven't gotten to it yet, hence the logical question - is there an advantage over RF?

P.S. Does it also fly in the optimizer? And in the cloud?

https://en.wikipedia.org/wiki/Deep_learning

 
Maxim Dmitrievsky:

Deep learning (also known as deep structured learning or hierarchical learning) is the application to learning tasks of artificial neural networks (ANNs) that contain more than one hidden layer. Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised.


About deep learning with autoencoders: yes, it's fast, but I haven't gotten to it yet, hence the logical question - is there an advantage over RF?

P.S. Does it also fly in the optimizer? And in the cloud?

https://en.wikipedia.org/wiki/Deep_learning

1. Where did you find this definition? Are you serious? I'll find links to serious sources when I have time.

2. The main advantage of a DNN with pre-training is transfer learning. It's much faster, more accurate and ... Use the darch package (a sketch follows below).

3. Any optimization should be done in R. It's faster, more transparent and more flexible.
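A minimal sketch of pre-training plus fine-tuning with darch (parameter names as I recall them from darch 0.12 - check the package documentation; iris is just a stand-in for your own predictors and target):

library(darch)

# Unsupervised stage: greedily pre-train a stack of RBMs, then use their
# weights to initialize the MLP and fine-tune it with backpropagation.
model <- darch(Species ~ ., data = iris,
               layers = c(4, 20, 3),   # input, hidden, output neurons
               rbm.numEpochs = 10,     # unsupervised RBM pre-training
               darch.numEpochs = 50,   # supervised fine-tuning
               darch.batchSize = 10)

pred <- predict(model, newdata = iris, type = "class")
mean(pred == iris$Species)             # in-sample accuracy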

Good luck

 
Vladimir Perervenko:

1. Where did you find this definition? Are you serious? I'll find links to serious sources when I have time.

2. The main advantage of a DNN with pre-training is transfer learning. It's much faster, more accurate and ... Use the darch package.

3. Any optimization should be done in R. It's faster, more transparent and more flexible.

Good luck

At the end of this lesson, you'll understand how a simple deep learning model called the multilayer perceptron (MLP) works, and learn how to build it in Keras, getting a decent degree of accuracy on MNIST. In the next lesson, we'll break down methods for solving more complex image classification problems (such as CIFAR-10).
(Artificial) Neurons.

Although the term "deep learning" can be understood in a broader sense, it is mostly applied in the field of (artificial) neural networks.


https://habrahabr.ru/company/wunderfund/blog/314242/

And here.


Maybe they're all lying, I'm not aware of it )

IMHO, of course
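The MLP from that tutorial, roughly, in the R keras interface (the article itself uses Python Keras; this is a minimal sketch of the same idea, not the article's code):

library(keras)

mnist <- dataset_mnist()
x_train <- array_reshape(mnist$train$x, c(60000, 784)) / 255  # flatten 28x28 images
y_train <- to_categorical(mnist$train$y, 10)                  # one-hot digit labels

# A small MLP: one hidden layer, softmax output over the 10 digits
model <- keras_model_sequential() %>%
  layer_dense(units = 512, activation = "relu", input_shape = 784) %>%
  layer_dense(units = 10, activation = "softmax")

model %>% compile(loss = "categorical_crossentropy",
                  optimizer = "rmsprop", metrics = "accuracy")
model %>% fit(x_train, y_train, epochs = 5, batch_size = 128)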

 
What's the use of predictors? A time series is itself a predictor; the NN just needs to be a little deeper.
(From my cell phone)
 
Yuriy Asaulenko:
What's the use of predictors? A time series is itself a predictor.
(From my cell phone)

You forgot to put (c) :))
 
Maxim Dmitrievsky:

You forgot to put (c) :))
Who did you quote?)
 
Yuriy Asaulenko:
Who did you quote?)
Well, myself. As a mark of authorship )
 
Maxim Dmitrievsky:
At the end of this lesson, you will understand how a simple deep learning model called the multilayer perceptron (MLP) works, and learn how to build it in Keras, getting a decent degree of accuracy on MNIST. In the next lesson, we'll break down methods for solving more complex image classification problems (such as CIFAR-10).
(Artificial) neurons.

Although the term "deep learning" can be understood in a broader sense, it is mostly applied in the field of (artificial) neural networks.


https://habrahabr.ru/company/wunderfund/blog/314242/

And here.


Maybe they're all lying, I'm not aware of it )

IMHO, of course.

No, they aren't. Below is an explanation (from an article I never finished :(

Introduction

Main directions of research and applications

At present, two main currents have formed in the research and application of deep neural networks (we are talking only about multilayer fully connected neural networks - MLPs), which differ in their approach to initializing the weights of the neurons in the hidden layers.

First: it is well known that neural networks are extremely sensitive to the way the neurons in hidden layers are initialized, especially when there are more than three hidden layers. The initial push toward solving this problem came from Professor G. Hinton. The essence of the proposal was that the weights of the neurons in the hidden layers of the neural network would be initialized with weights obtained by unsupervised training of autoassociative networks composed of RBMs (restricted Boltzmann machines) or AEs (autoencoders). These stacked RBM (SRBM) and stacked AE (SAE) networks are trained in a certain way on a large unlabeled data set. The purpose of such training is to reveal hidden structures (representations, images) and dependencies in the data. Initializing the MLP's neurons with the weights obtained during pre-training places the MLP in the region of the solution space closest to the optimum. This allows the subsequent fine-tuning (training) of the MLP to use less labeled data and fewer training epochs. For many practical applications (especially when processing "big data"), these are critical advantages.
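To make the first approach concrete, here is a minimal sketch of SAE-style pre-training in the R keras interface (x_unlab, x_lab and y_lab are hypothetical matrices of unlabeled predictors, labeled predictors and one-hot targets; darch implements the RBM variant of the same idea):

library(keras)

n_feat <- ncol(x_unlab)   # x_unlab: unlabeled examples, one row each

# Stage 1: train an autoencoder on unlabeled data; its hidden layer
# learns a compressed representation of the inputs.
ae <- keras_model_sequential() %>%
  layer_dense(units = 16, activation = "relu",
              input_shape = n_feat, name = "enc") %>%
  layer_dense(units = n_feat, activation = "linear")   # decoder
ae %>% compile(optimizer = "adam", loss = "mse")
ae %>% fit(x_unlab, x_unlab, epochs = 20, batch_size = 32, verbose = 0)

# Stage 2: initialize the classifier's hidden layer with the encoder
# weights, then fine-tune on the (smaller) labeled set.
clf <- keras_model_sequential() %>%
  layer_dense(units = 16, activation = "relu",
              input_shape = n_feat, name = "enc") %>%
  layer_dense(units = 2, activation = "softmax")
set_weights(get_layer(clf, "enc"), get_weights(get_layer(ae, "enc")))
clf %>% compile(optimizer = "adam", loss = "categorical_crossentropy",
                metrics = "accuracy")
clf %>% fit(x_lab, y_lab, epochs = 10, batch_size = 32)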

Second: a group of scientists (Bengio et al.) focused their main efforts on developing and researching specific methods for initializing hidden neurons, special activation functions, and stabilization and training methods. The successes in this direction are mainly connected with the rapid development of deep convolutional and recurrent neural networks (DCNN, RNN), which have shown amazing results in image recognition, text analysis and classification, and translation of live speech from one language to another. The ideas and methods developed for these networks have been applied to MLPs with no less success.
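For example, probably the best-known initialization scheme from this line of work is the Glorot (Xavier) initialization of Glorot & Bengio (2010); a minimal sketch in plain R:

# Glorot (Xavier) uniform initialization: draw weights from
# U(-limit, limit) with limit = sqrt(6 / (n_in + n_out)), which keeps
# the variance of signals roughly constant across layers.
glorot_uniform <- function(n_in, n_out) {
  limit <- sqrt(6 / (n_in + n_out))
  matrix(runif(n_in * n_out, min = -limit, max = limit),
         nrow = n_in, ncol = n_out)
}

W1 <- glorot_uniform(20, 16)   # e.g. 20 inputs -> 16 hidden neurons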

Today both directions are actively used in practice. Comparative experiments [ ] on these two approaches have not revealed a significant advantage of one over the other, but one difference remains: neural networks with pre-training require far fewer training examples and computational resources for almost equal results. For some fields this is a very important advantage.

Good luck

 
Vladimir Perervenko:

Today both directions are actively used in practice. Comparative experiments [ ] on these two approaches have not revealed a significant advantage of one over the other, but one difference remains: neural networks with pre-training require far fewer training examples and computational resources for almost equal results. For some fields this is a very important advantage.

Good luck

Lately I've been going back to the GARCH models I was familiar with before. What has extremely surprised me, after several years of fascination with machine learning, is the sheer number of publications on applying GARCH to financial time series, including currencies.
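A minimal GARCH(1,1) sketch with the rugarch package, in case anyone wants to try it (returns is a hypothetical vector of log returns):

library(rugarch)

# ARMA(1,1) mean, standard GARCH(1,1) variance, Student-t errors -
# a common baseline specification for currency returns
spec <- ugarchspec(
  variance.model     = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model         = list(armaOrder = c(1, 1), include.mean = TRUE),
  distribution.model = "std")

fit   <- ugarchfit(spec = spec, data = returns)
fcast <- ugarchforecast(fit, n.ahead = 10)   # 10-step-ahead forecast
sigma(fcast)                                 # forecast volatility path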


Do you have something similar for deep networks?
