Very, very, very good article.
Many thanks for your beautiful contribution.
I have never tried the R package before. I tried the mql4 expert; at first everything was OK, but 3 minutes after the launch (during the training phase), Rterm crashed!
I will investigate why ... do you have any ideas?
Theoretically a very good article,
but practically useless because the application does not run, and no advice seems to be forthcoming...
I would like to correct my opinion:
after downloading RStudio, a third-party interface for R, and downloading all the needed packages, it runs without problems.
RStudio has the advantage that all dependencies are installed automatically.
Nice work!
Hi,
Rterm crashes mostly because of missing required libraries and their dependencies.
If you have questions, ask.
Good luck!
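For anyone hitting the same Rterm crash, here is a minimal sketch of how one can pre-install the missing packages together with their dependencies before launching Rterm from the expert. The package list below is only an example (an assumption); replace it with whatever your scripts actually load.

```r
# Sketch: install any missing packages with their dependencies.
# The package names here are illustrative only -- adjust them to
# the libraries your R scripts actually require.
pkgs    <- c("darch", "caret", "foreach")
missing <- setdiff(pkgs, rownames(installed.packages()))
if (length(missing) > 0)
  install.packages(missing, dependencies = TRUE)
```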
Great article. I would like to use it with MT5. How do I run it in MT5? Could someone share the source code?

New article Deep neural network with Stacked RBM. Self-training, self-control has been published:
This article is a continuation of previous articles on deep neural networks and predictor selection. Here we will cover features of a neural network initialized by Stacked RBM, and its implementation in the "darch" package.
Structure of a deep neural network initialized by Stacked RBM (DN_SRBM)
I recall that DN_SRBM consists of n RBMs, where n equals the number of hidden layers of the neural network, plus the neural network itself. Training comprises two stages.
The first stage is PRE-TRAINING. Every RBM is trained in sequence, without a supervisor, on the input set (without targets). After this, the weights of the RBMs are transferred to the corresponding hidden layers of the neural network.
The second stage is FINE-TUNING, where the neural network is trained with a supervisor. Detailed information about it was provided in the previous article, so we don't have to repeat ourselves here. I will simply mention that, unlike the "deepnet" package we used in the previous article, the "darch" package gives us wider opportunities for building and tuning the model. More details will be provided when creating the model. Fig. 1 shows the structure and the training process of DN_SRBM.
Fig. 1. Structure of DN SRBM
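To make the two training stages concrete, here is a minimal sketch of the DN_SRBM flow using the "darch" package. It is written against the 0.10-era API (newDArch / preTrainDArch / fineTuneDArch); the argument names and all numeric settings below are illustrative assumptions, not the article's final code.

```r
# Minimal sketch of the two-stage DN_SRBM training flow with "darch".
# NOTE: assumes the 0.10-era API; argument names and numeric values
# are assumptions for illustration, not the author's configuration.
library(darch)

# Toy data: 10 predictors and a two-column (one-hot) target matrix
x <- matrix(runif(1000 * 10), ncol = 10)
y <- cbind(as.integer(x[, 1] > 0.5), as.integer(x[, 1] <= 0.5))

# Network: 10 inputs -> two hidden layers of 50 neurons -> 2 outputs
dnn <- newDArch(layers = c(10, 50, 50, 2), batchSize = 50)

# Stage 1: unsupervised pre-training of the stacked RBMs (inputs only,
# no targets); the RBM weights are then carried over to the
# corresponding hidden layers of the network
dnn <- preTrainDArch(dnn, x, maxEpoch = 10)

# Stage 2: supervised fine-tuning of the whole network with the targets
dnn <- fineTuneDArch(dnn, x, y, maxEpoch = 30)
```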
Author: Vladimir Perervenko