How to install and launch it?
Deep Neural Network with Stacked RBM. Self-training, self-control

- 2016.04.26
- Vladimir Perervenko
- www.mql5.com
Forward testing the models with optimal parameters
Let us check how long the optimal DNN parameters keep producing results of acceptable quality on "future" quote values. The test will be performed in the environment remaining after the previous optimization and testing, as follows.
Use a moving window of 1350 bars (train = 1000, test = 350; of the test part, the first 250 samples are used for validation and the last 100 for testing) with a step of 100 to go through the data after the first (4000 + 100) bars used for pretraining. Make 10 steps "forward" (the window arithmetic is sketched after the list below). At each step, two models will be trained and tested:
- one, fine-tuning the pretrained DNN on the new range at each step;
- the other, additionally training the DNN.opt, obtained after optimization at the fine-tuning stage, on the new range.
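To make the windowing concrete, the approximate bar ranges covered at each step can be computed directly. This is an illustration only, not code from the article; the exact offsets are handled by SplitData(), so the assumption that the window begins right after the (4000 + 100) pretraining bars is mine:

# Illustration only (not from the article): approximate bar indices of the
# 1350-bar moving window at each forward step, assuming it starts right
# after the (4000 + 100) pretraining bars and shifts by 100 bars per step.
steps <- 1:10
windows <- data.frame(
  step       = steps,
  train.from = 4000 + steps * 100 + 1,
  train.to   = 4000 + steps * 100 + 1000,
  val.from   = 4000 + steps * 100 + 1001,  # first 250 bars of the test block
  val.to     = 4000 + steps * 100 + 1250,
  test.from  = 4000 + steps * 100 + 1251,  # last 100 bars of the test block
  test.to    = 4000 + steps * 100 + 1350
)
windows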
#---prepare----
evalq({
  step <- 1:10
  dt <- PrepareData(Data, Open, High, Low, Close, Volume)
  # Prepare 10 shifted data sets: split, cap outliers, normalize
  DTforv <- foreach(i = step, .packages = "dplyr") %do% {
    SplitData(dt, 4000, 1000, 350, 10, start = i*100) %>%
      CappingData(., impute = T, fill = T, dither = F, pre.outl = pre.outl) %>%
      NormData(., preproc = preproc) -> DTn
    # Drop the v.rstl and v.pcci predictors from all four subsets
    foreach(i = 1:4) %do% {
      DTn[[i]] %>% dplyr::select(-c(v.rstl, v.pcci))
    } -> DTn
    list(pretrain = DTn[[1]],
         train = DTn[[2]],
         val = DTn[[3]],
         test = DTn[[4]]) -> DTn
    # Split each subset into predictors (x) and the target (y)
    list(
      pretrain = list(
        x = DTn$pretrain %>% dplyr::select(-c(Data, Class)) %>% as.data.frame(),
        y = DTn$pretrain$Class %>% as.data.frame()
      ),
      train = list(
        x = DTn$train %>% dplyr::select(-c(Data, Class)) %>% as.data.frame(),
        y = DTn$train$Class %>% as.data.frame()
      ),
      test = list(
        x = DTn$val %>% dplyr::select(-c(Data, Class)) %>% as.data.frame(),
        y = DTn$val$Class %>% as.data.frame()
      ),
      test1 = list(
        x = DTn$test %>% dplyr::select(-c(Data, Class)) %>% as.data.frame(),
        y = DTn$test$Class %>% as.vector()
      )
    )
  }
}, env)
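As a quick sanity check (not part of the original listing), the structure of the first prepared window can be inspected:

# Inspect the first of the 10 prepared windows (illustration only)
evalq(str(DTforv[[1]], max.level = 2), env)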
Perform the first part of the forward test using the pretrained DNN and the optimal hyperparameters obtained from the SRBM + upperLayer + BP training variant.
#----#---SRBM + upperLayer + BP----
evalq({
  #--BestParams--------------------------
  best.par <- OPT_Res3$Best_Par %>% unname
  # n1, n2, fact1, fact2, dr1, dr2, Lr.rbm, Lr.top, Lr.fine
  n1 = best.par[1]; n2 = best.par[2]
  fact1 = best.par[3]; fact2 = best.par[4]
  dr1 = best.par[5]; dr2 = best.par[6]
  Lr.rbm = best.par[7]
  Lr.top = best.par[8]
  Lr.fine = best.par[9]
  Ln <- c(0, 2*n1, 2*n2, 0)
  foreach(i = step, .packages = "darch") %do% {
    DTforv[[i]] -> X
    # Take the pretrained DNN as the starting point (set once; it
    # persists across the steps of the loop)
    if (i == 1) Res3$Dnn -> Dnn
    #----train/test-------
    fineTuneBP(Ln, fact1, fact2, dr1, dr2, Dnn, Lr.fine) -> Dnn.opt
    predict(Dnn.opt, newdata = X$test$x %>% tail(100), type = "class") -> Ypred
    yTest <- X$test$y[ ,1] %>% tail(100)
    #numIncorrect <- sum(Ypred != yTest)
    #Score <- 1 - round(numIncorrect/nrow(xTest), 2)
    Evaluate(actual = yTest, predicted = Ypred)$Metrics[ ,2:5] %>%
      round(3)
  } -> Score3_dnn
}, env)
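fineTuneBP() is defined in the previous part of the series and is not reproduced here. Purely as a reminder of the idea, and not the article's actual definition, resuming backpropagation training of an existing darch model could be sketched as follows (the epoch count and the omission of the dropout and activation settings are assumptions):

# NOT the article's definition of fineTuneBP(): a minimal sketch of
# resuming backpropagation fine-tuning with the darch package.
# The epoch count is an assumed placeholder.
library(darch)
fineTuneBP.sketch <- function(Dnn, x, y, Lr.fine, numEpochs = 50) {
  darch(darch = Dnn,                  # resume training of the existing model
        x = x, y = y,
        bp.learnRate = Lr.fine,       # backpropagation learning rate
        darch.numEpochs = numEpochs,
        retainData = FALSE)
}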
The second stage of the forward test, using the Dnn.opt obtained during optimization:
evalq({
  foreach(i = step, .packages = "darch") %do% {
    DTforv[[i]] -> X
    # This time take Dnn.opt from the optimization as the starting point
    if (i == 1) {Res3$Dnn.opt -> Dnn}
    #----train/test-------
    fineTuneBP(Ln, fact1, fact2, dr1, dr2, Dnn, Lr.fine) -> Dnn.opt
    predict(Dnn.opt, newdata = X$test$x %>% tail(100), type = "class") -> Ypred
    yTest <- X$test$y[ ,1] %>% tail(100)
    #numIncorrect <- sum(Ypred != yTest)
    #Score <- 1 - round(numIncorrect/nrow(xTest), 2)
    Evaluate(actual = yTest, predicted = Ypred)$Metrics[ ,2:5] %>%
      round(3)
  } -> Score3_dnnOpt
}, env)
Compare the testing results, placing them in a table (an aggregated per-step view is sketched after the table):
env$Score3_dnn
env$Score3_dnnOpt
iter | class | Accuracy (dnn) | Precision (dnn) | Recall (dnn) | F1 (dnn) | Accuracy (dnnOpt) | Precision (dnnOpt) | Recall (dnnOpt) | F1 (dnnOpt)
---|---|---|---|---|---|---|---|---|---
1 | -1 | 0.76 | 0.737 | 0.667 | 0.700 | 0.77 | 0.732 | 0.714 | 0.723
1 | 1 | 0.76 | 0.774 | 0.828 | 0.800 | 0.77 | 0.797 | 0.810 | 0.803
2 | -1 | 0.79 | 0.880 | 0.746 | 0.807 | 0.78 | 0.836 | 0.780 | 0.807
2 | 1 | 0.79 | 0.700 | 0.854 | 0.769 | 0.78 | 0.711 | 0.780 | 0.744
3 | -1 | 0.69 | 0.807 | 0.697 | 0.748 | 0.67 | 0.824 | 0.636 | 0.718
3 | 1 | 0.69 | 0.535 | 0.676 | 0.597 | 0.67 | 0.510 | 0.735 | 0.602
4 | -1 | 0.71 | 0.738 | 0.633 | 0.681 | 0.68 | 0.681 | 0.653 | 0.667
4 | 1 | 0.71 | 0.690 | 0.784 | 0.734 | 0.68 | 0.679 | 0.706 | 0.692
5 | -1 | 0.56 | 0.595 | 0.481 | 0.532 | 0.55 | 0.578 | 0.500 | 0.536
5 | 1 | 0.56 | 0.534 | 0.646 | 0.585 | 0.55 | 0.527 | 0.604 | 0.563
6 | -1 | 0.61 | 0.515 | 0.829 | 0.636 | 0.66 | 0.564 | 0.756 | 0.646
6 | 1 | 0.61 | 0.794 | 0.458 | 0.581 | 0.66 | 0.778 | 0.593 | 0.673
7 | -1 | 0.67 | 0.550 | 0.595 | 0.571 | 0.73 | 0.679 | 0.514 | 0.585
7 | 1 | 0.67 | 0.750 | 0.714 | 0.732 | 0.73 | 0.750 | 0.857 | 0.800
8 | -1 | 0.65 | 0.889 | 0.623 | 0.733 | 0.68 | 0.869 | 0.688 | 0.768
8 | 1 | 0.65 | 0.370 | 0.739 | 0.493 | 0.68 | 0.385 | 0.652 | 0.484
9 | -1 | 0.55 | 0.818 | 0.562 | 0.667 | 0.54 | 0.815 | 0.550 | 0.657
9 | 1 | 0.55 | 0.222 | 0.500 | 0.308 | 0.54 | 0.217 | 0.500 | 0.303
10 | -1 | 0.71 | 0.786 | 0.797 | 0.791 | 0.71 | 0.786 | 0.797 | 0.791
10 | 1 | 0.71 | 0.533 | 0.516 | 0.525 | 0.71 | 0.533 | 0.516 | 0.525
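To see the degradation at a glance, the per-step metrics can also be collapsed into average accuracies. This is a sketch, not from the article; it assumes each list element is the metrics data frame returned by Evaluate() above, with an Accuracy column and one row per class:

# Illustration only: mean Accuracy per forward step for both variants
evalq({
  data.frame(
    step        = seq_along(Score3_dnn),
    acc.dnn     = sapply(Score3_dnn,    function(m) mean(m$Accuracy)),
    acc.dnn.opt = sapply(Score3_dnnOpt, function(m) mean(m$Accuracy))
  )
}, env)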
The table shows that the first two steps produce good results. The quality is practically the same for both variants at those first two steps, and then it falls. Therefore, it can be assumed that after optimization and testing, the DNN will maintain classification quality at the level of the test set on at least the following 200-250 bars.
Many other combinations for additionally training the models during forward tests were mentioned in the previous article, and there are numerous adjustable hyperparameters besides.
Hi, what's the question?

New article Deep Neural Networks (Part V). Bayesian optimization of DNN hyperparameters has been published:
The article considers the possibility of applying Bayesian optimization to the hyperparameters of deep neural networks obtained with various training variants. The classification quality of a DNN with the optimal hyperparameters is compared across the training variants. How long the optimal DNN hyperparameters remain effective has been checked in forward tests. Possible directions for improving the classification quality have been determined.
The result is good. Let us plot a graph of the training history:
plot(env$Res1$Dnn.opt, type = "class")
Fig. 2. History of DNN training for the SRBM + RP variant
As can be seen from the figure, the error on the validation set is lower than the error on the training set. This means that the model is not overfitted and has good generalization ability. The red vertical line marks the results of the model deemed the best, which is returned after training.
For the other three training variants, only the calculation results and the history graphs will be provided, without further details. Everything is calculated in the same way.
Author: Vladimir Perervenko