Discussion of article "Deep Neural Networks (Part VI). Ensemble of neural network classifiers: bagging" - page 2

 
elibrarius:

Version

It works. It gives the same network weights on every run. I compared the second network: I printed env$Ens[2] and compared the output with a diff plugin in Notepad++.
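The same check can also be done directly in R, without Notepad++ (a sketch; env$Ens is the ensemble object from the article's script):

# Save the ensemble after the first run
saveRDS(env$Ens, "ens_run1.rds")
# ...restart R, rerun the script, then compare...
ens1 <- readRDS("ens_run1.rds")
identical(ens1, env$Ens)  # TRUE if the runs are bit-for-bit reproducible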

It didn't work with multithreading:

Error in setMKLthreads(2) : could not find function "setMKLthreads"

What is this function? It is not in the code of articles 4 and 6. How do I load it?

PS: It would have been more convenient if you had posted the R session with all functions and source data.

I have posted all the functions and executable scripts. Execute them sequentially, copying them either from the article or from GitHub.

The "setMKLthreads" function comes with Microsoft R Open (MRO), not with plain R.
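If you stay on plain R, a minimal guard keeps the script runnable either way (a sketch, assuming setMKLthreads() comes from MRO's RevoUtilsMath package):

# setMKLthreads() ships with Microsoft R Open, not with plain CRAN R
if (requireNamespace("RevoUtilsMath", quietly = TRUE)) {
  RevoUtilsMath::setMKLthreads(2)  # limit Intel MKL to two threads
} else {
  message("MKL not available; skipping thread setting")
}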

 
I forgot to ask: do you have MRO 3.4.3 installed?
 
Vladimir Perervenko:
I forgot to ask: do you have MRO 3.4.3 installed?
I have R-3.4.3 for Windows, installed from https://cloud.r-project.org/
 
elibrarius:
I have R-3.4.3 for Windows, installed from https://cloud.r-project.org/

Just comment out the lines that set the thread count. The Intel MKL library does not come with pure R.

 
Vladimir Perervenko:

Just comment out the lines that set the thread count. The Intel MKL library does not come with pure R.

That's what I did. I ran the optimisation twice to check and got the same result:

   numFeature  r nh fact Value
1          11  8 19    4 0.768
2           8  8 18    4 0.754
3          11  8 15    4 0.753
4          11  9 13    8 0.750
5          12  8 15    4 0.750
6           9  8 39    4 0.748
7          10  8  6    3 0.745
8          11  8 20    6 0.743
9          10  8 14    3 0.743
10          8  9 40    7 0.743

A bit worse than yours, but I think that is just a less lucky run of the RNG.

 
Vladimir Perervenko:

The Intel MKL library does not come with pure R.

I wanted to download MKL. They asked me to register - I did - and they displayed the following message:
Thank you for registering for Intel® Performance Libraries.
Please check your email for instructions to download your product. Note that this may take up to two business days.

Twenty minutes later I still have not received a download link. Are they serious about two business days?

 
elibrarius:

That's what I did. I ran the optimisation twice to check and got the same result:

   numFeature  r nh fact Value
1          11  8 19    4 0.768
2           8  8 18    4 0.754
3          11  8 15    4 0.753
4          11  9 13    8 0.750
5          12  8 15    4 0.750
6           9  8 39    4 0.748
7          10  8  6    3 0.745
8          11  8 20    6 0.743
9          10  8 14    3 0.743
10          8  9 40    7 0.743

A bit worse than yours, but I think that is just a less lucky run of the RNG.

I always use the doRNG package with foreach (it provides a very stable RNG).

This should not be the case. Each new run of the optimisation should produce different results!
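For illustration, a minimal self-contained sketch of that doRNG pattern (not the article's code; the seed value is arbitrary):

library(foreach)
library(doParallel)
library(doRNG)  # reproducible RNG streams for parallel foreach loops

cl <- parallel::makeCluster(2)
registerDoParallel(cl)
set.seed(42)  # one seed fixes all %dorng% streams
res <- foreach(i = 1:4, .combine = c) %dorng% rnorm(1)
parallel::stopCluster(cl)
res  # identical on every run with the same seed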

 

I have just run the optimisation and got:

 Best Parameters Found: 
Round = 18      numFeature = 8.0000     r = 1.0000      nh = 34.0000    fact = 10.0000  Value = 0.7700 
> evalq({
+   OPT_Res %$% History %>% dplyr::arrange(desc(Value)) %>% head(10) %>%
+     dplyr::select(-Round) -> best.init
+   best.init
+ }, env)
   numFeature  r nh fact Value
1           8  1 34   10 0.770
2           7  1 15   10 0.766
3          11  2 15   10 0.765
4           9  1 36   10 0.765
5           3  7 13    5 0.761
6           7  8  8   10 0.748
7          11  6 29   10 0.748
8           3 10 49    1 0.748
9           7  7 23   10 0.746
10          3  1  1   10 0.745

If you restart the optimisation seeded with the resulting top-10 parameter sets, you get further variants. Like this:

#---Optim  Ensemble-----
# Restart the Bayesian optimisation, seeding it with the top-10
# parameter sets (best.init) collected from the previous run
evalq(
  OPT_Res <- BayesianOptimization(fitnes, bounds = bonds,
                                  init_grid_dt = best.init, init_points = 10, 
                                  n_iter = 20, acq = "ucb", kappa = 2.576, 
                                  eps = 0.0, verbose = TRUE)
  , envir = env)

You can continue as many times as you want.
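To automate several such restart cycles, a sketch along these lines would do (assuming the fitnes function, the bonds list and the env environment from the article are in scope, with magrittr, dplyr and rBayesianOptimization loaded):

# Repeat the restart cycle three times: take the current top-10 and reseed
evalq({
  for (k in 1:3) {
    OPT_Res %$% History %>% dplyr::arrange(desc(Value)) %>%
      head(10) %>% dplyr::select(-Round) -> best.init
    OPT_Res <- BayesianOptimization(fitnes, bounds = bonds,
                                    init_grid_dt = best.init, init_points = 10,
                                    n_iter = 20, acq = "ucb", kappa = 2.576,
                                    eps = 0.0, verbose = TRUE)
  }
}, env)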

Good luck

 
Vladimir Perervenko:

I always use the doRNG package with foreach (it provides a very stable RNG).

This should not be the case. Each new run of the optimisation should produce different results!

I tried your variant twice and got different results.
It seems to me that reproducibility/repeatability across restarts is even better.

 
elibrarius:

I tried your variant twice and got different results.
It seems to me that reproducibility/repeatability across restarts is even better.

Do you see the difference now? Read the article carefully: I specifically highlighted this feature of Bayesian optimisation.

Good luck with your experiments