New article "Deep Neural Networks (Part VIII). Increasing the classification quality of bagging ensembles" has been published:
The article considers three methods that can be used to improve the classification quality of bagging ensembles, and estimates their efficiency. The effects of optimizing the hyperparameters of the ELM neural networks and the postprocessing parameters are evaluated.
The figure below provides a simplified scheme of all calculations: it shows the stages, the scripts used, and the resulting data structures.
Fig. 11. Structure and sequence of the main calculations in the article.
Author: Vladimir Perervenko
Discussion and questions about the code are welcome in this thread.
Figure 11 shows the structural scheme of the calculations. Above each stage is the name of the script; under each stage is the name of the resulting data structure. Which data should you use?
If you want to use the averaged continuous predictions of the seven best ensembles, they are in the structure indexed by
k = c(origin, repaired, removed, relabeled)
j = c(half, mean, med, both)
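To make the indexing scheme concrete, here is a minimal Python sketch of such a nested container (the article's actual code is in R, and the structure's real name is not shown in this excerpt, so every name below is purely illustrative): the outer level selects the noise-handling variant k, the inner level selects the pruning/averaging variant j.

```python
# Hypothetical layout: for each (k, j) pair, a vector of averaged
# continuous predictions of the 7 best ensembles (one value per sample).
k_values = ["origin", "repaired", "removed", "relabeled"]
j_values = ["half", "mean", "med", "both"]

predictions = {
    k: {j: [0.0] * 5 for j in j_values}  # placeholder prediction vectors
    for k in k_values
}

# Access pattern analogous to struct$k$j in R:
pred = predictions["repaired"]["mean"]
print(len(pred))
```

The same two-level indexing applies to the binary predictions mentioned below; only the stored values differ (class labels instead of continuous scores).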
If you need the predictions of the seven best ensembles in binary form, they are in the structure