Discussion of article "Neural networks made easy (Part 20): Autoencoders"


New article Neural networks made easy (Part 20): Autoencoders has been published:

We continue to study unsupervised learning algorithms. Some readers may have wondered how the recent publications relate to the topic of neural networks; in this new article, we get back to neural networks directly.

In the general case, an autoencoder is a neural network consisting of two blocks: an encoder and a decoder. The encoder's input layer and the decoder's output layer contain the same number of elements. Between them lies a hidden layer, which is usually smaller than the input. During training, the neurons of this layer form a latent (hidden) state that describes the source data in compressed form.
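
To illustrate the structure, here is a minimal sketch in Python with NumPy (not the author's MQL5 code from the article); the layer sizes n_input and n_latent are arbitrary illustrative values:

import numpy as np

n_input  = 8   # illustrative: encoder input and decoder output share this size
n_latent = 3   # the latent state is smaller than the source data

rng = np.random.default_rng(0)
W_enc = rng.normal(0, 0.1, (n_latent, n_input))   # encoder weights
W_dec = rng.normal(0, 0.1, (n_input, n_latent))   # decoder weights

def encode(x):
    return np.tanh(W_enc @ x)      # compress the input to the latent state

def decode(z):
    return W_dec @ z               # reconstruct back to the input size

x = rng.normal(size=n_input)
x_hat = decode(encode(x))
print(x.shape, x_hat.shape)        # both (8,): output matches the input size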


This is similar to the data compression problem we previously solved using the Principal Component Analysis method. However, the two approaches differ in ways we will discuss later.

As mentioned above, an autoencoder is a neural network, and it is trained by the backpropagation method. The trick is that, since we use unlabeled data, the input itself serves as the target: the encoder first compresses the data down to the size of the latent state, and the decoder then restores it to the original form with minimal loss of information.
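
As a toy illustration of this training scheme (again a hedged Python/NumPy sketch, not the article's implementation), the loop below uses the input itself as the target and propagates the reconstruction error back through both blocks with manually derived gradients:

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))      # hypothetical unlabeled dataset
W_enc = rng.normal(0, 0.1, (3, 8)) # encoder: 8 inputs -> 3 latent units
W_dec = rng.normal(0, 0.1, (8, 3)) # decoder: 3 latent units -> 8 outputs
lr = 0.01                          # learning rate (illustrative value)

for epoch in range(100):
    loss = 0.0
    for x in X:
        z     = np.tanh(W_enc @ x)         # encoder: compress
        x_hat = W_dec @ z                  # decoder: reconstruct
        e     = x_hat - x                  # error against the input itself
        loss += 0.5 * e @ e
        # backpropagate the reconstruction error through both blocks
        grad_dec = np.outer(e, z)
        grad_enc = np.outer((W_dec.T @ e) * (1 - z**2), x)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    if epoch % 20 == 0:
        print(f"epoch {epoch}: mean loss {loss / len(X):.4f}")

After training, the latent vector z is the compressed description of the source data that the hidden layer has learned.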

Author: Dmitriy Gizlyk