Unleashing the Power of Autoencoders: Applications and Use Cases

Premanand S Last Updated : 04 Dec, 2023
10 min read

Introduction

Extracting important insights from complicated datasets is the key to success in the era of data-driven decision-making. Enter autoencoders, deep learning's hidden heroes. These interesting neural networks can compress, reconstruct, and extract important information from data. Autoencoders have transformed the field of machine learning by revealing hidden patterns, lowering dimensionality, identifying abnormalities, and even producing new content. Join us as we explore the realm of autoencoders using encoders and decoders, demystify their inner workings, investigate their diverse applications, and experience the revolutionary impact they may have on your data analysis endeavors.

Learn More: A Gentle Introduction to Autoencoders for Data Science Enthusiasts

This article was published as a part of the Data Science Blogathon.

Layman Explanation of Autoencoders

To understand this better, consider a photographer who takes a high-resolution photo of a location and then creates a lower-resolution thumbnail of it. The thumbnail may not have as much detail as the original shot, but it still provides an excellent depiction of the scene. Similarly, an autoencoder compresses a high-dimensional dataset into a lower-dimensional representation that can be utilized for anomaly identification or data visualization.

Image compression is one application where autoencoders might be helpful. By training an autoencoder on a large dataset of images, the model can learn to identify the essential elements of the image and compress it into a smaller representation while retaining high image quality. This can be handy when storage space or network bandwidth is limited.

So, an autoencoder is an artificial neural network that learns in an unsupervised manner. Autoencoders are typically used for dimensionality reduction, feature learning, and data compression. They learn a compressed representation of a dataset and then use it to reconstruct the original data with little information loss.

An encoder translates the input data to a lower-dimensional representation, while a decoder converts the lower-dimensional representation back to the original input space. The encoder and decoder are trained concurrently to minimize reconstruction error using a loss function such as mean squared error.

Autoencoders are helpful when working with high-dimensional data such as images, music, or text. By learning a compressed version of the data, they can reduce its dimensionality while preserving its essential characteristics. Anomaly detection is another prominent application for autoencoders. Because autoencoders can learn to reconstruct normal data with minimal loss, any data point with a high reconstruction error can be classified as an anomaly.

Architecture of an Autoencoder

An autoencoder’s architecture comprises two components: the encoder and the decoder. The encoder turns the input data into a lower-dimensional representation, which the decoder uses to reconstruct the original input data as precisely as possible. The encoder and decoder are trained simultaneously in an unsupervised manner, meaning the network does not need labeled data to learn the mapping between input and output. Here’s a step-by-step breakdown of the autoencoder architecture:

Encoder: The encoder maps the input data to a lower-dimensional representation through one or more layers of neural networks, progressively compressing the input.

Latent Space: The latent space is the encoder’s learned lower-dimensional representation of the input data. It is frequently much smaller than the input data and captures the data’s most important properties.

Decoder: The compressed representation (latent space) is fed into the decoder, which reconstructs the original input data. Like the encoder, the decoder comprises several layers of neural networks. Its last layer outputs the reconstructed data, which should be as close to the original input as possible.

Loss Function: To evaluate the reconstruction’s quality, we can use a loss function such as mean squared error (MSE) or binary cross-entropy. The loss function measures the difference between the input and the reconstructed data, and the network is trained to minimize it. During training, backpropagation updates the encoder and decoder, adjusting the network’s weights and biases to reduce the loss.

Training: We train the encoder and decoder simultaneously, so the complete network learns end-to-end. The goal is to learn a compressed representation of the input data that captures the essential features while minimizing the reconstruction error.
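To make the mapping concrete, below is a minimal Keras sketch of how these pieces fit together; the layer sizes and the MSE loss are illustrative choices rather than a prescription.

from tensorflow import keras
from tensorflow.keras import layers

# Encoder: compress a flattened 28x28 input into a 32-dimensional latent space.
inputs = keras.Input(shape=(784,))
latent = layers.Dense(32, activation="relu")(inputs)

# Decoder: reconstruct the 784-dimensional input from the latent code.
outputs = layers.Dense(784, activation="sigmoid")(latent)

# Loss function and training: the input doubles as the target, so no labels are needed.
autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x, x, epochs=20, batch_size=128)   # x: your flattened training data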

Applications of Autoencoders

Image and Audio Compression: Autoencoders can compress large images or audio files while retaining most of the vital information. The autoencoder is trained to reconstruct the original image or audio file from its compressed representation.

Anomaly Detection: Autoencoders can detect anomalies or outliers in a dataset. The autoencoder is trained only on normal data; any input it cannot reconstruct accurately is flagged as an anomaly.

Dimensionality Reduction: Autoencoders can reduce the dimensionality of high-dimensional datasets by learning a lower-dimensional representation that captures the most relevant features of the data.

Data Generation: Autoencoders can generate new data similar to the training data. This is done by sampling from the autoencoder’s compressed (latent) representation and passing the samples through the decoder to create new data.

Denoising: Autoencoders can be used to remove noise from data. This is done by training an autoencoder to recover the clean original data from a noisy version of it.
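A minimal denoising sketch in Keras is shown below; the noise level, layer sizes, and epoch count are arbitrary choices made only for illustration.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Load and flatten MNIST, then corrupt the training images with Gaussian noise.
(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32").reshape(len(x_train), 784) / 255.0
noise_factor = 0.3
x_train_noisy = np.clip(
    x_train + noise_factor * np.random.normal(size=x_train.shape), 0.0, 1.0
).astype("float32")

# Noisy inputs, clean targets: the network learns to strip the noise away.
inputs = keras.Input(shape=(784,))
code = layers.Dense(64, activation="relu")(inputs)
outputs = layers.Dense(784, activation="sigmoid")(code)
denoiser = keras.Model(inputs, outputs)
denoiser.compile(optimizer="adam", loss="binary_crossentropy")
denoiser.fit(x_train_noisy, x_train, epochs=20, batch_size=128)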

Recommender System: Autoencoders can turn users’ preferences into personalized recommendations. This is done by training an autoencoder to learn a compressed representation of a user’s history of interactions with the system and then using this representation to forecast the user’s preferences for new items.
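As a rough illustration of this idea, here is an AutoRec-style sketch on synthetic data; the interaction matrix, layer sizes, and training settings are all made-up assumptions, not a production recipe.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical user-item interaction matrix: rows are users, columns are items,
# entries are implicit feedback (1 = interacted, 0 = unknown).
num_users, num_items = 1000, 500
interactions = (np.random.rand(num_users, num_items) > 0.95).astype("float32")

# Compress each user's interaction vector and reconstruct it; the reconstructed
# scores for items the user has not seen act as predicted preferences.
inputs = keras.Input(shape=(num_items,))
code = layers.Dense(64, activation="relu")(inputs)
outputs = layers.Dense(num_items, activation="sigmoid")(code)
recommender = keras.Model(inputs, outputs)
recommender.compile(optimizer="adam", loss="binary_crossentropy")
recommender.fit(interactions, interactions, epochs=10, batch_size=64, verbose=0)

# Recommend: score all items for one user and rank those not yet interacted with.
scores = recommender.predict(interactions[0:1])[0]
scores[interactions[0] == 1] = -1.0        # mask items the user already has
top_items = np.argsort(scores)[::-1][:10]  # indices of the 10 highest-scoring items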

Advantages of Autoencoders

  1. Firstly, autoencoders can learn to represent input data in compressed form. By compressing the data into a lower-dimensional latent space, they can capture its most salient characteristics. These learned features may be useful for subsequent classification, clustering, or anomaly detection tasks.
  2. Because autoencoders can be trained on unlabeled data, they are well suited to unsupervised learning settings where labeled data is scarce or unavailable. By learning to reconstruct the input data without explicit labels, autoencoders can uncover underlying patterns or structures in the data.
  3. We can use autoencoders for data compression by encoding the input data into a lower-dimensional form. This is beneficial for storage and transmission since it reduces the required storage space or network bandwidth while allowing accurate reconstruction of the original data.
  4. Moreover, autoencoders can identify data anomalies or outliers. An autoencoder learns to consistently reconstruct normal data instances by training it on normal data patterns. Anomalies or outliers that deviate greatly from the learned patterns will have increased reconstruction errors, making them detectable.
  5. VAEs (variational autoencoders) are a type of autoencoder that can be used for generative modeling. By sampling from the learned latent-space distribution, VAEs can generate new data samples, which is useful for tasks such as image or text generation.
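To illustrate the generative use case in point 5, here is a minimal variational autoencoder sketch, loosely following the standard Keras VAE pattern. It targets TensorFlow 2.x / tf.keras; the latent dimension, layer sizes, and the 1/784 scaling of the KL term are arbitrary choices made only to keep the example short and roughly balanced.

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 2

# Custom layer that adds the KL-divergence term to the model loss.
class KLDivergence(layers.Layer):
    def call(self, inputs):
        z_mean, z_log_var = inputs
        kl = -0.5 * tf.reduce_mean(
            tf.reduce_sum(1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)
        )
        self.add_loss(kl / 784.0)  # rough scaling to match the per-pixel reconstruction loss
        return z_mean

# Sampling layer: reparameterization trick, z = mean + sigma * epsilon.
class Sampling(layers.Layer):
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

# Encoder: predict the mean and log-variance of a Gaussian over the latent space.
enc_in = keras.Input(shape=(784,))
h = layers.Dense(256, activation="relu")(enc_in)
z_mean = layers.Dense(latent_dim)(h)
z_log_var = layers.Dense(latent_dim)(h)
z_mean = KLDivergence()([z_mean, z_log_var])
z = Sampling()([z_mean, z_log_var])

# Decoder: map a latent sample back to pixel space.
dec_in = keras.Input(shape=(latent_dim,))
dec_h = layers.Dense(256, activation="relu")(dec_in)
decoder = keras.Model(dec_in, layers.Dense(784, activation="sigmoid")(dec_h))

vae = keras.Model(enc_in, decoder(z))
vae.compile(optimizer="adam", loss="binary_crossentropy")  # reconstruction term

# Train on flattened, normalized MNIST (the input doubles as the target), then
# generate new digits by decoding random draws from the N(0, I) prior.
(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32").reshape(len(x_train), 784) / 255.0
vae.fit(x_train, x_train, epochs=10, batch_size=128)
generated = decoder.predict(np.random.normal(size=(10, latent_dim)))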

Disadvantages of Autoencoders

  1. Firstly, autoencoders can learn trivial solutions in which the model fails to capture relevant properties and instead memorizes or replicates the input data. This limits how well they generalize and restricts their real-world applicability.
  2. Autoencoders may fail to capture complex relationships in the data when working with high-dimensional or structured inputs, resulting in poor reconstruction or feature extraction.
  3. Furthermore, autoencoder training can be computationally expensive, especially for deep or intricate architectures. This can be challenging when working with large datasets or limited processing resources.
  4. Lastly, autoencoders frequently require substantial training data to learn meaningful representations. Inadequate data can lead to overfitting, which occurs when the model fails to generalize well to new data.

Implementation of Autoencoders

1. Importing Libraries

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt

2. Importing Datasets

(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()

3. Normalization

x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

4. Reshaping the Data

x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))

5. Encoding Architecture

# Three Conv2D + MaxPooling2D stages compress each 28x28x1 image into a 4x4x8 latent volume
encoder_inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(16, 3, activation="relu", padding="same")(encoder_inputs)
x = layers.MaxPooling2D(2, padding="same")(x)
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling2D(2, padding="same")(x)
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
encoder_outputs = layers.MaxPooling2D(2, padding="same")(x)

encoder = keras.Model(encoder_inputs, encoder_outputs, name="encoder")
encoder.summary()

6. Decoding Architecture

# Mirror of the encoder: Conv2D + UpSampling2D stages expand the 4x4x8 code back to 28x28x1.
# The third Conv2D uses default "valid" padding so the output reaches 28x28 after the final upsampling.
decoder_inputs = keras.Input(shape=(4, 4, 8))
x = layers.Conv2D(8, 3, activation="relu", padding="same")(decoder_inputs)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
x = layers.UpSampling2D(2)(x)
decoder_outputs = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

decoder = keras.Model(decoder_inputs, decoder_outputs, name="decoder")
decoder.summary()

7. Defining Autoencoder as a Sequential Model

autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

8. Training

autoencoder.fit(x_train, x_train, epochs=10, batch_size=128,
                validation_data=(x_test, x_test))

9. Encoding and Decoding the Test Images

encoded_imgs = encoder.predict(x_test)
decoded_imgs = autoencoder.predict(x_test)
n = 10  # Number of images to display
plt.figure(figsize=(20, 4))
for i in range(n):
    # Display original image
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # Display reconstructed image
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

Implementation of Autoencoders – Feature Extraction

Autoencoders can serve several purposes, and one of the most important is feature extraction. Here we will see how to use an autoencoder to extract features.

1. Importing Libraries

import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Input, Dense

2. Loading Dataset

(x_train, _), (x_test, _) = mnist.load_data()

3. Normalization

x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))

4. Autoencoder Architecture

# 784-dimensional input compressed to a 64-unit code, then reconstructed
input_img = Input(shape=(784,))
encoded = Dense(64, activation='relu')(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)

5. Model

autoencoder = Model(input_img, decoded)

# Compile the model
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

6. Training

autoencoder.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=True,
                validation_data=(x_test, x_test))

7. Extracting Encoded Feature

encoder = Model(input_img, encoded)
encoded_imgs = encoder.predict(x_test)

8. Plotting Features

n = 10  # Number of images to display
plt.figure(figsize=(20, 4))
for i in range(n):
    # Display the original image
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # Display the encoded feature vector
    ax = plt.subplot(2, n, i + n + 1)
    plt.imshow(encoded_imgs[i].reshape(8, 8))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

Implementation of Autoencoders – Dimensionality Reduction

1. Importing Libraries

import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras.datasets import mnist

2. Importing the Dataset

(x_train, y_train), (x_test, y_test) = mnist.load_data()

3. Normalization

x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.

4. Flattening

x_train_flat = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test_flat = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))

5. Autoencoder Architecture

# 784-dimensional input compressed to a 32-dimensional code
input_dim = 784
encoding_dim = 32

input_layer = keras.Input(shape=(input_dim,))
encoder = keras.layers.Dense(encoding_dim, activation='relu')(input_layer)
decoder = keras.layers.Dense(input_dim, activation='sigmoid')(encoder)

autoencoder = keras.models.Model(inputs=input_layer, outputs=decoder)

# Compile autoencoder
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

6. Training

history = autoencoder.fit(x_train_flat, x_train_flat,
                          epochs=50,
                          batch_size=256,
                          shuffle=True,
                          validation_data=(x_test_flat, x_test_flat))

7. Use the Encoder to Encode Input Data into a Lower-Dimensional Representation

encoder_model = keras.models.Model(inputs=input_layer, outputs=encoder)
encoded_data = encoder_model.predict(x_test_flat)

8. Plot the Encoded Data in 2D Using the First Two Principal Components

from sklearn.decomposition import PCA

pca = PCA(n_components=2)
encoded_pca = pca.fit_transform(encoded_data)

plt.scatter(encoded_pca[:, 0], encoded_pca[:, 1], c=y_test)
plt.colorbar()
plt.show()
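As a side note (a design choice, not part of the original walkthrough), the PCA step can be skipped entirely by giving the autoencoder a 2-dimensional bottleneck and plotting the encoded test data directly; this reuses the x_train_flat, x_test_flat, and y_test arrays prepared above.

# Alternative: a 2-D bottleneck lets us plot the encoded data without PCA.
inp = keras.Input(shape=(784,))
code_2d = keras.layers.Dense(2, activation='relu')(inp)
out = keras.layers.Dense(784, activation='sigmoid')(code_2d)
ae_2d = keras.models.Model(inp, out)
ae_2d.compile(optimizer='adam', loss='binary_crossentropy')
ae_2d.fit(x_train_flat, x_train_flat, epochs=20, batch_size=256, verbose=0)

points = keras.models.Model(inp, code_2d).predict(x_test_flat)
plt.scatter(points[:, 0], points[:, 1], c=y_test, s=2)
plt.colorbar()
plt.show()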

Implementation of Autoencoders – Classification

Although autoencoders are unsupervised, their compressed representations can feed a downstream model for classification or regression, with classification being the more common case. Here we will see how to use autoencoder features to train a classifier.

1. Importing Libraries

from keras.layers import Input, Dense
from keras.models import Model

2. Importing the Dataset

from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

3. Normalization

x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.

4. Flattening

input_dim = 784
x_train = x_train.reshape(-1, input_dim)
x_test = x_test.reshape(-1, input_dim)

5. Autoencoder Architecture

encoding_dim = 32
input_img = Input(shape=(input_dim,))
encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(input_dim, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)

# Compile autoencoder
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

6. Training

autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))

7. Extract Compressed Representations of MNIST Images

encoder = Model(input_img, encoded)
x_train_encoded = encoder.predict(x_train)
x_test_encoded = encoder.predict(x_test)

8. Feedforward Classifier

clf_input_dim = encoding_dim
clf_output_dim = 10
clf_input = Input(shape=(clf_input_dim,))
clf_output = Dense(clf_output_dim, activation='softmax')(clf_input)
classifier = Model(clf_input, clf_output)

# Compile classifier
classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

9. Train the Classifier

from keras.utils import to_categorical
y_train_categorical = to_categorical(y_train, num_classes=clf_output_dim)
y_test_categorical = to_categorical(y_test, num_classes=clf_output_dim)
classifier.fit(x_train_encoded, y_train_categorical,
               epochs=50,
               batch_size=256,
               shuffle=True,
               validation_data=(x_test_encoded, y_test_categorical))
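A quick sanity check (not part of the original walkthrough) is to evaluate the classifier on the held-out encoded test set:

# Evaluate the classifier on the encoded test set.
test_loss, test_acc = classifier.evaluate(x_test_encoded, y_test_categorical)
print('Test accuracy on 32-dimensional encoded features:', round(test_acc, 4))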

Implementation of Autoencoders – Anomaly Detection

Anomaly detection is a technique for identifying patterns or events in data that are unusual or abnormal compared to most of the data.

Learn More: Complete Guide to Anomaly Detection with AutoEncoders using Tensorflow

1. Importing Libraries

import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras

2. Importing the Dataset

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

3. Normalization

x_train = x_train / 255.0
x_test = x_test / 255.0

4. Flatten

x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))

5. Defining Architecture

input_dim = x_train.shape[1]
encoding_dim = 32

input_layer = keras.layers.Input(shape=(input_dim,))
encoder = keras.layers.Dense(encoding_dim, activation='relu')(input_layer)
decoder = keras.layers.Dense(input_dim, activation='sigmoid')(encoder)

autoencoder = keras.models.Model(inputs=input_layer, outputs=decoder)

# Compile the autoencoder
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

6. Training

autoencoder.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=True,
                validation_data=(x_test, x_test))

# Use the trained autoencoder to reconstruct new data points
decoded_imgs = autoencoder.predict(x_test)

7. Calculate the Mean Squared Error (MSE) Between the Original and Reconstructed Data Points

mse = np.mean(np.power(x_test - decoded_imgs, 2), axis=1)

8. Plot the Reconstruction Error Distribution

plt.hist(mse, bins=50)
plt.xlabel('Reconstruction Error')
plt.ylabel('Frequency')
plt.show()

# Set a threshold for anomaly detection (e.g., the 99th percentile of the
# reconstruction error; using np.max(mse) would leave nothing above the threshold)
threshold = np.percentile(mse, 99)

# Find the indices of the anomalous data points
anomalies = np.where(mse > threshold)[0]

# Plot the anomalous data points
n = min(len(anomalies), 10)
plt.figure(figsize=(20, 4))
for i in range(n):
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[anomalies[i]].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[anomalies[i]].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

plt.show()

Conclusion

In conclusion, autoencoders are compelling neural networks that may be used for data compression, anomaly detection, and feature extraction tasks. Furthermore, one can use autoencoders for various tasks, including computer vision, speech recognition, and natural language processing. We can train the autoencoders using multiple optimization approaches and loss functions and improve their performance by altering hyperparameters. Overall, autoencoders are a valuable tool with the potential to revolutionize the way we process and analyze complex data.

Key Takeaways:

  • Autoencoders are neural networks that encode input data into a latent space representation before decoding it to recreate the original input.
  • They can be used to reduce dimensionality, extract features, compress data, and detect anomalies, among other things.
  • Autoencoders have advantages such as learning useful features, being applicable to various data types, and working with unsupervised data.
  • Lastly, autoencoders offer a versatile collection of methods for extracting meaningful information from data and can be a beneficial addition to a data scientist’s arsenal.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Frequently Asked Questions

Q1. What are autoencoders used for?

A. Autoencoders are neural network models primarily used for unsupervised learning tasks such as dimensionality reduction, data compression, and feature extraction. They learn to reconstruct the input data and capture its essential patterns, making them useful for anomaly detection and image-denoising tasks.

Q2. What are autoencoders also known as?

A. Autoencoders are also known as auto-associators or automatic encoders. These alternative names reflect their ability to associate or encode the input data into a compressed representation and subsequently decode or reconstruct the original input.

Q3. What are the 3 essential components of an autoencoder?

A. The three essential components of an autoencoder are:
Encoder: This component compresses the input data into a lower-dimensional representation or code known as the latent space.
Decoder: The decoder takes the compressed representation and reconstructs its original input data.
Loss Function: A loss function measures the difference between the input and the reconstructed output, guiding the autoencoder’s training process.

Q4. What are examples of autoencoders?

A. Examples of autoencoders include Variational Autoencoders (VAEs), Sparse Autoencoders, Denoising Autoencoders, and Contractive Autoencoders. Each type has its own specific characteristics and is suitable for different applications based on the desired outcome and data domain.

Premanand S is a dedicated academic with over a decade of research experience, specializing in Bio-signal Processing, Machine Learning, and Deep Learning. He completed his B.Tech in 2009 from Amrita Vishwa Vidyapeetham, Bangalore, and his M.E. in 2011 from Rajalakshmi Engineering College, Chennai, where his thesis focused on Deep Learning for ECG Signal Processing.

He is pursuing his Ph.D. at VIT-Chennai, with a tentative research title of "Deep Learning Approaches for Enhanced ECG Signal Processing and Arrhythmia Classification." His research aims to leverage cutting-edge deep learning techniques to improve the accuracy and efficiency of ECG signal analysis, contributing significantly to cardiac health monitoring.

A recipient of the prestigious TCS-RSP (Research Scholarship) in 2014, Cycle 9, Premanand has become a recognized figure in the academic community. He has delivered several invited talks on Data Science, Machine Learning, and Deep Learning at prominent institutions across India.

In his role as an Assistant Professor at VIT-Chennai, he continues to inspire the next generation of researchers while advancing the boundaries of knowledge in his field.
