4 Impressive GAN Libraries Every Data Scientist Should Know!

Shipra Saxena Last Updated : 23 May, 2023
9 min read

Introduction

Generative Adversarial Network (GAN) is currently considered one of the most exciting research areas in computer vision. Its ability to generate and manipulate images is unmatched, and as a data scientist, not exploring it would be a blunder. Yann LeCun famously described GANs as "the most interesting idea in machine learning in the last 10 years."


When I worked with GANs for the first time, I developed one from scratch using PyTorch, and it was indeed a tedious task. It becomes even more difficult when the output is not satisfactory and you want to try another architecture, as now you have to rewrite the code. Fortunately, researchers at various tech giants have developed several GAN libraries to help you explore and develop GAN-based applications.

In this article, we are going to see 4 interesting GAN libraries you should definitely know about. To begin with, I will also give you a quick overview of GANs.

I recommend you check out our comprehensive Computer Vision Program to get started in this field.

Learning Objectives:

  • Understand what a Generative Adversarial Network (GAN) is.
  • Get an overview and know the applications of 4 different GAN libraries.
  • Learn the implementation of these 4 prominent GAN libraries.

A Quick Overview of GANs

GANs were introduced by Ian Goodfellow in 2014 and are a state-of-the-art deep learning method. They are members of the Generative Model family and go through adversarial training.

Generative Modeling is a powerful method in which a network learns the distribution of the input data and tries to generate new data points from a similar distribution. Examples of generative models include Autoencoders, Boltzmann Machines, Generative Adversarial Networks, Bayesian Networks, etc.

Architecture of GANs

GAN architecture (Source: https://www.oreilly.com/)

GANs consist of two neural networks, a Generator G and a Discriminator D. Further, these two models are involved in a zero-sum game during training.

The Generator network learns the distribution of the training data. When we provide random noise as input, it generates synthetic data that tries to imitate the training samples.

Now here comes the Discriminator model (D). It assigns a label, real or fake, to the data generated by G on the basis of the data distribution, i.e., it decides whether a new image comes from the training set or is artificially generated.

When D successfully recognizes an image as real or fake, the Generator's loss increases. Similarly, when G succeeds in constructing good-quality images similar to the real ones and fools D, the Discriminator's loss increases. The Generator learns from this process and produces better, more realistic images in the next iteration.

Training can thus be viewed as a two-player min-max game in which the performance of both networks improves over time. Both networks go through multiple training iterations, and with repeated updates to the model parameters (weights and biases), they reach a stable state known as the Nash Equilibrium.

What is Nash Equilibrium?

Nash equilibrium is a stable state of a system involving the interaction of different participants, in which no participant can gain by a unilateral change of strategy if the strategies of the others remain unchanged.

Ultimately, in this zero-sum game, we can generate artificial or fake images that closely resemble the real training data.
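To make the min-max game concrete, here is a minimal sketch of the alternating training loop on toy 1-D data. The two tiny networks and the "training" distribution are stand-ins chosen purely for illustration; real GANs use convolutional networks and image batches.

import torch
import torch.nn as nn

# Toy generator G (noise -> sample) and discriminator D (sample -> real/fake score)
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 2 + 3   # stand-in "training" distribution
    fake = G(torch.randn(64, 8))        # G maps random noise to synthetic samples

    # D's turn: push real samples toward label 1 and fakes toward label 0
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # G's turn: make D assign label 1 (real) to its fakes
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Each update pulls in the opposite direction: a better D raises G's loss, and a better G raises D's loss, which is exactly the zero-sum dynamic described above.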

Example:

Let’s see how useful GANs can be.

For instance, imagine that during the lockdown, you got a chance to go through your old photo album. In such a nerve-racking time, reliving your memories is a good refresher. But since this album had been lying untouched in your cupboard for years, some photographs were damaged, and that made you sad. This is precisely where GANs can help.

The image below was successfully restored with the help of GANs, using a method called Image Inpainting.

Image reconstruction using GANs (Source: https://conservancy.umn.edu/)

Image Inpainting is the art of restoring damaged images by reconstructing the missing parts from the available background information. The technique is also used to remove unwanted objects from images.
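To give a feel for the setup, here is a minimal, hedged sketch of how the inputs to an inpainting GAN are typically prepared. InpaintGenerator is a hypothetical network standing in for a real inpainting architecture; only the masking logic is shown.

import torch

image = torch.rand(1, 3, 64, 64)   # the original photograph
mask = torch.ones(1, 1, 64, 64)    # 1 = intact pixel, 0 = damaged pixel
mask[:, :, 16:32, 16:32] = 0       # mark a damaged square region

damaged = image * mask             # zero out the missing pixels
# The generator sees only the damaged image plus the mask and must
# reconstruct the missing region (hypothetical model, for illustration):
# restored = InpaintGenerator()(torch.cat([damaged, mask], dim=1))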

This was just a quick review of GANs; a deeper treatment is beyond the scope of this article.

Now let's look at some interesting GAN libraries.

TF-GAN

TensorFlow GAN, also known as TF-GAN, is an open-source, lightweight Python library. Google AI researchers developed it for the easy and effective implementation of GANs.

TF-GAN provides a well-developed infrastructure to train and evaluate Generative Adversarial Networks, along with well-tested loss functions and evaluation metrics. The library consists of various modules to implement the model and provides simple function calls that users can apply to their own data without writing the code from scratch.

It is easy to install and use, just like other packages, such as NumPy and pandas, as it provides the PyPi package. Use the following code:

#Installing the library
pip install tensorflow-gan

#Importing the library
import tensorflow_gan as tfgan

How to Generate Images from MNIST Dataset Using TF-GAN?

  1. Set up the input.

    # mnist_data_provider and FLAGS come from the TF-GAN examples repository;
    # tf.random_normal is the TF 1.x API used by the original example.
    images = mnist_data_provider.provide_data(FLAGS.batch_size)
    noise = tf.random_normal([FLAGS.batch_size, FLAGS.noise_dims])

  2. Build the generator and discriminator.

    gan_model = tfgan.gan_model(
        generator_fn=mnist.unconditional_generator,          # you define
        discriminator_fn=mnist.unconditional_discriminator,  # you define
        real_data=images,
        generator_inputs=noise)

  3. Build the GAN loss.

    gan_loss = tfgan.gan_loss(
        gan_model,
        generator_loss_fn=tfgan.losses.wasserstein_generator_loss,
        discriminator_loss_fn=tfgan.losses.wasserstein_discriminator_loss)

  4. Create the train ops, which calculate gradients and apply updates to weights.

    # TF 1.x-style optimizers, as in the original TF-GAN example
    train_ops = tfgan.gan_train_ops(
        gan_model,
        gan_loss,
        generator_optimizer=tf.train.AdamOptimizer(gen_lr, 0.5),
        discriminator_optimizer=tf.train.AdamOptimizer(dis_lr, 0.5))

  5. Run the train ops in the alternating training scheme.

    tfgan.gan_train(
        train_ops,
        hooks=[tf.train.StopAtStepHook(num_steps=FLAGS.max_number_of_steps)],
        logdir=FLAGS.train_log_dir)

Benefits of Using the TF-GAN Library:

  1. TF-GAN is compatible with TensorFlow 2.0, and you can also use it effectively alongside other frameworks.
  2. Training a generative adversarial model is a heavy processing task that used to take weeks. TF-GAN supports Cloud TPU, which can bring training down to a few hours. To learn more about how to use TF-GAN on TPU, see the tutorial by the authors of the library.
  3. If you need to compare the results of multiple papers, TF-GAN provides standard metrics that let you efficiently and easily compare different research papers without any statistical bias, as sketched below.
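As a hedged illustration of those standard metrics, the sketch below computes a Fréchet distance with tfgan.eval.frechet_classifier_distance using a dummy classifier. In practice you would plug in Inception activations (see the TF-GAN docs), and exact signatures may vary between versions.

import tensorflow as tf
import tensorflow_gan as tfgan

# Hypothetical stand-in classifier for illustration only; real evaluations
# use Inception features instead of a flatten-and-slice.
def classifier_fn(images):
    return tf.reshape(images, [tf.shape(images)[0], -1])[:, :10]

real_images = tf.random.uniform([16, 28, 28, 1])
generated_images = tf.random.uniform([16, 28, 28, 1])

fid = tfgan.eval.frechet_classifier_distance(
    real_images, generated_images, classifier_fn)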

Several research projects have been implemented with TF-GAN. To learn more about this exciting GAN library used by Google researchers, read the official documentation.

Torch-GAN

Torch-GAN is a PyTorch-based framework for writing short, easy-to-understand code for developing GANs. The package consists of various generative adversarial networks along with the utilities required to implement them.

Generally, GANs share a standard design with multiple components: the generator model, the discriminator model, the loss functions, and the evaluation metrics. TorchGAN mirrors this design through a simple API and allows you to customize the components when required.

Overview of TorchGAN design (Source: https://arxiv.org/pdf/1909.03410.pdf)

This GAN library facilitates the interaction among the components of a GAN through a highly versatile trainer that automatically adapts to user-defined models and losses.

Installing the library is simple using pip. You just need to run the following command:

pip3 install torchgan

Implementing the Torch-GAN Models

At the core of the design is a trainer module responsible for flexibility and ease of use. The user needs to provide the required specifications, i.e., the architectures of the generator and discriminator models along with their associated optimizers, as well as the loss functions and evaluation metrics.

The library gives you the freedom to choose the specifications either from the wide range available or from custom variants of your own. The image below shows the implementation of DCGAN in just about 10 lines of code, and a hedged sketch of such a script follows it. Isn't it amazing?

TorchGAN implementation of DCGAN (Source: https://arxiv.org/pdf/1909.03410.pdf)
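For reference, here is a sketch in the spirit of that listing, based on the example in the TorchGAN paper; constructor arguments and defaults may differ between library versions.

import torch
import torchvision
from torch.utils.data import DataLoader
from torchgan.models import DCGANGenerator, DCGANDiscriminator
from torchgan.losses import MinimaxGeneratorLoss, MinimaxDiscriminatorLoss
from torchgan.trainer import Trainer

# Each network entry bundles the model class, its arguments, and its optimizer
network = {
    "generator": {
        "name": DCGANGenerator,
        "args": {"out_channels": 1, "step_channels": 16},
        "optimizer": {"name": torch.optim.Adam,
                      "args": {"lr": 2e-4, "betas": (0.5, 0.999)}}},
    "discriminator": {
        "name": DCGANDiscriminator,
        "args": {"in_channels": 1, "step_channels": 16},
        "optimizer": {"name": torch.optim.Adam,
                      "args": {"lr": 2e-4, "betas": (0.5, 0.999)}}}}

# 32x32 MNIST images scaled to [-1, 1], the range DCGAN generators typically emit
transform = torchvision.transforms.Compose([
    torchvision.transforms.Resize(32),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize((0.5,), (0.5,))])
dataset = torchvision.datasets.MNIST(root="./data", download=True,
                                     transform=transform)
dataloader = DataLoader(dataset, batch_size=64, shuffle=True)

trainer = Trainer(network,
                  [MinimaxGeneratorLoss(), MinimaxDiscriminatorLoss()],
                  sample_size=64, epochs=5)
trainer(dataloader)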

Benefits of Using the TorchGAN Library:

  1. It supports a wide range of GAN architectures: name an architecture, and you will likely find a TorchGAN implementation of it, e.g., vanilla GAN, DCGAN, CycleGAN, Conditional GAN, Generative Multi-Adversarial Networks, and many more.
  2. Another important feature of the framework is its extensibility and flexibility. Torch-GAN is a comprehensive package that works efficiently with both built-in and user-defined functionality.
  3. It also provides efficient performance visualization through a Logger object, supporting console logging as well as visualization via TensorBoard and Visdom.

If you want to dig deeper, don’t forget to read the official documentation of TorchGAN.

Mimicry

With the increased research in the field, we see many implementations of GANs. It is difficult to compare implementations developed with different frameworks, trained under varying conditions, and evaluated using different metrics. Yet such comparisons are an unavoidable task for researchers, and this was the main motivation behind the development of Mimicry.


Mimicry is a lightweight PyTorch library for the reproducibility of GANs. It provides the common functionality required for training and evaluating a GAN model, allowing researchers to concentrate on the model implementation instead of repeatedly writing the same boilerplate code.

This GAN library provides standard implementations of various GAN architectures, such as DCGAN, Wasserstein GAN with Gradient Penalty (WGAN-GP), Self-Supervised GAN (SSGAN), etc. It also reports baseline scores for these GAN models at the same model size, trained under comparable conditions.

Just like the other two libraries, we can easily install Mimicry using pip, and it is ready to use.

pip install torch-mimicry

How to Implement SNGAN Using Mimicry?

Step 1: Import the required modules

import torch
import torch.optim as optim
import torch_mimicry as mmc
from torch_mimicry.nets import sngan

Step 2: Data handling objects

device = torch.device('cuda:0' if torch.cuda.is_available() else "cpu")
dataset = mmc.datasets.load_dataset(root='./datasets', name='cifar10')
dataloader = torch.utils.data.DataLoader(
        dataset, batch_size=64, shuffle=True, num_workers=4)

Step 3: Define models and optimizers

netG = sngan.SNGANGenerator32().to(device)
netD = sngan.SNGANDiscriminator32().to(device)
optD = optim.Adam(netD.parameters(), 2e-4, betas=(0.0, 0.9))
optG = optim.Adam(netG.parameters(), 2e-4, betas=(0.0, 0.9))

Step 4: Start training

trainer = mmc.training.Trainer(
    netD=netD,
    netG=netG,
    optD=optD,
    optG=optG,
    n_dis=5,
    num_steps=100000,
    lr_decay='linear',
    dataloader=dataloader,
    log_dir='./log/example',
    device=device)
trainer.train()
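Mimicry's main selling point is standardized evaluation, so here is a hedged sketch of scoring the trained generator with FID; the argument names follow the library's README but may differ between versions.

# Evaluate FID using the checkpoints written to log_dir during training
mmc.metrics.evaluate(
    metric='fid',              # 'inception_score' and 'kid' are also supported
    log_dir='./log/example',
    netG=netG,
    dataset='cifar10',
    num_real_samples=50000,
    num_fake_samples=50000,
    evaluate_step=100000,
    device=device)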

Another important feature of Mimicry is its TensorBoard support for performance visualization. You can monitor training through loss and probability curves and display randomly generated images to check sample variety.

Mimicry is an interesting development aimed at aiding researchers. I personally suggest you read the Mimicry paper.

IBM GAN Toolkit

So far, we have seen some very efficient, state-of-the-art GAN libraries. There are many more, like Keras-GAN, PyTorch-GAN, PyGAN, etc. Observed closely, these GAN libraries have something in common: they are code-intensive. To use any of them, you must be well-versed in:

  • the theory and implementation of GANs,
  • Python, and
  • the particular framework the library is built on.

Knowing all of this is a tall order for many programmers. To solve the issue, we have a user-friendly GAN tool: the IBM GAN Toolkit.

The GAN Toolkit provides a highly flexible, no-code way of implementing GAN models. It offers a high level of abstraction: the user just supplies the model details in a config file or as command-line arguments, and the framework takes care of everything else. I personally found it very interesting.

The following steps will help you with installation:

  1. Firstly, we clone the code.
    $ git clone https://github.com/IBM/gan-toolkit
    $ cd gan-toolkit
  2. Then install all the requirements.
    $ pip install -r requirements.txt

    Now it's ready to use. Finally, to train the network, we provide a config file in JSON format as follows:

    {
        "generator": {
            "choice": "gan"
        },
        "discriminator": {
            "choice": "gan"
        },
        "data_path": "datasets/dataset1.p",
        "metric_evaluate": "MMD"
    }
    $ python main.py --config my_gan.json

The toolkit implements multiple GAN architectures, like vanilla GAN, DC-GAN, Conditional GAN, and more.

Advantages of the GAN Toolkit:

  1. It facilitates a no-code way of implementing state-of-the-art computer vision technology. Only a simple JSON file is required to define a GAN architecture; there is no need to write training code, as the framework takes care of it.
  2. It provides multi-library support: PyTorch, Keras, and TensorFlow.
  3. Also, in the GAN Toolkit, we have the freedom to easily mix and match components from different models. For example, you can use the generator from DC-GAN, the discriminator from C-GAN, and the training process from vanilla GAN, as sketched after this list.
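A hypothetical mixed configuration might look like the sketch below. The exact "choice" strings each architecture accepts are defined by the toolkit, so treat this as illustrative rather than definitive:

{
    "generator": {
        "choice": "dcgan"
    },
    "discriminator": {
        "choice": "cgan"
    },
    "data_path": "datasets/dataset1.p",
    "metric_evaluate": "MMD"
}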

Now just read the documentation and play around with GANs in your own way.

Conclusion

GANs are an active field of research, and new GAN variants appear almost weekly. In this article, we discussed 4 of the most important GAN libraries that can be easily used in Python. We are seeing tremendous developments and new applications of GANs on an almost daily basis.

Key Takeaways:

  • Generative Modeling is a powerful method where the network learns the input data distribution and tries to generate a new data point based on similar distribution.
  • GANs consist of two neural networks, a Generator and a Discriminator.
  • Nash equilibrium is a stable state of a system where no participant can change the strategy independently.
  • The 4 most commonly used GAN libraries are TF-GAN, Torch-GAN, Mimicry, and IBM GAN.
  • Out of these, IBM GAN is more of a no-code library as the framework writes the code for you.

Frequently Asked Questions

Q1. What library is used in GAN?

A. TF-GAN, Torch-GAN, Mimicry, the IBM GAN Toolkit, Keras-GAN, PyTorch-GAN, and PyGAN are some of the most prominent and commonly used GAN libraries.

Q2. What are the different types of GAN?

A. Vanilla GAN, DCGAN, CycleGAN, Conditional GAN, and Generative Multi-Adversarial Networks are some of the many different types of GAN architectures.

Q3. Who invented GAN?

A. Generative Adversarial Networks, or GANs, were first introduced by Ian Goodfellow in 2014.

Shipra is a data science enthusiast exploring machine learning and deep learning algorithms. She is also interested in big data technologies. She believes learning is a continuous process, so keep moving.
