Revolutionizing Creative Sketch Generation with DCGAN

Janvi Kumari | Last Updated: 24 Jun, 2024

Introduction

The domain of artificial intelligence has witnessed significant growth and expansion into creative fields like sketching and doodling. Conventional AI approaches to sketching have mainly concentrated on imitating ordinary, real-life drawings, but recent developments in Generative Adversarial Networks (GANs) offer an innovative route to genuinely creative sketch generation. This article walks through implementing a DCGAN on the Quick, Draw! dataset, explains the techniques involved, and discusses how such a model can support human creativity by serving as a source of inspiration for creative projects.


Overview

  • The article highlights AI advancements in sketching, focusing on the innovative role of GANs in producing creative sketches.
  • It explains DCGAN’s architecture, emphasizing the generator and discriminator’s role in producing high-quality images.
  • The study showcases DCGAN’s implementation with the Quick, Draw! Dataset, demonstrating its impact on enhancing human creativity.
  • Performance metrics like FID and CS are discussed to evaluate DCGAN’s ability to generate diverse and recognizable sketches.
  • Prospects of DCGAN in interactive sketching tools are explored, aiding artists and fostering human-machine collaborative creativity.

What is Creative Sketching?

Sketching has been an important form of visual communication since prehistoric times and remains a popular creative tool today. The introduction of touchscreen devices has further expanded its reach. Conventional AI in this field has largely been limited to understanding and reproducing realistic drawings. Creative art, however, involves unique characters, emotional responses, and more complex subject matter. This is where DCGAN shines.

Understanding DCGAN

DCGAN, or Deep Convolutional Generative Adversarial Network, is a GAN specifically designed to create high-quality images. It is built around two main components:

  • Generator
  • Discriminator
[DCGAN architecture diagram. Source: ResearchGate]

The image depicts the architecture of a Deep Convolutional Generative Adversarial Network (DCGAN). It shows the structure of the generator and discriminator networks, highlighting the layers and operations involved in generating and discriminating images.

Generator Architecture

The generator transforms a low-dimensional random noise vector into a high-dimensional image. The process involves upsampling and convolutional layers with ReLU activation functions.

  • Input Layer:
    • The input to the generator is a random noise vector, typically of size 100.
  • Dense Layer:
    • The noise vector is passed through a dense (fully connected) layer to expand its dimensionality, resulting in a tensor of shape 512×4×4.
  • Upsampling and Convolutional Layers:
    • The generator uses a series of upsampling layers (often implemented as transposed convolutions or deconvolutions) to increase the tensor’s spatial dimensions.
    • Each upsampling step is followed by a convolutional layer with ReLU activation and batch normalization to refine the features.
    • The spatial dimensions double at each step while the number of feature maps decreases.
    • The layers expand as follows (a minimal Keras sketch of this progression appears after the list):
      • 512×4×4
      • 256×8×8
      • 128×16×16
      • 64×32×32
      • 32×64×64
      • 2×128×128
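
To make the progression concrete, here is a minimal Keras sketch of the diagram's generator, assuming each upsampling step is a stride-2 transposed convolution. Note that Keras uses channels-last ordering, so 512×4×4 appears as (4, 4, 512); the function name, kernel sizes, and layer choices are illustrative assumptions, not the exact configuration from the source diagram.

import tensorflow as tf
from tensorflow.keras import layers

# Illustrative sketch of the diagram's generator: noise -> 128x128x2 image
def build_diagram_generator(latent_dim=100):
    model = tf.keras.Sequential(name="diagram_generator")
    # Expand the noise vector to a 4x4x512 tensor
    model.add(layers.Dense(4 * 4 * 512, input_dim=latent_dim))
    model.add(layers.Reshape((4, 4, 512)))
    # Each block doubles the spatial size and halves the feature maps:
    # 4x4x512 -> 8x8x256 -> 16x16x128 -> 32x32x64 -> 64x64x32
    for filters in [256, 128, 64, 32]:
        model.add(layers.Conv2DTranspose(filters, kernel_size=4, strides=2, padding="same"))
        model.add(layers.BatchNormalization())
        model.add(layers.ReLU())
    # Final upsampling to the 2-channel 128x128 output shown in the diagram
    model.add(layers.Conv2DTranspose(2, kernel_size=4, strides=2, padding="same", activation="tanh"))
    return model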

Discriminator Architecture

The discriminator aims to differentiate between real and fake images by downsampling the input images and applying convolutional layers with Leaky ReLU activations.

  • Input Layer:
    • The input to the discriminator is an image, typically of size 128×128×2.
  • Convolutional Layers:
    • The discriminator uses a series of convolutional layers to reduce the input image’s spatial dimensions while increasing the depth of feature maps.
    • A Leaky ReLU activation function and dropout for regularization follow each convolutional step.
    • The spatial dimensions halve at each step while the number of feature maps increases.
    • The layers are reduced as follows (a mirrored sketch appears after the list):
      • 2×128×128
      • 32×64×64
      • 64×32×32
      • 128×16×16
      • 256×8×8
      • 512×4×4
  • Dense Layer and Output:
    • The final tensor is flattened and passed through a dense layer to produce a single value.
    • The output is a probability, with 0 indicating a fake image and 1 indicating a real image.
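
A mirrored sketch of the discriminator side, reusing the imports from the generator sketch above; again, the kernel sizes and dropout rate are illustrative assumptions rather than values taken from the source diagram.

# Illustrative sketch of the diagram's discriminator: image -> real/fake probability
def build_diagram_discriminator(img_shape=(128, 128, 2)):
    model = tf.keras.Sequential(name="diagram_discriminator")
    # Each stride-2 convolution halves the spatial size and deepens the features:
    # 128x128x2 -> 64x64x32 -> 32x32x64 -> 16x16x128 -> 8x8x256 -> 4x4x512
    model.add(layers.Conv2D(32, kernel_size=4, strides=2, padding="same", input_shape=img_shape))
    model.add(layers.LeakyReLU(0.2))
    model.add(layers.Dropout(0.3))
    for filters in [64, 128, 256, 512]:
        model.add(layers.Conv2D(filters, kernel_size=4, strides=2, padding="same"))
        model.add(layers.LeakyReLU(0.2))
        model.add(layers.Dropout(0.3))
    # Flatten and map to a single probability (1 = real, 0 = fake)
    model.add(layers.Flatten())
    model.add(layers.Dense(1, activation="sigmoid"))
    return model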

Key Components

  • Upsampling + ReLU (Generator):
    • The left sections in the generator represent upsampling operations followed by ReLU activations, which expand the spatial dimensions and increase the image’s resolution.
  • Convolution + Leaky ReLU (Discriminator):
    • The right sections in the discriminator represent convolutional operations followed by Leaky ReLU activations, which downsample the image and extract features to determine authenticity.

Training and Inference with Quick, Draw! Data

To showcase DCGAN’s capabilities, we utilized the Quick, Draw! dataset, which contains millions of doodles across various categories. In this example, we focused on the “flower” category.

Loading the Quick, Draw! Data

First, we loaded and preprocessed the Quick, Draw! flower dataset:

import numpy as np
import requests
from io import BytesIO

# Load Quick, Draw! Data
quickdraw_url = 'https://storage.googleapis.com/quickdraw_dataset/full/numpy_bitmap/flower.npy'
response = requests.get(quickdraw_url)
data = np.load(BytesIO(response.content))
data = (data.astype(np.float32) / 127.5) - 1.0  # Normalize to [-1, 1]
data = data.reshape(-1, 28, 28, 1)

This code downloads the Quick, Draw! dataset, normalizes the pixel values to the range [-1, 1], and reshapes it for use in the model.
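
Before training, it is worth a quick sanity check on the download. This optional snippet (assuming matplotlib is available) displays a few of the normalized doodles:

import matplotlib.pyplot as plt

print(data.shape)  # e.g. (N, 28, 28, 1): one grayscale doodle per row
fig, axs = plt.subplots(1, 5, figsize=(10, 2))
for i, ax in enumerate(axs):
    ax.imshow(data[i, :, :, 0] * 0.5 + 0.5, cmap='gray')  # rescale [-1, 1] -> [0, 1]
    ax.axis('off')
plt.show()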

Defining the DCGAN Architecture

Next, we defined the DCGAN architecture, including the generator and discriminator models:

DCGAN Class Initialization

import tensorflow as tf
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import (Input, Dense, Reshape, Flatten, Dropout,
                                     BatchNormalization, UpSampling2D, Conv2D,
                                     LeakyReLU)
from tensorflow.keras.initializers import RandomNormal
import matplotlib.pyplot as plt

class DCGAN():
    def __init__(self):
        self.img_shape = (28, 28, 1)
        self.latent_dim = 100
        self.optimizer = tf.keras.optimizers.legacy.Adam(0.0002, 0.5)

        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy', optimizer=self.optimizer)

        # Build and compile the generator (it is actually trained through
        # the combined model below, so this compile step is optional)
        self.generator = self.build_generator()
        self.generator.compile(loss='binary_crossentropy', optimizer=self.optimizer)

        # Build the combined model: noise -> generator -> discriminator
        self.gan = self.build_GAN()

After importing the required Keras components, this initializes the DCGAN class, defining the image shape, latent dimension, and optimizer, and building and compiling the generator and discriminator models.

Building the GAN

def build_GAN(self):
    self.discriminator.trainable = False
    gan_input = Input(shape=(self.latent_dim,))
    img = self.generator(gan_input)
    gan_output = self.discriminator(img)
    gan = Model(gan_input, gan_output, name='GAN')
    gan.compile(loss='binary_crossentropy', optimizer=self.optimizer)
    return gan

This method constructs the combined GAN model, which stacks the generator and discriminator and compiles the result. Note that setting self.discriminator.trainable = False here freezes the discriminator's weights only inside this combined model; the standalone discriminator, compiled earlier, still updates during its own training step.

Building the Generator

def build_generator(self):
    generator = Sequential()
    generator.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.latent_dim))
    generator.add(Reshape((7, 7, 128)))
    generator.add(BatchNormalization(momentum=0.8))
    generator.add(UpSampling2D())  # 7x7 -> 14x14
    generator.add(Conv2D(128, kernel_size=3, padding="same"))
    generator.add(LeakyReLU(0.2))
    generator.add(BatchNormalization(momentum=0.8))
    generator.add(UpSampling2D())  # 14x14 -> 28x28
    generator.add(Conv2D(64, kernel_size=3, padding="same"))
    generator.add(LeakyReLU(0.2))
    generator.add(BatchNormalization(momentum=0.8))
    generator.add(Conv2D(1, kernel_size=3, padding='same', activation="tanh"))

    # Wire a single Input through the stack so the Model is connected end to end
    noise = Input(shape=(self.latent_dim,))
    img = generator(noise)
    return Model(noise, img, name='Generator')

This method constructs the generator model, transforming random noise into a synthetic image.

Building the Discriminator

def build_discriminator(self):
    discriminator = Sequential()
    discriminator.add(Conv2D(64, kernel_size=(5, 5), strides=(2, 2), padding='same',
                             input_shape=self.img_shape,
                             kernel_initializer=RandomNormal(stddev=0.02)))
    discriminator.add(LeakyReLU(0.2))
    discriminator.add(Dropout(0.2))
    discriminator.add(Conv2D(128, kernel_size=(5, 5), strides=(2, 2), padding='same'))
    discriminator.add(LeakyReLU(0.2))
    discriminator.add(Dropout(0.2))
    discriminator.add(Flatten())
    discriminator.add(Dense(1, activation='sigmoid'))

    # Wire a single Input through the stack so the Model is connected end to end
    img = Input(shape=self.img_shape)
    validity = discriminator(img)
    return Model(img, validity, name='Discriminator')

This method constructs the discriminator model, differentiating between real and synthetic images.
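
With both builders in place, a quick shape check (optional and purely illustrative) confirms the wiring: one noise vector should map to a 28×28×1 image, and the discriminator should return a probability for it.

# Optional sanity check of both networks
gan = DCGAN()
noise = np.random.normal(0, 1, (1, 100))
fake_img = gan.generator.predict(noise, verbose=0)
print(fake_img.shape)                                  # (1, 28, 28, 1)
print(gan.discriminator.predict(fake_img, verbose=0))  # value in [0, 1]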

Training the DCGAN

def train(self, X_train, epochs, batch_size=128, sample_interval=50):
    real = np.ones((batch_size, 1))    # labels for real images
    fake = np.zeros((batch_size, 1))   # labels for generated images
    for epoch in range(epochs):
        for _ in range(X_train.shape[0] // batch_size):
            # Train the discriminator on one real batch and one generated batch
            idx = np.random.randint(0, X_train.shape[0], batch_size)
            imgs = X_train[idx]
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
            gen_imgs = self.generator.predict(noise, verbose=0)

            d_loss_real = self.discriminator.train_on_batch(imgs, real)
            d_loss_fake = self.discriminator.train_on_batch(gen_imgs, fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

            # Train the generator to make the discriminator output "real"
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
            g_loss = self.gan.train_on_batch(noise, real)

        if epoch % sample_interval == 0:
            self.sample_images(epoch)

This method trains the DCGAN by alternating between training the discriminator and the generator. It periodically generates sample images to visualize the generator’s progress.
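
GAN training is notoriously unstable. One common, optional tweak that is not used in the code above is one-sided label smoothing: softening the "real" targets so the discriminator does not become overconfident. It would replace the first two lines of train(), where batch_size is already defined:

# One-sided label smoothing (optional tweak, not in the original train())
real = np.ones((batch_size, 1)) * 0.9   # soften real labels from 1.0 to 0.9
fake = np.zeros((batch_size, 1))        # fake labels stay at 0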

Sampling Images

def sample_images(self, epoch):
    noise = np.random.normal(0, 1, (100, self.latent_dim))
    gen_imgs = self.generator.predict(noise, verbose=0)
    gen_imgs = 0.5 * gen_imgs + 0.5   # rescale from [-1, 1] to [0, 1] for display

    fig, axs = plt.subplots(10, 10, figsize=(10, 10))
    cnt = 0
    for i in range(10):
        for j in range(10):
            axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
            axs[i, j].axis('off')
            cnt += 1
    plt.show()

This method generates and displays a grid of images the generator produces at each sampling interval during training.
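
When running headless, or to keep a record of training progress, the grid can also be written to disk; a minimal variant would add these lines at the end of sample_images, after the plotting loop (the filename pattern is an illustrative choice):

# Optional: save each sample grid to disk in addition to (or instead of) plt.show()
fig.savefig(f'flowers_epoch_{epoch:03d}.png', bbox_inches='tight')
plt.close(fig)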

Create and Train the DCGAN

gan = DCGAN()
gan.train(data, epochs=5, batch_size=128, sample_interval=5)

1st epoch: we can see the generated flowers are barely recognizable yet.


After training many epochs, it gets considerably better!


The plot of the losses over epochs shows the generator loss apparently diverging. However, we visually inspected the generated samples at each epoch, and the results kept improving; raw GAN loss curves are a notoriously unreliable signal of sample quality.


Evaluating DCGAN

To evaluate the DCGAN’s performance, we compared it with other sketch generation models. We used metrics such as Fréchet Inception Distance (FID), generation diversity (GD), characteristic score (CS), and semantic diversity score (SDS).

  • Fréchet Inception Distance (FID): DCGAN achieved competitive (low) FID scores, indicating that the generated sketches are close to real sketches in feature space; a numerical sketch of this metric follows the list.
  • Generation Diversity (GD): The model maintained a high level of diversity in its outputs.
  • Characteristic Score (CS): This score measures how often a generated sketch is recognizable as the intended object, with DCGAN performing well.
  • Semantic Diversity Score (SDS): This metric captures the semantic variety across the generated sketches, showcasing DCGAN’s creative potential.
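
For reference, FID compares the mean and covariance of Inception-network features extracted from real and generated images: FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^½). Here is a minimal NumPy/SciPy sketch of that final formula, assuming the activation matrices act_real and act_gen (rows = samples) have already been extracted; the function name is illustrative.

import numpy as np
from scipy import linalg

def frechet_distance(act_real, act_gen):
    # Fit a Gaussian (mean and covariance) to each set of activations
    mu1, sigma1 = act_real.mean(axis=0), np.cov(act_real, rowvar=False)
    mu2, sigma2 = act_gen.mean(axis=0), np.cov(act_gen, rowvar=False)
    # Matrix square root of the covariance product
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerical noise
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean)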

Conclusion

DCGAN’s ability to generate unique, high-quality sketches has significant implications for various applications. It can be integrated into interactive sketching tools, providing users with creative suggestions and helping artists overcome creative blocks. The model’s approach opens new avenues for exploring human-machine collaborative creative processes.

In summary, DCGAN (Deep Convolutional Generative Adversarial Network) represents a significant advance in generative model design. By pairing adversarial training with convolutional architectures, it sets a strong baseline for AI-driven creativity in producing distinctive, visually appealing images. As artificial intelligence continues to evolve, models such as DCGAN will play an important role in supporting and inspiring human creativity.

Frequently Asked Questions

Q1. What are the applications of DCGAN in creative sketching?

Ans. DCGAN can be integrated into interactive sketching tools to provide creative suggestions, help artists overcome creative blocks, and enhance human-machine collaborative creative processes.

Q2. What are some common challenges faced when training DCGAN models?

Ans. Common challenges include training instability, mode collapse (where the generator produces limited varieties of images), and the need for large amounts of data and computational resources.

Q3. What advancements can we expect in the future for DCGAN and similar technologies?

Ans. Future advancements may include more sophisticated models with higher image quality, greater control over the generated content, improved training stability, and broader applications in various creative and industrial fields.


Hi, I am Janvi, a passionate data science enthusiast currently working at Analytics Vidhya. My journey into the world of data began with a deep curiosity about how we can extract meaningful insights from complex datasets.
