Mastering Diffusion Models: A Guide to Image Generation with Stable Diffusion

Ritika Last Updated : 29 Sep, 2023
5 min read

Introduction

Diffusion models, rooted in probabilistic generative modeling, are powerful tools for data generation. Their history in machine learning research dates back to the mid-2010s, building on earlier work on denoising autoencoders. Today, they have gained prominence for their ability to generate high-quality images from text by modeling the denoising process. They are currently used for image synthesis, text generation, and anomaly detection, finding utility in art, natural language processing, and cybersecurity. Looking ahead, diffusion models hold the potential to revolutionize content creation and improve language understanding, making them a pivotal part of AI technologies for solving real-world challenges. In this article, we will cover the basics of diffusion models, with a focus on latent diffusion models for text-to-image generation. We will then learn to generate images in Python using Stable Diffusion, the model behind Stability AI's DreamStudio. So let's get started!

Learning Objectives

In this article, we will:

  • Get an understanding of diffusion models and their basics.
  • Learn about the architecture of diffusion models.
  • Get to know the open-source diffusion model Stable Diffusion.
  • Learn to use Stable Diffusion for text-to-image generation in Python.

This article was published as a part of the Data Science Blogathon.

Overview of Diffusion Models

Diffusion models belong to the class of generative models, meaning they can generate data similar to the data on which they are trained. In essence, diffusion models destroy training data by adding noise and then learn to recover the training data by removing the noise. In the process, the model learns the parameters of a neural network that performs the denoising. We can then use this trained model to generate new data similar to the training data by sampling random noise and passing it through the learned denoising process. This concept is similar to Variational Autoencoders (VAEs), in which we try to optimize a cost function by first projecting the data onto a latent space and then recovering it back to its starting state. In diffusion models, the system models a series of noise distributions in a Markov chain and “decodes” the data by undoing/denoising it in a hierarchical fashion.

Do you know the Basics of Diffusion Models?

Diffusion modeling basically involves two major steps: the forward diffusion process (adding noise) and the reverse diffusion process (removing noise). Let us try to understand each step one by one.

Forward Diffusion

Below are the steps in forward diffusion (a minimal code sketch follows the list):

  • The image (x0) is slowly and iteratively corrupted, in a Markov chain manner, by adding scaled Gaussian noise.
  • This process is repeated for T time steps, after which we obtain xT.
  • No model is involved during this step.
  • After this stage of forward diffusion, we have an image xT that follows a Gaussian distribution. We have effectively converted the data distribution into a standard normal distribution.
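
To make the forward process concrete, below is a minimal sketch of a DDPM-style noising step in PyTorch. The linear noise schedule and the variable names (betas, alpha_bars, x0) are illustrative assumptions for this sketch; they are not part of the Stable Diffusion code we run later.

import torch

# Illustrative linear noise schedule over T steps
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def forward_diffuse(x0, t):
    """Sample x_t directly from x_0: x_t = sqrt(a_bar_t)*x_0 + sqrt(1 - a_bar_t)*noise."""
    noise = torch.randn_like(x0)
    a_bar = alpha_bars[t]
    xt = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * noise
    return xt, noise

# Example: corrupt a dummy "image" tensor at time step t = 500
x0 = torch.randn(1, 3, 64, 64)   # stand-in for a normalized image
xt, added_noise = forward_diffuse(x0, t=500)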

Backward/Reverse Diffusion

  • In this process, we undo the forward diffusion; our objective is to remove the noise iteratively using a neural network model.
  • The model’s task is to predict the noise that was added at time step t to take the image from xt-1 to xt. The model thus predicts the amount of noise added at each time step of the sequence; a sketch of a single reverse step follows the figure below.
Depiction of Forward and Backward Diffusion
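
Continuing the sketch above (and reusing its betas, alphas, and alpha_bars), a single DDPM-style reverse step could look as follows. The noise_model here is a hypothetical stand-in for a trained denoising network such as a U-Net.

def reverse_step(xt, t, noise_model):
    """Estimate x_{t-1} from x_t using the noise predicted by the model at step t."""
    predicted_noise = noise_model(xt, t)   # hypothetical trained network
    alpha_t, a_bar_t = alphas[t], alpha_bars[t]
    mean = (xt - (1 - alpha_t) / torch.sqrt(1 - a_bar_t) * predicted_noise) / torch.sqrt(alpha_t)
    if t > 0:
        mean = mean + torch.sqrt(betas[t]) * torch.randn_like(xt)   # add sampling noise except at the last step
    return mean

# Sampling starts from pure noise x_T and applies reverse_step repeatedly for t = T-1, ..., 0.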

What is Stable Diffusion Framework?

Many open-source contributors collaborated to create the Stable Diffusion model, which is one of the most popular and efficient diffusion models available. It runs well even on limited compute resources. Its architecture consists of the following main components (inspected in code after the list):

1. Variational Autoencoder (VAE): We use it to decode images, translating them from latent space into pixel space. The latent space is a condensed representation of an image that highlights its key elements. Working with latent embeddings is computationally much cheaper, because the latent space has significantly lower dimensionality than pixel space.

2. Text Encoder and Tokenizer: These encode the user’s text prompt, which conditions the image generation.

3. The U-Net Model: Latent image representations are denoised using it. Like an autoencoder, a U-Net has a contracting path and an expanding path. A U-Net, however, also has skip connections. These aid in propagating information from earlier layers, which helps to address the problem of vanishing gradients. Additionally, since information is inevitably lost along the contracting path, they help preserve finer details.
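
Once the pipeline is loaded (as shown in the next section), these components are exposed as attributes of the diffusers StableDiffusionPipeline, so you can inspect them directly. A minimal sketch; the class names in the comments are those typically used by Stable Diffusion v1.x checkpoints and may differ for other models.

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("dreamlike-art/dreamlike-photoreal-2.0")

print(type(pipe.vae).__name__)           # AutoencoderKL: the VAE
print(type(pipe.tokenizer).__name__)     # CLIPTokenizer: the tokenizer
print(type(pipe.text_encoder).__name__)  # CLIPTextModel: the text encoder
print(type(pipe.unet).__name__)          # UNet2DConditionModel: the denoising U-Net
print(type(pipe.scheduler).__name__)     # the noise scheduler used during denoising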

How to Use Stable Diffusion in Python for Image Generation?

In the Python implementation below, we will use the Stable Diffusion model to generate images.

1. Installing Libraries

!pip install transformers diffusers accelerate
!pip install xformers

2. Importing Libraries

from diffusers import StableDiffusionPipeline
import torch

3. Loading Stable Diffusion Model

Here we load a specific Stable Diffusion checkpoint, given by model_id below, from the Hugging Face Hub.

model_id = "dreamlike-art/dreamlike-photoreal-2.0"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
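
Optionally, since we installed xformers in step 1, we can enable memory-efficient attention to lower GPU memory usage. These calls are available in recent diffusers versions; if they error on your setup, they can simply be skipped.

# Optional memory optimizations
pipe.enable_xformers_memory_efficient_attention()
# pipe.enable_attention_slicing()   # fallback that does not require xformers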

4. Generate Prompts for Image

Here we define three prompts: two images of Alice in Wonderland in different styles, and a third image of the Cheshire Cat.

prompts = ["Alice in Wonderland, Ultra HD, realistic, futuristic, detailed, octane render, photoshopped, photorealistic, soft, pastel, Aesthetic, Magical background",
          "Anime style Alice in Wonderland, 90's vintage style, digital art, ultra HD, 8k, photoshopped, sharp focus, surrealism, akira style, detailed line art",
          "Beautiful, abstract art of Chesire cat of Alice in wonderland, 3D, highly detailed, 8K, aesthetic"]


images = []

5. Save Images in the folder

# Generate one image per prompt and save it to disk
for i, prompt in enumerate(prompts):
    image = pipe(prompt).images[0]    # run the text-to-image pipeline
    image.save(f'picture_{i}.jpg')    # saved as picture_0.jpg, picture_1.jpg, ...
    images.append(image)
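
For reproducible or more controlled results, the pipeline call also accepts parameters such as a seeded random generator, the number of denoising steps, and the guidance scale. A minimal sketch (the seed and parameter values below are arbitrary choices):

generator = torch.Generator("cuda").manual_seed(42)   # fixed seed for reproducibility
image = pipe(prompts[0],
             num_inference_steps=50,   # more steps is slower but usually more detailed
             guidance_scale=7.5,       # how strongly the image follows the prompt
             generator=generator).images[0]
image.save('picture_seeded.jpg')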

Output Generated Images

(Three generated images, one per prompt.)

Conclusion

In the realm of AI, researchers are currently exploring the potential of diffusion models for wider application across various domains. Product designers and illustrators are experimenting with these models to quickly generate innovative prototype designs. Furthermore, several other robust models exist that generate more detailed images and find utility in various photography tasks. Experts believe that these models will play a pivotal role in generating video content for influencers in the future.

Key Takeaways

  • We understood the basic concepts behind diffusion models and their working principle.
  • Stable Diffusion is an important open-source model, and we learned about its internal architecture.
  • We learned how to run a Stable Diffusion model in Python to generate images from text prompts.

Frequently Asked Questions

Q1. What different diffusion models are available?

A. There are a number of powerful diffusion models available, such as DALL·E 2 by OpenAI, Imagen by Google, Midjourney, and Stable Diffusion by Stability AI.

Q2. Which diffusion models are free?

A. Stable Diffusion by Stability AI is currently the only free, open-source option available.

Q3. Apart from diffusion models, what other models are there for image generation?

A. There are various generative models for image generation, such as GANs, VAEs, and deep flow-based models.

Q4. Is there any GUI website to use Stable Diffusion models?

A. Stability AI allows users to experiment and generate images on its website by signing up at https://beta.dreamstudio.ai/generate. Initially, it offers free credits to new users, and then it charges for every image generation.

Q5. Apart from text, can we use another image as an input reference to generate a new image?

A. Yes, apart from text, we can also upload another image as a reference, or edit an image by giving a prompt to remove specific objects, colorize a black-and-white image, etc. This image-to-image service is offered by the RunwayML platform.
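
Image-to-image generation is also available locally through the diffusers library. Below is a minimal sketch using StableDiffusionImg2ImgPipeline; the input file name, prompt, and strength value are illustrative assumptions.

from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image
import torch

img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "dreamlike-art/dreamlike-photoreal-2.0", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("picture_0.jpg").convert("RGB")   # one of the images generated above
result = img2img(prompt="Alice in Wonderland, watercolor painting",
                 image=init_image,
                 strength=0.6).images[0]   # strength controls how far the output departs from the input
result.save("picture_0_img2img.jpg")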

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

I am a professional working as a data scientist after finishing my MBA in Business Analytics and Finance. A keen learner who loves to explore, understand, and simplify things! I am currently learning about advanced ML and NLP techniques and reading up on various related topics, including research papers.
