Illustrated Stable Diffusion

The Illustrated Stable Diffusion – Jay Alammar – Visualizing machine learning one concept at a time.

2022 Google Imagen paper: https://arxiv.org/abs/2205.11487

2021 stable diffusion paper (latent diffusion model): https://arxiv.org/abs/2112.10752  

2020 DDPM paper (Denoising Diffusion Probabilistic Models): https://arxiv.org/abs/2006.11239

Summary

In short, we need a Transformer language model as the text encoder, an autoencoder as the image encoder/decoder, and a CV network (a UNet) as the noise predictor;

for training, the text encoder is trained (as part of CLIP) so that a caption and its associated image map to similar embeddings, and the CV network is trained to recover the pixel or latent representation from noise-added inputs (i.e. to predict the added noise);

for inference, we give 1. the text prompt and 2. an initial randomized latent representation of the image as inputs; attention layers inserted between the ResNet conv. layers of the CV network fuse the text embeddings into the latent image tensor at every denoising step.

The Illustrated Stable Diffusion

Translations: Chinese, Vietnamese.

(V2 Nov 2022: Updated images for more precise description of forward diffusion. A few more images in this version)

AI image generation is the most recent AI capability blowing people’s minds (mine included). The ability to create striking visuals from text descriptions has a magical quality to it and points clearly to a shift in how humans create art. The release of Stable Diffusion is a clear milestone in this development because it made a high-performance model available to the masses (performance in terms of image quality, as well as speed and relatively low resource/memory requirements).

https://youtu.be/MXmacOUJUaw

After experimenting with AI image generation, you may start to wonder how it works.

This is a gentle introduction to how Stable Diffusion works.

==> for images that cannot be copied directly, a link is given in their stead

[Image 1]

http://jalammar.github.io/images/stable-diffusion/stable-diffusion-text-to-image.png

[Image 2]

Stable Diffusion is versatile in that it can be used in a number of different ways. Let’s focus at first on image generation from text only (text2img). The image above shows an example text input and the resulting generated image (The actual complete prompt is here). Aside from text to image, another main way of using it is by making it alter images (so inputs are text + image).

[Image 3]

 http://jalammar.github.io/images/stable-diffusion/stable-diffusion-img2img-image-to-image.png

Let’s start to look under the hood because that helps explain the components, how they interact, and what the image generation options/parameters mean.

The Components of Stable Diffusion

Stable Diffusion is a system made up of several components and models. It is not one monolithic model.

As we look under the hood, the first observation we can make is that there’s a text-understanding component that translates the text information into a numeric representation that captures the ideas in the text.

[Image 4]

We’re starting with a high-level view and we’ll get into more machine learning details later in this article. However, we can say that this text encoder is a special Transformer language model (technically: the text encoder of a CLIP model). It takes the input text and outputs a list of numbers representing each word/token in the text (a vector per token).
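As a concrete illustration, here is a hedged sketch of that encoding step using the Hugging Face transformers library; the checkpoint name (openai/clip-vit-large-patch14, the text encoder the released v1 models plug in) and the padding details are assumptions about that particular release, not something the article prescribes.

# A minimal sketch: encode a prompt with the CLIP text encoder (assumes transformers + torch).
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

prompt = "paradise cosmic beach"
# Stable Diffusion pads/truncates every prompt to a fixed length of 77 tokens.
tokens = tokenizer(prompt, padding="max_length", max_length=77, return_tensors="pt")

with torch.no_grad():
    embeddings = text_encoder(tokens.input_ids).last_hidden_state

print(embeddings.shape)  # torch.Size([1, 77, 768]) -- one 768-dimensional vector per token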

That information is then presented to the Image Generator, which is composed of a couple of components itself.

[Image 5]

The image generator goes through two stages:

1- Image information creator

This component is the secret sauce of Stable Diffusion. It’s where a lot of the performance gain over previous models is achieved.

This component runs for multiple steps to generate image information. This is the steps parameter in Stable Diffusion interfaces and libraries which often defaults to 50 or 100.

The image information creator works completely in the image information space (or latent space). We’ll talk more about what that means later in the post. This property makes it faster than previous diffusion models that worked in pixel space. In technical terms, this component is made up of a UNet neural network and a scheduling algorithm.

==> better abstraction

The word “diffusion” describes what happens in this component. It is the step by step processing of information that leads to a high-quality image being generated in the end (by the next component, the image decoder).

[Image 6]

2- Image Decoder

The image decoder paints a picture from the information it got from the information creator. It runs only once at the end of the process to produce the final pixel image.

[Image 7]

With this we come to see the three main components (each with its own neural network) that make up Stable Diffusion; a short code sketch after the list below makes their inputs, outputs, and tensor shapes concrete:

  • ClipText for text encoding.
    Input: text.
    Output: 77 token embedding vectors, each with 768 dimensions.

  • UNet + Scheduler to gradually process/diffuse information in the information (latent) space.
    Input: text embeddings and a starting multi-dimensional array (structured lists of numbers, also called a tensor) made up of noise.
    Output: A processed information array

  • Autoencoder Decoder that paints the final image using the processed information array.
    Input: The processed information array (dimensions: (4,64,64))
    Output: The resulting image (dimensions: (3, 512, 512) which are (red/green/blue, width, height))

http://jalammar.github.io/images/stable-diffusion/stable-diffusion-components-and-tensors.png
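To make those three components and their tensor shapes concrete, here is a hedged sketch that loads each sub-model separately with the diffusers and transformers libraries; the checkpoint name (runwayml/stable-diffusion-v1-5) and its subfolder layout are assumptions about one public v1 release, not part of the article.

# A minimal sketch of the three sub-models and their tensor shapes
# (assumes diffusers, transformers, torch, and the standard v1 checkpoint layout).
import torch
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, UNet2DConditionModel

repo = "runwayml/stable-diffusion-v1-5"   # assumed checkpoint
tokenizer    = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(repo, subfolder="text_encoder")
unet         = UNet2DConditionModel.from_pretrained(repo, subfolder="unet")
vae          = AutoencoderKL.from_pretrained(repo, subfolder="vae")

with torch.no_grad():
    # 1) ClipText: text -> 77 token embedding vectors, 768 dimensions each
    ids = tokenizer("an astronaut riding a horse", padding="max_length",
                    max_length=77, return_tensors="pt").input_ids
    text_emb = text_encoder(ids).last_hidden_state                          # (1, 77, 768)

    # 2) UNet (+ scheduler during sampling): latents + timestep + text embeddings -> predicted noise
    latents = torch.randn(1, 4, 64, 64)                                     # the (4, 64, 64) information array
    noise_pred = unet(latents, 999, encoder_hidden_states=text_emb).sample  # (1, 4, 64, 64)

    # 3) Autoencoder decoder: processed latents -> (3, 512, 512) RGB image
    image = vae.decode(latents / vae.config.scaling_factor).sample          # (1, 3, 512, 512)

print(text_emb.shape, noise_pred.shape, image.shape)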

What is Diffusion Anyway?

Diffusion is the process that takes place inside the pink “image information creator” component. Having the token embeddings that represent the input text, and a random starting image information array (these are also called latents), the process produces an information array that the image decoder uses to paint the final image.

[Image 8]

This process happens in a step-by-step fashion. Each step adds more relevant information. To get an intuition of the process, we can inspect the random latents array, and see that it translates to visual noise. Visual inspection in this case is passing it through the image decoder.

[Image 9]

Diffusion happens in multiple steps. Each step operates on an input latents array and produces another latents array that better resembles the input text and the visual information the model picked up from the images it was trained on.

[Image 10]

We can visualize a set of these latents to see what information gets added at each step.

[Image 11]

The process is quite breathtaking to look at.

http://jalammar.github.io/images/stable-diffusion/diffusion-steps-all-loop.webm

Something especially fascinating happens between steps 2 and 4 in this case. It’s as if the outline emerges from the noise.

http://jalammar.github.io/images/stable-diffusion/stable-diffusion-steps-2-4.webm
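If you want to reproduce this kind of step-by-step visualization, the hedged sketch below writes the bare denoising loop by hand and decodes every tenth latents array so it can be looked at; the checkpoint name, the LMS scheduler, the 50 steps, and the 0.18215 latent scaling constant are assumptions about the v1 release, and classifier-free guidance is omitted to keep the loop minimal (the real pipeline adds it and produces better images).

# A minimal, hand-written denoising loop that snapshots intermediate latents
# (assumes diffusers, transformers, torch; no classifier-free guidance).
import torch
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
tokenizer, text_encoder, unet, vae = pipe.tokenizer, pipe.text_encoder, pipe.unet, pipe.vae
scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)

ids = tokenizer("paradise cosmic beach", padding="max_length",
                max_length=77, return_tensors="pt").input_ids
with torch.no_grad():
    text_emb = text_encoder(ids).last_hidden_state

scheduler.set_timesteps(50)                                   # the "steps" parameter
latents = torch.randn(1, 4, 64, 64) * scheduler.init_noise_sigma

snapshots = []
for i, t in enumerate(scheduler.timesteps):
    latent_in = scheduler.scale_model_input(latents, t)
    with torch.no_grad():
        noise_pred = unet(latent_in, t, encoder_hidden_states=text_emb).sample
    latents = scheduler.step(noise_pred, t, latents).prev_sample
    if i % 10 == 0:                                           # "look" at the latents via the decoder
        with torch.no_grad():
            snapshots.append(vae.decode(latents / 0.18215).sample)  # (1, 3, 512, 512), roughly in [-1, 1]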

How diffusion works

The central idea of generating images with diffusion models relies on the fact that we have powerful computer vision models. Given a large enough dataset, these models can learn complex operations. Diffusion models approach image generation by framing the problem as follows:

Say we have an image. We generate some noise and add it to the image.

http://jalammar.github.io/images/stable-diffusion/stable-diffusion-forward-diffusion-training-example.png

This can now be considered a training example. We can use this same formula to create lots of training examples to train the central component of our image generation model. 

While this example shows a few noise amounts, from the original image (amount 0, no noise) to total noise (amount 4), we can easily control how much noise to add to the image. So we can spread it over tens of steps, creating tens of training examples per image for all the images in a training dataset.

[Image 12]

With this dataset, we can train the noise predictor and end up with a great noise predictor that actually creates images when run in a certain configuration. A training step should look familiar if you’ve had ML exposure; a sketch of one follows below.
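Here is a hedged sketch of what one such training step could look like. The noise schedule and the closed-form noising formula follow the DDPM paper linked at the top; `unet`, `images`, and `optimizer` are stand-ins for whatever noise-prediction network, data batch, and optimizer you are actually training with.

# A minimal sketch: create noisy training examples and take one noise-prediction step (assumes torch).
import torch
import torch.nn.functional as F

T = 1000                                             # number of noise amounts/steps
betas = torch.linspace(1e-4, 0.02, T)                # the DDPM linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)   # cumulative product, a.k.a. alpha-bar_t

def make_training_example(images):
    # Pick a random noise amount t per image and add that much noise (forward diffusion).
    t = torch.randint(0, T, (images.shape[0],))
    noise = torch.randn_like(images)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    noisy = a_bar.sqrt() * images + (1 - a_bar).sqrt() * noise   # closed form for q(x_t | x_0)
    return noisy, t, noise

def training_step(unet, images, optimizer):
    noisy, t, noise = make_training_example(images)
    noise_pred = unet(noisy, t)                      # predict the slice of noise that was added
    loss = F.mse_loss(noise_pred, noise)             # compare prediction to the actual noise
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()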

Let’s now see how this can generate images.

Painting images by removing noise

The trained noise predictor takes a noisy image and the number of the denoising step, and predicts a slice of noise.

[Image 13]

The sampled noise is predicted so that if we subtract it from the image, we get an image that’s closer to the images the model was trained on (not the exact images themselves, but the distribution - the world of pixel arrangements where the sky is usually blue and above the ground, people have two eyes, cats look a certain way – pointy ears and clearly unimpressed).

http://jalammar.github.io/images/stable-diffusion/stable-diffusion-denoising-step-2v2.png

If the training dataset was of aesthetically pleasing images (e.g., LAION Aesthetics, which Stable Diffusion was trained on), then the resulting image would tend to be aesthetically pleasing. If we train it on images of logos, we end up with a logo-generating model.

http://jalammar.github.io/images/stable-diffusion/stable-diffusion-image-generation-v2.png

[Image 14]

This concludes the description of image generation by diffusion models mostly as described in Denoising Diffusion Probabilistic Models. Now that you have this intuition of diffusion, you know the main components of not only Stable Diffusion, but also Dall-E 2 and Google’s Imagen.

==> essence: recovering a learned image distribution out of randomness

Note that the diffusion process we described so far generates images without using any text data. So if we deploy this model, it would generate great looking images, but we’d have no way of controlling if it’s an image of a pyramid or a cat or anything else. In the next sections we’ll describe how text is incorporated in the process in order to control what type of image the model generates.

Speed Boost: Diffusion on Compressed (Latent) Data Instead of the Pixel Image

To speed up the image generation process, the Stable Diffusion paper runs the diffusion process not on the pixel images themselves, but on a compressed version of the image. The paper calls this “Departure to Latent Space”.

This compression (and later decompression/painting) is done via an autoencoder. The autoencoder compresses the image into the latent space using its encoder, then reconstructs it from only that compressed information using its decoder.

[Image 15]

Now the forward diffusion process is done on the compressed latents. The slices of noise are of noise applied to those latents, not to the pixel image. And so the noise predictor is actually trained to predict noise in the compressed representation (the latent space).

==> a higher level of abstraction could make the training process harder to converge, and the generated images resemble the training dataset less closely, which could be a desirable outcome for creative scenarios

The forward process (using the autoencoder’s encoder) is how we generate the data to train the noise predictor. Once it’s trained, we can generate images by running the reverse process (using the autoencoder’s decoder).
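A hedged sketch of that round trip: encode an image into the latent space, then decode it back to pixels (assumes the diffusers AutoencoderKL from the v1 checkpoint; the scaling factor, roughly 0.18215, is that checkpoint's convention and is read from vae.config rather than hard-coded).

# A minimal sketch of the "departure to latent space" and back (assumes diffusers + torch).
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

image = torch.randn(1, 3, 512, 512)     # stand-in for a real image normalized to [-1, 1]
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor  # (1, 4, 64, 64)
    recon   = vae.decode(latents / vae.config.scaling_factor).sample              # (1, 3, 512, 512)

# Forward diffusion (adding noise) and noise prediction both operate on `latents`,
# a tensor 48x smaller than the pixel image (4*64*64 vs 3*512*512 values).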

These two flows are what’s shown in Figure 3 of the LDM/Stable Diffusion paper:

[Image 16]

This figure additionally shows the “conditioning” components, which in this case are the text prompts describing what image the model should generate. So let’s dig into the text components.

The Text Encoder: A Transformer Language Model

A Transformer language model is used as the language understanding component that takes the text prompt and produces token embeddings. The released Stable Diffusion model uses ClipText (A GPT-based model), while the paper used BERT.

The choice of language model is shown by the Imagen paper to be an important one. Swapping in larger language models had more of an effect on generated image quality than larger image generation components.

[Image 17]

Larger/better language models have a significant effect on the quality of image generation models. Source: Google Imagen paper by Saharia et al., Figure A.5.

==> here the Pareto curves show that a slightly better CLIP score greatly enhances fidelity; as for what a Pareto curve is named after (Vilfredo Pareto, also the namesake of the Pareto distribution), see:

https://en.wikipedia.org/wiki/Pareto_distribution

The Pareto distribution, named after the Italian civil engineer, economist, and sociologist Vilfredo Pareto, is a power-law probability distribution that is used in description of social, quality control, scientific, geophysical, actuarial, and many other types of observable phenomena; the principle originally applied to describing the distribution of wealth in a society, fitting the trend that a large portion of wealth is held by a small fraction of the population. The Pareto principle or "80-20 rule" stating that 80% of outcomes are due to 20% of causes was named in honour of Pareto, but the concepts are distinct, and only Pareto distributions with shape value (α) of log_4 5 ≈ 1.16 precisely reflect it. Empirical observation has shown that this 80-20 distribution fits a wide range of cases, including natural phenomena and human activities.

[Image 18]

where x_m is the (necessarily positive) minimum possible value of X, and α is a positive parameter. The Pareto Type I distribution is characterized by a scale parameter x_m and a shape parameter α, which is known as the tail index. When this distribution is used to model the distribution of wealth, then the parameter α is called the Pareto index.
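For completeness, the formula those lines describe is the standard Pareto Type I survival function (reproduced here from the same Wikipedia article as background for the note above; it is not from the Imagen paper):

\bar{F}(x) = \Pr(X > x) = \begin{cases} (x_m / x)^{\alpha} & x \ge x_m \\ 1 & x < x_m \end{cases}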

[Image 19]

The early Stable Diffusion models just plugged in the pre-trained ClipText model released by OpenAI. It’s possible that future models may switch to the newly released and much larger OpenCLIP variants of CLIP (Nov2022 update: True enough, Stable Diffusion V2 uses OpenClip). This new batch includes text models of sizes up to 354M parameters, as opposed to the 63M parameters in ClipText.

How CLIP is trained

CLIP is trained on a dataset of images and their captions. Think of a dataset looking like this, only with 400 million images and their captions:

[Image 20]


A dataset of images and their captions.

In actuality, CLIP was trained on images crawled from the web along with their “alt” tags.

CLIP is a combination of an image encoder and a text encoder. Its training process can be simplified to this: take an image and its caption, and encode them with the image encoder and the text encoder respectively.

[Image 21]

We then compare the resulting embeddings using cosine similarity. When we begin the training process, the similarity will be low, even if the text describes the image correctly.

recall: cosine similarity measures how closely two vectors point in the same direction, i.e. it's just the cosine of the angle between vecA and vecB
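A tiny example of that computation (the two vectors below are made up purely for illustration):

# Cosine similarity = cos(angle between the vectors) = dot product of the L2-normalized vectors.
import torch
import torch.nn.functional as F

image_emb = torch.tensor([0.2, 0.9, -0.4])
text_emb  = torch.tensor([0.1, 0.8, -0.5])

similarity = F.cosine_similarity(image_emb, text_emb, dim=0)
# equivalently: (image_emb / image_emb.norm()) @ (text_emb / text_emb.norm())
print(similarity)   # close to 1.0 for aligned directions, near 0 for unrelated, -1.0 for opposite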

[Image 22]

[Image 23]

We update the two models so that the next time we embed them, the resulting embeddings are similar.

[Image 24]

By repeating this across the dataset and with large batch sizes, we end up with the encoders being able to produce embeddings where an image of a dog and the sentence “a picture of a dog” are similar. Just like in word2vec, the training process also needs to include negative examples of images and captions that don’t match, and the model needs to assign them low similarity scores.
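Below is a hedged sketch of that contrastive objective in the spirit of the pseudocode in the CLIP paper: within a batch, each caption is the positive example for its own image and a negative example for every other image. The encoders, batch tensors, and temperature value are stand-ins, not the released training code.

# A minimal CLIP-style contrastive loss (assumes torch; image_encoder/text_encoder are
# stand-in modules that each return one embedding per batch item).
import torch
import torch.nn.functional as F

def clip_loss(image_encoder, text_encoder, images, token_ids, temperature=0.07):
    img_emb = F.normalize(image_encoder(images), dim=-1)     # (N, d), unit length
    txt_emb = F.normalize(text_encoder(token_ids), dim=-1)   # (N, d), unit length

    # Cosine-similarity matrix: entry [i, j] compares image i with caption j.
    logits = img_emb @ txt_emb.T / temperature               # (N, N)

    # Matching pairs sit on the diagonal; everything off-diagonal is a negative example.
    targets = torch.arange(images.shape[0])
    return (F.cross_entropy(logits, targets) +               # image -> matching caption
            F.cross_entropy(logits.T, targets)) / 2          # caption -> matching image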

Feeding Text Information Into The Image Generation Process

To make text a part of the image generation process, we have to adjust our noise predictor to use the text as an input.

[Image 25]

Our dataset now includes the encoded text. Since we’re operating in the latent space, both the input images and predicted noise are in the latent space.

[Image 26]

To get a better sense of how the text tokens are used in the Unet, let’s look deeper inside the Unet.

Layers of the Unet Noise predictor (without text)

Let’s first look at a diffusion Unet that does not use text. Its inputs and outputs would look like this:

[Image 27]

Inside, we see that:

  • The Unet is a series of layers that work on transforming the latents array
  • Each layer operates on the output of the previous layer
  • Some of the outputs are fed (via residual connections) into the processing later in the network
  • The timestep is transformed into a timestep embedding vector, and that’s what gets used in the layers (a sketch of such an embedding follows below)

[Image 28]
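As promised in the last bullet, here is a hedged sketch of one common way to turn the integer timestep into an embedding vector: a sinusoidal (transformer-style positional) encoding. The dimensionality and frequency constants are illustrative; the exact values inside Stable Diffusion's UNet may differ.

# A minimal sinusoidal timestep embedding (assumes torch).
import math
import torch

def timestep_embedding(t, dim=320, max_period=10000):
    # Map a timestep t to a dim-dimensional vector of cosines and sines.
    half = dim // 2
    freqs = torch.exp(-math.log(max_period) * torch.arange(half) / half)
    args = t * freqs                                          # (half,)
    return torch.cat([torch.cos(args), torch.sin(args)])      # (dim,)

emb = timestep_embedding(torch.tensor(999.0))
print(emb.shape)   # torch.Size([320]); the UNet layers consume this vector, not the raw step number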

Layers of the Unet Noise predictor WITH text

Let’s now look at how to alter this system to include attention to the text.

[Image 29]

The main change we need to make to the system to add support for text inputs (technical term: text conditioning) is to add an attention layer between the ResNet blocks.

[Image 30]

Note that the ResNet blocks don’t directly look at the text. Instead, the attention layers merge those text representations into the latents, and the next ResNet block can then use that incorporated text information in its processing.
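A hedged sketch of what such an attention layer computes: the queries come from the image latents, while the keys and values come from the text token embeddings, so every latent position can pull in information from the relevant words. The single-head form, dimensions, and missing projections/normalization are simplifications of the real blocks.

# A minimal single-head cross-attention layer: latents attend over text embeddings (assumes torch).
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    def __init__(self, latent_dim=320, text_dim=768):
        super().__init__()
        self.to_q = nn.Linear(latent_dim, latent_dim)   # queries from the latents
        self.to_k = nn.Linear(text_dim, latent_dim)     # keys from the text embeddings
        self.to_v = nn.Linear(text_dim, latent_dim)     # values from the text embeddings

    def forward(self, latents, text_emb):
        # latents:  (batch, latent_positions, latent_dim) -- the flattened spatial grid
        # text_emb: (batch, 77, text_dim)                 -- one vector per token
        q, k, v = self.to_q(latents), self.to_k(text_emb), self.to_v(text_emb)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        return attn @ v                                  # text information merged into each latent position

layer = CrossAttention()
fused = layer(torch.randn(1, 64 * 64, 320), torch.randn(1, 77, 768))
print(fused.shape)   # torch.Size([1, 4096, 320]) -- same shape as the latents' token view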

Conclusion

I hope this gives you a good first intuition about how Stable Diffusion works. Lots of other concepts are involved, but I believe they’re easier to understand once you’re familiar with the building blocks above. The resources below are great next steps that I found useful. Please reach out to me on Twitter for any corrections or feedback.

Resources

  • I have a one-minute YouTube short on using Dream Studio to generate images with Stable Diffusion.
  • Stable Diffusion with Diffusers
  • The Annotated Diffusion Model
  • How does Stable Diffusion work? – Latent Diffusion Models EXPLAINED [Video]
  • Stable Diffusion - What, Why, How? [Video]
  • High-Resolution Image Synthesis with Latent Diffusion Models [The Stable Diffusion paper]
  • For a more in-depth look at the algorithms and math, see Lilian Weng’s What are Diffusion Models?
  • Watch the great Stable Diffusion videos from fast.ai

Acknowledgements

Thanks to Robin Rombach, Jeremy Howard, Hamel Husain, Dennis Soemers, Yan Sidyakin, Freddie Vargus, Anna Golubeva, and the Cohere For AI community for feedback on earlier versions of this article.

Contribute

Please help me make this article better. Possible ways:

  • Send any feedback or corrections on Twitter or as a Pull Request
  • Help make the article more accessible by suggesting captions and alt-text to the visuals (best as a pull request)
  • Translate it to another language and post it to your blog. Send me the link and I’ll add a link to it here. Translators of previous articles have always mentioned how much deeper they understood the concepts by going through the translation process.

Discuss

If you’re interested in discussing the overlap of image generation models with language models, feel free to post in the #images-and-words channel in the Cohere community on Discord. There, we discuss areas of overlap, including:

  • fine-tuning language models to produce good image generation prompts
  • Using LLMs to split the subject and style components of an image captioning prompt
  • Image-to-prompt (via tools like Clip Interrogator)

Citation

If you found this work helpful for your research, please cite it as follows:

@misc{alammar2022diffusion, 
  title={The Illustrated Stable Diffusion},
  author={Alammar, J},
  year={2022},
  url={https://jalammar.github.io/illustrated-stable-diffusion/}
}

Written on October 4, 2022
