Diffusion Models, Simply Explained
AI, But Simple Issue #57

Hello from the AI, but simple team! If you enjoy our content, consider supporting us so we can keep doing what we do.
Our newsletter is no longer sustainable to run at no cost, so we’re relying on different measures to cover operational expenses. Thanks again for reading!
Diffusion models are generative deep learning models that have skyrocketed in popularity in recent years due to their exceptional performance, beating out other established generative models like the Generative Adversarial Network (GAN) and Variational Autoencoder (VAE).
Diffusion models are best known for their image generation abilities: they power popular systems such as DALL-E and Stable Diffusion.

Although diffusion models were introduced in 2015, they didn’t take off until 2020, when Ho et al. published a paper demonstrating that denoising diffusion probabilistic models could be applied effectively to image synthesis.
That’s great, but how do they work, and what even is diffusion? Diffusion refers to the phenomenon of particles spreading out, or “scattering,” and it is widely studied in thermodynamics.
An example is sunlight passing through clouds: the rays scatter and spread out in different directions. This diffusion arises from the random motion of the particles involved.
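To build intuition, the spreading behavior can be sketched with a toy simulation (not part of the original article): a cloud of particles that all start at the same point, where each takes small independent random steps. The particle count, step size, and step count below are arbitrary choices for illustration.

```python
import random

random.seed(0)

# 1,000 particles all start at position 0 (a concentrated "drop")
positions = [0.0] * 1000

def spread(xs):
    # sample standard deviation of particle positions
    n = len(xs)
    mean = sum(xs) / n
    return (sum((x - mean) ** 2 for x in xs) / n) ** 0.5

history = []
for step in range(100):
    # each particle takes an independent random step (Brownian-like motion)
    positions = [x + random.gauss(0.0, 1.0) for x in positions]
    if (step + 1) % 25 == 0:
        history.append(spread(positions))

# the cloud keeps widening over time: the particles diffuse outward
print(history)
```

Running this shows the spread of the cloud growing with every batch of steps (roughly as the square root of time), even though no individual particle is "trying" to move outward. Diffusion models exploit exactly this kind of gradual noising process, run forward and then learned in reverse.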