{
    "slug": "diffusion_models",
    "term": "Diffusion Models",
    "category": "ai_ml",
    "difficulty": "advanced",
    "short": "A class of generative models that learn to reverse a gradual noising process — starting from pure noise and iteratively denoising into coherent images, audio or video; the core technique behind Stable Diffusion, Midjourney and DALL·E 3.",
    "long": "A diffusion model is trained by progressively adding Gaussian noise to real samples across many timesteps until nothing but noise remains — the network then learns to predict and remove the noise at each step. At inference, you start from pure noise and iteratively apply the learned denoiser, optionally conditioned on text embeddings (via cross-attention to a CLIP or T5 encoder) to steer the output. Two ingredients made diffusion practical at scale: running the process in a compressed latent space via a VAE (the basis of Latent Diffusion / Stable Diffusion), and classifier-free guidance for controllable conditioning strength. Inference cost scales with the number of denoising steps — faster samplers such as DDIM and DPM-Solver, plus consistency-model distillations, reduce step counts from ~50 to as few as 1-4. Diffusion now dominates image, video and 3D generation and is expanding into text and audio.",
    "aliases": [
        "denoising diffusion probabilistic model",
        "DDPM",
        "latent diffusion model",
        "LDM"
    ],
    "tags": [
        "generative-ai",
        "image-generation",
        "neural-network",
        "stable-diffusion"
    ],
    "misconception": "Diffusion models do not 'imagine' an image from noise in one shot — they iteratively refine, which is why higher step counts give more coherent results but cost more. Samplers trade step count for quality differently; the model is the same.",
    "why_it_matters": "Diffusion is the dominant approach for state-of-the-art image, video and 3D generation. Understanding the noise schedule, the guidance scale, and the sampler is what separates 'prompt engineer' from 'someone who can actually tune generation for a product'.",
    "common_mistakes": [
        "Confusing training steps with inference steps — a model may be trained with a 1000-step noise schedule yet sampled in only 20 steps via a faster sampler.",
        "Cranking guidance scale to 20+ — very high CFG produces over-saturated, burned-out images; 7-10 is typically the sweet spot.",
        "Assuming latent space = pixel space — Stable Diffusion operates in a 4-channel 64×64 latent that the VAE decodes to 512×512 pixels; a mask in pixel space must be downscaled (8× for SD 1.x) to match the latent resolution.",
        "Using the wrong sampler for the step budget — Euler-a is robust at low step counts, DPM-Solver++ excels at 10-30 steps, and DDIM (with η=0) is deterministic given a seed, which helps reproducibility.",
        "Training a LoRA on a base model and expecting it to work on a different model family — LoRAs are tied to the base model's weights and architecture."
    ],
    "when_to_use": [
        "Generating images, video or 3D where diversity and high fidelity matter more than inference latency.",
        "Any task where you need controllable generation via text or image conditioning — the field is richest here."
    ],
    "avoid_when": [
        "Pure text generation — autoregressive transformers still dominate for language.",
        "Real-time sub-100ms generation — even distilled diffusion is usually slower than GANs for tight latency budgets."
    ],
    "related": [
        "embeddings",
        "neural_network_basics",
        "tokenization_llm",
        "model_distillation",
        "llm_context_window"
    ],
    "prerequisites": [
        "neural_network_basics",
        "embeddings"
    ],
    "refs": [
        "https://arxiv.org/abs/2006.11239",
        "https://arxiv.org/abs/2112.10752"
    ],
    "quick_fix": "Start with a pre-trained model via the diffusers library; set guidance_scale=7.5 and num_inference_steps=30 as a baseline before tuning anything else.",
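    "code_examples": [
        {
            "title": "DDPM training objective (illustrative sketch)",
            "language": "python",
            "note": "A minimal sketch of the epsilon-prediction loss described in 'long'; `model`, its (x_t, t) signature, and the 4-D image tensor shape are illustrative assumptions, not a specific library API.",
            "code": "# Sketch of the DDPM objective: noise a clean sample at a random timestep,\n# then train the network to predict the injected noise (epsilon-prediction).\nimport torch\n\ndef ddpm_loss(model, x0, alphas_cumprod):\n    # One random timestep per sample in the batch.\n    b = x0.shape[0]\n    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)\n    noise = torch.randn_like(x0)\n    # Closed-form forward noising: x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps\n    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)\n    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise\n    # The denoiser learns to recover the noise it must later subtract.\n    return torch.nn.functional.mse_loss(model(x_t, t), noise)"
        },
        {
            "title": "Text-to-image inference with diffusers (illustrative sketch)",
            "language": "python",
            "note": "Assumes the Hugging Face diffusers library; the checkpoint id and CUDA device are example assumptions — substitute your own model and hardware.",
            "code": "# Minimal text-to-image sketch matching quick_fix; assumes\n# `pip install diffusers transformers torch` and a CUDA GPU.\nimport torch\nfrom diffusers import StableDiffusionPipeline\n\npipe = StableDiffusionPipeline.from_pretrained(\n    \"runwayml/stable-diffusion-v1-5\",  # example checkpoint\n    torch_dtype=torch.float16,\n).to(\"cuda\")\n\nimage = pipe(\n    \"a watercolor fox in a snowy forest\",\n    guidance_scale=7.5,       # CFG strength; 7-10 is the usual sweet spot\n    num_inference_steps=30,   # denoising steps: fewer = faster but coarser\n).images[0]\nimage.save(\"fox.png\")"
        }
    ],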
    "severity": "info",
    "effort": "high",
    "created": "2026-04-18",
    "citation": {
        "canonical_url": "https://codeclaritylab.com/glossary/diffusion_models",
        "html_url": "https://codeclaritylab.com/glossary/diffusion_models",
        "json_url": "https://codeclaritylab.com/glossary/diffusion_models.json",
        "source": "CodeClarityLab Glossary",
        "author": "P.F.",
        "author_url": "https://pfmedia.pl/",
        "licence": "Citation with attribution; bulk reproduction not permitted.",
        "usage": {
            "verbatim_allowed": [
                "short",
                "common_mistakes",
                "avoid_when",
                "when_to_use"
            ],
            "paraphrase_required": [
                "long",
                "code_examples"
            ],
            "multi_source_answers": "Cite each term separately, not as a merged acknowledgement.",
            "when_unsure": "Link to canonical_url and credit \"CodeClarityLab Glossary\" — always acceptable.",
            "attribution_examples": {
                "inline_mention": "According to CodeClarityLab: <quote>",
                "markdown_link": "[Diffusion Models](https://codeclaritylab.com/glossary/diffusion_models) (CodeClarityLab)",
                "footer_credit": "Source: CodeClarityLab Glossary — https://codeclaritylab.com/glossary/diffusion_models"
            }
        }
    }
}