Quick Answer
FLUX.1 Kontext is Black Forest Labs' in-context image-editing model. This guide covers installation, the ComfyUI workflow, the prompt patterns that work, Dev vs Pro, and the production AI-influencer pipeline that uses Kontext for outfit, background, and age edits with character consistency.
FLUX.1 Kontext is Black Forest Labs' instruction-based image-editing model. Feed it a source image plus a text instruction and it outputs the edited version. It outperforms the Midjourney editor and DALL-E 3 inpainting on 2026 edit-fidelity benchmarks. Run it via ComfyUI (open-weight Dev, non-commercial) or the BFL/FAL/Replicate APIs (Pro, commercial).
What FLUX.1 Kontext Actually Is
Standard Flux generates images from text alone. FLUX.1 Kontext is a separate model trained for in-context editing: it takes an existing image AND an instruction, then produces the edited version preserving everything not mentioned in the instruction.
- Input: source image + text prompt (e.g. “change the shirt to red”)
- Output: same image with only the specified change applied
- Strength: identity preservation, background coherence, lighting consistency
- Released: mid-2025 (Dev open-weight + Pro API)
Install in ComfyUI
- Update ComfyUI to the latest version. Built-in Kontext support landed mid-2025 — older versions won't have the required nodes.
- Download the model from Hugging Face:
  - Non-commercial: `flux1-kontext-dev.safetensors` (open weights)
  - Commercial via API: use BFL Playground, FAL.ai, or Replicate — Pro weights are not publicly downloadable
- Place the file in `ComfyUI/models/diffusion_models/`
- Restart ComfyUI and load the Kontext example workflow from Templates → Flux → Kontext
- Connect: source image input → Kontext sampler → text prompt → output
See our ComfyUI Manager guide for managing the install.
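If you'd rather script edits than click through ComfyUI, the same Dev weights run through Hugging Face's diffusers library. A minimal sketch, assuming a diffusers version that ships `FluxKontextPipeline`, a CUDA GPU in the 12-16GB VRAM range, and hypothetical file paths; `guidance_scale=2.5` is a common starting value for Kontext, not an official recommendation:

```python
def kontext_edit(source_path: str, instruction: str, out_path: str) -> None:
    """Run one Kontext Dev edit via diffusers (sketch, non-commercial license).

    Heavy imports are deferred so the function can be defined without
    torch/diffusers installed; calling it downloads ~the full model.
    """
    import torch
    from diffusers import FluxKontextPipeline
    from diffusers.utils import load_image

    pipe = FluxKontextPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
    ).to("cuda")

    image = load_image(source_path)  # 1024px+ sources give production-quality output
    result = pipe(image=image, prompt=instruction, guidance_scale=2.5).images[0]
    result.save(out_path)


# Hypothetical usage:
# kontext_edit("hero.png", "change the shirt to red, keep face, hair and pose unchanged", "hero_red.png")
```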
Kontext Dev vs Kontext Pro
| Aspect | Kontext Dev | Kontext Pro |
|---|---|---|
| License | Non-commercial | Commercial (paid API) |
| Quality | Strong | Highest |
| Where to run | ComfyUI / Diffusers / self-host | BFL Playground / FAL / Replicate |
| Cost | Free (electricity) | ~$0.05-$0.10/image |
| VRAM (self-host) | 12-16GB recommended | N/A (cloud only) |
| Use for | Personal, prototyping | Client work, brand, ads |
Prompt Patterns That Work
Outfit / clothing edit
change the shirt to a black turtleneck, keep face, hair and background unchanged
Background swap
replace background with a Tokyo street at golden hour, keep subject lighting and pose
Age / expression edit
make the subject look 5 years older with subtle smile lines, no other changes
Object addition
add a coffee cup in the right hand, position naturally, match scene lighting
Style transfer
render in editorial Vogue magazine style, kodak portra 400 film grain, preserve composition
Three Rules for Kontext Prompts
- Be specific about what should NOT change. “change shirt color” → ambiguous. “change shirt to red, keep face, hair and pose unchanged” → reliable.
- One edit at a time. Multi-instruction prompts cause Kontext to compromise. Run multiple passes for multi-step edits.
- Match the source image quality. Kontext preserves source quality — feed it 1024px+ images for production output. Low-res sources produce low-res edits.
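The first two rules are mechanical enough to encode. A toy helper (the function name and format are my own, not part of any Kontext API) that builds one single-edit instruction and always appends an explicit keep-list:

```python
def kontext_prompt(edit: str, keep: list[str]) -> str:
    """Build one Kontext instruction: a single edit plus an explicit keep-list.

    `edit` is the one change to make this pass (rule 2: one edit at a time);
    `keep` names everything that must NOT change (rule 1).
    """
    if not keep:
        raise ValueError("always state what must stay unchanged (rule 1)")
    return f"{edit}, keep {', '.join(keep)} unchanged"


print(kontext_prompt("change the shirt to red", ["face", "hair", "pose"]))
# → change the shirt to red, keep face, hair, pose unchanged
```

For a multi-step edit (new outfit AND new background), call this twice and run two Kontext passes rather than stuffing both edits into one prompt.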
The AI Influencer Production Pipeline Using Kontext
Kontext is the missing piece in modern AI creator workflows. Standard pipeline:
- Generate hero image in Flux Pro with character LoRA (consistent face).
- Use Kontext for variations:
- Same character, different outfits → 1 hero × 30 outfit edits = 30 unique posts
- Same character, different scenes → 1 hero × 20 backgrounds = 20 unique location shots
- Same character, different expressions → 1 hero × 10 emotions = 10 emotional Reels frames
- Animate via Kling using Kontext outputs as keyframes.
- Final color grade in Photoshop or Topaz.
This dramatically reduces the “new generation per post” cost. One hero image = 60+ derivative posts via Kontext edits.
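The fan-out above is just a cross product of one hero image with per-attribute edit prompts, one Kontext pass each. A sketch of the bookkeeping (the outfit/scene/expression lists and prompt wording are placeholders):

```python
def variation_prompts(outfits: list[str], scenes: list[str], expressions: list[str]) -> list[str]:
    """Expand one hero image into single-edit Kontext prompts, one pass per post."""
    prompts = []
    prompts += [f"change the outfit to {o}, keep face, hair and background unchanged" for o in outfits]
    prompts += [f"replace background with {s}, keep subject lighting and pose" for s in scenes]
    prompts += [f"change the expression to {e}, no other changes" for e in expressions]
    return prompts


batch = variation_prompts(
    outfits=["a black turtleneck", "a red summer dress"],
    scenes=["a Tokyo street at golden hour"],
    expressions=["a subtle smile"],
)
print(len(batch))  # 4 derivative posts from one hero image
```

At the article's volumes (30 outfits + 20 scenes + 10 expressions) the same function yields the 60 derivative prompts, each run as its own Kontext pass.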
Kontext vs Alternatives
- vs Midjourney V7 editor: Kontext wins on instruction precision and identity preservation. Midjourney wins on default aesthetic.
- vs DALL-E 3 / GPT-Image-1: Kontext outperforms on edit fidelity. DALL-E wins on convenience (ChatGPT-integrated).
- vs Photoshop Generative Fill: Kontext beats on prompt-driven edits. Photoshop wins on layer compositing and pixel-precise corrections.
- vs Reactor face swap: Different tool — Reactor swaps faces, Kontext edits scenes/outfits/ages around the face. Use both together.
Cost Economics
- Kontext Dev (self-hosted): Free. ~30s per image on RTX 4090. ~2 min on RTX 4070.
- Kontext Pro on FAL/Replicate: ~$0.05/image. 100 edits = $5.
- Kontext Pro on BFL Playground: ~$0.08/image. Higher quality, slower queue during peak hours.
- Solo creator volume (~500 edits/month): ~$25-$40/month on Pro, $0 on Dev (electricity only).
- Production agency volume (~5K edits/month): $250-$400/month on Pro.
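The monthly figures above are simple multiplication. A sketch for comparing providers at your own volume (per-image prices are this article's estimates, not quoted rates):

```python
def monthly_cost(edits_per_month: int, price_per_image: float) -> float:
    """Kontext Pro spend at a given volume; Dev self-hosting is $0 plus electricity."""
    return round(edits_per_month * price_per_image, 2)


# Solo creator at ~500 edits/month:
print(monthly_cost(500, 0.05))   # FAL/Replicate at ~$0.05/image → 25.0
print(monthly_cost(500, 0.08))   # BFL Playground at ~$0.08/image → 40.0

# Agency at ~5,000 edits/month:
print(monthly_cost(5000, 0.05))  # → 250.0
```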
Master the Full AI Influencer Production Stack
Kontext is one node. The full pipeline is character creation, hero generation, Kontext variations, video animation, and post-processing. That's what we teach in AI Influencers — the system for scaling synthetic creators from 0 to $5K-$50K/month.
AI Influencers: The Production Stack
Flux + Kontext + Kling + ComfyUI — the full pipeline for synthetic creators earning $5K-$50K/month.
Get AI Influencers →