Advanced ComfyUI Workflows for Professional AI Art
Master advanced ComfyUI workflows to create professional-quality AI art. This comprehensive guide covers complex node setups, multi-model workflows, ControlNet integration, and optimization techniques for production-level results.
What Are Advanced ComfyUI Workflows?
Advanced ComfyUI workflows go beyond basic text-to-image generation. They involve multiple models, complex node chains, conditioning techniques, and optimization strategies that professional AI artists use to achieve consistent, high-quality results. This guide will teach you professional-level techniques used in production environments.
What You'll Master:
- ✓ Multi-Model Workflows: Combine base models, refiner models, and upscalers
- ✓ LoRA Integration: Layer multiple LoRAs for precise style control
- ✓ ControlNet Mastery: Achieve pixel-perfect composition control
- ✓ Advanced Sampling: Custom samplers, schedulers, and denoise techniques
- ✓ Batch Processing: Generate hundreds of variations efficiently
Advanced SDXL Workflow: Base + Refiner
SDXL (Stable Diffusion XL) uses a two-stage process for maximum quality: a base model for initial generation and a refiner model for detail enhancement. Here's how to build a professional SDXL workflow.
Node Structure:
Load Checkpoint (SDXL Base 1.0)
↓
CLIP Text Encode (Positive) → "masterpiece, best quality, detailed..."
↓
CLIP Text Encode (Negative) → "ugly, blurry, low quality..."
↓
Empty Latent Image (1024x1024)
↓
KSampler (Base)
- Steps: 20
- CFG: 8.0
- Sampler: DPM++ 2M SDE (dpmpp_2m_sde)
- Scheduler: Karras
- Denoise: 1.0
↓
Load Checkpoint (SDXL Refiner 1.0)
↓
KSampler (Refiner)
- Latent: output from the base KSampler
- Steps: 10
- CFG: 8.0
- Sampler: DPM++ 2M SDE (dpmpp_2m_sde)
- Scheduler: Karras
- Denoise: 0.25 ← KEY: Low denoise for refinement only
↓
VAE Decode
↓
Save Image
Key Settings Explained:
- Base Steps (20): Does the heavy lifting of composition and structure
- Refiner Denoise (0.2-0.3): Only refines details, doesn't regenerate
- 1024x1024 Resolution: SDXL is trained at 1024px, performs best at native res
- CFG 8.0: Slightly higher than typical SD 1.5 values; SDXL tolerates stronger guidance without artifacts
Advanced Tip - Split Conditioning:
For even better results, use different prompts for base and refiner:
- Base Prompt: Focus on composition, subject, scene ("portrait of woman, outdoor garden")
- Refiner Prompt: Focus on quality and details ("8k uhd, sharp focus, professional photography")
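If you drive ComfyUI programmatically, this same graph can be expressed in the API (prompt) JSON format that ComfyUI's HTTP endpoint accepts. Here's a minimal Python sketch of just the two sampler nodes; the node IDs and the ["id", slot] references are illustrative placeholders, not a complete workflow:

```python
# Sketch of the base + refiner KSampler nodes in ComfyUI's API (prompt) format.
# Node IDs and the ["id", slot] references are placeholders for your own graph.
base_and_refiner = {
    "10": {  # base pass: full denoise builds composition and structure
        "class_type": "KSampler",
        "inputs": {
            "model": ["4", 0],         # SDXL Base checkpoint (MODEL output)
            "positive": ["6", 0],      # base prompt: composition, subject, scene
            "negative": ["7", 0],
            "latent_image": ["5", 0],  # Empty Latent Image, 1024x1024
            "seed": 42,
            "steps": 20,
            "cfg": 8.0,
            "sampler_name": "dpmpp_2m_sde",
            "scheduler": "karras",
            "denoise": 1.0,
        },
    },
    "11": {  # refiner pass: low denoise polishes details only
        "class_type": "KSampler",
        "inputs": {
            "model": ["8", 0],          # SDXL Refiner checkpoint
            "positive": ["12", 0],      # refiner prompt: quality/detail terms
            "negative": ["7", 0],
            "latent_image": ["10", 0],  # latent handed over from the base pass
            "seed": 42,
            "steps": 10,
            "cfg": 8.0,
            "sampler_name": "dpmpp_2m_sde",
            "scheduler": "karras",
            "denoise": 0.25,
        },
    },
}
```

The only wiring that really matters for the refiner trick: its latent_image comes from the base sampler's output, and its denoise stays low.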
LoRA Stacking: Advanced Style Control
LoRAs (Low-Rank Adaptations) are small model modifiers that add specific styles, characters, or concepts. Advanced users stack multiple LoRAs to achieve precise creative control.
Multi-LoRA Workflow:
Load Checkpoint (Base Model)
↓
Load LoRA #1 (Style LoRA)
- Strength: 0.8
- Example: "Anime Style LoRA"
↓
Load LoRA #2 (Lighting LoRA)
- Strength: 0.6
- Example: "Cinematic Lighting"
↓
Load LoRA #3 (Detail LoRA)
- Strength: 0.4
- Example: "Detail Tweaker"
↓
CLIP Text Encode → Regular workflow continues...
LoRA Strength Guidelines:
- 1.0 (Full Strength): Maximum effect, use for primary style LoRAs
- 0.6-0.8 (Strong): Significant influence, good for character/style LoRAs
- 0.3-0.5 (Moderate): Subtle enhancement, ideal for lighting/detail LoRAs
- 0.1-0.2 (Weak): Very subtle, for fine-tuning specific aspects
Recommended LoRA Combinations:
- Realistic Vision (base model) + Detail Tweaker LoRA (0.5) + Eye Detail LoRA (0.3)
- Anime model + Style LoRA (0.8) + Line Art LoRA (0.4) + Color Grading LoRA (0.3)
- DreamShaper + Fantasy LoRA (0.7) + Lighting LoRA (0.5) + Detail LoRA (0.3)
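In API format, LoRA stacking is just a chain of LoraLoader nodes, each consuming the previous loader's MODEL and CLIP outputs. A minimal sketch; node IDs and the .safetensors file names are illustrative:

```python
# Three chained LoraLoader nodes; each takes the previous node's MODEL/CLIP.
# Node IDs and lora file names are illustrative placeholders.
lora_stack = {
    "20": {
        "class_type": "LoraLoader",
        "inputs": {
            "model": ["4", 0],   # from Load Checkpoint (MODEL)
            "clip": ["4", 1],    # from Load Checkpoint (CLIP)
            "lora_name": "anime_style.safetensors",
            "strength_model": 0.8,
            "strength_clip": 0.8,
        },
    },
    "21": {
        "class_type": "LoraLoader",
        "inputs": {
            "model": ["20", 0],  # chained from LoRA #1
            "clip": ["20", 1],
            "lora_name": "cinematic_lighting.safetensors",
            "strength_model": 0.6,
            "strength_clip": 0.6,
        },
    },
    "22": {
        "class_type": "LoraLoader",
        "inputs": {
            "model": ["21", 0],  # chained from LoRA #2
            "clip": ["21", 1],
            "lora_name": "detail_tweaker.safetensors",
            "strength_model": 0.4,
            "strength_clip": 0.4,
        },
    },
}
# CLIP Text Encode and KSampler then read MODEL/CLIP from node "22".
```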
ControlNet: Precision Composition Control
ControlNet allows you to control image composition using reference images. It's the most powerful tool for consistent character poses, architectural layouts, and specific compositions.
ControlNet Types:
- Canny Edge: Detects edges, preserves composition
  Use for: Maintaining structure from reference images
- OpenPose: Detects human poses
  Use for: Consistent character poses across generations
- Depth: Preserves depth information
  Use for: Architectural renders, 3D-consistent scenes
- Scribble: Follows rough sketches
  Use for: Quick concept art from simple drawings
- Lineart: Follows line drawings
  Use for: Coloring manga/comic line art
ControlNet OpenPose Workflow:
Load Image (Reference Photo)
↓
ControlNet Preprocessor (OpenPose)
- Extracts skeleton/pose from reference
↓
Load Checkpoint
↓
Apply ControlNet
- Control Image: OpenPose output
- Strength: 0.8-1.0
- Start Percent: 0.0
- End Percent: 1.0
↓
CLIP Text Encode (Your custom prompt)
↓
KSampler → Generate with controlled pose
↓
VAE Decode → Save Image
ControlNet Strength Guide:
- 1.0 (Maximum): Exactly follows reference, use for pose/architecture
- 0.7-0.9 (Strong): Strong influence with some creative freedom
- 0.4-0.6 (Moderate): Balanced between reference and prompt
- 0.2-0.3 (Subtle): Gentle guidance, mostly follows prompt
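For reference, here's the apply step as a Python sketch in API format, using the ControlNetApplyAdvanced node, which exposes the strength and start/end percent settings above. Node IDs are illustrative:

```python
# ControlNet load + apply in ComfyUI's API format. Node IDs are illustrative.
controlnet_apply = {
    "30": {
        "class_type": "ControlNetLoader",
        "inputs": {"control_net_name": "control_openpose.safetensors"},
    },
    "31": {
        "class_type": "ControlNetApplyAdvanced",
        "inputs": {
            "positive": ["6", 0],    # from CLIP Text Encode (positive)
            "negative": ["7", 0],    # from CLIP Text Encode (negative)
            "control_net": ["30", 0],
            "image": ["29", 0],      # OpenPose preprocessor output
            "strength": 0.9,
            "start_percent": 0.0,
            "end_percent": 1.0,
        },
    },
    # Stacking: feed this node's outputs ["31", 0] and ["31", 1] into the
    # next ControlNetApplyAdvanced's positive/negative inputs (see below).
}
```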
Multi-ControlNet Workflow:
Stack multiple ControlNets for ultimate precision:
Apply ControlNet (OpenPose) - Strength: 1.0
↓
Apply ControlNet (Depth) - Strength: 0.6
↓
Apply ControlNet (Canny) - Strength: 0.4
↓
Continue to KSampler...
This combination gives you pose control + depth consistency + edge preservation.
Advanced Img2Img: Image Transformation
Img2Img transforms existing images while maintaining composition. Master denoise strength for different effects.
Img2Img Workflow:
Load Image (Your source image)
↓
VAE Encode (Convert to latent space)
↓
Load Checkpoint
↓
CLIP Text Encode (Positive/Negative)
↓
KSampler
- Latent Image: From VAE Encode (not Empty Latent!)
- Steps: 20-30
- CFG: 7-9
- Denoise: 0.4-0.7 (KEY PARAMETER!)
↓
VAE Decode → Save Image
Denoise Strength Guide:
- 0.2-0.3 (Subtle Enhancement): Minor refinements, keeps ~95% of the original composition
  Use for: Upscaling, minor color correction
- 0.4-0.5 (Moderate Changes): Noticeable style change, keeps ~70% of the composition
  Use for: Style transfer, adding details
- 0.6-0.7 (Heavy Transformation): Major changes, keeps ~40% of the general composition
  Use for: Creative reinterpretation, genre changes
- 0.8-1.0 (Almost New Image): Barely uses the reference, mostly follows the prompt
  Use for: Complete reimagining
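Structurally, img2img differs from text-to-image in exactly two places: the latent comes from VAEEncode instead of Empty Latent Image, and denoise drops below 1.0. A minimal API-format sketch with illustrative node IDs:

```python
# Img2img core: encode the source image, then sample with partial denoise.
# Node IDs are illustrative placeholders.
img2img = {
    "40": {
        "class_type": "VAEEncode",
        "inputs": {
            "pixels": ["39", 0],  # from Load Image
            "vae": ["4", 2],      # VAE output of Load Checkpoint
        },
    },
    "41": {
        "class_type": "KSampler",
        "inputs": {
            "model": ["4", 0],
            "positive": ["6", 0],
            "negative": ["7", 0],
            "latent_image": ["40", 0],  # encoded image, NOT an empty latent
            "seed": 42,
            "steps": 25,
            "cfg": 8.0,
            "sampler_name": "dpmpp_2m_sde",
            "scheduler": "karras",
            "denoise": 0.5,  # pick from the strength guide above
        },
    },
}
```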
Professional Upscaling Workflow
True professional upscaling uses a multi-stage process combining AI upscalers with detail enhancement.
2-Stage Upscaling Workflow:
STAGE 1: AI Upscale (2x)
Load Image (512x512)
↓
Upscale Image (using Model)
- Model: 4x-UltraSharp or RealESRGAN
↓
Upscale Image By
- Method: nearest-exact
- Scale by: 0.5 (a 4x model outputs 4x; scaling by 0.5 nets 2x)
↓
Result: 1024x1024 upscaled image
STAGE 2: Detail Enhancement
VAE Encode (upscaled image)
↓
KSampler (Hi-Res Fix)
- Denoise: 0.3-0.4
- Steps: 15-20
- CFG: 7.0
- Use same prompt as original
↓
VAE Decode → Save Image (1024x1024 enhanced)
Upscale Model Recommendations:
- 4x-UltraSharp: Best for general purpose, anime and realistic
- RealESRGAN x4plus: Excellent for photorealistic images
- 4x-AnimeSharp: Specialized for anime/manga artwork
- LDSR: Slower but highest quality for faces
4x Upscaling Strategy:
For maximum quality 4x upscaling (512px → 2048px):
- Generate at 512x512 with maximum quality settings
- First upscale: 512 → 1024 using 4x-UltraSharp
- Hi-res fix: Denoise 0.35, 20 steps
- Second upscale: 1024 → 2048 using same model
- Final hi-res fix: Denoise 0.25, 15 steps
This produces professional print-quality 2048x2048 images
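Here's stage 1 as an API-format sketch. Since a 4x model quadruples resolution on its own, an ImageScaleBy node at 0.5 brings the result back to a net 2x (node IDs and the exact model file name are illustrative):

```python
# Stage 1: model upscale (4x) then scale down to a net 2x.
# Node IDs and the model file name are illustrative placeholders.
upscale_stage = {
    "50": {
        "class_type": "UpscaleModelLoader",
        "inputs": {"model_name": "4x-UltraSharp.pth"},
    },
    "51": {
        "class_type": "ImageUpscaleWithModel",
        "inputs": {
            "upscale_model": ["50", 0],
            "image": ["49", 0],  # 512x512 source -> 2048x2048 here
        },
    },
    "52": {
        "class_type": "ImageScaleBy",
        "inputs": {
            "image": ["51", 0],
            "upscale_method": "nearest-exact",
            "scale_by": 0.5,  # 2048 -> 1024: a net 2x upscale
        },
    },
    # Stage 2 VAE-encodes ["52", 0] and runs a low-denoise KSampler pass.
}
```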
Batch Processing for Production
Professional workflows require generating dozens or hundreds of variations. Here's how to set up efficient batch processing.
Batch Processing Techniques:
- Batch Size in Empty Latent:
  Set batch_size to 4-8 (if VRAM allows)
  Generates multiple images simultaneously
- Seed Iteration:
  Use a "Seed Increment" node or manual seed changes
  Creates variations with controlled randomness (see the API sketch after this list)
- Prompt Scheduling:
  Use a "Prompt from File" node for batch prompts
  Automates different prompts in sequence
- Queue Multiple Prompts:
  Press "Queue Prompt" multiple times before generation
  Builds a generation queue for unattended operation
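Seed iteration and queueing are easy to automate over ComfyUI's HTTP API, which listens on port 8188 by default and accepts API-format workflow JSON at POST /prompt. A minimal sketch, assuming the workflow was exported with "Save (API Format)" and that node "3" happens to be the KSampler in that particular export:

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default ComfyUI address

def queue_prompt(prompt: dict) -> None:
    """POST one API-format workflow to the ComfyUI queue."""
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(
        COMFY_URL, data=data, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

# Load a workflow exported via "Save (API Format)" (file name illustrative).
with open("character_workflow_api.json") as f:
    workflow = json.load(f)

# Seed iteration: queue 8 runs, incrementing the KSampler seed each time.
# "3" is assumed to be the KSampler's node ID in this export.
for i in range(8):
    workflow["3"]["inputs"]["seed"] = 12345 + i
    queue_prompt(workflow)
```

Each POST adds one job to the queue, so the whole batch runs unattended once the loop finishes.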
Production Workflow Example:
Empty Latent Image
- Width: 768
- Height: 768
- Batch Size: 4 ← Generate 4 images at once
↓
KSampler
- Seed: any value, with control_after_generate set to "randomize"
- Or use specific seeds: 12345, 12346, 12347, 12348
↓
Save Image
- Filename Prefix: character_variations_%batch_index%
Workflow Optimization Tips
Memory Optimization:
- Use "Model Sampling Discrete" node for efficient VRAM usage
- Enable "Tiled VAE" for processing large images in chunks
- Use FP16 (half precision) models when possible
- Unload models when not in use with "Model Management" nodes
Speed Optimization:
- Use the DPM++ 2M SDE sampler with the Karras scheduler (fast + high quality)
- Reduce steps to 20-25 for most workflows (diminishing returns after that)
- Enable xformers: pip install xformers
- Use lower resolution for testing, full res for final render
Quality Optimization:
- Always use negative prompts (quality matters!)
- Match resolution to model training (512 for SD1.5, 1024 for SDXL)
- Use separate VAE if model doesn't have baked-in VAE
- Test different samplers - results vary by model
Saving and Sharing Workflows
ComfyUI workflows are saved as JSON files that can be shared, versioned, and reused.
How to Save Workflows:
- Click "Save" button in top menu
- Choose location and name (e.g., "sdxl_refiner_workflow.json")
- Workflow saves complete node structure and settings
- To load: Drag JSON file onto ComfyUI canvas
Workflow Organization Tips:
- Create a "Workflows" folder in your ComfyUI directory
- Use descriptive names: "portrait_sdxl_refiner.json"
- Add comments using "Note" nodes in the workflow
- Version workflows: "character_v1.json", "character_v2.json"
- Share on GitHub or Civitai for the community
Master Advanced Workflows Faster
Get access to 50+ pre-built professional workflows, video tutorials, and step-by-step breakdowns of complex techniques in our comprehensive ComfyUI course.
Explore ComfyUI Mastery Course
Includes downloadable workflow library and custom node tutorials.