Prompt structure that works on Midjourney V7, every parameter explained (--ar, --sref, --cref, --style, --weird, --chaos), the new style reference and character reference workflows, and 50+ tested prompt templates by genre.
Quick Answer
Optimal Midjourney V7 prompt structure: [subject] + [environment] + [composition] + [lighting] + [style] + [parameters]. Use --sref for style consistency, --cref for character consistency, --style raw to disable beautification, --ar 9:16 for vertical. Be specific about composition and lighting — vague prompts produce vague images.
The 6-Element Prompt Structure
- Subject: who or what is in the image. “Asian woman in red coat”
- Environment: setting and context. “cyberpunk Tokyo street at night”
- Composition: framing, angle, distance. “medium shot from behind”
- Lighting: light source and quality. “neon backlight, fog”
- Style: visual genre or reference. “photorealistic editorial, kodak portra 400”
- Parameters: aspect ratio, style mode, version. “--ar 9:16 --style raw --v 7”
Example full prompt: Asian woman in red trench coat, cyberpunk Tokyo street at night, medium shot from behind, neon backlight with fog, photorealistic editorial, kodak portra 400 grain, --ar 9:16 --style raw --v 7
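The six-element order above can be sketched as a tiny prompt builder. This is an illustrative helper, not an official Midjourney tool; the function and parameter names are my own:

```python
def build_prompt(subject, environment, composition, lighting, style, parameters):
    """Assemble a Midjourney prompt in the 6-element order: descriptive
    elements joined by commas, parameters appended at the end."""
    descriptive = ", ".join([subject, environment, composition, lighting, style])
    return f"{descriptive} {parameters}"

prompt = build_prompt(
    subject="Asian woman in red trench coat",
    environment="cyberpunk Tokyo street at night",
    composition="medium shot from behind",
    lighting="neon backlight with fog",
    style="photorealistic editorial, kodak portra 400 grain",
    parameters="--ar 9:16 --style raw --v 7",
)
print(prompt)
```

Keeping the parameters in a separate slot makes it easy to swap aspect ratios or style modes without touching the descriptive text.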
Every Midjourney Parameter Explained (V7)
- --ar [ratio]: aspect ratio. Common: --ar 1:1 (square), --ar 9:16 (vertical), --ar 16:9 (cinematic), --ar 4:5 (Instagram).
- --v [version]: model version. --v 7 is the current default; use --niji 6 for anime.
- --style raw: disables Midjourney's default beautification. Critical for photorealistic results.
- --style 4a / 4b / 4c: legacy V4 style variations. Mostly deprecated.
- --s [0-1000]: stylize. Default 100. Higher = more artistic, lower = more literal to prompt.
- --c [0-100]: chaos. Default 0. Higher = more variation between the 4 generated images.
- --weird [0-3000]: unconventional aesthetics. Use sparingly.
- --seed [number]: fixes the random seed so the same prompt reproduces the same image.
- --no [thing]: negative prompt, e.g. --no text, watermark, blurry
- --sref [url]: style reference image. Mimics the visual style of the URL.
- --sw [0-1000]: style weight. Controls --sref strength. Default 100.
- --cref [url]: character reference image. Locks character identity.
- --cw [0-100]: character weight. --cw 100 = full identity, --cw 0 = face only.
- --p [profile]: personalization. Pulls your trained personalization model.
- --tile: generate seamless repeating tile.
- --turbo: faster generation, 2x cost.
- --relax: slower generation, free on standard plan.
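The numeric parameters above each have a fixed range, which makes them easy to validate before sending a prompt. A minimal sketch, using the ranges listed above; the helper itself is hypothetical, not part of any Midjourney SDK:

```python
# Valid ranges taken from the parameter list above.
PARAM_RANGES = {
    "s": (0, 1000),      # stylize, default 100
    "c": (0, 100),       # chaos, default 0
    "weird": (0, 3000),  # unconventional aesthetics
    "sw": (0, 1000),     # style weight, default 100
    "cw": (0, 100),      # character weight
}

def format_params(**params):
    """Render numeric parameters as --flag value pairs, rejecting
    out-of-range values before they reach Midjourney."""
    parts = []
    for name, value in params.items():
        lo, hi = PARAM_RANGES[name]
        if not lo <= value <= hi:
            raise ValueError(f"--{name} must be in [{lo}, {hi}], got {value}")
        parts.append(f"--{name} {value}")
    return " ".join(parts)

print(format_params(s=250, c=20))  # --s 250 --c 20
```

Catching an out-of-range --cw or --weird locally is cheaper than burning a generation on a prompt Midjourney will reject or misread.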
Style Reference (sref) — The 2026 Game-Changer
--sref is the biggest workflow change in Midjourney since V5. Pass an image URL and Midjourney mimics that image's lighting, color palette, mood, and compositional style without copying its content.
/imagine portrait of a woman in a forest --sref https://your.image.url/style.jpg --sw 100 --ar 9:16 --v 7
--sw ranges from 0 (no style influence) to 1000 (maximum); default 100. For brand work, build a sref library: 5-10 style anchor images you reuse to keep an account's aesthetic consistent.
Sref random: use --sref random to get a random aesthetic from Midjourney's library; useful for creative exploration.
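The sref-library idea can be sketched as a simple named lookup. The anchor names and URLs below are placeholders, not real assets:

```python
# A brand "sref library": named style anchors reused across an account
# to keep its aesthetic consistent. URLs are placeholders.
SREF_LIBRARY = {
    "neon_noir": "https://example.com/anchors/neon_noir.jpg",
    "soft_editorial": "https://example.com/anchors/soft_editorial.jpg",
}

def with_style(prompt, anchor, sw=100):
    """Append a style reference from the library to a base prompt."""
    return f"{prompt} --sref {SREF_LIBRARY[anchor]} --sw {sw}"

styled = with_style("portrait of a woman in a forest --ar 9:16 --v 7", "neon_noir")
print(styled)
```

Naming anchors instead of pasting raw URLs keeps prompts readable and makes it obvious when two posts were meant to share one aesthetic.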
Character Reference (cref) — For Consistent Characters
--cref locks a character's identity across generations. Critical for AI influencer pipelines.
/imagine the woman walking through Paris --cref https://your.image.url/face.jpg --cw 100 --ar 9:16 --v 7
--cw ranges from 0 to 100: --cw 100 preserves face, hair, and clothing; --cw 50 preserves face and hair; --cw 0 preserves face only. Lower weight = more variation in body and outfit.
For AI influencers, the workflow is: generate one canonical hero shot of the character → save its URL → use --cref with --cw 70-100 across all subsequent generations to maintain identity.
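That hero-shot workflow can be sketched as a tiny helper: one canonical character URL, appended to every later scene prompt. The URL, function name, and default weight here are my own illustrative choices, not a Midjourney feature:

```python
# One canonical character image; every later prompt references it via --cref
# so the character's identity stays consistent. Placeholder URL.
HERO_URL = "https://example.com/character/hero.jpg"

def character_prompt(scene, cw=85, extra="--ar 9:16 --style raw --v 7"):
    """Build a prompt that locks character identity with --cref.
    cw in the 70-100 range keeps identity while allowing outfit variety."""
    return f"{scene} --cref {HERO_URL} --cw {cw} {extra}"

print(character_prompt("the woman walking through Paris"))
```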
50+ Prompt Templates by Genre
Editorial Portrait
[subject], [setting], medium shot, golden hour soft light, editorial fashion photography, kodak portra 400, --ar 4:5 --style raw --v 7
Cinematic Wide
[subject] in [environment], wide cinematic shot, anamorphic lens, dramatic backlight, ARRI Alexa, color graded teal and orange, --ar 21:9 --style raw --v 7
Product Photography
[product] on [surface], studio softbox lighting, white seamless backdrop, hyper-detailed, commercial product shot, --ar 1:1 --style raw --v 7
Anime / Niji
[character], [scene], cinematic anime composition, ghibli-inspired color palette, soft watercolor texture, --ar 16:9 --niji 6
Cyberpunk Atmospheric
[subject], rainy neon Tokyo backstreet, low angle, holographic billboards, fog, blade runner aesthetic, --ar 9:16 --style raw --v 7
Minimalist Brand Hero
[subject] on [color] background, minimalist composition, single soft light source, brand campaign photography, negative space, --ar 16:9 --style raw --v 7
Midjourney vs Flux: When to Use Which
Use Midjourney when: you want polished aesthetics with minimal prompt work, you need anime/niji styles, you don't need API access, you're creating brand mood content.
Use Flux when: you need character consistency via LoRA training, you need API access, you're building a production pipeline, you need text-in-image. See our Midjourney vs Flux comparison.
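The bracketed slots in the genre templates above ([subject], [environment], and so on) can be filled mechanically. A minimal sketch; the fill_template helper is my own, not a Midjourney feature:

```python
import re

def fill_template(template, **slots):
    """Replace each [slot] in a template with its value; raise if any
    slot is left unfilled so broken prompts fail early."""
    out = template
    for name, value in slots.items():
        out = out.replace(f"[{name}]", value)
    leftover = re.findall(r"\[([a-z]+)\]", out)
    if leftover:
        raise ValueError(f"unfilled slots: {leftover}")
    return out

# The Cinematic Wide template from above.
cinematic = ("[subject] in [environment], wide cinematic shot, anamorphic lens, "
             "dramatic backlight, ARRI Alexa, color graded teal and orange, "
             "--ar 21:9 --style raw --v 7")
filled = fill_template(cinematic, subject="lone astronaut",
                       environment="abandoned subway")
print(filled)
```

Failing on unfilled slots matters in batch pipelines, where a stray literal "[subject]" would otherwise be sent to the model verbatim.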
Build a Synthetic Creator That Earns
Prompts are the entry point. The full system is character creation, content pipeline, audience growth, monetization. Our AI Influencers course shows you the entire stack — Midjourney, Flux, Kling, ComfyUI — and how to scale to $5K-$50K/month.
AI Influencers: Build a Synthetic Creator
Midjourney, Flux, Kling — the full toolchain plus the monetization playbook scaling AI creators to $50K/month.
Get AI Influencers →