How to Use Midjourney in 2026 (Beginner-to-Pro Practical Guide)

A practical guide to Midjourney in 2026 — the alpha web app, parameters, style references, omni-reference, and the workflows real creators use to ship images that look intentional instead of generated.

TL;DR
  • The web app at midjourney.com has fully replaced Discord as the recommended workflow for almost every user.
  • Plans run from $10/month Basic to $120/month Mega, with an annual discount that drops Standard to roughly $24/month.
  • V7 is the current default model; omni-reference is the biggest practical upgrade since v6 and dramatically improves character and product consistency.
  • Parameters (--ar, --s, --c, --w, --p, --r, --niji) still matter — most "bad Midjourney output" is a parameter problem, not a prompt problem.
  • Prompt engineering is no longer optional: short prompts give you generic stock-AI looks, structured prompts give you a portfolio.

Midjourney finally left Discord — and that changes everything

For years, the joke about Midjourney was that the world's best image model was trapped inside a chat app designed for gamers. You typed /imagine, watched a public channel scroll your prompt past three thousand strangers, and waited for four tiny thumbnails to appear in a feed alongside someone else's anime cat. It worked, but it never felt like a real product.

That era is over. By 2026, Midjourney's center of gravity has fully moved to midjourney.com, the alpha web app that opened to all subscribers and has matured into the default surface for everything: prompting, editing, organizing, retexturing, and now video. Discord still exists as a legacy interface, but the team has stopped prioritizing it. New features — omni-reference, the in-browser editor, moodboards, Draft Mode, the V1 video model — land on the web first, and some never reach Discord at all.

If you're picking up Midjourney for the first time in 2026, or coming back after a year away, this guide is the version you actually need: one workflow, one URL, one set of parameters that still matter, and the handful of patterns that separate output that looks like AI from output that looks like a finished image.

The 2026 context: V7, the web app, and omni-reference

Three shifts define how Midjourney is used today. First, V7 is the default model. It's the first version trained from scratch since v4, with noticeably better prompt adherence, hand and text rendering, and a more "intentional" aesthetic — output looks like a deliberate photograph or illustration rather than an averaged composite. Second, the web app has full feature parity and then some: the editor, inpaint, retexturing, moodboards, and the public/private feed all live there. Third, omni-reference (a dramatic upgrade over v6's character reference) lets you pin a specific subject — a person, a product, a stylized character — and have it appear consistently across an entire campaign or storyboard.

Everything below assumes you're in the web app. If you're still in Discord, switch. You'll get features faster, you'll keep your work organized in folders instead of scrolling chat history, and your prompts won't be public by default.

Setup: getting from zero to your first image

Sign in at midjourney.com with Discord, Google, or email. New accounts go straight to the Create page — there's no Discord server to join, no /imagine slash command to memorize, no public channel where strangers can see what you're working on. You pick a plan, type a prompt in the bar at the top, and your generations land in a private grid. Folders, moodboards, and the editor are one click away.

If you used Midjourney in the Discord era, two muscle-memory habits to unlearn: there's no /imagine prefix anymore (just type the prompt), and parameters can be set as UI toggles instead of typed flags. The flags still work and are still faster once you're fluent, but beginners can ignore the syntax entirely on day one.

Pricing: which plan actually fits your use case

Midjourney has four tiers. The differences that matter day-to-day are Fast hours (how many generations you get without queueing), Relax mode (unlimited but slower, available from Standard up), and Stealth mode (private generations, Pro and Mega only).

Plan      Monthly  Fast hours              Relax mode  Stealth (private)  Best for
Basic     $10      ~3.3 hrs (~200 images)  No          No                 Trying it out, occasional personal use
Standard  $30      ~15 hrs (~900 images)   Unlimited   No                 Most freelancers, hobbyists, content creators
Pro       $60      ~30 hrs                 Unlimited   Yes                Client work, commercial projects, NDAs
Mega      $120     ~60 hrs                 Unlimited   Yes                Studios, agencies, video-heavy workflows

Annual billing knocks roughly 20% off all tiers; Standard drops to about $24/month if you commit for a year. Any paid plan grants commercial rights, but companies above $1M in annual revenue must be on Pro or Mega, and Stealth is the only way to keep client work off the public feed.

Prompt anatomy: the structure that consistently works

Almost every "Midjourney isn't listening to me" complaint traces back to the same problem: a prompt that's either three vague words or a 200-word run-on sentence stuffed with contradictions. V7 rewards structured prompts — order matters, specificity matters, and the model treats earlier tokens as more important than later ones.

The five-part prompt formula

  1. Subject — who or what is in the frame, with concrete attributes (age, posture, expression, clothing, color).
  2. Action / scene — what's happening, where, what time of day, what weather.
  3. Style anchor — medium and reference (35mm film, oil painting, editorial photography, isometric 3D render, ukiyo-e woodblock).
  4. Lighting and mood — golden hour, overcast, neon backlight, single key light, cinematic, melancholic, hopeful.
  5. Parameters — aspect ratio, stylize, chaos, version (the technical flags at the end of the prompt).

A working prompt looks like: "A weathered fisherman in his sixties mending a blue net on a wooden dock at dawn, fog rolling off the harbor, shot on 35mm Kodak Portra 400, soft directional light from the left, melancholic and quiet, --ar 3:2 --s 200 --v 7". Notice every clause adds a constraint. The model has nowhere to drift into generic AI-stock territory.

Parameters: the seven flags that actually matter

Midjourney has dozens of parameters; in practice you'll use seven:
  • --ar sets aspect ratio: --ar 16:9 for cinematic, --ar 3:2 for editorial, --ar 9:16 for vertical social.
  • --s (stylize) controls how much creative liberty Midjourney takes: --s 0 gives you literal interpretation, --s 250 is the sweet spot for most work, --s 750+ goes painterly and abstract.
  • --c (chaos) increases variation between the four grid images; useful for exploration, terrible for iteration.
  • --w (weird) pushes the output toward unusual aesthetics; try --w 250 when everything looks too clean.
  • --p (personalize) activates your trained taste profile after you've ranked enough image pairs.
  • --r (repeat) runs the same prompt N times in one click; handy for batch exploration.
  • --niji 6 swaps in the anime-tuned model, which is still a separate variant rather than baked into V7.

Defaults to remember: V7 lives at --v 7 and is automatic in the web app. Stylize defaults to 100; for portraits and product shots, push it to 200-300. For technical diagrams or logo work, drop it to 50 or below.
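
To make the exploration-versus-iteration distinction concrete, here is a sketch of the same prompt dialed two ways (the flag values are illustrative starting points, not rules):
  Exploring: lighthouse on a basalt cliff at dusk --ar 16:9 --s 400 --c 40 --r 4
  Iterating: lighthouse on a basalt cliff at dusk, low fog, warm window light --ar 16:9 --s 250 --c 0
The first run casts a wide net across 16 images; the second locks the look and removes variation so each reroll stays comparable to the last.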

Style references and image references

Beyond text, Midjourney accepts visual inputs. An image reference (drag an image into the prompt bar or use --iw to set its weight) tells the model "make something compositionally like this." A style reference (--sref followed by an image URL or a numeric style code) borrows aesthetic — color palette, brushwork, grain, mood — without copying composition. Style references are how you keep an entire campaign visually coherent: lock one --sref code, vary subjects, get a consistent series.

You can stack multiple style references and weight them with ::. Numeric style codes like --sref 1923847562 are reusable across sessions — save the ones that work for your brand into a moodboard so the next person on your team can hit the same look without guessing.
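
As a sketch, a stacked style reference looks like this (the saved code is the one from above, the URL is a placeholder, and the :: weights say "lean twice as hard on the saved code"):
  ceramic mug on rumpled linen, soft morning light --ar 4:5 --s 200 --sref 1923847562::2 https://example.com/film-grain.jpg::1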

Omni-reference: character and product consistency, finally

Omni-reference is the feature that quietly changed what Midjourney is good for. In v6, "character reference" was a hint — the model would try to keep your subject's hair color and rough face shape, but it leaked. Omni-reference (V7) holds identity: a specific person, a specific bottle, a specific stuffed animal can appear in twenty different scenes, lighting setups, and styles while staying recognizably the same subject.

The workflow is simple. Drop a clean reference image into the prompt and tag it as omni-reference. Set the weight (--ow) — higher values lock identity harder, lower values let style win. For product photography and brand mascots, push --ow high. For editorial portraits where you want stylistic flexibility, keep it moderate. This is the feature that finally makes Midjourney viable for storyboards, comic panels, ad campaigns with recurring characters, and e-commerce shots of a single SKU in multiple environments.
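
In typed form the same workflow is a one-line sketch (the reference URL is a placeholder, --oref is assumed here as the flag behind the omni-reference toggle, and the --ow value is a rough starting point for product work, not an official recommendation):
  the same glass bottle on wet slate at golden hour, condensation beading --oref https://example.com/bottle-hero.png --ow 400 --ar 4:5 --v 7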

Common workflows

Photorealistic portraits

Anchor with a real camera and lens ("shot on Canon R5, 85mm f/1.4"), specify lighting source and direction, name the film stock or color profile, and use --s 200-300 --ar 3:2. Avoid the words "photorealistic" and "hyperrealistic" — they make output look like AI's idea of realism rather than an actual photograph. Add micro-details: pore texture, stray hair, slight asymmetry.
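
Assembled into one prompt, that advice looks something like this sketch:
  a ceramicist in her late forties at a studio window, clay dust on her apron, faint asymmetric smile, stray hairs catching the light, shot on Canon R5, 85mm f/1.4, Kodak Portra 400, soft window light from camera left --ar 3:2 --s 250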

Anime and illustration (Niji)

Switch to --niji 6. Reference specific traditions ("shojo manga", "Studio Ghibli backgrounds", "90s cel animation") rather than generic "anime." Niji handles cute, expressive, and stylized characters dramatically better than V7 in this domain. Stack a --sref from a style you've locked.
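
One way that comes together as a sketch:
  a shy library spirit reshelving glowing books at midnight, 90s cel animation, soft halation, muted teal and amber palette --niji 6 --ar 16:9 --s 250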

Logo and icon work

Drop stylize to --s 50 or below, use a square aspect ratio, and prompt for "flat vector logo on solid white background, two-color, geometric, negative space." Midjourney isn't a vector tool — bring outputs into Illustrator or Figma to clean and trace. Use it for ideation, not finals.
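
A sketch for ideation, using --no to suppress the effects that break vector tracing:
  flat vector logo of a paper crane, two-color, navy on solid white background, geometric, generous negative space --ar 1:1 --s 30 --no gradients, shadows, texture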

Editorial illustration

Name the illustrator's tradition or magazine aesthetic ("in the style of New Yorker editorial illustration", "Polish poster school", "Tom Haugomat"). Push --s 400-600 for painterly looseness. Use moodboards to lock palette across a series.
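
A sketch in that register:
  commuters sharing one umbrella on a rain-slick platform, Polish poster school, flat shapes, limited palette of rust and slate, visible print grain --ar 4:5 --s 500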

Character consistency across scenes

Generate a clean hero shot of your character first. Use that image as the omni-reference for every subsequent scene. Vary the prompt around setting, action, and lighting; keep --ow stable. This is the storyboard pipeline that finally works without ControlNet hacks.
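
A sketch of the two-step pipeline (the URL is a placeholder for your step-one hero shot, and --oref is assumed here as the typed form of the omni-reference control):
  Step 1, hero shot: courier girl with a red scarf and brass goggles, neutral standing pose, plain grey backdrop, soft studio light --ar 1:1 --s 200
  Step 2, every scene: the courier girl sprinting across a rope bridge in a storm --oref https://example.com/courier-hero.png --ow 400 --ar 16:9 --s 200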

Editor and inpaint: the second half of the workflow

The web app's editor is where good prompts become finished images. Inpaint (Vary Region) lets you mask a region and regenerate just that area — fix a hand, swap a background, change a hat — without rerolling the whole composition. Outpaint extends the canvas. Retexture keeps composition and structure but swaps materials and surface treatment. The editor also lets you upload external images and use them as the canvas, turning Midjourney into a powerful selective regeneration tool rather than a one-shot generator.

The practical workflow most pros land on: generate a strong base image, upscale it, then spend three to five inpaint passes fixing hands, eyes, fabric, and background detail. Treat the first generation as 70% of the way there, not the final.

A specific tactic that pays for itself: when inpainting a face or hand, mask generously around the area rather than tightly. Midjourney needs context to blend the regenerated region back into the surrounding lighting and skin tone. Tight masks produce visible seams; loose masks produce invisible fixes. Combine inpaint with a short, focused prompt for that region only — don't paste your full original prompt into a small mask, because the model will try to render the whole scene inside it.
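
A sketch of the difference, reusing the fisherman prompt from earlier:
  Whole-scene prompt, wrong inside a small mask: A weathered fisherman in his sixties mending a blue net on a wooden dock at dawn, fog rolling off the harbor...
  Region prompt for a masked hand, right: weathered hand gripping a net needle, rope calluses, soft dawn light from the left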

Draft Mode and personalization: the speed-quality dial

Two underused features turn Midjourney from a creative toy into a daily-driver tool. Draft Mode trades resolution for speed and GPU cost, letting you burn through 20-30 prompt variations in the time and budget a few full-quality grids would take. Use it for ideation: lock the composition and style you want, then re-run the winning prompt at full quality. The economics matter on Standard plans: Relax mode and Draft Mode stretch your quota almost indefinitely, but full-quality Fast bursts still drain it quickly.
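
If you prefer typed flags, the loop looks like this sketch (assuming --draft as the typed equivalent of the Draft Mode toggle):
  Ideation pass: night market alley in the rain, film noir lighting --ar 16:9 --draft --r 4
  Final pass: night market alley in the rain, single neon key light, film noir, wet cobblestones and steam --ar 16:9 --s 300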

Personalization (--p) is the feature most users skip and shouldn't. After ranking around 200 image pairs, Midjourney trains a private style profile that biases generations toward your taste — color preferences, composition habits, mood. Once activated, your default output stops looking like everyone else's Midjourney output. For brand work, train a separate personalization profile per project so the aesthetic stays scoped instead of bleeding across clients.

Moodboards: the team feature nobody talks about

Moodboards are private collections of images you've generated or uploaded that Midjourney can use as a combined style reference. Build one for "client X brand aesthetic" and reference it from any prompt — the moodboard's color, lighting, and texture vocabulary gets baked into every generation tagged with it. This is the closest Midjourney has come to "fine-tuning for taste" without actually training a model. For agencies running multiple brands in parallel, moodboards solve the cross-contamination problem that --sref codes alone don't.

Common mistakes that ruin your output

Avoid these patterns:
  • Negative-prompt thinking. Saying "no extra fingers, not blurry" often makes those things appear. Midjourney doesn't reliably parse negation. Use --no for genuinely unwanted concepts, and rely on positive description instead.
  • Stacking adjectives. "Beautiful stunning gorgeous breathtaking ultra-detailed cinematic masterpiece" is noise. The model averages them into mush.
  • Contradictory style anchors. "Oil painting, photorealistic, anime, 3D render" produces mud. Pick one medium per prompt.
  • Ignoring aspect ratio. Default 1:1 hides composition issues. Use --ar 3:2 or --ar 16:9 from the start so framing is real.
  • Skipping the editor. Treating Midjourney as one-shot is leaving 30% of the quality on the table. Inpaint is where pros separate from beginners.

FAQ

Do I still need Discord to use Midjourney?

No. The web app at midjourney.com is now the primary interface, and several 2026 features (the editor, omni-reference UX, moodboards, video) are web-first or web-only. Discord is maintained as a legacy option but is no longer the recommended workflow for any user.

Which plan should I start on?

Basic ($10) is fine for a week of testing. Most people quickly outgrow it because there's no Relax mode and Fast hours run out fast. Standard ($30) is the right starting point for hobbyists and freelancers — unlimited Relax generations alone justify the jump. Go Pro only when you need Stealth mode for client confidentiality.

Can I use Midjourney images commercially?

Yes, on paid plans, with Pro/Mega required if your company exceeds $1M annual revenue. Stealth mode (Pro+) keeps your generations off the public feed — essential for branded work, NDAs, and unreleased products. Always check Midjourney's current Terms of Service for the latest commercial language.

What's the difference between style reference and omni-reference?

Style reference (--sref) borrows aesthetic — palette, texture, mood — without copying subject matter. Omni-reference locks identity — a specific face, product, or character that needs to appear the same across multiple images. Use --sref for visual cohesion across a campaign; use omni-reference for character or product consistency.

Why do my Midjourney images look generic?

Almost always a prompt problem. Generic prompts ("beautiful woman in a forest") produce generic AI output because the model fills in the gaps with averaged training data. Specific prompts (lens, lighting direction, named aesthetic tradition, concrete subject details) force the model to make choices, and choices are what make an image feel intentional.

Is Midjourney better than DALL-E, Flux, or Ideogram in 2026?

For aesthetic quality, mood, and "looks like a real photograph or painting," Midjourney is still the leader. Flux is closer for photorealism and runs locally. Ideogram beats everyone at in-image text. DALL-E (via ChatGPT) is the most prompt-faithful and best for instruction-following. The honest answer: serious creators use two or three of them depending on the job.

The bottom line

Midjourney in 2026 is a different product than the one most people remember. The Discord workflow is gone in everything but name. V7 generates images that look intentional rather than generated. Omni-reference solves the consistency problem that made Midjourney unusable for serial work. The editor turns the tool into a real production pipeline instead of a slot machine.

What hasn't changed is what separates good output from bad: structured prompts, deliberate parameters, and willingness to iterate in the editor. The model is generous to people who treat it like a craft. It punishes people who type three words and expect magic.

Key takeaways

  • Move to the web app. Discord is legacy.
  • Standard at $30/month is the right starting plan for almost everyone; jump to Pro only when you need Stealth.
  • Use the five-part prompt structure: subject, action, style, lighting, parameters.
  • Seven parameters cover 95% of work: --ar, --s, --c, --w, --p, --r, --niji.
  • Lock --sref codes for campaign coherence; use omni-reference for character and product consistency.
  • Treat the first generation as 70% done. Inpaint is where pros finish the image.
  • Avoid negative-prompt thinking, adjective stacking, and contradictory style anchors.

Showcase your AI art on a page that loads in under a second

Midjourney makes the images. UniLink turns them into a portfolio, a shop, and a link-in-bio that converts — without a website builder, a Webflow subscription, or a developer. Drag in your best generations, sell prints or digital downloads, and track every click in one dashboard.

Build your AI art page free