Midjourney’s learning curve is steep, but climbing it unlocks a real superpower for developers and entrepreneurs. Most people try it once, get generic results, and give up. The key is building cohesive image sets rather than asking for individual images, and using a couple of techniques that aren’t obvious from the docs. This guide covers a workflow that actually works: finding a base image, building a style reference gallery with neutral prompts, using the describe feature to get specific subjects in that style, and adding film grain with an SVG filter so the AI images work as web backgrounds.

Getting your ladder

After a few failed attempts at learning Midjourney, a good tutorial thread on X by @kubadesign got things most of the way there; it covers the fundamentals well. The additional insight worth adding: Midjourney’s describe feature is a cheat code for getting specific subjects into your desired style. The example images throughout this guide were created for uzi.sh, a tool for parallel LLM coding agents. The goal was a red color scheme to evoke action and speed.

Step 1: Find a base image

Start on Pinterest and find an image that captures the style, mood, and color palette you want. Don’t try to describe your aesthetic from scratch — find something that already has it. When choosing a base image, look for:
  • Strong color presence in your target palette
  • Darker or more neutral edges/borders (you’ll need space for text if using these as web backgrounds)
  • A mood that matches what you’re building
For the uzi.sh images, the starter was an image with a strong red palette and darker colors along the borders, leaving intentional space for text and UI elements.
The base image doesn’t have to match your final subject at all. You’re capturing a visual style, not a template. An abstract red composition can generate a soaring eagle, a mountain landscape, and a portrait — all in the same style.

Step 2: Build a style with neutral prompts

You can’t start by describing the specific image you want and expect good results. You need style reference images to give Midjourney the boundaries it needs to stay in your desired aesthetic. The problem is you’re unlikely to find multiple images on the internet that match your desired style exactly. The solution is using neutral prompts to generate additional reference images in the same style as your base. A “neutral prompt” describes a general subject and its characteristics without getting specific about style, camera angle, action, or other details. Here’s an example that worked well:
Portrait photography of a woman, glow behind, futuristic vibe, flash photography, color film, analog style, imperfect --ar 3:4 --v 7
Use your base image as a style reference (--sref) when running this prompt. Generate several variations. You’re building a gallery of images in your target style — a consistent visual language that Midjourney will use when you get more specific later.
Generate at least 4-6 style reference images before moving on. The more consistent your gallery, the more reliably Midjourney will stick to your aesthetic in subsequent prompts.
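Putting the pieces together, a full prompt in the Discord interface might look like the following. The image URL is a placeholder; substitute a link to your own base image (on the web interface you can instead drag the base image into the prompt bar and mark it as a style reference):
```text
/imagine prompt: Portrait photography of a woman, glow behind, futuristic vibe, flash photography, color film, analog style, imperfect --sref https://example.com/your-base-image.png --ar 3:4 --v 7
```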

Step 3: Use describe to get specific subjects

Once you have your style reference gallery, you’re ready to generate specific subjects. But direct description often fails: “eagle diving with red glowing background” won’t get you the photorealistic, dramatically lit eagle you want. This is where the describe feature becomes a cheat code.

Drag any image of your target subject into Midjourney’s describe feature. Midjourney will generate several detailed text prompts describing what it sees. These prompts capture the specific language Midjourney understands for that subject (lighting, composition, texture, mood) in far more detail than you’d write yourself.

Take one of those generated descriptions and pair it with your style reference images. The result is your specific subject (eagle, mountain, portrait, whatever) rendered in your established style. The combination works remarkably well.
1. Find a reference photo of your target subject

Search Google or Pinterest for a photo that has the composition, lighting, and mood you want for your subject. Don’t worry about color — your style references will handle that.
2. Run it through Midjourney’s describe feature

Drag the image into Midjourney’s describe interface. It will return 4 generated prompt descriptions. Choose the one that best captures what you want.
3. Combine with your style references

Take the generated description and add --sref pointing to your style reference images. Submit and review the outputs.
4. Iterate and upscale

Run variations on the outputs you like. Once you have a strong result, upscale it to the full resolution you need.
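As a concrete sketch of these steps combined: the subject description below is invented for illustration (in practice, use one that describe returns), and the URLs are placeholders for your own style reference images:
```text
/imagine prompt: a bald eagle diving through mist, wings swept back, dramatic rim lighting, shallow depth of field, photorealistic detail --sref https://example.com/style-1.png https://example.com/style-2.png --sw 200 --ar 3:4 --v 7
```
The optional --sw parameter (style weight, 0-1000, default 100) controls how strongly the style references influence the output; raising it helps when results drift away from your gallery's aesthetic.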

Step 4: Post-processing with film grain

Midjourney produces images that are too crisp to work well as web backgrounds. The hyper-sharp, perfectly clean look reads as “AI-generated” immediately when placed on a real webpage. Adding film grain solves this. CSS and SVG filters make this straightforward. The approach uses an SVG feTurbulence filter to create fractal noise, then layers it over the background image as a semi-transparent overlay.

SVG filter definition

Add this to your HTML file. The CSS will reference it by ID:
<!-- SVG noise filter definition - place anywhere in your HTML -->
<svg style="display: none">
  <filter id="noiseFilter">
    <!-- Creates the fractal noise pattern -->
    <feTurbulence
      type="fractalNoise"
      baseFrequency="0.5"
      numOctaves="3"
      stitchTiles="stitch"
    />
    <!-- Converts noise to semi-transparent overlay -->
    <feColorMatrix
      type="matrix"
      values="0 0 0 0 0
              0 0 0 0 0
              0 0 0 0 0
              0 0 0 0.8 0"
    />
  </filter>
</svg>
Experiment with the baseFrequency value in feTurbulence to adjust grain texture, and with the 0.8 alpha value in feColorMatrix to adjust grain intensity.

CSS for the grain overlay

/* Grain overlay - covers the background image */
.grain-overlay {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  z-index: 2; /* Above background image, below content */
  pointer-events: none; /* Allows clicks to pass through */
  filter: url(#noiseFilter);
}

/* Background image */
.background-image {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  background-image: url("./media/your-midjourney-image.png");
  background-size: cover;
  background-position: center;
  z-index: 1; /* Below grain overlay */
}
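The content layer itself just needs to sit above the grain. A minimal sketch, assuming the .content-center class name used in the HTML structure below:
```css
/* Content layer - above the grain overlay */
.content-center {
  position: relative; /* participates in z-index stacking */
  z-index: 3;
}
```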

HTML layering structure

<div class="hero">
  <!-- Layer 1: Background image (z-index: 1) -->
  <div class="background-image"></div>

  <!-- Layer 2: Grain overlay (z-index: 2) -->
  <div class="grain-overlay"></div>

  <!-- Layer 3: Content (z-index: 3) -->
  <div class="content-center">
    <!-- Your text, buttons, etc. -->
  </div>
</div>
The pointer-events: none on the grain overlay is important — it ensures clicks pass through to content below. Full working source code for this technique is at github.com/devflowinc/uzi.
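One detail the snippets above assume: the absolutely positioned layers need a positioned ancestor to size against, so the hero container should establish its own positioning context. A minimal sketch (the min-height value is an assumption; size the hero however your layout requires):
```css
.hero {
  position: relative; /* anchors the absolutely positioned layers */
  min-height: 100vh;  /* assumed full-viewport hero */
  overflow: hidden;
}
```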

Be creative

There’s ongoing debate about AI-generated art, but Midjourney is just another tool in the toolkit. The key is using it to bring your vision to life — not to replace creativity, but to bridge the gap between the style you have in your head and what actually shows up on screen. Take inspiration from what you see, but make it your own. The techniques here are about developing a unique voice and letting AI help you express it better. The goal isn’t to generate something generic. It’s to create images that actually work for your projects and feel intentional. Prompting is a skill. The describe feature is a shortcut. Film grain is the finishing touch. Put them together and you get professional marketing visuals that don’t immediately read as “AI-generated.”