Prompt-to-production landing page hero in under 20 minutes
The thumbnail on the Landing Page Critic build card is a GPT Image 2 output from my launch-day comparison, dropped into production in 15 minutes. Here's the 60-word prompt that got it, the export recipe, and why this replaces the 'hire a designer' step for a single-founder project.
- Time: 15 to 20 minutes
- Cost: $0 via ChatGPT Plus
- Stack: GPT Image 2, ChatGPT Plus (Thinking mode), sips (macOS), Next.js, Vercel
You’re stuck with
You ship a new build on your site every week or two. Every build needs a hero and a thumbnail. Stock illustrations look generic. Hiring a designer is a week and $500 minimum. Your old DALL-E outputs have garbled text and fake chart axes.
You end up with
A 1200x1200 production-ready hero that renders your real headline, real UI mock, real CTAs, and a real brand palette. Ready to drop into Next.js. Works first try once the prompt is right. $0 marginal cost via ChatGPT Plus today, ~$0.15 once the API is ungated.
The recipe
On April 21, 2026, OpenAI shipped GPT Image 2 and I ran a five-model comparison the same day. Prompt 02 asked for a Landing Page Critic hero. GPT Image 2's output was more polished than the screenshot I had been shipping on the real site.
So I swapped it in. The thumbnail you see on the Landing Page Critic build card is now that GPT Image 2 output. 15 minutes from "it won the comparison" to "deployed on shipwithtez.com".
This workflow is how.
1. Skip the API gate, use ChatGPT Plus with Thinking
The API is the obvious path and it doesn't work yet. gpt-image-2 via /v1/images/generations requires org verification through Stripe Identity, which is a 1-to-3-day process.
ChatGPT Plus gives you the same model, today, for $0 marginal cost. The one thing you have to do is turn Thinking mode on and let it plan before it draws. It takes 20 to 30 seconds of planning, then a few seconds on the image itself. That planning is what lets it render a real product mock with a real chart and a real 4-card grid instead of vaguely product-shaped pixels.
The full comparison across five models (and why Thinking matters) is in the launch-day build.
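Once the verification gate lifts, the API path should be a standard Images call. A minimal sketch of the request body, assuming the model id stays `gpt-image-2` and the endpoint keeps the current `/v1/images/generations` shape; the `size` value is an illustrative guess, not a documented preset:

```python
import json
import os

# Hypothetical request body for gpt-image-2 once org verification clears.
# Field names mirror the existing /v1/images/generations API; the exact
# options gpt-image-2 accepts may differ.
payload = {
    "model": "gpt-image-2",
    "prompt": (
        'A dark-theme landing page hero for a build called "Landing Page Critic" '
        "on shipwithtez.com. Brutalist, precise typography."
    ),
    "size": "1536x1024",  # assumption: a wide preset suits hero images
    "n": 1,
}

headers = {
    "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    "Content-Type": "application/json",
}

body = json.dumps(payload)
# To actually send it (stdlib only, no requests dependency):
# import urllib.request
# req = urllib.request.Request(
#     "https://api.openai.com/v1/images/generations",
#     body.encode(), headers)
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
```

Until then, the same prompt pasted into ChatGPT Plus with Thinking on gets you the same model for free.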
2. The shape of a hero prompt that lands first try
Four ingredients, in this order:
- The product name and one-line positioning. "Landing Page Critic. AI that scores any URL on 8 dimensions in 30 seconds."
- The layout structure. Two columns: left column headline + two CTAs, right column a product mock. Dark theme.
- The mock's actual content. Overall grade B+ 8.4. Category scores as a bar chart with 6 bars. 4 sub-metric cards. 3 recommendation rows. Browser chrome with app.shipwithtez.com.
- The brand palette. Dark navy background (#0B0F1A), lime green accents (#A3E635), white headlines, muted greys for body.
The trap in "AI can just design landing pages" is expecting the model to invent all four on its own. Name it, structure it, fill it with real content, palette it. Then the draw step is mechanical.
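The four ingredients are mechanical enough to template. A hypothetical helper (the function and parameter names are mine, not from any tooling in this post) that assembles a hero prompt in the order above:

```python
def hero_prompt(name, positioning, layout, mock_content, palette):
    """Assemble a hero-image prompt from the four ingredients, in order:
    product name + positioning, layout structure, mock content, brand palette."""
    return " ".join([f"{name}. {positioning}", layout, mock_content, palette])

prompt = hero_prompt(
    name="Landing Page Critic",
    positioning="AI that scores any URL on 8 dimensions in 30 seconds.",
    layout="Two columns: left column headline + two CTAs, right column a product mock. Dark theme.",
    mock_content="Overall grade B+ 8.4. Bar chart with 6 bars. 4 sub-metric cards. "
                 "3 recommendation rows. Browser chrome with app.shipwithtez.com.",
    palette="Dark navy background (#0B0F1A), lime green accents (#A3E635), "
            "white headlines, muted greys for body.",
)
print(prompt)
```

The point of the template is the ordering: the model reads name and positioning before it reads layout, and layout before content, which mirrors how you would brief a designer.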
3. The actual prompt I used
Pasted into ChatGPT with Thinking mode on:
A dark-theme landing page hero for a build called "Landing Page Critic" on shipwithtez.com.
Left side: headline "Score any landing page in 30 seconds" and two buttons: "Try it free" and "View demo".
Right side: stylized browser window with a score badge showing "B+ 8.4" and a small bar chart.
Brutalist, precise typography.
Sixty words. What is interesting is what the model inferred that the prompt never spelled out: a full left sidebar (Overview, Breakdown, Copy Analysis, Design Analysis, Performance), a Category Scores chart with six labelled bars, a row of four sub-metric cards with specific scores and strength labels, a TOP RECOMMENDATIONS section with three numbered rows and HIGH IMPACT pills, and a realistic macOS-style browser chrome. None of that was in my prompt. All of it is Thinking-mode planning.
That is the real finding. You do not need a three-hundred-word prompt to get a rich output. Name the product, name the key visual elements, name the style direction, and let the planning phase fill the rest.
Output: one image, first try, usable as-is.
4. Iteration is optional, not required
In the comparison I ran this prompt once and scored it. The GPT Image 2 output was strong enough that it became the new Landing Page Critic thumbnail without further rounds. That is the typical case with Thinking mode on.
If a first try misses, three kinds of follow-up prompts usually close the gap:
- Layout tightening: "Make the product mock feel more like a real product screenshot and less like a designed page. Tighten the spacing, reduce the decoration."
- Specific element change: "Replace the circle grade card with a hexagonal badge showing 8.4."
- Cropping or density: "Shrink the three icon rows below the CTAs and put each label on one line."
Twenty to forty seconds per round. One extra round is usually enough. Three if you are being picky.
5. Export and optimize for Next.js
ChatGPT's web app gives you a full-resolution PNG, too heavy to ship as-is. The sips recipe on macOS resizes and keeps PNG format (so the extension matches the content, which matters for Next/Image optimization and CDN caches):
# Resize to 1200px longest side, keep PNG format.
sips -Z 1200 -s format png \
~/Downloads/ChatGPT\ Image\ Apr\ 21*.png \
--out ~/landing-page-critic-hero.png
That brings my ChatGPT-sized PNG down to roughly 480KB at 1200x675. Next.js Image will further optimize that into AVIF/WebP at request time, so you do not need to pre-encode a JPEG unless you want the smaller on-disk footprint.
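If you want AVIF attempted first, that lives in `next.config.js`; this is the standard `images.formats` option (AVIF-before-WebP ordering shown here, which is a choice, not the default in older Next.js versions):

```javascript
// next.config.js — ask the Next.js image optimizer to serve AVIF when the
// browser accepts it, falling back to WebP, then the original PNG.
module.exports = {
  images: {
    formats: ['image/avif', 'image/webp'],
  },
};
```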
If you prefer JPEG on disk, use -s format jpeg -s formatOptions 85 and save with a .jpg extension. Do not copy JPEG bytes into a .png filename: the extension/MIME mismatch breaks some CDN caches and any consumer that sniffs by extension. Keep content and extension aligned.
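A quick way to enforce that rule before committing: sniff the file's leading bytes and compare them against the extension. A stdlib-only sketch; the two signatures checked are the standard PNG and JPEG magic numbers:

```python
from pathlib import Path

# Leading bytes each extension promises.
SIGNATURES = {
    ".png": b"\x89PNG\r\n\x1a\n",  # 8-byte PNG signature
    ".jpg": b"\xff\xd8\xff",       # JPEG SOI marker prefix
    ".jpeg": b"\xff\xd8\xff",
}

def extension_matches_bytes(path):
    """Return True if the file's leading bytes match what its extension promises."""
    p = Path(path)
    magic = SIGNATURES.get(p.suffix.lower())
    if magic is None:
        return False  # unknown extension: fail loudly rather than guess
    with open(p, "rb") as f:
        return f.read(len(magic)) == magic
```

Run it over `public/builds/` before `git add`; a JPEG masquerading as `.png` fails the check.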
6. Swap in production
The build registry in a Next.js + MDX content site puts the thumbnail at a predictable path. For this site, the frontmatter was already pointing at .png filenames, so the swap is a direct PNG-to-PNG overwrite:
# Overwrite the existing PNGs with the new PNG.
cp ~/landing-page-critic-hero.png \
public/builds/landing-page-critic-thumb.png
# Same file as OG for now. 1200x630 is the ideal OG aspect; our 1200x675
# still renders cleanly in every social preview I checked.
cp ~/landing-page-critic-hero.png \
public/builds/landing-page-critic-og.png
No MDX change needed because the frontmatter already points to those exact filenames:
thumbnail: "/builds/landing-page-critic-thumb.png"
ogImage: "/builds/landing-page-critic-og.png"
If your frontmatter points to .jpg, export as JPEG and write to .jpg paths. The rule is: filename extension must match the bytes.
7. Deploy
git add public/builds/landing-page-critic-thumb.png public/builds/landing-page-critic-og.png
git commit -m "feat(landing-page-critic): swap thumbnail to GPT Image 2 hero output"
git push
Vercel picks it up on the next deploy. Total elapsed time from "GPT Image 2 returned the image I liked" to "it's live on shipwithtez.com": about 15 minutes, most of it waiting for the build to finish.
8. Before and after
The old thumbnail was a real screenshot of the Landing Page Critic app after I pasted stripe.com into it. It was honest. It was also flat, monochrome, and did not sell the product.
The new one is a GPT Image 2 render of what the product aspires to look like at its best. Still truthful (every UI element in it maps to a real feature of the app), but polished enough to make the card earn a click.
Honest screenshot of real app output is the right default. Hero image of the product at its best is the right card. Both can be true, and you want the card to make someone open the build.
9. When to use this workflow
- New build thumbnails. One prompt, 15 minutes per build, zero to three refinement rounds depending on how fussy you are. Scales to as many builds as you can ship.
- Blog post OG images. Same recipe, different aspect ratio (1200x630).
- Feature launch hero for product pages. When you need to explain what a not-yet-shipped feature will look like.
- Pitch deck screenshots. Same idea. Explain to GPT Image 2 what the screen shows in words, let it render it better than you can.
10. When this is the wrong tool
- Real screenshots of working product. For documentation, always use the real thing. Trust is higher than polish.
- Brand-safe commercial work. GPT Image 2 is not indemnified for enterprise use the way Adobe Firefly 3 is.
- Logos, icons, precise type. Ideogram 3 is still better at dense text and logo-quality typography.
- Photorealistic human subjects for marketing. The uncanny-valley risk is still high even with v2.
11. What I would do next
The follow-up I have in mind is a workflow that wires this into a GitHub Action: a new MDX file in content/builds/ triggers a headless render of a hero prompt derived from the frontmatter. Merge the PR, get the thumbnail auto-generated by the image model, land on the site. That one is worth a second pass once the gpt-image-2 API gate lifts.
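That automation could start as a small workflow file. A sketch only: every path and script name here is hypothetical (`scripts/generate-hero.mjs` does not exist yet), and it assumes the ungated API plus a repo secret for the key:

```yaml
# .github/workflows/generate-hero.yml — hypothetical sketch
name: Generate build thumbnail
on:
  push:
    paths:
      - "content/builds/**.mdx"
jobs:
  hero:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical script: read frontmatter, derive a hero prompt,
      # call the image API, write public/builds/<slug>-thumb.png.
      - run: node scripts/generate-hero.mjs
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      - run: |
          git config user.name "hero-bot"
          git config user.email "bot@example.com"
          git add public/builds
          git commit -m "chore: auto-generate thumbnail" || true
          git push
```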
Related: the launch-day comparison shows why GPT Image 2 is the right default for this workflow (ahead of Nano Banana Pro/2 on UI realism and product mocks). The landscape blog positions GPT Image 2 against every other serious image model at the frontier right now.