QA your own feature with Chrome MCP before you ship it
Stop Cmd+tabbing through your dev server. Hand the localhost URL to Claude, describe what should work, and get screenshots of every state, a ranked issue list, and a pass or fail in 30 seconds. It catches the regression a rushed manual click-through misses.
- Time
- 30 seconds per QA pass
- Cost
- $0 (uses your Claude subscription)
- Stack
- Claude Code · Chrome DevTools MCP · localhost
You’re stuck with
You just finished a feature. Writing a real test costs more than the feature did. Manual clicking is fast but misses edge cases. Then a reviewer catches the regression you didn't notice.
You end up with
A 3-line prompt you paste after each feature. Claude opens your dev server, exercises the flow, measures elements where it matters, returns what broke with screenshots. 30 seconds per pass, zero tests to maintain.
The recipe
1. Install Chrome MCP once
Chrome DevTools MCP turns Claude into a browser tester for free. One-time install in Claude Code:
claude mcp add chrome-devtools -- npx -y chrome-devtools-mcp@latest
Restart Claude Code. The mcp__chrome-devtools__* tools appear. No API keys, no subscriptions, no rate limits.
2. Run your dev server on a real port
PORT=3005 pnpm dev
Pick a port nothing else uses. Chrome MCP drives a Chrome instance on your machine, so anything served at localhost:3005 is visible to Claude the instant it runs.
3. Paste the QA prompt
After every feature, paste the same three lines. Swap in your URL and the verify list:
Open localhost:3005/builds/landing-page-critic in Chrome MCP.
Submit the URL https://stripe.com and wait for the result card.
Verify: 8 dimension cards render, each has a score and a Fix: line, grade shows top right, no console errors. Report pass or fail with a screenshot.
That's the whole workflow. Claude navigates, clicks, types, waits, screenshots, reads the DOM, checks the console, and writes a pass or fail with the evidence attached.
4. The real example that saved me a broken merge
Two sessions ago I shipped new SVG workflow thumbnails. A full-page screenshot looked right. I almost merged the PR. The QA prompt caught the issue:
Open localhost:3005/workflows in Chrome MCP.
On each workflow card, call getBBox() on the root <svg> and on each text node.
Report any text node whose right edge exceeds the svg's viewBox width.
Claude ran evaluate_script, measured bounding boxes, and came back with three overflows:
- BYOAI body text: overflows viewBox by 0.5px
- YOURS body text: overflows cell by 6px
- AGENTS READ pill: overflows 48px pill by 13px
The full-page screenshot did not catch any of this. The thumbnail-size render on the card did. getBBox() did. I fixed the SVG, re-measured, got overflows: [], shipped. Codex approved the PR.
Moral: screenshots lie at smaller sizes. Measurements do not.
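For the curious, the overflow check boils down to a few lines of the kind Claude runs via evaluate_script. This is a standalone sketch with made-up numbers standing in for real getBBox() results; in the browser, each box would come from a text node's getBBox():

```javascript
// Given the svg's viewBox width and a list of text-node bounding boxes,
// return the nodes whose right edge spills past the viewBox.
// Labels and measurements here are illustrative, not from a real page.
function findOverflows(viewBoxWidth, boxes) {
  return boxes
    .filter(({ x, width }) => x + width > viewBoxWidth)
    .map(({ label, x, width }) => ({
      label,
      overflowPx: +(x + width - viewBoxWidth).toFixed(1),
    }));
}

// Hypothetical measurements from three text nodes in a 320-wide viewBox:
const overflows = findOverflows(320, [
  { label: "title", x: 10, width: 200 },  // fits
  { label: "body", x: 10, width: 316.5 }, // spills 6.5px
  { label: "pill", x: 280, width: 53 },   // spills 13px
]);
console.log(overflows);
// → [ { label: 'body', overflowPx: 6.5 }, { label: 'pill', overflowPx: 13 } ]
```

An empty array is the `overflows: []` pass state mentioned above; anything else is a ranked fix list.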
5. Three prompts I keep pinned
Different feature types, different verifications.
UX flow check
Open localhost:<port>/<path>. Click the primary CTA and walk the user flow to completion.
Pass criteria: each step advances, no 500s in the network tab, no console errors.
Return a screenshot of every state you visit, and flag any step that felt slow (>2s).
Element measurement check
Open localhost:<port>/<path>. Use evaluate_script to call getBoundingClientRect() on <list of selectors>.
Report anything overflowing its parent, misaligned by more than 2px, or off-screen.
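The logic behind that measurement prompt is simple enough to sketch. Assuming boxes shaped like getBoundingClientRect() results (the labels and numbers below are invented for illustration), the audit is just comparisons against the parent box:

```javascript
// Compare each child's box to its parent's: flag overflow past the
// parent's edges, or left-edge misalignment beyond a small tolerance.
// In the browser the boxes come from getBoundingClientRect().
function auditBoxes(parent, children, tolerancePx = 2) {
  const issues = [];
  for (const c of children) {
    if (c.right > parent.right || c.bottom > parent.bottom) {
      issues.push(`${c.label}: overflows parent`);
    } else if (Math.abs(c.left - parent.left) > tolerancePx) {
      issues.push(`${c.label}: misaligned by ${Math.abs(c.left - parent.left)}px`);
    }
  }
  return issues;
}

// Hypothetical boxes for one card and its children:
const issues = auditBoxes(
  { left: 0, right: 300, bottom: 120 },
  [
    { label: "cta", left: 1, right: 180, bottom: 110 },     // within 2px: fine
    { label: "badge", left: 0, right: 312, bottom: 100 },   // past right edge
    { label: "caption", left: 6, right: 280, bottom: 118 }, // 6px off the left edge
  ]
);
console.log(issues);
// → [ 'badge: overflows parent', 'caption: misaligned by 6px' ]
```

The 2px tolerance mirrors the prompt's "misaligned by more than 2px" threshold; tighten or loosen it per design system.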
Regression check on a bug you just fixed
Open localhost:<port>/<path>.
Reproduce the old bug: <the 3 steps that used to trigger it>.
Report whether the bug still reproduces, with screenshots of each step.
6. When this is the wrong tool
Chrome MCP replaces manual clicking. It does not replace:
- CI regression suites. Playwright or Cypress with real assertions still owns that lane.
- Visual regression diffs. Percy or Chromatic catches per-pixel drift better.
- Load testing. One Claude, one browser.
The Chrome MCP workflow sits exactly where the manual click sits today. It is faster, more thorough, and runs while you read Slack.
7. Why this works now and did not last year
Three ingredients that only shipped in the last twelve months:
- A local MCP that drives the browser with no API key.
- A model strong enough to read the DOM, measure, and reason about what should be true.
- A Claude Code harness that can chain those tools without a human in the loop between clicks.
The gap between "feature done" and "feature shipped" used to be 10 minutes of manual clicking. It is now 30 seconds of reading the QA report. Multiply across the number of features you ship in a week.
Related: the Landing Page Critic build runs the same Chrome MCP daily-driver pattern but on any public URL. Paste one in, get a scored critique back.
Get new workflows and breakdowns in your inbox.