Builds show what I make. Workflows show how you can make it too. Each one is a real recipe I ran on real work, with the exact prompts, commands, and artifacts. Copy what helps.
Stop Cmd+tabbing through your dev server. Hand the localhost URL to Claude, describe what should work, and get screenshots of every state, a ranked issue list, and a pass-or-fail verdict in 30 seconds. Catches the regression a rushed manual click misses.
Claude Design ships beautiful artifacts, but only as preview URLs. This workflow pulls the real HTML into your own app so you can embed it, theme it, and ship it, no Tailwind imitation, no screenshots pretending to be proof.
Same prompt, same inputs, two models. Surface the real differences on a task that matters to you, in 20 minutes, instead of picking based on someone else's benchmark.
Raw CSV or JSON from any open data source becomes a filterable, shareable explorer on your own domain. No backend, no API keys, no runtime cost. Proof lives on-site. I've run this four times.
Karpathy's April 2026 post named the four constraints any serious personal AI memory system needs: Explicit, Yours, File over app, BYOAI. Here's each one translated into a concrete implementation, with the pieces you actually install on day one.
Capturing everything is easy. The compounding comes from what you promote. A weekly rhythm that turns raw daily captures into curated, LLM-readable learnings your agents treat as ground truth.
A short, dated history of how the 'personal markdown wiki for AI agents' pattern emerged, who shipped what, and why it all converged on the same four properties. Reading this first makes every other AI-memory recipe make more sense.
The markdown wiki is the foundation. This is the system on top of it: multi-agent orchestration, atomic session protocols, cron automation, mobile bridges, and live data feeds. What you end up with is not a notebook; it is an operating system for your own attention and work.