AI in the Wild: Failed startups are agent training data
This is episode 1 of AI in the Wild, a ShipWithTez field-notes series on real AI use cases, AI-native stacks, and workflow patterns worth stealing.
The source that kicked this off was an X post from Harnoor Singh pointing to Startups.RIP. Credit to the original builders and posters first. The useful ShipWithTez job is to explain the pattern, not to pretend the original work is mine.
Failed startup history as product memory
The useful pattern is not copying dead startups. It is turning old attempts into a sharper research loop before a founder writes code.
What I Found
Startups.RIP packages histories of YC startups that shut down or got acquired. The public Product Hunt listing currently describes it as a wiki of 1,700+ YC startups, with AI-written postmortems and rebuild playbooks.
The X post I saw used a larger number, but the source page I could verify publicly says 1,700+ or 1,738+, so I am using the conservative number here.
The interesting part is not the number.
The interesting part is the workflow:
- take an old startup attempt
- study what it built
- identify why it failed
- ask what changed since then
- turn that into a rebuild plan an agent can help test
From graveyard to product memory
Each step should leave a receipt. If the evidence is weak, the agent should mark it weak.
Find the old attempt
Shutdown page, postmortem, acquisition note, launch archive.
Extract the failure model
Market, timing, distribution, cost, trust, or team.
Compare what changed
AI capability, buyer behavior, channel access, regulation, margin.
Design one honest test
A landing page, data pull, concierge workflow, or paid pilot.
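To make the receipts rule concrete, here is a minimal sketch of what one memo could look like as data. Everything here is my own illustration: the names (Receipt, RebuildMemo, FailureCategory), the confidence labels, and the fields are assumptions, not anything Startups.RIP or the original post defines.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical sketch of a receipted research memo. None of these
# names come from Startups.RIP or the original post; they only make
# the "every step leaves a receipt" rule concrete.

class FailureCategory(Enum):
    MARKET = "market"
    TIMING = "timing"
    DISTRIBUTION = "distribution"
    COST = "cost"
    TRUST = "trust"
    TEAM = "team"

@dataclass
class Receipt:
    claim: str          # e.g. "demand was weak in 2016"
    sources: list[str]  # links to postmortems, shutdown notes, archives
    confidence: str     # "strong" or "weak"; weak evidence stays labeled weak

@dataclass
class RebuildMemo:
    startup: str
    failure_model: dict[FailureCategory, Receipt] = field(default_factory=dict)
    what_changed: list[Receipt] = field(default_factory=list)
    proposed_test: str = ""

    def weak_spots(self) -> list[str]:
        """Every claim this memo rests on that has only weak evidence."""
        receipts = list(self.failure_model.values()) + self.what_changed
        return [r.claim for r in receipts if r.confidence == "weak"]
```

The useful property is that weak_spots() keeps the agent's uncertainty visible in the final memo instead of letting a fluent summary smooth it over.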
That is a much better use of AI than asking a blank chat box for "startup ideas."
Why It Is Wild
A failed startup used to be mostly a story.
Now it can become structured product memory.
If the source material is grounded enough, an agent can help answer:
- who tried this before?
- what did they believe?
- what broke?
- was it a market problem, timing problem, distribution problem, cost problem, or team problem?
- what is cheaper or easier now?
- what would be the smallest honest test today?
That changes the job of product research.
The founder still owns judgment, customer discovery, pricing, distribution, and execution. But the first research loop can get sharper.
Pattern To Steal
Before building a new idea, run the graveyard check.
The graveyard check
Work through each question before you rebuild an old idea. The point is to end with a test, not a motivational idea list.
Who tried this before?
Prevents fake novelty and finds the real ancestor of the idea. Deliverable: a short source list with links and confidence notes.
Why did it break?
Separates bad timing from weak demand or broken economics. Deliverable: a failure map grouped by market, channel, cost, trust, and team.
What changed?
Finds the new unlock instead of assuming AI magically fixes it. Deliverable: a before-and-now comparison with concrete changes.
What still looks hard?
Keeps the rebuild from becoming a hype exercise. Deliverable: a risk list that names what AI does not remove.
What can I test in 7 days?
Turns research into one observable action. Deliverable: a small test with owner, input, output, and kill signal.
That last question is the point.
AI-native research should end in a test, not a motivational idea list.
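Since the whole loop is supposed to terminate in that test, here is a hedged sketch of the test record itself. Again, the field names are my invention; the only constraint carried over from the checklist is that every test names an owner, an input, an observable output, and a kill signal.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical shape for the seven-day test. The field names are my
# own; the only rule carried over from the checklist is that every
# test has an owner, an input, an observable output, and a kill signal.

@dataclass
class SevenDayTest:
    idea: str
    owner: str        # the human accountable for running it
    input: str        # e.g. "50 cold emails to ops managers"
    output: str       # e.g. "booked discovery calls"
    kill_signal: str  # e.g. "fewer than 3 replies by day 7"
    starts: date

    @property
    def deadline(self) -> date:
        return self.starts + timedelta(days=7)

# Example usage with made-up numbers:
test = SevenDayTest(
    idea="concierge rebuild of a dead 2016 logistics tool",
    owner="founder",
    input="50 cold emails to ops managers",
    output="booked discovery calls",
    kill_signal="fewer than 3 replies by day 7",
    starts=date.today(),
)
```

The kill signal is the part most idea lists skip: it is the pre-agreed condition under which the rebuild dies again, on purpose, in week one.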
What Not To Overclaim
There is a real boundary here.
Not every failed startup is a good market. Some failed because demand was weak. Some failed because economics never worked. Some postmortems are incomplete, biased, or speculative.
The overclaim: a failed startup plus AI equals a free company.
This skips customer discovery, distribution, pricing, trust, and execution. It turns a postmortem into fantasy.
The defensible claim: old attempts plus current tools create a better research loop.
The agent helps compare evidence and design a first test. The founder still owns judgment.
That is much more useful.
Where I Would Use This
I would use this before:
- starting a side project
- choosing a SaaS niche
- building a founder workflow tool
- reviving an old consumer idea
- writing an investor or customer discovery memo
- deciding whether a "new" AI idea is actually a recycled old workflow with better timing
For ShipWithTez, this also maps to the bigger theme:
AI is not just helping us build faster. It is making more of the internet usable as product memory.
That is why this is episode 1.
It is not the most cinematic example in the backlog. The interactive science apps and math-to-visual-world posts can come later.
This one is simple enough to set the series rule:
Find the wild thing. Credit the source. Extract the pattern. Turn it into a workflow someone can actually use.
If you try the graveyard check on an idea, send me the memo. The next AI in the Wild note will keep the same rule: credit the source, show the workflow, and turn it into something you can test.
Source Trail
- Harnoor Singh's X post pointing to Startups.RIP
- Startups.RIP
- The Startups.RIP listing on Product Hunt (1,700+ YC startups, AI-written postmortems, rebuild playbooks)