Vibe Search
Search photos and memes by vibes, not keywords. Upload your own photo to find its vibe. Powered by Gemini Embedding 2.
What this proves
Multimodal embeddings make mood-based search possible without tags, captions, or keyword metadata.
How it works
Type a mood. Get matching photos. No keywords needed.
Built with Google's brand-new Gemini Embedding 2 (released March 10, 2026), the first multimodal embedding model that maps text, images, video, audio, and PDFs into a single vector space. This project shipped the same day the model did.
The interesting part
Traditional image search matches keywords in filenames or tags. Vibe Search matches the meaning of your text against the visual content of photos. "Lonely person working late" finds a dimly lit coding setup and a musician on stage, not because those words appear anywhere, but because the AI understands what the vibe looks like.
What you can do
- Text search: Type any mood, scene, or feeling
- Meme finder: Search meme templates by situation
- Find Your Vibe: Upload a photo to find similar images
- Click to explore: Select any result to find more photos with the same vibe
- Vibe Roulette: Get random weird prompts like "founder burnout at 2am"
How it's built
- One API call to embed your query, then all ranking happens client-side in the browser (see the sketch after this list)
- 768 dimensions per vector, cosine similarity in 5 lines of math
- Pre-embedded dataset as a static JSON file, no database needed
- Click-to-explore uses existing embeddings, zero additional API calls
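Concretely, the whole pipeline fits in a short sketch. Everything after the embedding call is plain array math. Note the assumptions: `embedQuery`, the `Photo` shape, and the field names are illustrative stand-ins, not the project's actual code.

```typescript
// Shape of one entry in the pre-embedded dataset (assumed for illustration).
interface Photo {
  id: string;
  url: string;
  vector: number[]; // 768-dim embedding, computed offline
}

// Hypothetical helper: the single API call that embeds the text query.
declare function embedQuery(text: string): Promise<number[]>;

// Cosine similarity: the "5 lines of math".
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank every photo against a query vector, highest similarity first.
function rank(query: number[], photos: Photo[], topK = 12): Photo[] {
  return photos
    .map((p) => ({ p, score: cosine(query, p.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK)
    .map(({ p }) => p);
}

// Text search: one API call for the query embedding, then local ranking.
async function vibeSearch(text: string, photos: Photo[]): Promise<Photo[]> {
  const queryVector = await embedQuery(text);
  return rank(queryVector, photos);
}

// Click-to-explore: the clicked photo's stored vector becomes the query,
// so no additional API call is needed. Drop the clicked photo itself.
function explore(clicked: Photo, photos: Photo[]): Photo[] {
  return rank(clicked.vector, photos).filter((p) => p.id !== clicked.id);
}
```

Loading the dataset is then a single fetch of the static JSON file, along the lines of `const photos: Photo[] = await (await fetch("/embeddings.json")).json();` (path assumed).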
What I learned
- Multimodal embeddings genuinely work across modalities: text queries find relevant images without any keyword metadata
- 768 dimensions (truncated from 3072 via Matryoshka scaling) is more than enough; the truncation is sketched below
- The entire search runs client-side after one API call for the query embedding
- Image-to-image similarity is surprisingly accurate and instant when vectors are pre-loaded
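For reference, the Matryoshka truncation mentioned above is just a slice plus re-normalization. This sketch assumes the model returns 3072-dim vectors trained Matryoshka-style, so the leading dimensions carry the most information and cosine similarity still behaves after cutting the tail.

```typescript
// Keep the first 768 of 3072 dimensions and re-normalize to unit length.
// Assumes Matryoshka-style embeddings where leading dims matter most.
function truncate(vector: number[], dims = 768): number[] {
  const head = vector.slice(0, dims);
  const norm = Math.sqrt(head.reduce((sum, v) => sum + v * v, 0));
  return head.map((v) => v / norm);
}
```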