AI-native photographer discovery powered by natural language briefs and vision-based portfolio matching — designed for clients who know what they want but don't know how to search for it.
When a client needs a photographer, they typically describe their vision in feeling-based language: "warm and intimate," "editorial but not cold," "like film, but not try-hard." Search engines don't understand that. Directories don't understand that. Most booking platforms ask you to browse by location and price and hope for the best.
Pixeel is an exploration of what happens when you take natural language seriously as a search input — and use vision AI to bridge the gap between what clients say and what photographers make.
The brief a client gives a creative director — "moody, intimate, golden hour, less polished, more real" — is inherently multimodal. It's visual and emotional and contextual all at once. No amount of tag-based filtering can resolve that brief into a shortlist of photographers whose work actually matches.
"I know exactly what I'm looking for. I just have no idea how to find it on any of these platforms."
The design problem, then: how do you build a discovery interface that actually speaks the language of creative briefs?
Pixeel's core interaction is a natural language brief field. You describe what you're looking for — as specifically or loosely as you want — and the system translates that into visual criteria that get matched against photographer portfolios using computer vision and style embeddings.
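A minimal sketch of what that translation might look like, assuming an off-the-shelf CLIP-style joint text/image embedding model via sentence-transformers. The model name, function names, and ranking logic are illustrative assumptions, not Pixeel's actual pipeline; the point is that the brief and the portfolio images land in the same vector space, so "moody, intimate, golden hour" can be compared directly against what a photographer has actually shot.

```python
# Sketch: brief-to-portfolio matching in a shared text/image embedding space.
# Assumes a pre-trained CLIP-style model; names and weights are illustrative.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # encodes both text and images


def embed_portfolio(image_paths: list[str]):
    """Embed every portfolio image once, offline."""
    images = [Image.open(p) for p in image_paths]
    return model.encode(images, convert_to_tensor=True, normalize_embeddings=True)


def match_brief(brief: str, portfolio_embeddings, top_k: int = 5):
    """Embed the client's brief and rank portfolio images by cosine similarity."""
    brief_embedding = model.encode(brief, convert_to_tensor=True, normalize_embeddings=True)
    scores = util.cos_sim(brief_embedding, portfolio_embeddings)[0]
    top = scores.topk(min(top_k, len(scores)))
    return list(zip(top.indices.tolist(), top.values.tolist()))


# Hypothetical usage:
# portfolio = embed_portfolio(["shoot_01.jpg", "shoot_02.jpg"])
# matches = match_brief("moody, intimate, golden hour, less polished, more real", portfolio)
```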
Pixeel is less about photography and more about a class of discovery problems where the user knows what they want but lacks the vocabulary the system requires. That problem exists in fashion, in music, in interior design, in hiring — anywhere the gap between felt sense and structured query creates friction.
The interaction model Pixeel explores — natural language in, vision-matched results out, iterative refinement through reaction — is transferable. That's what makes it interesting to design.
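To make the "refinement through reaction" step concrete: one plausible mechanism, again only a sketch under the same embedding-space assumption, is a Rocchio-style update that nudges the brief vector toward results the client reacted to positively and away from ones they rejected, then re-ranks. The function and its weights are hypothetical, not a description of the shipped system.

```python
# Sketch: refine the brief embedding from client reactions (Rocchio-style update).
# Weights alpha/beta/gamma are illustrative assumptions.
import numpy as np


def refine_query(query_vec: np.ndarray,
                 liked: list[np.ndarray],
                 disliked: list[np.ndarray],
                 alpha: float = 1.0, beta: float = 0.6, gamma: float = 0.3) -> np.ndarray:
    """Move the query toward liked images and away from disliked ones, renormalized."""
    updated = alpha * query_vec
    if liked:
        updated = updated + beta * np.mean(liked, axis=0)
    if disliked:
        updated = updated - gamma * np.mean(disliked, axis=0)
    return updated / np.linalg.norm(updated)
```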