An AI recommendation system where trust is a deliberate design feature — not an afterthought, not a disclaimer, not a setting buried in a menu.
Most AI recommendation systems are black boxes. They output a suggestion, a ranking, a decision — and expect the user to accept it. When the recommendation is wrong, users can dismiss it or ignore it, but they can't understand it, challenge it, or teach the system why it was wrong in a way the system actually learns from.
Arca explores what it looks like when explainability and user correction are treated as primary design requirements: not accessibility features, not compliance checkboxes, but core to how the product works.
You cannot design trust into a system by adding a "Why this?" button to an otherwise opaque recommendation engine. Trust requires that the system actually exposes its reasoning in a form users can evaluate — and that user feedback genuinely changes the system's behavior.
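To make this concrete, here is a minimal sketch (all names hypothetical, not from Arca itself) of what "exposing reasoning" and "feedback that changes behavior" could mean mechanically: a linear scorer whose explanation is its actual computation, and a correction hook that directly rewrites the weights a user objects to.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A recommendation that carries its own reasoning: the per-feature
    contributions that produced the score, not a post-hoc explanation."""
    item: str
    score: float
    contributions: dict  # feature -> contribution to the final score

class TransparentRecommender:
    """Hypothetical scorer where 'Why this?' is answered from the real
    scoring math, and a user correction genuinely changes future output."""

    def __init__(self, weights):
        self.weights = dict(weights)  # feature -> weight

    def recommend(self, item, features):
        # The explanation and the score come from the same computation,
        # so the system cannot "explain" something it didn't actually do.
        contributions = {f: self.weights.get(f, 0.0) * v
                         for f, v in features.items()}
        return Recommendation(item, sum(contributions.values()), contributions)

    def correct(self, rejected_feature, lr=0.5):
        # User says: "this factor should not have driven the suggestion."
        # The correction shrinks that feature's weight, so behavior shifts.
        if rejected_feature in self.weights:
            self.weights[rejected_feature] *= (1 - lr)

# Usage: the contributions dict IS the "Why this?" answer.
rec_sys = TransparentRecommender({"recency": 1.0, "popularity": 2.0})
rec = rec_sys.recommend("article-42", {"recency": 0.5, "popularity": 0.9})
rec_sys.correct("popularity")  # future recommendations lean less on popularity
```

The point of the sketch is the coupling: because the explanation is read off the scoring function rather than generated alongside it, the system can only claim reasons it actually used.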
The question isn't "how do we communicate what the AI is doing?" It's "how do we build an AI that can be communicated about honestly?"
Most AI UX research treats explainability as a communication problem. Arca treats it as a systems design problem where the design and the AI architecture need to co-evolve.
Arca is a design exploration, not a shipped product. The questions it raises are relevant to any AI-adjacent product team: how do you build explainability into the system before the interface, not after? How do you measure whether users actually trust the AI more over time? What does a "trust score" even mean?
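On the last question, one candidate answer (my own illustration, not a metric Arca defines): treat trust as a behavioral signal rather than a survey number, e.g. a time-weighted acceptance rate that penalizes recommendations the user accepted but later reversed.

```python
def trust_score(interactions, half_life_days=30.0):
    """One hypothetical operationalization of a 'trust score':
    exponentially time-weighted fraction of recommendations that were
    accepted and not later reversed. Recent interactions count more.

    interactions: list of (days_ago, accepted, reversed) tuples.
    Returns a value in [0, 1]; 0.0 when there is no history.
    """
    weighted_sum = total_weight = 0.0
    for days_ago, accepted, was_reversed in interactions:
        weight = 0.5 ** (days_ago / half_life_days)  # decay with age
        kept = 1.0 if (accepted and not was_reversed) else 0.0
        weighted_sum += weight * kept
        total_weight += weight
    return weighted_sum / total_weight if total_weight else 0.0
```

A metric like this would let a team ask "is trust rising over time?" empirically, though it deliberately ignores everything the user never saw, which is its own design question.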
These are the questions I'm thinking through. If you're working on AI product design and want to talk through any of this, I'm interested.