
Arca

An AI recommendation system where trust is a deliberate design feature — not an afterthought, not a disclaimer, not a setting buried in a menu.

Type: Speculative concept · AI product design
Theme: Explainability · Correction · Trust
Status: Concept exploration
Trust as a design feature

AI recommends. Users comply. Nobody asks why.

Most AI recommendation systems are black boxes. They output a suggestion, a ranking, a decision — and expect the user to accept it. When the recommendation is wrong, users can dismiss it or ignore it, but they can't understand it, challenge it, or teach the system why it was wrong in a way the system actually learns from.

Arca explores what it looks like when explainability and user correction are treated as primary design requirements: not accessibility features, not compliance checkboxes, but core to how the product works.

Trust is not a UI element. It's an architecture decision.

You cannot design trust into a system by adding a "Why this?" button to an otherwise opaque recommendation engine. Trust requires that the system actually exposes its reasoning in a form users can evaluate — and that user feedback genuinely changes the system's behavior.
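Concretely, that architectural requirement can be read as a contract. Below is a minimal TypeScript sketch of that reading; every name in it (Evidence, Recommendation, Correction, TrustEngine) is a hypothetical assumption for illustration, not an actual Arca API.

```typescript
// Hypothetical sketch of the contract described above. All names are
// illustrative assumptions; Arca is a concept, not a shipped API.

// The reasoning behind a recommendation, in a form a user can evaluate.
interface Evidence {
  signal: string;    // e.g. "engaged_with:item-42"
  daysAgo: number;   // recency, so the user can judge staleness
  weight: number;    // how much this signal contributed to the score
}

interface Recommendation {
  itemId: string;
  confidence: number;    // 0..1, disclosed rather than hidden
  evidence: Evidence[];  // exposed reasoning, not a post-hoc excuse
}

// A typed correction the system can actually act on.
interface Correction {
  itemId: string;
  reason: "wrong_interest" | "stale_signal" | "bad_timing";
}

// Opaque engine: output only. Nothing to evaluate, contest, or learn from.
type OpaqueEngine = (userId: string) => string[];

// Trust-architected engine: reasoning flows out, corrections flow back in,
// and a correction is expected to change future behavior.
interface TrustEngine {
  recommend(userId: string): Recommendation[];
  applyCorrection(userId: string, correction: Correction): void;
}
```

The contrast is the point: trust lives in the second signature. If recommend returns only item IDs, no amount of interface copy can make the reasoning evaluable.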

The question isn't "how do we communicate what the AI is doing?" It's "how do we build an AI that can be communicated about honestly?"

Most AI UX research treats explainability as a communication problem. Arca treats it as a systems design problem, one where the interface design and the AI architecture have to co-evolve.

Four principles that shaped every Arca interaction.

01
Legible reasoning
Every recommendation comes with an explanation that's specific, not generic. Not "based on your history," but "because you engaged with X, Y, and Z in the last 7 days." (All four principles are sketched in code after this list.)
02
Contestable outputs
Users can flag a recommendation as wrong and specify why in structured terms the system can act on — not a freeform "not interested" but a typed correction.
03
Visible learning
When the system changes its behavior based on user feedback, it says so. Corrections are acknowledged. The loop is closed explicitly, not silently.
04
Confidence disclosure
The system expresses its own uncertainty. High-confidence recommendations are distinguished from low-confidence ones, and the difference is made legible to the user.
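To show how the four principles compose, here is a small runnable TypeScript sketch. The reason codes, confidence thresholds, and acknowledgment copy are all assumptions made for illustration; a real system would update model state where the comment indicates.

```typescript
// Hypothetical sketch tying the four principles together. Names and
// thresholds are illustrative assumptions, not a real Arca API.

type CorrectionReason = "wrong_interest" | "stale_signal" | "bad_timing";

interface Evidence { signal: string; daysAgo: number; }

interface Recommendation {
  itemId: string;
  confidence: number;          // 04: confidence disclosure
  evidence: Evidence[];        // 01: legible reasoning
}

interface Acknowledgment {     // 03: visible learning
  message: string;             // shown to the user, closing the loop
}

// 01: render a specific explanation, never a generic one.
function explain(rec: Recommendation): string {
  const items = rec.evidence
    .map((e) => `${e.signal} (${e.daysAgo}d ago)`)
    .join(", ");
  return `Recommended because you engaged with: ${items}`;
}

// 04: make uncertainty legible with explicit bands, not raw floats.
function confidenceLabel(rec: Recommendation): "high" | "medium" | "low" {
  if (rec.confidence >= 0.8) return "high";
  if (rec.confidence >= 0.5) return "medium";
  return "low";
}

// 02 + 03: accept a typed correction and acknowledge the change it caused.
function applyCorrection(
  rec: Recommendation,
  reason: CorrectionReason
): Acknowledgment {
  // A real system would update the model or feature weights here;
  // this sketch only demonstrates the user-visible acknowledgment.
  const action = {
    wrong_interest: "We'll stop using this topic as a signal for you.",
    stale_signal: "We'll discount activity older than 30 days.",
    bad_timing: "We'll hold similar recommendations until later.",
  }[reason];
  return { message: `Got it: ${rec.itemId} was off. ${action}` };
}

// Example run
const rec: Recommendation = {
  itemId: "article-17",
  confidence: 0.62,
  evidence: [
    { signal: "read:article-12", daysAgo: 3 },
    { signal: "saved:article-15", daysAgo: 6 },
  ],
};

console.log(explain(rec));                           // specific, not generic
console.log(confidenceLabel(rec));                   // "medium"
console.log(applyCorrection(rec, "stale_signal").message);
```

One design choice worth noting: corrections are a closed vocabulary, not free text, which is what lets the acknowledgment name the specific behavior change rather than offering a vague "thanks for your feedback."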

Where this goes next.

Arca is a design exploration, not a shipped product. The questions it raises are relevant to any AI-adjacent product team: how do you build explainability into the system before the interface, not after? How do you measure whether users actually trust the AI more over time? What does a "trust score" even mean?

These are the questions I'm thinking through. If you're working on AI product design and want to talk through any of this, I'm interested.