The original text of this article can be found at:
https://www.linkedin.com/pulse/from-tool-teammate-framework-designing-ai-driven-steve-hickey-220ce

For product people, navigating the shift to AI isn't just about the technology. Our mission hasn't changed: we build products that serve human needs. The core design challenge is earning enough user trust to graduate our AI from a simple tool to a proactive teammate.

On my team at Robin, we’re developing a framework to help us be intentional about designing this human-AI relationship. It ensures we build user trust and comfort in lockstep with the AI’s growing capabilities. Here’s how we’re thinking about it.

[Image: abstract art using circles and lines to represent a complex set of network relationships and entities]

What kind of teammate are we designing?

Users will personify an AI whether we plan for it or not. This mental model sets their expectations for its role and autonomy. We need to be deliberate about the persona our AI adopts. I see it as a spectrum:

  • The Assistant: This is the entry-level relationship. The AI is subservient and handles simple, transactional tasks. The user delegates with clear commands like "Find an available conference room" or "Pull last quarter's desk utilization report." It's a reliable tool.

  • The Collaborator: Here, the relationship becomes a partnership. The AI's capabilities complement the user's, and it can be trusted with more complex, generative tasks. A user might ask it to "Propose a set of booking policies for our new hybrid model," expecting a solid first version to review and refine.

  • The Agent: This is the most advanced relationship, built on deep trust. The AI is a proactive, autonomous entity granted the responsibility to achieve goals on the user's behalf, often without explicit, step-by-step approval. For example, an agent could be empowered to proactively optimize an entire office layout based on incoming data, simply notifying the user of the outcome.

Our goal is to guide users on this journey. When someone trusts an AI as an agent, they move from just using a product to partnering with it. That deep integration is where we solve a customer's most valuable problems.

How do users interact with their AI teammate?

The UI is where the human-AI relationship takes shape. We see a few core interaction models emerging, each serving a different purpose.

  • Chat: The most flexible entry point. A conversational UI lets a user ask for anything, but its open-ended nature can make it difficult to discover the AI's full capabilities.

  • Split UI: A hybrid model that makes chat more powerful by embedding traditional UI into the conversation. An admin might ask, "Set up a new neighborhood for the engineering team." The AI could respond not with text, but with a visual floor plan editor, with the relevant area highlighted and a configuration panel ready to go (see the sketch after this list).

  • Multimodal: This model accepts richer inputs beyond text. A user could upload a spreadsheet of new hires and ask, "Assign these people to desks near their teams and send them a welcome packet with a map."

  • Background: The most agentic model, where the AI operates proactively. It might notice that one team's recurring meetings are causing constant scheduling conflicts and suggest creating a dedicated project room, flagging both the pattern and a proposed solution.
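To make the Split UI model a bit more concrete, here is a minimal sketch in TypeScript. The names (AssistantResponse, floorPlanEditor, handleRequest) are hypothetical illustrations, not Robin's actual API; the point is simply that a response can carry either plain text or a structured payload the client renders as embedded UI.

    // Hypothetical response shape: the assistant can answer with plain text
    // or with a structured payload the client renders as embedded UI.
    type AssistantResponse =
      | { kind: "text"; message: string }
      | {
          kind: "ui";
          component: "floorPlanEditor";   // which embedded UI to render
          props: {
            floorId: string;              // floor to display
            highlightedZoneId: string;    // area pre-selected for the user
            configPanelOpen: boolean;     // open the configuration panel immediately
          };
        };

    // A stubbed handler that turns an admin's request into a Split UI response.
    function handleRequest(utterance: string): AssistantResponse {
      if (utterance.toLowerCase().includes("neighborhood")) {
        return {
          kind: "ui",
          component: "floorPlanEditor",
          props: {
            floorId: "hq-floor-3",
            highlightedZoneId: "zone-engineering",
            configPanelOpen: true,
          },
        };
      }
      // Fall back to a conversational answer for anything else.
      return { kind: "text", message: "Here's what I found on that." };
    }

The design choice worth noting: the conversation stays in charge of intent while the embedded UI handles precision, so the user gets the flexibility of chat without giving up the efficiency of a purpose-built editor.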

The Core Trade-Off: Flexibility vs. Efficiency

These models bring up a fundamental design tension for every product team: the trade-off between flexibility (chat) and efficiency (traditional UI).

There's no single right answer, but the most effective strategy we've found is progressive disclosure. We need to meet users where they are—often with familiar, efficient UIs for specific tasks—and then strategically introduce them to more powerful, flexible interactions as they gain confidence.

But this progression is meaningless without a foundation of unambiguous user control. The goal isn't to force autonomy on users; it's to earn the trust to offer it as a powerful option. A user must always feel in command. Unwanted "help" isn't helpful; it's friction. They need a clear, simple way to dial back the AI's proactivity at any time.
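One concrete way to give users that dial is a per-user autonomy setting the AI must check before doing anything on its own. Here is a minimal sketch, again in TypeScript with hypothetical names; the specific levels are illustrative, not a prescription.

    // Hypothetical autonomy levels a user can dial up or down at any time.
    type AutonomyLevel = "suggest-only" | "ask-before-acting" | "act-and-notify";

    interface UserAiSettings {
      autonomy: AutonomyLevel;
    }

    // Gate every proactive step on the user's current setting, so "help"
    // never arrives with more autonomy than the user has granted.
    function handleProactiveStep(settings: UserAiSettings, description: string, act: () => void) {
      switch (settings.autonomy) {
        case "suggest-only":
          console.log(`Suggestion: ${description}`);      // surface the idea, change nothing
          break;
        case "ask-before-acting":
          console.log(`Needs approval: ${description}`);  // wait for explicit confirmation
          break;
        case "act-and-notify":
          act();                                           // trusted to act on the user's behalf
          console.log(`Done: ${description}`);             // keep the user informed of the outcome
          break;
      }
    }

The important property is that the ceiling is set by the user, not the model: moving up a level is always the user's call, and moving back down is one setting away.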

A 4-Stage Roadmap for Building Trust

We can map this journey from simple tool to trusted teammate across four distinct stages. This gives our teams a clear roadmap for evolving the product's AI.

  1. Task-Specific: The user interacts with AI through constrained, predictable UI elements, like a "Summarize" button. Trust is built on reliability for one specific job.

  2. Assistant-Based: The user begins leveraging a chat interface. We can use suggested prompts to guide them and showcase what’s possible, expanding their understanding of the AI's scope.

  3. Intelligence-Based: The system becomes more proactive, surfacing insights and suggestions. For example: "Peak desk usage is on Tuesdays. You could implement a hot-desking policy on Fridays to save on energy costs."

  4. Autonomous: The user fully trusts the AI to act on their behalf to achieve preset goals, such as automatically optimizing office space based on new utilization data.

Getting Started

Any team looking to build beyond simple features will run into significant challenges: ensuring Explainability (the "why did the AI do that?" question), designing for truly Complex Workflows, and building robust Edit/Undo functionality for AI-driven actions.
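To make the last of those challenges more concrete, here is one way an Edit/Undo layer for AI-driven actions could be sketched: record every change with enough information to explain it and to reverse it. The types and names below are hypothetical illustrations, not a production design.

    // Hypothetical record of an AI-driven change, stored with enough
    // information to explain it and to reverse it later.
    interface AiAction {
      id: string;
      description: string;   // human-readable: what the AI did and why
      apply: () => void;     // perform the change
      revert: () => void;    // undo the change exactly
    }

    class AiActionLog {
      private history: AiAction[] = [];

      // Run an action and keep it so the user can inspect or undo it.
      perform(action: AiAction) {
        action.apply();
        this.history.push(action);
      }

      // Let the user roll back the most recent AI-driven change.
      undoLast() {
        const last = this.history.pop();
        if (last) {
          last.revert();
          console.log(`Undid: ${last.description}`);
        }
      }
    }

Even a log this simple doubles as a transparency surface: the same descriptions that power undo can help answer the "why did the AI do that?" question.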

While these are large-scale efforts that will evolve over time, the foundational work of building trust with our users can and should start now. For any team embarking on this journey, the immediate focus needs to be on building a strong 'safety net' for the user. This means prioritizing user control and transparency.

Think about it: before a user is willing to grant an AI more autonomy, they have to feel completely confident that they are still in command. They need to know, without a doubt, that they can easily direct, correct, or override the AI’s actions at any point.

By designing clear controls and intuitive feedback loops, we're giving our users the confidence they need to explore what's possible with AI. This 'safety net' is precisely what allows them to take that leap of faith—moving from simply using our products as tools to truly trusting them as indispensable, intelligent partners.