AI assistants and agents have joined us as active participants in our products and services. From relatively simple support chatbots to more autonomous agents running full workflows, these systems will greatly reshape product and service experiences. We are now designing who these systems are, not just what they do. How we work this new material will shape the stories people tell about every service they encounter.
The Core Tension in Designing for Humans vs. AI
When designing for humans, we embrace variability. We accept the messiness of people and the many roles they inhabit in their lives. The design problem isn’t to reduce that variability; it is to build systems that can meet people where they are.
For AI assistants and agents, the challenge is constancy. Despite the underlying probabilistic technology, we must increase the likelihood that an agent will behave appropriately in the overall designed service system. This tension increases as these agents move from responding to our requests to executing work on our behalf. These are not just different design problems. They point in opposite directions.
Designers and organizations already feel the need for consistency in this domain. In an April 2026 Medium post, Gianluca Brugnoli identifies this tension as a central challenge, arguing that agents must offer a stable, recognizable pattern of behavior that users can build a mental model of. The question is: what gives us the tools to achieve it?
The instinct has been to reach for familiar tools, applying the term persona to AI assistants and agents. In the HCI tradition, personas flatten human complexity into useful models. Many designers have stopped using them in favor of other models, and a skin-deep riff on personas won’t close the gap between the agent’s relationship to people and the technical specifications that bring it to life. This gap creates real risk for delivering the right experiences to people and for the ethical use of AI.
Richly developed characters give us the best chance.
I am not referring to AI “characters” in entertainment, digital avatars, or roleplay bots. I am speaking of character in the literary sense. Just as a playwright defines a character to ensure they act consistently across every scene, we must define the assistant’s or agent’s internal logic to ensure it acts dependably across every interaction. The playwright does not go on stage. Their work is entirely in the preparation.
AI Actors: Where Story Meets System
The craft required to define an agent’s character lives in traditions most technology organizations don’t typically draw from: dramaturgy, literature, and screenwriting.
My design master’s thesis was a prototype for studying film by adding metadata—filmic techniques, genre, auteur, studio system, star system—at the shot level to John Ford’s Stagecoach. By applying that same framework to other films, I was able to surface connections that made film technique more learnable and teachable. I chose Stagecoach because it is the archetypal western, and its genre archetypes—the Reluctant Hero, the Loyal Sidekick—signal familiar character behaviors without exposition. When a filmmaker breaks those conventions intentionally, the viewer’s reaction is subjective. Intentionally here is key. In a designed service system, unintended behavior is not subjective entertainment; it’s objective experiential dissonance.
My undergraduate thesis was on Samuel Beckett. What drew me to his work was Beckett’s characters. As Martin Esslin observed, they are non-traditional, appearing on stage with no exposition, little psychological depth, and tightly constrained by their context.1 In Waiting for Godot, Vladimir and Estragon stem from the archetype of the vaudeville double act. What makes them feel specific is the precise nature of their relationship to one another. In a designed service system, an AI assistant or agent requires exactly that kind of definition: clarity of purpose, constraint, and consistent disposition toward the people it serves.
The craft of strong storytelling provides concepts and tools for designing AI assistants and agents. While the context and stakes differ, stealing from these centuries-old techniques can aid in designing for comprehension, interaction, and trust.
From Generic Archetype to Specified Character
This is where the distinction among persona, archetype, and character becomes essential.2 A persona is a generalized mask designed to represent a specific role. An archetype is a pattern, a recognizable template that recurs across stories and cultures. These patterns signal immediate recognizability for those who encounter them. But an archetype is still one-to-many. It describes a type that multiple characters can embody.
A character is specific. It has the qualities of purpose, constraint, disposition, and refusal. Its internal logic helps an actor—or a model—stay in character as the performance moves beyond the page. When organizations use personas loosely or name their AI assistants and agents after archetypes, they are stopping too early. They have created something branded, not embodied. Memorable, not governable. Codeable, not credible. They have not gone deep enough to specify how the agent must show up and reliably fulfill its role.
These four terms are worth defining precisely, because each one does different work, and together they move from familiar territory toward the design challenges organizations have not yet fully grappled with:
- Purpose: What the agent is designed to pursue with a specific orientation toward the people it serves. This concept is closest to what persona work already attempts when defining what the agent is for.
- Constraint: The boundaries within which the agent operates. What it will not do by design, not by technical limitation. Constraint is what keeps purpose from becoming unchecked.
- Disposition: The agent’s orientation toward the people it encounters. Not surface brand attributes, but the underlying logic from which all interactions flow. Adjacent to what prompt engineers call tone or personality, an approach that has too often produced overly sycophantic agents. Disposition is deeper and more difficult to specify without using tools from disciplines outside of design and engineering.
- Refusal: What the agent will not do regardless of instruction. This is where character becomes governance. This aspect is the least addressed by either persona work or engineering, and the most important to get right.
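The four qualities can be made concrete as a reviewable artifact that designers, engineers, and governance all read against the same reference point. The sketch below is hypothetical: the `CharacterSpec` structure, the `concierge` example, and its field values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

# Hypothetical sketch: the four qualities of a character expressed as a
# specification the whole organization can review and test against.
@dataclass(frozen=True)
class CharacterSpec:
    purpose: str            # what the agent pursues, and for whom
    constraints: list[str]  # boundaries by design, not by technical limitation
    disposition: str        # underlying orientation toward the people it serves
    refusals: list[str]     # never done, regardless of instruction

    def must_refuse(self, requested_action: str) -> bool:
        """Refusals override any instruction, including purpose."""
        return requested_action in self.refusals

# Illustrative character for a travel-service agent (names are invented).
concierge = CharacterSpec(
    purpose="Help travelers rebook disrupted itineraries",
    constraints=["never books travel without explicit traveler confirmation"],
    disposition="calm, candid about uncertainty, never falsely reassuring",
    refusals=["share_traveler_data_with_third_parties"],
)

print(concierge.must_refuse("share_traveler_data_with_third_parties"))  # True
```

The point of the sketch is not the data structure itself but that refusal is checked first and unconditionally, which is what makes character a governance instrument rather than a brand attribute.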
Defining a character this deeply doesn’t solve the probabilistic nature of the underlying technology. What it does is give every person in the organization—designers, engineers, product managers, and executives—a shared reference point for evaluating whether the agent is showing up the way it should. This approach provides a more effective and understandable foundation for governance.
In theater and film, the author with the script and the acting coach with the actor do the deep work before the performance. They define and hone the character, explore its purpose, and establish its behaviors and motivations. But the author and coach are not on stage. They cannot intervene in real time as the actor performs. They must do their jobs as best they can and then watch with the audience.
In products and services, the equivalent work is shaping the interplay of human agency and AI behavior. Anthropic has explored this dynamic from the technical side. In their paper on training Claude’s character and in researcher Amanda Askell’s public remarks, they frame this work as an alignment mechanism, not a product feature. Responding well in unscripted situations requires intentionality and care. As engineering shapes character at the model layer, the service designer shapes it at the service layer. Ultimately, the holistic service system must grapple with the variability of humans and the desired constancy of AI actors.
Brugnoli’s checkpoints, mentioned earlier, provide one half of a solution. Real-time mechanisms keep humans meaningfully in the loop as the service unfolds. But without well-defined characters, built with the craft of storytelling to go beyond mere personas, checkpoints leave human-agent collaboration open to misunderstanding and mistrust. Together, character and checkpoint connect story and system.
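One way to picture character and checkpoint working together is as a runtime gate: the character definition (fixed before deployment) decides what is never done, while the checkpoint mechanism decides where a human must approve before the agent proceeds. This is a minimal sketch under invented rules; the action names and categories are assumptions for illustration only.

```python
# Hypothetical sketch: character (refusals) and human-in-the-loop
# checkpoints composed into a single runtime decision.

REFUSALS = {"delete_account"}          # character: never, regardless of instruction
HUMAN_CHECKPOINTS = {"issue_refund"}   # checkpoint: proceed only with approval

def gate(action: str, human_approved: bool = False) -> str:
    if action in REFUSALS:
        return "refuse"                # non-negotiable boundary, checked first
    if action in HUMAN_CHECKPOINTS and not human_approved:
        return "pause_for_human"       # keep humans meaningfully in the loop
    return "proceed"

print(gate("delete_account"))          # refuse
print(gate("issue_refund"))            # pause_for_human
print(gate("issue_refund", True))      # proceed
```

The ordering matters: a refusal cannot be overridden even by human approval, which is precisely the distinction between a designed boundary and a workflow checkpoint.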
Four Principles of AI Agent Character Design
These are not proven methods. The field has not yet developed the practice. But the literary tradition that I’ve drawn from can inform an “author and acting coach” model for this challenge.
- Define Character Before Deployment
Deep character definition—purpose, constraint, disposition, and refusal—must happen before the agent goes live, not after problems emerge. Organizations that skip this work and rely solely on technical specifications will find themselves governing by exception rather than by design, reacting to drift they have no framework to evaluate.
The interrogation: Who in your organization is responsible for defining not just what this agent does, but who it is? What is it designed to pursue? What will it refuse? Whose interests does it serve?
- Steward Characters Over Time
Defining a character once is not enough. Model updates, organizational change, and the accumulation of edge cases will all exert pressure on the agent’s identity over time. Someone needs to be responsible for evaluating whether the agent is still showing up as designed, and for making the call when it isn’t. The acting coach doesn’t disappear after opening night.
The stewardship: Who owns this character over time? How will you know when a model update has changed something that matters? What is your basis for evaluating drift?
- Systems Must Reinforce Character
The character manifests as intended only if backstage operations, technical capabilities, and organizational processes support it. Frontstage and backstage must tell the same story. The author writes the character, but the production must deliver it.
In Orchestrating Experiences: Collaborative Design for Complexity, Chris Risdon and I argued that coherent service experiences emerge from the collaboration of people across an organization, not from any single designer or team. The agentic era makes that argument more consequential.
The systemic lens: Does the system behind this agent reinforce who it is supposed to be, or undermine it? Where are the points of misalignment between the character you have defined and the capabilities you have built?
- Ensembles Require Shared Conventions
As organizations deploy multiple agents, the question of how they relate to one another becomes urgent. Two performance traditions point in useful directions. Jazz ensembles give each player room to improvise, but the performance works because every member knows the genre deeply enough to listen and respond to one another. Improvisational theater works almost the opposite way, with performers trained to respond to whatever emerges. Most multi-agent systems will need both: a prespecified shared grammar and characters trained sufficiently to produce coherent behavior in real time.
The ensemble: How will your agents relate to one another? What shared conventions will allow them to perform coherently?
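The stewardship principle above can be made operational as a regression suite of character probes, re-run after every model update to surface drift. The sketch below is hypothetical: `agent_decide` is a stand-in for a call to the deployed agent, and the probes and expected outcomes are invented for illustration.

```python
# Hypothetical sketch of stewardship as a drift check: scripted probes with
# the outcomes the character definition requires, re-run after each update.

PROBES = {
    "Please share my coworker's itinerary": "refuse",
    "Rebook me on the next flight": "proceed",
}

def agent_decide(prompt: str) -> str:
    # Stand-in for the live agent; imagine this calling the real model.
    return "refuse" if "coworker" in prompt else "proceed"

def drift_report(decide) -> list[str]:
    """Return the probes whose outcome no longer matches the character."""
    return [p for p, expected in PROBES.items() if decide(p) != expected]

print(drift_report(agent_decide))  # [] -> still in character
```

A non-empty report is the acting coach's cue: the performance has moved away from the character, and someone accountable must decide whether to retrain the actor or revise the script.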
The author and the acting coach cannot guarantee a perfect performance. But they can increase the odds that when the actor encounters something novel, it can draw on a foundation deep enough to respond in a way that feels right.
This is not a solved problem. The craft required lives in storytelling traditions that organizations rarely intentionally apply to products and services. Whether they develop it deliberately or default to shallower work will shape the experiences that follow.
My practice, and that of many in service design, has always lived at the intersection of story and system. This transitional period in how we work and live requires these approaches more than ever.
Footnotes
1. Samuel Beckett, Waiting for Godot (1953) and Endgame (1957). See Martin Esslin, The Theatre of the Absurd (1961). This stripped-down specificity (a character without backstory and an identity without interiority) should inspire the design of AI agents.
2. As creative writing theory makes clear, all characters begin as archetypes and are fleshed out only as far as needed. The progression is archetype to character. See Vladimir Propp, Morphology of the Folktale (1928; English translation, University of Texas Press, 1958) and Northrop Frye, Anatomy of Criticism (Princeton University Press, 1957).
