Don't botch the bot: Designing interactions for AI
Summary
It seems like every company is adding a conversational AI chatbot to its website lately, but how do you actually go about making these experiences valuable and intuitive? Savannah Carlin will present a case study on a conversational AI chatbot, Marqeta Docs AI, that she designed for a developer documentation site in the fintech industry. She will share her insights, mistakes, and perspectives on how to use AI in a meaningful, seamless way, especially for companies like Marqeta that operate in highly regulated industries with strict compliance standards.

The talk will use specific examples and visuals to show what makes conversational AI interactions uniquely challenging and the design patterns that can address those challenges. These include managing user expectations, handling errors or misunderstandings within the conversation, and ensuring that users can quickly judge the quality of a bot's response. You'll gain a deeper understanding of the intricacies involved in designing interactions for AI, along with practical advice you can apply in your own design processes.

Take-aways
- What to consider before you add AI to your product to ensure it will be valuable, usable, and safe for its intended workflows
- The interactions that are unique to conversational AI experiences and the design patterns that work for them
- Common challenges in designing conversational AI experiences and how to overcome them
Key Insights
- Defining a clear and specific primary use case is crucial before starting any generative AI chatbot project.
- High-quality, thoroughly reviewed training data is foundational to delivering accurate and useful AI outputs.
- Initial state messaging must clearly frame what the chatbot can and cannot help with to reduce irrelevant or off-topic queries.
- Loading indicators for AI text responses should be subtle, with progress reflected by text appearing rather than distracting animations.
- Supporting efficient scrolling and prompt review is vital, since users frequently check and refine their inputs against often-long answers.
- Error states in AI chatbots shift from traditional fixed errors to helping users write better prompts to get more relevant results.
- Transparency about accuracy, AI limitations, and source citations builds user trust, especially in regulated domains like fintech.
- Providing users with prompt engineering guidance via documentation significantly improves the quality of chatbot interactions.
- Accessibility considerations, like keyboard navigation and screen reader compatibility, must be integrated from the start, especially given the large text outputs.
- Chatbots can reduce customer support friction and encourage users to ask questions they might not have otherwise, enhancing user engagement with the product.
Notable Quotes
"If you have any doubts about the quality of the training data, do not proceed."
"You want to assist people in framing the interaction and setting their expectations correctly so they know how to be successful."
"The biggest difference with error states in AI bots is helping people write prompts effectively, not just recovering from simple failures."
"Loading text itself is a loading indicator; the letters appearing show progress better than jumpy animations."
"People often forget what they wrote and then want to check their prompt again before refining it."
"Every output should have at least three source links, almost like citations in a research paper."
"We had to be very careful about accuracy because we're in FinTech and compliance is critical."
"People started asking questions to the bot that they wouldn’t have taken the time to email about."
"It’s really important to be clear and transparent about what the tool is good at and what it’s not good at."
"Accessibility testing included making sure everyone could navigate it using a keyboard alone."