Integrating generative AI into enterprise products: A case study from dscout
Summary
Join dscout for an in-depth case study walking through the first year of integrating generative AI into an established enterprise product. In a session tailored specifically for product teams, the speakers delve into the paradigm shifts in UX professionals' relationships to technology, strategy, and process. dscout’s Jonathan Fairman, VP of Product, and Kevin Johnson, Head of AI, will discuss how the stochastic nature of AI technologies pushed the team toward a piloting approach to development and made traditional wireframe testing of concepts no longer possible.
Key Insights
- Building AI products requires giving up control over the product's behavior due to the unpredictable nature of LLMs.
- Traditional usability testing assumptions break down with AI because each user's experience dynamically diverges after the first interaction.
- Longitudinal testing is critical for AI products to understand how user relationships with models evolve over time.
- Wizard of Oz testing (humans simulating AI) is a low-cost method to validate assumptions before building complex AI models.
- Model drift in LLMs causes unpredictable and sometimes harmful behavior, necessitating turn-capping or memory features to maintain consistency.
- Users build their mental models for AI assistants dynamically while using them, unlike with fixed software products.
- The stochastic nature of AI results means identical commands can produce different outcomes, affecting user mastery and trust.
- Multi-dimensional prioritization frameworks, such as importance vs. difficulty, help teams align on which assumptions to test first in AI product development.
- Recognizing failure as inevitable and building failure metrics accelerates learning and innovation in generative AI.
- Humans will retain roles centered on creativity, critical thinking, accountability, and authenticity in a future transformed by AI.
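The turn-capping idea mentioned above can be illustrated with a small sketch: keep only the most recent conversation turns and re-anchor every request on a fixed system prompt so the model's behavior stays consistent as the chat grows. This is a minimal illustration under assumptions, not dscout's implementation; the cap size, class name, and message format are all hypothetical.

```python
from collections import deque

class CappedConversation:
    """Limit conversation memory to mitigate model drift.

    A hypothetical sketch: the cap and message schema are assumptions,
    not the approach described by the speakers.
    """

    def __init__(self, system_prompt, max_turns=10):
        self.system_prompt = system_prompt
        self.turns = deque(maxlen=max_turns)  # oldest turns fall off automatically

    def add_turn(self, role, content):
        self.turns.append({"role": role, "content": content})

    def context(self):
        # Every request is re-anchored on the fixed system prompt,
        # so long chats cannot drift arbitrarily far from it.
        return [{"role": "system", "content": self.system_prompt}, *self.turns]
```

The trade-off is that discarded turns are forgotten, which is why the talk pairs turn-capping with explicit memory features.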
Notable Quotes
"Building on AI means giving up control, which is tough but necessary."
"If the folks who design LLMs don’t know exactly how they work, it’s okay if you don’t either."
"The product is no longer fixed; after the first interaction, it diverges user by user."
"We’re not studying products anymore; we’re studying experiences in ecosystems."
"Failure is inevitable—if you’re right all the time, you’re not taking big enough swings."
"Users are building mental models of how to use AI while they’re using it."
"Testing AI products requires longitudinal methods to see how relationships and experiences evolve."
"Wizard of Oz testing with humans playing bots can reveal insights without building a full model."
"Asynchronous conversations have nested threads humans track easily but are hard for bots to replicate."
"Creativity, accountability, and authenticity will be the new markers of humanity alongside AI."
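The Wizard of Oz testing the quotes describe (humans playing bots) can be prototyped with a trivial harness in which a researcher supplies the bot's replies instead of a model. A hedged sketch: the function names and transcript format here are hypothetical, not taken from the talk.

```python
def make_wizard_bot(wizard_reply):
    """Return a chat function backed by a human 'wizard' instead of a model.

    wizard_reply(user_message, transcript) is invoked for each user turn;
    in a live study it would prompt a researcher to type the bot's answer.
    """
    transcript = []  # running (role, text) log for later analysis

    def bot(user_message):
        transcript.append(("user", user_message))
        reply = wizard_reply(user_message, list(transcript))
        transcript.append(("bot", reply))
        return reply

    return bot, transcript
```

Swapping `wizard_reply` for a real model call later lets the team reuse the same study harness once the assumptions are validated.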