Operationalizing Responsible, Human-Centered AI
This video is featured in the AI and UX playlist.
Summary
AI-enabled systems that are responsible and human-centered will be powerful partners to humans. Building systems that people are willing to be responsible for, and that their users find trustworthy, enables that partnership. Carol shares guidance, grounded in UX research, for operationalizing the work of making responsible, human-centered AI systems. She presents methods UX teams can use to identify bias, prevent harm, and support human-machine teaming by designing appropriate evidence of system capabilities and integrity into the interaction. Once these dynamic systems are out in the world, critical oversight activities are needed for them to remain effective. This session introduces each of these complex topics and provides references for further exploration of these issues.
Key Insights
- Responsible AI systems require humans to retain ultimate responsibility and control.
- Bias in AI data is inevitable, but awareness and mitigation of harmful bias is crucial.
- Human-machine teaming must be designed with clear responsibilities and transparency.
- AI systems are dynamic and constantly evolving, making continuous oversight essential.
- Speculative exercises like 'What Could Go Wrong' support anticipating harms proactively.
- Calibrated trust in AI means users neither overtrust nor undertrust the system.
- Ethical frameworks such as the Three Q Do No Harm help plan for impact on vulnerable groups.
- Diverse teams improve innovation by being more aware of biases and ethical variation.
- UX practitioners should understand AI concepts to contribute effectively without needing deep technical skills.
- Designing safe AI includes making unsafe actions difficult and safe states easy to maintain.
Notable Quotes
"Responsible systems are systems that keep humans in control."
"Data is a function of our history; it reflects priorities, preferences, and prejudices."
"AI will ensure appropriate human judgment, not replace it."
"We want people to gain calibrated levels of trust, not overtrust or undertrust."
"If the system is not confident, it should transparently communicate that and hand off to humans."
"Ethical design is not superficial; if we don't ask the tough questions, who will?"
"We need to be uncomfortable and get used to asking hard questions about AI."
"Humans are still better at many activities and those strengths should be prioritized."
"Adopting technical ethics gives teams permission to question implications beyond opinions."
"These systems aren’t stable like old software; they change as data and models evolve."