Rosenverse

Log in or create a free Rosenverse account to watch this video.

Log in Create free account

100s of community videos are available to free members. Conference talks are generally available to Gold members.

When AI Becomes the User’s Point Person—and Point of Failure

Thursday, August 7, 2025 • Rosenfeld Community
Share the love for this talk
When AI Becomes the User’s Point Person—and Point of Failure
Speakers: Heidi Trost
Link:

Summary

Imagine slipping on a sleek pair of smart glasses. Not only do you look sharp, the glasses capture everything you see, hear, and do. Your AI assistant—built into the glasses and synced to your email, social media accounts, health apps, and finances—manages your life. It’s tasked with paying bills, booking trips, replying to messages, even helping you swipe right. Over time, you find yourself chitchatting with your AI assistant. You call him Charlie. Now imagine you’re a threat actor. That trust between user and AI assistant? It’s your entry point. If your product is powered by AI, you’re not just designing features—you’re designing an entire relationship. You’re designing Charlie. Let’s talk about where that goes wrong—and how to get it right.

Key Insights

  • Users often do not understand why AI-powered systems request extensive personal data, increasing privacy risks.

  • Trust in AI agents can become excessive, creating new vectors for manipulation by threat actors.

  • Security issues typically occur beneath the surface until alerts disrupt the user experience, often causing frustration.

  • Prompt injection attacks pose a novel threat where malicious inputs manipulate AI agents to access sensitive user data.

  • Multimodal AI interfaces introduce complexity in security decisions, increasing chances for user errors.

  • Secure by default settings reduce burden on users and improve overall protection without requiring user intervention.

  • Cross-disciplinary collaboration between UX, security, product, legal, and compliance teams is crucial for safer AI design.

  • Users need clear, contextual guidance during onboarding to make informed decisions about data sharing and security settings.

  • Transparency about AI limitations and giving users the option to reverse AI actions are essential for building trust.

  • Threat actors are likely to exploit growing AI access to personal data and automate vulnerabilities discovery.

Notable Quotes

"When a product is powered by AI, you're not just designing the features; you are designing an entire relationship."

"Charlie is like the most annoying coworker who constantly surfaces problems but never offers solutions to Alice."

"Threat actors probably know your system better than you do and are looking for any entry points to exploit."

"Alice often perceives Charlie as just another barrage of alerts filled with jargon she doesn't understand."

"Prompt injection attacks can trick AI agents into accessing private data like emails without the user realizing."

"People become incrementally more comfortable giving away data because they see the value AI provides."

"We need secure defaults that protect users out of the box without them having to figure it out."

"Alert fatigue is real; users can't be burdened with constant security decisions or they'll ignore them."

"Giving users the ability to reverse AI-driven actions is critical but currently underexplored."

"If Charlie has been tampered with, Alice needs a clear way to be alerted that she shouldn't trust it."

Ask the Rosenbot
Dane DeSutter
Keeping the Body in Mind: What Gestures and Embodied Actions Tell You That Users May Not
2024 • Advancing Research 2024
Gold
Jemma Ahmed
Convergent Research Techniques in Customer Journey Mapping
2020 • Advancing Research 2020
Gold
Patrick Boehler
Fishing for Real Needs: Reimagining Journalism Needs with AI
2025 • Designing with AI 2025
Gold
Dave Malouf
The Past, Present, and Future of DesignOps: a 2-part DesignOps Community Call (Part 1)
2022 • DesignOps Community
Asia Hoe
Partnering with Product: A Journey from Junior to Senior Design
2023 • Design in Product 2023
Gold
Peter Van Dijck
Designing AI-first products on top of a rapidly evolving technology
2025 • Designing with AI 2025
Gold
Bria Alexander
Opening Remarks
2023 • DesignOps Summit 2023
Gold
Meredith Black
Scaling Design Culture
2017 • DesignOps Summit 2017
Gold
Devon Powers
Imagining Better Futures
2022 • Advancing Research 2022
Gold
Niko Laitinen
Adaptable Org Design for Resilient Times
2021 • Design at Scale 2021
Gold
Saara Kamppari-Miller
Theme Three Intro
2023 • DesignOps Summit 2023
Gold
Lada Gorlenko
Theme 2 Intro
2022 • Design at Scale 2022
Gold
Jen Crim
Culture, DIBS & Recruiting
2021 • Design at Scale 2021
Gold
Lily Aduana
5 Reasons to Bring Your Recruiting in-House (and How To Do It)
2021 • Advancing Research 2021
Gold
Sam Proulx
Designing For Screen Readers: Understanding the Mental Models and Techniques of Real Users
2021 • Civic Design 2021
Gold
Rachael Greene
Building a Design Ops Practice that Really Works (Most of the Time)
2025 • DesignOps Community

More Videos

Jon Fukuda

"Getting people who don’t communicate well to at least work together on a single thing is the first step to innovating together."

Jon Fukuda Amy Evans Ignacio Martinez Joe Meersman

The Big Question about Innovation: A Panel Discussion

September 25, 2024

Sam Proulx

"If you centralize accessibility knowledge in your organization, it can become a blocker."

Sam Proulx

Accessibility: An Opportunity to Innovate

March 9, 2022

Anna Avrekh

"If you don’t invest in retention, you get the leaky bucket syndrome where diverse talent keeps leaving."

Anna Avrekh Amy Jiménez Márquez Morgan C. Ramsey Catarina Tsang

Diversity In and For Design: Building Conscious Diversity in Design and Research

June 9, 2021

Greg Nudelman

"Context retention is tough; Siri and Google Assistant completely lose prior contexts after one action."

Greg Nudelman

Designing Conversational Interfaces

November 14, 2019

George Abraham

"You get real coded components, not just HTML snippets or style attributes, meaning developers can drag, drop, and extend with hooks already in place."

George Abraham Stefan Ivanov

Design Systems To-Go: Reimagining Developer Handoff, and Introducing App Builder (Part 2)

October 1, 2021

Shipra Kayan

"Synthetic data created by AI—like fake personas and journeys—is super derivative and often not insightful."

Shipra Kayan

Make your research synthesis speedy and more collaborative using a canvas

January 24, 2025

Sam Proulx

"When you press a button like an upvote, it’s the developer’s job to make sure the visual state is communicated via ARIA landmarks."

Sam Proulx

Designing For Screen Readers: Understanding the Mental Models and Techniques of Real Users

December 10, 2021

Shipra Kayan

"Ownership of VOC is a hot potato, so we built a coalition with a facilitator to keep it collaborative and effective."

Shipra Kayan

How we Built a VoC (Voice of the Customer) Practice at Upwork from the Ground Up

September 30, 2021

Dane DeSutter

"The computer metaphor of mind does not do a very good job of explaining what we see people do with their bodies."

Dane DeSutter

Keeping the Body in Mind: What Gestures and Embodied Actions Tell You That Users May Not

March 26, 2024