Why AI Is Bad at Research (and how to make it actually useful)
Summary
LLMs are everywhere, but when it comes to real research, they often fall short. Generic LLMs weren’t built for continuous research workflows, and product researchers quickly see the problem: the outputs are generic, lack full context, and struggle to connect multiple data sources. Instead of surfacing meaningful insights, they can amplify noise. In this session, Daniel will break down why AI often fails research teams, what’s missing, and how to make AI actually useful for continuous product research: accelerating analysis, connecting insights across sources, and keeping researchers at the center, equipped with a powerful tool rather than replaced by one.
Key Insights
- AI in research struggles with large datasets, often averaging results and missing subtle but important signals.
- Curating and filtering datasets by removing irrelevant data improves AI research output quality.
- Scoping research into focused projects or topics helps AI deliver more precise responses.
- Asking one question at a time significantly enhances the quality of AI-generated answers.
- Providing detailed contextual information (personas, company background, product details) to AI boosts specificity and nuance in responses.
- AI hallucinations and trust issues necessitate human-in-the-loop processes to verify output quality and citations.
- Iterative refinement of AI outputs, similar to app development, is critical for achieving polished research results.
- Spot checking AI-generated citations can be an effective and efficient way to validate research quality.
- Context passed as embedded knowledge rather than repeated in prompts yields better AI results.
- Using multiple specialized AI agents to critique each other’s outputs can mitigate bias and improve research accuracy.
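The spot-checking practice above can be sketched as a simple random sample: pull a handful of claim–citation pairs from an AI-generated report and hand them to a researcher for manual verification. Everything here (the report structure, the field names) is a hypothetical illustration, not code from the talk.

```python
import random

def spot_check(report, sample_size=5, seed=None):
    """Randomly sample cited claims from an AI-generated report
    so a human can verify each citation against the raw data.

    `report` is a list of (claim, citation) pairs; this structure
    is assumed for illustration only.
    """
    rng = random.Random(seed)
    k = min(sample_size, len(report))
    return rng.sample(report, k)

# Usage: hand the sampled pairs to a researcher for manual review.
report = [
    ("Users abandon onboarding at step 3", "interview-07"),
    ("Pricing page confuses trial users", "survey-q12"),
    ("Search is the top-requested feature", "interview-02"),
]
to_verify = spot_check(report, sample_size=2, seed=42)
```

A fixed `seed` makes the sample reproducible, so two reviewers checking the same report verify the same citations.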
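The multi-agent critique idea can be sketched as a draft–critique–revise loop: one agent drafts an answer, independent critic agents respond to it, and a reviser folds the critiques back in. The callables below are stubs standing in for separate LLM calls; the whole structure is an assumed sketch, not the speaker's implementation.

```python
from typing import Callable, List

def council(draft: Callable[[str], str],
            critics: List[Callable[[str, str], str]],
            revise: Callable[[str, str, List[str]], str],
            question: str,
            rounds: int = 1) -> str:
    """Draft an answer, collect critiques from independent agents,
    and revise. Each callable would wrap its own LLM call in practice."""
    answer = draft(question)
    for _ in range(rounds):
        critiques = [critic(question, answer) for critic in critics]
        answer = revise(question, answer, critiques)
    return answer

# Stub agents so the sketch runs without any model behind it.
draft = lambda q: f"draft: {q}"
critics = [
    lambda q, a: "cite the source interviews",
    lambda q, a: "quantify the finding",
]
revise = lambda q, a, cs: a + " | revised per: " + "; ".join(cs)

final = council(draft, critics, revise, "Why do users churn?")
```

Because the critics see only the question and the current draft, not each other, their critiques stay independent, which is what helps counter a single model's bias.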
Notable Quotes
"AI has this strange weakness that when working with a large dataset, they often miss crucial, subtle findings."
"The larger the dataset you work with, the more costly it is to run a single operation on AI models."
"Whenever possible, you should be breaking down your work into specific research projects or topics."
"When you ask a question, try to ask one at a time so the model doesn't get lost."
"Context is everything — providing AI with a folder of your company’s knowledge makes responses more detailed and useful."
"Research with AI requires as much iteration and verification as building an app or prototype."
"AI-generated research reports should always be tied to real feedback that you can verify behind every sentence."
"There's no way to deny it: every industry needs to adapt to AI, but nobody really knows how yet."
"Human in the loop means constantly interacting with AI, documenting your thoughts and assuring quality."
"Some engineers build a council of agents that debate and generate responses, which can help with bias and accuracy."