Stanford CS547 HCI Seminar | Spring 2026 | Observing the User Experience in 2026
Key Moments
AI is rapidly automating UX research activities, but ground truth from real human experiences is becoming more valuable and harder to obtain, shifting the focus from methods to organizational impact.
Key Insights
The third edition of "Observing the User Experience" has been significantly revised to address the impact of remote work and AI, with every chapter touched by these changes.
AI tools can now automate tasks like transcription, coding, video editing, and drafting discussion guides, significantly speeding up traditional UX research processes.
The rise of AI has led to 'hyperscaled fraud,' with AI-generated fake survey answers and even fake interviewees posing a significant challenge to obtaining genuine user insights.
Layoffs in the tech industry began before the widespread adoption of ChatGPT, suggesting AI is often used as a justification for a broader trend rather than the sole cause.
The '3D' framework for automation (dirty, dangerous, dull) is evolving to the '3E' framework (extraneous, expensive, external) to describe knowledge work now being automated by AI.
Organizational power dynamics determine who defines what is 'extraneous,' 'expensive,' or 'external,' influencing which roles and activities are automated.
Evolution of 'Observing the User Experience' through technological shifts
The podcast begins by discussing the upcoming third edition of the book "Observing the User Experience." The first edition, published in 2003 by Mike, was an attempt to define and codify the field of user research for a nascent discipline. The second edition, released in 2012 with co-author Liz, reflected the growing maturity of the field and the authors' experience, including Liz teaching the material. The third edition, however, underwent a significant overhaul due to two major disruptions: first, the widespread shift to remote work during the pandemic, and second, the rapid advancement of AI technologies. The authors describe how writing the book became an iterative process of grappling with these changes, leading to a complete rewrite where every chapter was re-evaluated in light of new tools and methodologies.
AI's transformation of UX research activities
The technological landscape has been dramatically altered by AI, fundamentally changing how UX research is conducted. Many traditional methods are now automated, making processes faster and more accessible. For instance, transcription services, once a manual or costly endeavor, are now free and ubiquitous. AI can also assist with coding transcripts, video editing, translation, and even drafting discussion guides and survey questions. Tools like Google's NotebookLM can help derive 'ground truth' from vast amounts of transcribed data. The authors note with amusement that AI often regurgitates information from their own book, indicating it has learned from the established knowledge in the field. This automation streamlines existing practices, making them more efficient.
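To make the automation described above concrete, the sketch below shows one way a language model could apply a small qualitative codebook to a transcript excerpt. The episode does not prescribe any particular tool; the OpenAI client, model name, and codebook here are assumptions chosen purely for illustration.

```python
# Hypothetical sketch: auto-coding an interview transcript excerpt with an LLM.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the codebook and model name are illustrative, not from the episode.
from openai import OpenAI

client = OpenAI()

CODEBOOK = ["onboarding friction", "trust in AI output", "pricing concerns"]

def code_excerpt(excerpt: str) -> str:
    """Ask the model to tag one excerpt with codes from the codebook."""
    prompt = (
        "You are assisting a UX researcher. Tag the excerpt below with any "
        f"applicable codes from this codebook: {', '.join(CODEBOOK)}. "
        "Reply with a comma-separated list of codes, or 'none'.\n\n"
        f"Excerpt: {excerpt}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

print(code_excerpt("Setting up the account took me three tries before it worked."))
```

Even with this kind of tooling, the episode's point stands: automation speeds up the mechanical steps, while a researcher still reviews and owns the interpretation.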
The challenge of 'hyperscaled fraud' in the AI era
Despite the efficiency gains, AI introduces a significant new challenge: 'hyperscaled fraud.' The ease with which AI can generate fake data and personas means that a vast amount of research output can be fabricated. This includes fake survey answers and even AI-driven participants in interviews who are merely reading from a screen. The authors recount an example of a job interview where the candidate was revealed to be using ChatGPT. To combat this, they advocate for a 'zero trust' security model for UX research, involving continuous verification of participants and probing for anomalies. This means not paying participants until their authenticity is confirmed, essentially applying cybersecurity principles to user recruitment and data collection.
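The 'zero trust' stance can be read as a screening step that runs before any participant is paid. The sketch below illustrates that idea; the specific fraud signals and thresholds (disposable email domains, implausibly fast completion, duplicated free-text answers) are assumptions for illustration, not a checklist given in the episode.

```python
# Hypothetical sketch of a zero-trust participant screen: verify before paying.
# The fraud signals and thresholds below are illustrative assumptions.
from dataclasses import dataclass, field

DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.dev"}  # example list only

@dataclass
class ParticipantSession:
    email: str
    completion_seconds: float
    free_text_answers: list[str] = field(default_factory=list)

def flag_anomalies(session: ParticipantSession, seen_answers: set[str]) -> list[str]:
    """Return reasons to withhold payment pending manual verification."""
    flags = []
    domain = session.email.rsplit("@", 1)[-1].lower()
    if domain in DISPOSABLE_DOMAINS:
        flags.append("disposable email domain")
    if session.completion_seconds < 120:  # implausibly fast for a 30-minute study
        flags.append("implausibly fast completion")
    for answer in session.free_text_answers:
        if answer.strip() and answer.strip() in seen_answers:
            flags.append("free-text answer duplicates another participant")
    return flags

# Usage: pay only sessions with no flags; route flagged ones to a human reviewer.
```

The point is the workflow, not the particular signals: verification happens continuously, and payment follows confirmation of authenticity rather than preceding it.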
Navigating power dynamics and the evolving role of UX researchers
The discussion shifts to the changing power dynamics within tech development and how they affect UX researchers. The authors argue that AI is not a cause but a symptom of a longer evolution in who holds power and whose jobs are deemed automatable. They introduce a '3E' framework (extraneous, expensive, and external) to describe knowledge work that is increasingly being automated by AI. Historically, automation targeted 'dirty, dangerous, or dull' jobs in manual labor; now, AI targets knowledge work that is seen as costly or non-core to an organization's primary functions. The critical question becomes who gets to define these terms. This is why 'ground truth', genuine human experience, gains importance: it validates research and provides a tangible basis for organizational decisions, acting as a social contract that AI-generated content inherently lacks. Researchers therefore need to demonstrate the centrality of their work, making themselves indispensable rather than positioned as extraneous or external.
The shift from specific roles to shared competencies
In the context of AI and organizational shifts, the authors observe a paradox: there may be fewer people with explicit 'UX researcher' titles, yet research activities are more widespread. This indicates a move from specialized disciplinary roles towards shared organizational competencies. Titles are becoming less indicative of actual work performed. For example, Product Managers (PMs) are increasingly absorbing research tasks, partly because the PM title is ambiguous and carries significant organizational power. This blurs the lines of expertise and can lead to the acceptance of AI-generated outputs if the source of the research is not clearly established as a human-led, ground-truth-based effort. The value of research is increasingly tied to its ability to demonstrate genuine human insight and connect with organizational needs.
The enduring value of 'ground truth' and organizational context
The core message is that 'ground truth'—authentic human experience and perspective—remains invaluable, analogous to gold that needs to be mined, refined, and polished. AI tools can generate deliverables, but these lack the 'social contract' power of human-created work. When a deliverable is human-made, it signifies that real effort, understanding, and direct interaction have occurred, validating the conclusions. This is particularly crucial in an era where AI can easily produce superficially convincing outputs. Researchers must therefore focus on demonstrating the authenticity and derived value of their findings, often through direct engagement and the collection of qualitative data. When communicating insights, it's vital to adopt the organizational context and vocabulary, ensuring that research speaks directly to stakeholders' needs and goals, not just academic methodologies.
Strategies for demonstrating research value and building trust
To counteract the perception of research being automatable, the authors emphasize strategies that highlight human involvement and organizational relevance. This includes actions like taking photos with participants in their environment to serve as 'proof of work' and establishing rhetorical power. They advocate for researchers to become integral to the organization, not extraneous. This can involve communication hacks, such as creating easily shareable PowerPoint slides or emails that adopt the organization's language, ensuring that insights are digestible and valuable to internal stakeholders. The key is to prove that the research is not just an output, but a process rooted in genuine understanding and directly supportive of organizational goals, especially when working with niche domains or novel products where past data is insufficient for AI training.
The growing difficulty and importance of ethical data collection
The conversation turns to the increasing difficulty of obtaining ground truth from communities, especially those guarded due to concerns about data sovereignty and exploitation. The 'gold' metaphor for ground truth is re-examined, acknowledging that historical research practices have sometimes been extractive. The authors suggest that increased friction in data collection, particularly from marginalized or sovereign communities, can be a positive force, reflecting a necessary ethical consideration. They note that trauma-informed research emphasizes respecting community boundaries and consent. While AI can help with synthetic personas for less sensitive research, the challenge and importance of ethical, direct engagement with diverse human populations are underscored. This direct human connection and genuine understanding are precisely what AI cannot replicate, solidifying the critical role of researchers in navigating these complex dynamics.
Common Questions
The book "Observing the User Experience" aims to articulate and canonize the authors' relationship with user research, guiding readers on the principles and evolving landscape of the profession, rather than offering specific techniques.
Mentioned in this video
A website mentioned as the source for data on layoffs.
Mentioned as a key AI model that emerged, prompting a significant rewrite of the book.
Mentioned as the tool used to create a chart about layoffs.
A platform where panicked discussions about the death of design and UX occur.
The programming language used to build the synthetic open-source user interview tool.
Mentioned humorously as being "in the Anthropic settlement" when an AI model produced text similar to the book.
Mentioned in the context of its career ladders for qualitative researchers and its analytics logs.
A company where Genevieve Bell worked and demonstrated a tactic to gain the CEO's attention.