Key Moments

Stanford CS547 HCI Seminar | Spring 2026 | Observing the User Experience in 2026

Stanford Online · Apr 29, 2026 · 62-min video · 6-min read
TL;DR

AI is rapidly automating UX research activities, but ground truth from real human experiences is becoming more valuable and harder to obtain, shifting the focus from methods to organizational impact.

Key Insights

1. The third edition of "Observing the User Experience" has been significantly revised to address the impact of remote work and AI; every chapter was touched by these changes.

2. AI tools can now automate tasks such as transcription, coding, video editing, and drafting discussion guides, significantly speeding up traditional UX research processes.

3. The rise of AI has enabled "hyperscaled fraud": AI-generated fake survey answers and even fake interviewees pose a significant challenge to obtaining genuine user insights.

4. Layoffs in the tech industry began before the widespread adoption of ChatGPT, suggesting AI is often a justification for a broader trend rather than its sole cause.

5. The "3D" framework for automation (dirty, dangerous, dull) is evolving into a "3E" framework (extraneous, expensive, external) to describe the knowledge work now being automated by AI.

6. Organizational power dynamics determine who defines what counts as "extraneous," "expensive," or "external," and thus which roles and activities are automated.

Evolution of 'Observing the User Experience' through technological shifts

The podcast begins by discussing the upcoming third edition of the book "Observing the User Experience." The first edition, published in 2003 by Mike, was an attempt to define and codify the field of user research for a nascent discipline. The second edition, released in 2012 with co-author Liz, reflected the growing maturity of the field and the authors' experience, including Liz teaching the material. The third edition, however, underwent a significant overhaul due to two major disruptions: first, the widespread shift to remote work during the pandemic, and second, the rapid advancement of AI technologies. The authors describe how writing the book became an iterative process of grappling with these changes, leading to a complete rewrite where every chapter was re-evaluated in light of new tools and methodologies.

AI's transformation of UX research activities

The technological landscape has been dramatically altered by AI, fundamentally changing how UX research is conducted. Many traditional methods are now automated, making processes faster and more accessible. For instance, transcription services, once a manual or costly endeavor, are now free and ubiquitous. AI can also assist with coding transcripts, video editing, translation, and even drafting discussion guides and survey questions. Tools like Google's NotebookLM can help derive 'ground truth' from vast amounts of transcribed data. The authors note with amusement that AI often regurgitates information from their own book, indicating it has learned from the established knowledge in the field. This automation streamlines existing practices, making them more efficient.

The challenge of 'hyperscaled fraud' in the AI era

Despite the efficiency gains, AI introduces a significant new challenge: 'hyperscaled fraud.' The ease with which AI can generate fake data and personas means that a vast amount of research output can be fabricated. This includes fake survey answers and even AI-driven participants in interviews who are merely reading from a screen. The authors recount an example of a job interview where the candidate was revealed to be using ChatGPT. To combat this, they advocate for a 'zero trust' security model for UX research, involving continuous verification of participants and probing for anomalies. This means not paying participants until their authenticity is confirmed, essentially applying cybersecurity principles to user recruitment and data collection.

Navigating power dynamics and the evolving role of UX researchers

The discussion shifts to the changing power dynamics within tech development and how they affect UX researchers. The authors argue that AI is not a cause but a symptom of a longer evolution in who holds power and whose jobs are deemed automatable. They introduce a '3E' framework—extraneous, expensive, and external—to describe knowledge work that is increasingly being automated by AI. Historically, automation targeted 'dirty, dangerous, or dull' jobs in manual labor. Now, AI targets knowledge work that is seen as costly or non-core to an organization's primary functions. The critical question becomes who defines these terms. This is why 'ground truth'—genuine human experience—gains importance: it validates research and provides a tangible basis for organizational decisions, acting as a social contract that AI-generated content inherently lacks. This underscores the need for researchers to demonstrate the centrality of their work, making themselves indispensable rather than positioned as extraneous or external.

The shift from specific roles to shared competencies

In the context of AI and organizational shifts, the authors observe a paradox: there may be fewer people with explicit 'UX researcher' titles, yet research activities are more widespread. This indicates a move from specialized disciplinary roles towards shared organizational competencies. Titles are becoming less indicative of actual work performed. For example, Product Managers (PMs) are increasingly absorbing research tasks, partly because the PM title is ambiguous and carries significant organizational power. This blurs the lines of expertise and can lead to the acceptance of AI-generated outputs if the source of the research is not clearly established as a human-led, ground-truth-based effort. The value of research is increasingly tied to its ability to demonstrate genuine human insight and connect with organizational needs.

The enduring value of 'ground truth' and organizational context

The core message is that 'ground truth'—authentic human experience and perspective—remains invaluable, analogous to gold that needs to be mined, refined, and polished. AI tools can generate deliverables, but these lack the 'social contract' power of human-created work. When a deliverable is human-made, it signifies that real effort, understanding, and direct interaction have occurred, validating the conclusions. This is particularly crucial in an era where AI can easily produce superficially convincing outputs. Researchers must therefore focus on demonstrating the authenticity and derived value of their findings, often through direct engagement and the collection of qualitative data. When communicating insights, it's vital to adopt the organizational context and vocabulary, ensuring that research speaks directly to stakeholders' needs and goals, not just academic methodologies.

Strategies for demonstrating research value and building trust

To counteract the perception of research being automatable, the authors emphasize strategies that highlight human involvement and organizational relevance. This includes actions like taking photos with participants in their environment to serve as 'proof of work' and establishing rhetorical power. They advocate for researchers to become integral to the organization, not extraneous. This can involve communication hacks, such as creating easily shareable PowerPoint slides or emails that adopt the organization's language, ensuring that insights are digestible and valuable to internal stakeholders. The key is to prove that the research is not just an output, but a process rooted in genuine understanding and directly supportive of organizational goals, especially when working with niche domains or novel products where past data is insufficient for AI training.

The growing difficulty and importance of ethical data collection

The conversation turns to the increasing difficulty of obtaining ground truth from communities, especially those guarded due to concerns about data sovereignty and exploitation. The 'gold' metaphor for ground truth is re-examined, acknowledging that historical research practices have sometimes been extractive. The authors suggest that increased friction in data collection, particularly from marginalized or sovereign communities, can be a positive force, reflecting a necessary ethical consideration. They note that trauma-informed research emphasizes respecting community boundaries and consent. While AI can help with synthetic personas for less sensitive research, the challenge and importance of ethical, direct engagement with diverse human populations are underscored. This direct human connection and genuine understanding are precisely what AI cannot replicate, solidifying the critical role of researchers in navigating these complex dynamics.

Maxims for UX Researchers in the AI Era

Practical takeaways from this episode

Do This

Treat "ground truth" (empirical data from real people) as valuable and essential, requiring mining and refinement.
Regularly engage in traditional user research (talking to people, understanding their context) to validate AI-generated insights.
Think of deliverables (reports, wireframes) as social contracts that validate ground truth and demonstrate authenticity.
Understand and speak the niche cultural vocabulary of your specific organization and its context.
Apply user research principles to understand internal stakeholders and their needs to demonstrate your centrality.
Make your work easily copyable, reusable, and remixable within organizational communication formats (like PowerPoint or email).

Avoid This

Do not rely solely on AI-generated outputs without grounding them in real user experience.
Do not assume AI can fully understand organizational context or predict entirely new futures.
Do not treat deliverables as the end goal; they are the result of grounding work.
Avoid alienating stakeholders by using jargon they don't understand; speak directly to their needs and goals.
Do not dismiss the value of 'friction' in research; let it guide you to deeper understanding.

Common Questions

Q: What does "Observing the User Experience" aim to do?
A: The book aims to articulate and codify the authors' approach to user research, guiding readers through the principles and evolving landscape of the profession rather than offering specific techniques.
