Privacy by design: Building Earmark

AssemblyAI · Science & Technology · 3 min read · 2 min video
Feb 9, 2026

TL;DR

Privacy-first design for voice AI: avoid storing data; enable one-click capture.

Key Insights

1. Voice data is highly sensitive by default, so minimize storage, define retention, and consider encryption or no-storage options.
2. Temporary mode offers zero retention by bypassing storage, demonstrating privacy-by-default across plans.
3. Privacy decisions should be baked into architecture and UX from the start, not tacked on later.
4. User experience must be forgiving and frictionless in real-time contexts (meetings, calls) to encourage adoption.
5. Design for automatic, self-healing flows that reduce user decisions when the user is multitasking.
6. Founders should test and implement practical privacy features early to build trust and reduce risk.

PRIVACY AS A DESIGN PRINCIPLE

Privacy should be treated as a design principle rather than a late-stage consideration, especially for voice data, which is intrinsically sensitive. The core question shifts from "how do we store and process data?" to "do we even need to store this data, and for how long?" This mindset pushes teams to minimize data collection, implement encryption where data is stored, and establish clear retention policies. When privacy is woven into the product strategy, it reduces risk and signals to users that their conversations are treated with care. The approach requires cross-functional alignment across product, engineering, and policy to ensure that every feature aligns with privacy-by-design values, from data models to user-facing disclosures.

TEMPORARY MODE: NO RETENTION OPTION

A central practice highlighted is offering a dedicated no-retention mode, described as temporary mode, that prevents transcripts or any other data from being stored. In this mode, data bypasses the database entirely, leaving no lasting footprint. This is not just a feature; it is a commitment to privacy-by-default across all plan levels. Temporary mode gives users a tangible choice to use voice capabilities without data retention, reinforcing trust and reducing worries about sensitive conversations being archived. It also shows that privacy can be a practical, scalable option rather than a theoretical ideal.
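The episode does not show Earmark's implementation, but the bypass pattern can be sketched in a few lines. Everything here is illustrative: `Transcript`, `TranscriptStore`, and `handle_transcript` are hypothetical names, not Earmark's actual API.

```python
from dataclasses import dataclass


@dataclass
class Transcript:
    session_id: str
    text: str


class TranscriptStore:
    """Hypothetical persistence layer; a real system would write to a database."""

    def __init__(self):
        self._rows: dict[str, Transcript] = {}

    def save(self, transcript: Transcript) -> None:
        self._rows[transcript.session_id] = transcript


def handle_transcript(transcript: Transcript, store: TranscriptStore, temporary: bool) -> str:
    """Return the transcript text; persist it only when temporary mode is off."""
    if not temporary:
        store.save(transcript)  # normal path: transcript is retained
    # temporary mode: the data never touches the store, leaving no footprint
    return transcript.text
```

The key design choice is that the privacy decision is made before any write happens, so there is nothing to delete afterward; zero retention is structural, not a cleanup job.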

DESIGNING AROUND PRIVACY FROM THE START

Beyond choosing not to store data, product design should privilege data minimization and security by default. Consider whether information needs to be stored at all, and if so, implement strict controls on what is kept, for how long, and who can access it. Encryption should be standard for stored data, and retention windows should be clearly defined and enforceable. These considerations should influence data schemas, logging practices, and telemetry so that each layer of the stack reinforces privacy. The overarching lesson is to ask early and often: how does this decision affect user privacy, now and in the future?

FORGIVING UX IN VOICE AI

Voice AI is often used in contexts where the user is already engaged in another task, such as meetings, calls, or conversations. If enabling capture requires multiple steps, users will simply skip it. The UX must be forgiving and obvious: ideally one-click activation or automatic capture that works seamlessly in the background. If a user's moment is interrupted or a blip occurs, the experience should still feel reliable. The goal is to make privacy and capture feel effortless, so users can rely on the feature without adding cognitive load during important conversations.

MINIMIZING FRICTION AND ENABLING SELF-HEALING FLOWS

Reducing decision points is essential when users are multitasking. The system should anticipate needs and gracefully handle interruptions by default, repairing or retrying actions without requiring the user to intervene. When privacy constraints intersect with real-time capture, the product should still behave intuitively—honoring privacy preferences while maintaining usability. Self-healing behaviors, such as automatic error handling and seamless recovery from glitches, help maintain trust and encourage consistent use in dynamic environments.
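One common way to make flows self-healing is to retry transient failures with exponential backoff instead of surfacing every blip to a user who is mid-meeting. The sketch below shows the general pattern, not Earmark's specific recovery logic.

```python
import time


def with_retries(action, attempts: int = 3, base_delay: float = 0.5):
    """Run a flaky action, retrying with exponential backoff on failure.

    The user never sees transient errors; only a failure that survives
    all retry attempts is surfaced.
    """
    for attempt in range(attempts):
        try:
            return action()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: only now does the failure become visible
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying
```

Wrapping capture starts, uploads, or reconnects in a helper like this keeps glitches invisible during live conversations, which is exactly the forgiving behavior the episode argues for.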

TAKEAWAYS FOR FOUNDERS BUILDING VOICE AI

Founders should embed privacy-by-design as a core business decision, not a compliance checkbox. This includes embracing data minimization, offering ephemeral or temporary storage options, and providing clear, user-friendly privacy features like temporary mode. Design for forgiving UX so real-world usage—often in noisy or busy contexts—remains intuitive and low-friction. Finally, validate flows in real environments with multitasking users to ensure the experience stays reliable, respectful of privacy, and genuinely accelerates productivity without introducing unnecessary risk.

Privacy-by-design: Quick Do's and Don'ts for Voice AI

Practical takeaways from this episode

Do This

Enable privacy features (e.g., temporary mode) to minimize data retention.
Keep data retention to a minimum and encrypt stored data where possible.
Design user flows to be as frictionless as possible (one-click activation when feasible).
Make UX forgiving and resilient to user multitasking (e.g., meetings, calls).

Avoid This

Don't require many steps to start capturing conversations.
Don't store transcripts by default unless absolutely necessary.
Don't introduce friction that interrupts the user's flow in other contexts.
Don't surprise users with retention policies or data collection.

Common Questions

What does privacy-by-design mean for voice AI?

It means being intentional about what data you store, how long you store it, whether you encrypt it, and whether you store transcripts at all. The idea is to build privacy considerations into the product from the ground up. (Starts at 7 seconds)
