I've spent 5 BILLION tokens perfecting OpenClaw...

Matthew Berman · Science & Technology · 7 min read · 40 min video
Feb 24, 2026 · 89,606 views
TL;DR

OpenClaw AI now works as a full-time employee, managing sponsorships and even drafting emails. Despite its sophistication, the video never puts a dollar figure on the cost of an AI employee, and security remains a complex, multi-layered challenge.

Key Insights

1. OpenClaw is now integrated as a full-time employee, handling tasks like sponsorship email filtering and drafting, escalating exceptional opportunities directly to the team, and politely declining low-value ones.
2. The system uses a sophisticated, customizable rubric with dimensions like 'fit,' 'clarity,' 'budget,' 'seriousness,' 'company trust,' and 'close likelihood' to score inbound emails.
3. To manage different AI model prompting standards (e.g., Claude vs. GPT-5.2), Matthew Berman maintains dual sets of prompts, with nightly sync reviews to detect and correct prompt drift.
4. OpenClaw integrates with HubSpot to track sales deals, automatically updating stages when conversations indicate a progression, such as moving from 'qualified' to 'negotiations'.
5. A three-layer prompt injection defense system is implemented, including a deterministic sanitizer, a frontier scanner in a sandbox, and elevated risk markers.
6. Cost-saving measures include using local embeddings (Nomic Embed), model tiering (Sonnet as primary, Opus for more demanding tasks), prompt caching, and notification batching.

Automating sponsorship management with OpenClaw

Matthew Berman has effectively integrated OpenClaw as a full-time employee to manage his team's sponsorship inquiries. This involves assigning OpenClaw its own email address and workspace and routing all public-facing sponsorship emails to it. The AI identifies sponsorship emails, scores them using a sophisticated, editable rubric (with dimensions like fit, clarity, budget, seriousness, company trust, and close likelihood), and then acts on the score: exceptional deals are escalated, high-tier sponsors are noted, medium-tier inquiries receive qualification questions, low-tier inquiries are politely declined, and spam is ignored. The AI even drafts custom email responses, semi-automating initial outreach. This system automates a significant portion of the sales funnel, moving interactions from first contact to the point where human intervention is only needed for final decisions or calls. The prompts for this system cover multi-account email monitoring, Gmail access, scoring with an editable rubric, Gmail labeling, stage tracking, and context-aware reply drafting, using advanced models like Opus 4.6 and the humanizer skill to keep responses natural-sounding.
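The score-then-route flow above can be sketched in a few lines. The dimension names and tier actions come from the video; the weights and score thresholds below are assumptions for illustration, not the actual rubric values.

```python
# Hypothetical rubric weights -- the real rubric is editable in the system.
RUBRIC = {
    "fit": 0.25, "clarity": 0.10, "budget": 0.25,
    "seriousness": 0.15, "company_trust": 0.15, "close_likelihood": 0.10,
}

def score_inquiry(ratings: dict[str, float]) -> float:
    """Weighted score in [0, 10] from per-dimension ratings in [0, 10]."""
    return sum(RUBRIC[dim] * ratings.get(dim, 0.0) for dim in RUBRIC)

def route(score: float) -> str:
    """Map a score to the actions described in the video (thresholds assumed)."""
    if score >= 9.0:
        return "escalate"   # exceptional deal -> notify the team directly
    if score >= 7.0:
        return "note_high"  # strong sponsor -> flag and reply
    if score >= 4.0:
        return "qualify"    # medium tier -> send qualification questions
    if score >= 2.0:
        return "decline"    # low value -> polite decline
    return "ignore"         # spam
```

In the actual system an LLM fills in the per-dimension ratings from the email text; only the deterministic scoring and routing are shown here.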

Advanced AI security protocols

Security is a paramount concern, addressed through multiple layers. Network gateway hardening is in place, supplemented by a nightly 'security council' that scans for attack vectors. Channel access control is strictly enforced: DMs to Matthew have broader access than group channels, where information is redacted, and emails have even stricter policies. A three-layer prompt injection defense is crucial: first, a deterministic sanitizer checks for common injection phrases; second, a 'frontier scanner' uses a top-tier model in a sandbox environment to re-evaluate data for malicious content; and third, elevated risk markers provide an additional scoring layer. Secret protection involves outbound redaction of sensitive information like PII. Pre-commit hooks prevent common Git key patterns, and file permissions are locked down. Automated reviews include nightly security council checks on configurations and secrets, alongside offensive, defensive, and data privacy assessments. Databases are encrypted and backed up with password protection, and data classification tiers are enforced, along with SSRF and SQL injection protection.
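The first defense layer, the deterministic sanitizer, is the simplest to sketch. The phrase list below is an illustrative assumption, not the actual pattern set from the video; note that a hit does not block the message outright, it marks it for the second layer (the frontier scanner) to re-check.

```python
import re

# Illustrative injection phrases -- real deployments maintain a much
# larger, regularly updated list.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard your (system )?prompt",
    r"you are now (in )?developer mode",
    r"reveal your (system prompt|instructions|secrets)",
]

def sanitize(text: str) -> tuple[str, bool]:
    """Neutralize known injection phrases and flag the text as elevated risk.

    Returns (sanitized text, flagged). Flagged inputs would be routed to
    the sandboxed frontier-model scanner for a deeper semantic check.
    """
    flagged = False
    for pat in INJECTION_PATTERNS:
        if re.search(pat, text, flags=re.IGNORECASE):
            flagged = True
            text = re.sub(pat, "[REDACTED-SUSPECTED-INJECTION]", text,
                          flags=re.IGNORECASE)
    return text, flagged
```

Because this layer is deterministic, it is cheap, auditable, and immune to model drift; the probabilistic layers catch what the patterns miss.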

Dual prompt stacks for model versatility

To accommodate different AI models, Matthew has implemented a dual prompt stack system. This is crucial because models like Claude (Opus 4.6) and GPT-5.2 have distinct prompting standards. For instance, Claude prefers natural language instructions, while GPT-5.2 can utilize all caps. To manage this, he maintains separate sets of markdown files optimized for each model family: one set for Claude (natural language, root directory) and another for Codex (or other models) in a separate folder. A nightly sync review ensures that the core information across both sets of prompts remains identical. Any detected drift triggers a Telegram alert, allowing for quick correction. This system ensures optimal performance regardless of the active model and adheres to prompt engineering best practices, preventing subtle but significant changes in AI output due to differing model behaviors. The prompt for this setup involves creating dual prompt stacks, with root files optimized for Claude and separate folders for other models, ensuring identical operational facts and using swap commands for easy model switching.
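The nightly sync review could be sketched as a normalized diff over the two prompt folders. The directory layout (Claude prompts in the root, a separate folder for Codex) matches the text; treating "drift" as any content difference after case and whitespace normalization is an assumption, as is the alerting hook.

```python
import hashlib
from pathlib import Path

def normalized_digest(path: Path) -> str:
    """Hash a prompt file, ignoring case and spacing so that model-specific
    formatting (e.g. all caps for GPT-style prompts) does not count as drift --
    only changes to the operational facts do. A deliberate simplification."""
    text = path.read_text(encoding="utf-8")
    canonical = " ".join(text.lower().split())
    return hashlib.sha256(canonical.encode()).hexdigest()

def find_drift(claude_dir: Path, codex_dir: Path) -> list[str]:
    """Return names of markdown files whose normalized content differs
    between the two prompt stacks (or is missing from the second)."""
    drifted = []
    for claude_file in claude_dir.glob("*.md"):
        codex_file = codex_dir / claude_file.name
        if (not codex_file.exists()
                or normalized_digest(claude_file) != normalized_digest(codex_file)):
            drifted.append(claude_file.name)  # each would fire a Telegram alert
    return sorted(drifted)
```

Run nightly from cron, a non-empty result is what triggers the Telegram alert described above.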

Streamlining CRM and meeting intelligence

OpenClaw's CRM functionality has been significantly enhanced, acting as a central hub for contact management and business intelligence. It scans Gmail and calendars to discover and classify contacts, filtering out spam and marketing emails. Once a contact is in the CRM, OpenClaw conducts proactive research on their company, saving any relevant news or articles. This integration allows for natural language queries against the CRM and triggers automatic follow-ups or nudges. Meeting intelligence further enriches this system; after transcribing meetings via tools like Fathom, OpenClaw matches attendees to the CRM, extracts insights and action items, and then assigns these action items to the correct deals in HubSpot, even identifying responsible team members. This creates a cohesive flow of information from conversations to business actions. The prompt for this involves a contact discovery pipeline with a natural language interface, relationship intelligence, daily cron jobs, and an email drafting system, all storing data locally in a SQL database with a vector column for both SQL and natural language querying.

Knowledge base and content pipeline automation

A robust knowledge base is maintained by ingesting articles, videos, and posts from various sources like Telegram and Slack. This content is sanitized, sandboxed, and scanned before being chunked, embedded, and stored in SQLite. The system cross-posts relevant content to an 'AI trends' channel for team visibility. When a potential video idea is identified in Slack, OpenClaw automatically queries the knowledge base, searches X (formerly Twitter) for supplementary discussions, and generates a structured Asana card in the project management tool. It then creates a video outline, suggests packaging ideas (hook, thumbnail, title), and posts the update back to Slack or Telegram. This automates content ideation and the initial production workflow, leveraging the knowledge base for comprehensive and relevant output.
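The chunk-embed-store step of the ingest pipeline might look like the sketch below. The sliding-window chunking, the chunk size and overlap, and the injected `embed` callable are all assumptions; in the video the embeddings come from a local Nomic model.

```python
import json
import sqlite3

def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split sanitized text into overlapping word windows (sizes assumed)."""
    words = text.split()
    chunks, i = [], 0
    while i < len(words):
        chunks.append(" ".join(words[i:i + size]))
        if i + size >= len(words):
            break
        i += size - overlap  # overlap preserves context across boundaries
    return chunks

def ingest(db: sqlite3.Connection, source: str, text: str, embed) -> int:
    """Store each chunk with its embedding; `embed` is any text->vector
    function (e.g. a local Nomic Embed call). Returns chunk count."""
    db.execute("""CREATE TABLE IF NOT EXISTS kb (
        source TEXT, chunk TEXT, embedding TEXT)""")
    pieces = chunk(text)
    for c in pieces:
        db.execute("INSERT INTO kb VALUES (?,?,?)",
                   (source, c, json.dumps(embed(c))))
    return len(pieces)
```

Retrieval then works the same way as the CRM's vector search: embed the query and rank chunks by similarity.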

Operationalizing OpenClaw: File structure and cron jobs

The operational framework of OpenClaw relies on a structured set of markdown files and scheduled cron jobs. Key files include 'agents.md' for operational rules, 'soul.md' for the agent's persona, 'user.md' detailing information about the user, 'tools.md' for environment-specific details, 'heartbeat.md' for periodic tasks, and 'memory.md' for private data. 'PRD.md' defines all functionality, 'use cases.md' lists applications, and 'security best practices.md' guides security adherence. Cron jobs are meticulously scheduled, often spread throughout the night to manage token quotas efficiently (e.g., Instagram analytics at 1 AM, X/Twitter at 1:15 AM, CRM at 2 AM). This asynchronous execution ensures that heavy processing tasks don't consume quota during peak usage times, maximizing the utility of subscription limits.
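An illustrative crontab for this staggered schedule is shown below. The 1 AM, 1:15 AM, and 2 AM slots come from the video; the remaining times and the `openclaw run` command are hypothetical placeholders for whatever entry point actually launches each task.

```cron
# Staggered overnight jobs to spread token usage (first three times from
# the video; the rest, and the CLI name, are illustrative assumptions)
0  1 * * *  openclaw run instagram-analytics
15 1 * * *  openclaw run x-analytics
0  2 * * *  openclaw run crm-refresh
30 2 * * *  openclaw run knowledge-base-ingest
0  3 * * *  openclaw run security-council
0  * * * *  openclaw run git-sync   # hourly commit to GitHub
```

Spacing jobs fifteen to thirty minutes apart keeps any single hour's token draw within quota, which is the stated goal of running heavy work overnight.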

Memory management and notification batching

Addressing common complaints about AI memory, Matthew emphasizes the benefits of Telegram group topics in limiting the context OpenClaw needs to retain. He advises monitoring the 'status' command to check context fullness and manually clearing it if necessary. He also uses an automated cron job to prune files, removing duplicate information and trimming content by approximately 10% every other day. To combat notification noise, a batching system is implemented: critical notifications are immediate, high-importance ones (CRM updates, cron failures) are hourly, and medium-importance ones are every three hours. These batched notifications are delivered in a summarized format, with all notifications logged in a dedicated database for later review. This approach significantly reduces distraction while ensuring important information is still conveyed.
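The three-tier batching policy can be sketched as a small queue with per-tier flush intervals. The tiers and cadences (immediate, hourly, every three hours) match the text; the class shape, the injected `send` callable, and the digest format are assumptions.

```python
import time
from collections import defaultdict

# Flush cadence per tier, in seconds (from the policy described above).
FLUSH_INTERVAL = {"high": 3600, "medium": 3 * 3600}

class Notifier:
    def __init__(self, send, now=time.time):
        self.send = send    # e.g. a Telegram send function (assumed)
        self.now = now      # injectable clock, for testing
        self.queues = defaultdict(list)
        self.last_flush = {tier: now() for tier in FLUSH_INTERVAL}

    def notify(self, tier: str, message: str) -> None:
        if tier == "critical":
            self.send(message)              # critical -> delivered immediately
        else:
            self.queues[tier].append(message)  # also logged to a DB in the video
        self.flush_due()

    def flush_due(self) -> None:
        """Deliver any tier whose interval has elapsed, as one summary."""
        for tier, interval in FLUSH_INTERVAL.items():
            if self.now() - self.last_flush[tier] >= interval and self.queues[tier]:
                self.send(f"[{tier} digest] " + " | ".join(self.queues[tier]))
                self.queues[tier].clear()
                self.last_flush[tier] = self.now()
```

A real deployment would call `flush_due` from a periodic cron tick rather than only on new notifications.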

Cost savings and backup strategies

Several strategies are employed to manage OpenClaw's costs. Local embeddings are run on a MacBook Air using the Nomic embedding model, making them effectively free. Model tiering is utilized, with the more affordable Sonnet model as the primary choice, escalating to Opus 4.6 only when necessary. Spreading usage throughout the day and leveraging built-in prompt caching further optimize token consumption. Context-aware polling and using faster, cheaper models for less demanding tasks also contribute to cost reduction. For data backup, OpenClaw automatically discovers database files, encrypts them, uploads them to Google Drive, and documents the restoration process. Hourly Git syncs commit changes to GitHub, providing an additional layer of data redundancy and enabling straightforward recovery in case of hardware failure or data loss.
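The model-tiering rule reduces to a tiny router: default to the cheaper model, escalate only when needed. The model names come from the video; the escalation heuristic (task marked complex, or a prior attempt failed) is an assumption.

```python
CHEAP, PREMIUM = "sonnet", "opus-4.6"

def pick_model(task: dict) -> str:
    """Route a task to a model tier. Escalation criteria are assumed:
    explicitly high complexity, or at least one failed retry."""
    if task.get("complexity", "low") == "high" or task.get("retries", 0) > 0:
        return PREMIUM
    return CHEAP
```

Because most routine tasks (labeling, polling, drafting short replies) never trip the escalation check, the premium model's quota is reserved for the work that actually needs it.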

OpenClaw Workflow Optimization

Practical takeaways from this episode

Do This

Assign unique identities and workspace accounts to AI agents.
Implement sophisticated rubrics for scoring and classifying emails.
Provide continuous feedback to AI models to improve their performance.
Utilize dual prompt stacks, optimizing for different AI models.
Organize information using Telegram group topics for better context management.
Integrate multiple data sources (email, calendar, Slack) for a comprehensive CRM.
Leverage meeting intelligence for transcription, insight extraction, and action item assignment.
Build and query a knowledge base with articles, videos, and research.
Implement multi-layered security protocols, including prompt injection defense.
Schedule heavy cron jobs overnight to manage token quotas effectively.
Optimize memory by using Telegram group topics and pruning files.
Batch notifications to reduce distractions and improve focus.
Log everything: errors, LLM calls, and external service interactions.
Utilize local embeddings and model tiering for cost savings.
Automate backups of databases and code repositories.
Use a continuous learning approach by saving errors and learnings.

Avoid This

Expose token-based authentication directly to the internet.
Rely solely on automated actions without human oversight for critical escalations.
Use generic templates for email replies; opt for context-aware drafting.
Embed sensitive information directly in prompts without sanitization.
Allow AI agents unlimited access to all data; implement strict channel access control.
Let AI models make critical decisions without an approval layer (e.g., action items).
Ignore the importance of prompt engineering and model-specific prompting standards.
Let AI models become overly noisy; use notification batching.
Neglect logging and error tracking; it's crucial for self-healing.
Share confidential or personally identifiable information improperly.
Assume AI systems are perfectly deterministic; implement safeguards against data leakage.

Common Questions

What is OpenClaw?

OpenClaw is an AI agent the speaker has integrated into their workflow, treating it as a full-time employee. It handles tasks like managing sponsorship emails, drafting replies, researching companies, managing a CRM, generating video outlines, and more.
