Agentic AI Apps
AI Solutions
Pre-built Applications

Ready-to-deploy applications across industries and functions.

AI for Banking
AI for Healthcare
AI for Retail
AI for IT
AI for HR
AI for Recruiting
Application Accelerators

Leverage pre-built AI agents, templates, and integrations from the Kore.ai Marketplace.

Kore.ai Marketplace
Pre-built agents
Templates
Integrations
Tailored Applications

Design and build applications on our Agent Platform using our enteprise modules.

Platform
Agent Platform

Your strategic enabler for enterprise AI transformation.

Learn more
Enterprise Modules
AI for Work
AI for Service
AI for Process
Top Resources
From search to action: what makes agentic AI work in practice
AI use cases: insights from AI's leading decision makers
Beyond AI islands: how to fully build an enterwise-wide AI workforce
QUICK LINKS
About Kore.aiCustomer StoriesPartnersResourcesBlogWhitepapersDocumentationAnalyst RecognitionGet supportCommunityAcademyCareersContact Us
Agent Platform
Agent Platform
Agent Platform

Your strategic enabler for enterprise AI transformation.

learn more
PLATFORM MODULES
Multi-Agent Orchestration
AI Engineering Tools
Search + Data AI
Observability
No-Code + Pro-Code Tools
AI Security + Governance
Agent Management Platform
Integrations
Enterprise Modules
For Service
AI AgentsAgent AI AssistanceAgentic Contact CenterQuality AssuranceProactive Outreach
For Work
Modules
Enterprise SearchIntelligent OrchestratorPre-Built AI AgentsAdmin ControlsAI Agent Builder
Departments
SalesMarketingEngineeringLegalFinance
For Process
Process AutomationAI Analytics + MonitoringPre-built Process Templates
upcoming event
Recent AI Insights
What's new in AI for Work: features that drive enterprise productivity
What's new in AI for Work: features that drive enterprise productivity
AI INSIGHT
20 Feb 2026
Parallel Agent Processing
Parallel Agent Processing
AI INSIGHT
16 Jan 2026
The AI productivity paradox: why employees are moving faster than enterprises
The AI productivity paradox: why employees are moving faster than enterprises
AI INSIGHT
12 Jan 2026
Agent Marketplace
More
More
Resources
Resource Hub
Blog
Whitepapers
Webinars
AI Research Reports
AI Glossary
Videos
AI Pulse
Generative AI 101
Responsive AI Framework
CXO Toolkit
Private equity
support
Documentation
Get support
Submit RFP
Academy
Community
COMPANY
About us
Leadership
Customer Stories
Partners
Analyst Recognition
Newsroom
Events
Careers
Contact us
Agentic AI Guides
forrester cx wave 2024 Kore at top
Kore.ai named a leader in The Forrester Wave™: Conversational AI for Customer Service, Q2 2024
Generative AI 101
CXO AI toolkit for enterprise AI success
upcoming event

HumanX is one of the leading global conferences focused on artificial intelligence (AI) — designed for senior leaders, technologists, investors, and decision-makers shaping enterprise deployment of AI technologies.

San Francisco
6 Apr
register
Talk to an expert
Not sure which product is right for you or have questions? Schedule a call with our experts.
Request a Demo
Double click on what's possible with Kore.ai
Sign in
Get in touch
Background Image 1
Blog
Conversational AI
The lesson enterprise AI needs to learn from the OpenClaw incident

The lesson enterprise AI needs to learn from the OpenClaw incident

Published Date:
March 18, 2026
Last Updated ON:
March 20, 2026

A couple of weeks ago, Summer Yue, director of AI Alignment at Meta, posted a thread on X that got 9.6 million views. She had been testing an AI agent called OpenClaw on a separate test account (Toy inbox) for weeks, and it had been working exactly as expected. 

Confident it was ready, she connected it to her actual primary inbox with one clear instruction: review this inbox and suggest what you would archive or delete and “confirm before acting”.

What followed, in her own words, was watching it "speedrun deleting her inbox" while she typed stop repeatedly from her phone before giving up and physically running to her computer to shut it down. When she asked the agent if it remembered her instruction, it said yes, it remembered, and it had violated it, and it was sorry. Members on X immediately jumped in asking if this was intentional, if she was testing its guardrails. Her answer: "rookie mistake, to be honest."

The irony was hard to miss. The person whose job is keeping AI aligned couldn't keep her own AI aligned. But if you look at what actually went wrong, the irony is the least interesting part. Here the agent didn't go rogue. It simply forgot the instruction. As it worked through her real inbox, significantly larger than the test account, it ran out of memory and compressed its older context to make room for new information, a process called context window compaction. Her safety instruction disappeared in that process. With nothing left to constrain it, the agent did exactly what it thought it was supposed to do: clean the inbox and it did exactly that.

This is what the absence of platform-level governance looks like in practice. And it is not Summer Yue's problem alone.

A prompt in a chat window is not governance

OpenClaw is an open-source agent built for personal productivity. It was not designed with enterprise controls in mind. No role-based access, no audit trails, no confirmation gates, no observability. When Summer Yue connected it to her primary work inbox, the only governance in place was the instruction she typed.

This is the distinction that gets lost in the excitement around agentic AI. Not every AI agent is built for enterprise use. Consumer and open source tools put the responsibility of control entirely on the user. Enterprise platforms are a different category, built with the assumption that agents will operate across thousands of employees, touching sensitive data and taking consequential actions, and that governance cannot depend on what the user remembers to type.

And in OpenClaw's case, the problem ran deeper than just a missing guardrail. Agents like this optimize toward objectives, not human judgment. They don't inherently understand reversibility or consequences. Suggesting what to delete and actually deleting it look exactly the same to an agent trying to complete a task. Without something in the architecture that forces it to pause before taking an irreversible action, it simply will not.

This is not a criticism of one tool or one company. But it is a lesson worth taking seriously. Governance that lives inside a prompt is only as reliable as the agent's memory. Prompts are instructions, they are not infrastructure. And when an agent is touching your emails, your contracts, your HR records and your financial data, that distinction matters a great deal.

What ungoverned agents actually do

Summer Yue's situation was in a way a fairly low risk scenario. Some emails got deleted, she was able to stop it, and the damage was recoverable to a degree. But there was no audit trail of what had been deleted or why. And that was just one person, one inbox, one agent.

Now think about what happens when that same absence of governance plays out across an enterprise, where agents are running continuously, touching customer data, financial records, and internal communications across thousands of employees.

AI researcher Simon Willison coined the term lethal trifecta to describe what makes this genuinely dangerous. When an agent has access to private data, processes content from untrusted sources, and can communicate externally, a malicious instruction hidden inside an email or a document it reads as part of its normal work can redirect what it does next. The agent cannot tell the difference between your instruction and that one. It follows both. And because these agents run continuously, that instruction does not have to trigger right away. It can sit in memory and execute long after it was first received.

This is not a distant theoretical risk. It is what happens when you give an agent broad access and assume a prompt will keep it honest.

The agent is only as safe as the platform it runs on

When you deploy AI agents for thousands of employees, the first question is not what the agent can do. It is how the platform underneath it will govern it.

Every organisation has rules about who can see what. A sales executive does not have access to HR records. A contractor does not have the same reach as a full-time employee. Those rules exist for good reason, and they do not stop being relevant just because the work is now being done by an agent. The question is whether those boundaries exist in your AI platform too, or whether the agent operates without them because nobody defined the limits before deployment

The same thinking applies to actions. Every time an agent helps an employee update a record, send a communication, or modify data in a connected system, someone needs to have authorised that. A prompt cannot do that. It is a platform-level question, and if the platform has no answer for it, the agent will make that call on its own every time.

This is what governance by design actually means. Hard constraints enforced at the system level, not typed into a chat window. Access scoped to what each employee actually needs. Confirmation required before any action that cannot be undone. And when something does go wrong, the damage is contained because the platform was built for that possibility.

The answer is not better instructions or more carefully worded prompts. It is a platform that decides what agents can access, what they can do, and what requires a human in the loop, before any action is taken.

Here’s why Governance is the DNA of AI for Work platform

When we built AI for Work, the OpenClaw scenario wasn't a hypothetical. It was the exact failure mode we were designing against.

Before writing a single line of product code, we started with worst-case scenarios. What happens when an agent acts beyond its intended scope? When sensitive data crosses boundaries, it shouldn't? When an action is taken without a human in the loop, and there's no trail to explain why? Each answer became a design requirement.

User Management: Custom workspaces, user roles, and access controls (RBAC) ensure the right people are working with the right agents. Collaboration scales without access become a liability.

Security and Compliance: PII masking, SSO, IP restrictions, and filters are enforced at the platform level, with regular reviews built in. Data access is controlled before the agent ever touches it.

Data Retention Controls: Retention is configured by account or by agent. Full queries or just metadata, depending on sensitivity. The enterprise decides what gets stored and what doesn't.

Orchestration Settings: Guardrails, small talk handling, routing, and fallback behaviour are all administrator-configured. Each orchestration step can be enabled or disabled to suit organisational needs. The agent follows what was decided, not what it infers.

Monitor and Governance: System usage and agent activity are tracked continuously under the Observability framework. Compliance is not reviewed after the fact. It is monitored as the platform runs.

Workspace Analytics: Usage trends, key metrics, and agent performance are surfaced through dashboards. Administrators can see what is working, what isn't, and where adoption needs attention.

Audit Logs: Every user action and every agent activity is logged in detail. When something goes wrong, the trail already exists.

Workspace Settings: Workspace-level permissions, publishing rules, agent types, and workspace creation and deletion are all administrator-controlled. Nothing defaults to open.

Building Responsible AI

We have had the privilege of working with hundreds of enterprises that have trusted our Agentic Platform with their data, their workflows, and their people. That trust is not given lightly, and it has shaped every decision we have made, from how we designed AI for Work to how we think about every capability we add to it. 

Responsible AI in an enterprise context is not a feature and it is certainly not a compliance checkbox. It is the ongoing commitment to ensuring that every agent, across every deployment, operates within boundaries that the enterprise can answer for.

That commitment is also the answer to the harder question the industry is now being forced to ask: what does it actually mean to deploy AI responsibly at enterprise scale? It has to be built into the foundation. And how the industry answers it will define what enterprise AI looks like for the next decade.

‍

Explore
Book a demo
Share
Link copied
authors
Keerthi Agnihotram
Keerthi Agnihotram
Forrester logo at display.
Kore.ai named a leader in the Forrester Wave™ Cognitive Search Platforms, Q4 2025
Access Report
Gartner logo in display.
Kore.ai named a leader in the Gartner® Magic Quadrant™ for Conversational AI Platforms, 2025
Access Report
Stay in touch with the pace of the AI industry with the latest resources from Kore.ai

Get updates when new insights, blogs, and other resources are published, directly in your inbox.

Subscribe
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.

Recent Blogs

View all
What is agent memory? How AI Agents learn and remember
March 18, 2026
What is agent memory? How AI Agents learn and remember
Agent Management Platform: Why Every Enterprise Deploying AI Agents Needs One
March 18, 2026
Agent Management Platform: Why Every Enterprise Deploying AI Agents Needs One
Agentic AI in travel and hospitality: Transforming guest experience and operations
March 18, 2026
Agentic AI in travel and hospitality: Transforming guest experience and operations
Accelerate time-to-value from AI

Find out how Kore.ai can help

Talk to an expert
Start using an AI agent today

Browse and deploy our pre-built templates

Marketplace
Background Image 4
Background Image 9
You are now leaving Kore.ai’s website.

‍

Kore.ai does not endorse, has not verified, and is not responsible for, any content, views, products, services, or policies of any third-party websites, or for any verification or updates of such websites. Third-party websites may also include "forward-looking statements" which are inherently subject to risks and uncertainties, some of which cannot be predicted or quantified. Actual results could differ materially from those indicated in such forward-looking statements.



Click ‘Continue’ to acknowledge the above and leave Kore.ai’s website. If you don’t want to leave Kore.ai’s website, simply click ‘Back’.

CONTINUEGO BACK
Agentic AI applications for the enterprise
English
Spanish
Spanish
Spanish
Spanish
Pre-Built Applications
BankingHealthcareRetailRecruitingHRIT
Kore.ai agent platform
Platform OverviewMulti-Agent OrchestrationAI Engineering ToolsSearch and Data AIAI Security and GovernanceNo-Code and Pro-Code ToolsIntegrations
 
AI for WorkAI for ServiceAI for ProcessAgent Marketplace
company
About Kore.aiLeadershipCustomer StoriesPartnersAnalyst RecognitionNewsroom
resources
DocumentationBlogWhitepapersWebinarsAI Research ReportsAI GlossaryVideosGenerative AI 101Responsive AI frameworkCXO Toolkit
GET INVOLVED
EventsSupportAcademyCommunityCareers

Let’s work together

Get answers and a customized quote for your projects

Submit RFP
Follow us on
© 2026 Kore.ai Inc. All trademarks are property of their respective owners.
Privacy PolicyTerms of ServiceAcceptable Use PolicyCookie PolicyIntellectual Property Rights
|
×