Zenity announced a new product capability: runtime protection for OpenAI AgentKit, intended to deliver “enterprise-grade enforcement that detects and blocks data leakage, secret exposure and unsafe agent behavior in real time.”
The launch follows Zenity Labs’ recent research, which uncovered “critical gaps in AgentKit’s guardrails” via prompt injection, response obfuscation, credential exposure and other attack vectors.
In short: as enterprises increasingly build and deploy autonomous AI agents using AgentKit, Zenity sees a rising risk profile, and it is positioning itself as a guardrail-and-governance layer to close those gaps. That matters for cybersecurity professionals, risk and compliance teams, and any business operating in regulated industries.
What Exactly Is New and Why It Matters
AgentKit from OpenAI enables developers to build and deploy autonomous AI agents via AgentBuilder, ChatKit, and the Connector Registry.
The risk: while AgentKit accelerates innovation and scale, it also expands the attack surface “overnight,” creating situations where traditional guardrails may miss nuanced threats.
Zenity’s offering responds by applying rule-based, deterministic enforcement at the endpoint level (i.e., every user-agent interaction). According to the announcement, it “inspects every interaction between users and agents built with AgentKit, identifying and blocking risky behavior in real time.”
Key features cited:

- Data leakage detection: stops agents attempting to exfiltrate sensitive or regulatory-controlled information.
- Secrets exposure prevention: detects embedded credentials or keys in agent responses and blocks the action.
- Unsafe response blocking: stops agent responses that violate policy, fail compliance requirements, or erode brand trust before they are delivered.
The shift from probabilistic guardrails (e.g., heuristics, large-language-model internal filters) to deterministic, policy-driven enforcement is significant. It gives security teams a more predictable and auditable way to govern agent behaviour rather than hoping that built-in guardrails in the agent platform or model will suffice.
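To make the distinction concrete, here is a minimal sketch of what deterministic, rule-based response inspection can look like. The patterns, function name, and verdict format are hypothetical illustrations; Zenity has not published implementation details, and a production system would ship far richer policy sets.

```python
import re

# Hypothetical deterministic rules. Unlike a probabilistic model filter,
# these either match or they don't, so the verdict is fully auditable.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key ID format
    re.compile(r"sk-[A-Za-z0-9]{20,}"),    # common API-key prefix style
]
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN format
]

def inspect_response(text: str) -> dict:
    """Return a deterministic allow/block verdict for an agent response."""
    for pat in SECRET_PATTERNS:
        if pat.search(text):
            return {"action": "block", "reason": "secret_exposure"}
    for pat in PII_PATTERNS:
        if pat.search(text):
            return {"action": "block", "reason": "data_leakage"}
    return {"action": "allow", "reason": None}
```

The same inputs always produce the same verdict, which is exactly the predictability and auditability property the paragraph above describes.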
Implications for the Cybersecurity Industry
For cybersecurity practitioners and vendors, the announcement signals several key trends and challenges:
- Agentic AI is new terrain for attackers and defenders: As AI agents proliferate, especially in enterprises, they introduce new risk vectors: automatic tool invocation, autonomous decision-making, dynamic chaining of tasks, and connector access to infrastructure or data stores. Traditional endpoint and network security tools may not distinguish these outgoing flows, or “agent behaviour” generally, from regular application traffic. By building runtime protection specifically for agents, Zenity identifies this emerging threat category. The cybersecurity industry must adapt by investing in detection and prevention that recognise AI-agent-centric flows, intent inspection (what the agent is trying to do), and rule enforcement aligned with governance policies.
- Guardrail gaps are real: Zenity’s own research shows that AgentKit guardrails can be bypassed via prompt injection, response obfuscation, credential exposure and other techniques. For cybersecurity vendors, this validates the risk that even vendor-provided “safe” frameworks for AI aren’t sufficient, and it opens a market for complementary controls: governance layers, runtime monitoring, and agent-behaviour anomaly detection.
- Enterprise-grade governance is becoming a must-have: Enterprises in regulated industries (financial services, healthcare, government) will increasingly demand controls around AI agents: data leakage prevention, secrets management, auditability, and deterministic policy enforcement. Cybersecurity vendors that deliver governance, visibility, and control will be elevated; vendors like Zenity that offer an “agent-centric” security platform are likely to see growth.
- Shift from “model risks” to “agent risks”: Much of the discussion in 2023-24 was about model risks (bias, output quality, hallucinations). This announcement emphasises agent risks: how the agent behaves, what it accesses, which tools it invokes, and how it interacts with data. The cybersecurity sector must broaden its focus accordingly, to the agent-centric lifecycle rather than just model monitoring.
- Opportunity for security integration and orchestration: Since agent behaviour may traverse SaaS, cloud, on-prem, and endpoint environments, there is a strong need for integration across endpoint security, cloud access security brokers (CASBs), enterprise identity, and AI-agent-specific controls. Vendors or managed-security providers that offer end-to-end visibility (discovery, posture management, real-time detection and response) will be differentiated.
Impact on Businesses Operating in Cybersecurity-Affected Industries
For businesses, especially those operating in sectors with heightened regulation or significant data sensitivity, this news carries immediate relevance.
- Reduced blind spots when deploying AI agents: Many enterprises are excited about deploying AI agents for productivity, customer interaction, and workflow automation, but they face hesitation around security and compliance. Zenity’s runtime protection helps reduce those blind spots, enabling organisations to deploy with more confidence.
- Faster AI agent adoption: With better runtime controls, businesses may accelerate adoption of agent-based workflows. “Security perimeter” concerns become more manageable when deterministic enforcement is in place.
- Improved governance and auditability: Businesses subject to regulatory compliance (GDPR, HIPAA, SOX, PCI-DSS) benefit from improved visibility into agent behaviour and enforcement of policy before data is exfiltrated or secrets are exposed. This reduces the risk of non-compliance fines and reputational damage.
- Vendor risk and third-party agent usage: As businesses use agents built by third parties or deploy them in partner ecosystems, runtime protection helps mitigate risk from those externally built or managed agents. Security teams gain greater control over what happens at the endpoint, regardless of where the model originates.
- Brand trust and customer-facing agents: For businesses exposing agents to customers (chatbots, virtual assistants, autonomous workflows), the risk of unsafe responses or brand damage is real. A layer that blocks unsafe responses before delivery protects brand trust, supports compliance, and reduces liability.
- Cost control via prevention: While detection and response remain important, prevention (blocking exfiltration or secrets exposure before damage occurs) is often more cost-efficient. Businesses can reduce incident response costs, insurance premiums, and reputational loss by proactively enforcing controls.
Looking Ahead: Strategic Considerations
- Businesses should map their agent footprint and risk profile: Which AI agents are in use (internal, third-party, SaaS)? What data do they access? What tools or connectors do they invoke? This helps prioritise where runtime protection matters most.
- Security teams must update policies to reflect agent behaviour: Traditional policies focus on users, devices, and networks. The next wave needs to address agents: what they are allowed to do, what they should not do, and how their behaviour is monitored and blocked.
- Cybersecurity vendors will increasingly offer agent-centric modules: The market will likely see growth in technologies focused on agent discovery, agent-behaviour analytics, runtime enforcement, and agent-focused threat modelling.
- For enterprises, vendor selection will matter: Look for solutions that offer deterministic rule-based enforcement, inline prevention, and end-to-end coverage (SaaS, home-grown, endpoint). As Zenity’s announcement emphasises: “security teams can now address guardrail gaps in AgentKit as agentic AI adoption grows.”
- Ongoing monitoring and incident response will need to evolve: Even with runtime protection, organisations should assume agents will be targeted (or misused) and should prepare for incident response specific to agent behaviour, such as connector misuse, chains of tool invocation, or secrets exfiltration.
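As a thought experiment, an agent-level policy of the kind the considerations above call for might be expressed as follows. The schema, field names, and tool names are entirely illustrative; they do not correspond to any vendor’s actual policy format.

```python
# Hypothetical per-agent policy: what this agent may do, deny-by-default.
AGENT_POLICY = {
    "agent": "support-assistant",
    "allowed_tools": ["search_kb", "create_ticket"],
    "blocked_tools": ["execute_sql", "send_external_email"],
    "data_rules": {"block_patterns": ["ssn", "credit_card"], "redact": True},
    "audit": {"log_every_interaction": True, "retention_days": 365},
}

def tool_allowed(policy: dict, tool: str) -> bool:
    """Deterministic check: a tool call is permitted only if it is
    explicitly allowed and not explicitly blocked."""
    return tool in policy["allowed_tools"] and tool not in policy["blocked_tools"]
```

The deny-by-default stance here (anything not on the allowlist is refused) mirrors how security teams typically translate "what agents are allowed to do" into enforceable rules.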
Conclusion
Zenity’s launch of runtime protection for OpenAI’s AgentKit is more than a product update: it underscores a fundamental shift in how enterprises must think about AI agents, cybersecurity, and governance. The cybersecurity industry is being called to move beyond traditional model-centric monitoring and standard endpoint/network defences, and to focus squarely on agent behaviour, policy enforcement, and real-time prevention.
For businesses in regulated sectors or operating at large scale, this announcement offers the promise of safer, more scalable agent deployment, with the kinds of controls needed to protect sensitive data, secrets, brand trust, and compliance posture. As AI agents become more pervasive in enterprise workflows, the interplay between innovation and risk will increasingly be mediated by platforms that can enforce policy with precision, and Zenity is one of the early movers in that space.

