Site icon AIT365

Pangea Unveils AI Security Guardrails for LLM Risks

Pangea

Pangea AI Guard and Prompt Guard Now Generally Available; Registration Open for Virtual Escape Room Challenge

Pangea, a leading provider of security guardrails, announced the general availability of AI Guard and Prompt Guard to secure AI, defending against threats like prompt injection and sensitive information disclosure. Alongside the company’s existing AI Access Control and AI Visibility products, Pangea now offers the industry’s most comprehensive suite of guardrails to secure AI applications.

“As companies race to build and deploy AI apps via RAG and agentic frameworks, integrating LLMs with users and sensitive data introduces substantial security risks,” said Oliver Friedrichs, CEO and Founder of Pangea. “New attacks surface daily, requiring countermeasures to be rolled out equally fast. As a proven and trusted partner in the cybersecurity industry, Pangea constantly identifies and responds to new generative AI threats, before they can cause harm.”

“I’ve seen firsthand how vulnerabilities in computer systems can lead to damaging real-world impacts if left unchecked. AI’s potential for autonomous action could amplify these consequences,” said Kevin Mandia, Founder of Mandiant, and Strategic Partner at Ballistic Ventures. “Pangea’s security guardrails draw from decades of cybersecurity expertise to deliver essential defenses for organizations building AI software.”

Also Read: Dynatrace Expands Security with Cloud Posture Management

Accelerating Secure AI Software Delivery

Pangea AI Guard prevents sensitive data leakage and blocks malicious and unwanted content like profanity, self harm, and violence. Pangea employs over a dozen different detection technologies to inspect and filter AI interactions, including over 50 types of confidential and personally identifiable information. Threat intelligence is provided by partners Crowdstrike, DomainTools, and ReversingLabs, providing millions of threat intelligence data points to scan files, IPs, and domains.

The system can redact, block or disarm offending content and also offers a unique format preserving encryption feature that protects data while maintaining its data structure and schema so it does not break database formats.

Pangea Prompt Guard analyzes user and system prompts to block jailbreak attempts and organizational limit violations. Using a defense-in-depth approach, it detects prompt injection attacks through heuristics, classifiers, and custom-trained large language models that can reliably detect attack techniques such as token smuggling, alternate language attacks, and indirect prompt injection with over 99% efficacy.

Grand Canyon Education chose Pangea to secure its internal AI chatbot platform from the risk of sensitive data leakage. “What I love about Pangea is I can provide an API centric solution out of the box to developers that automatically redacts sensitive information at machine speed without any end user impact or user experience change,” said Mike Manrod, Chief Information Security Officer at Grand Canyon Education. “If you try to put a fence around AI to block its use people will find workarounds, so instead we created a path of least resistance with Pangea to make secure AI software development an easy and obvious choice.”

“The introduction of Pangea’s new offerings is a significant development in the field of AI security, particularly given the increasing importance of robust guardrails,” said Karim Faris, General Partner at GV. “The team has taken a comprehensive approach to the OWASP Top Ten Risks for LLM Applications and has established expertise in security innovation, including the creation of SOAR. We are highly optimistic about Pangea‘s future.”

Source: PRNewswire

Exit mobile version