New security threats posed by GenAI models necessitate a leap beyond passive observability toward real-time control of AI-powered applications
WhyLabs, the AI observability and security company, announced the launch of a new type of AI operations platform: the AI Control Center. The new platform, which offers teams real-time control over their AI applications, was developed by WhyLabs in response to rising security and reliability threats posed by GenAI, which have rendered traditional observability tools insufficient for operating AI responsibly.
The GenAI revolution has unleashed a host of new challenges and vulnerabilities for enterprises. Teams must ensure that LLMs are not prompt injected, are not leaking confidential data, and are not generating responses that can erode trust. To address these challenges, the WhyLabs AI Control Platform assesses data in real-time from user prompts, RAG context, LLM responses, and application metadata to surface potential threats. With low-latency threat detectors optimized to run directly in inference environments, WhyLabs maximizes safety without the associated cost, performance, or data privacy concerns.
“Our customers are moving Generative AI initiatives from prototypes to production, where security and quality are paramount,” said Alessya Visnjic, Co-Founder and CEO of WhyLabs. “Passive observability tools alone are not sufficient for this leap because you cannot afford a 5-minute delay in learning that a jailbreak incident has occurred in an application. Our new security capabilities equip AI teams with safeguards that prevent unsafe interactions in under 300 milliseconds, with 93% threat detection accuracy.”
“Yoodli is building for a future where everyone can be a better communicator through our state-of-the-art Generative AI speech coaching application,” explained Varun Puri, CEO of Yoodli. “WhyLabs AI Control Platform provides us with an accessible and easily adaptable solution that we can trust. We are really excited about the new capabilities that enable us to execute AI control across five critical dimensions for our communication application: protection against bad actors, misuse, bad customer experience, hallucinations, and costs.”
Also Read: Veritas Strengthens Cyber Resilience with New AI-Powered Solutions
“WhyLabs is a key partner to Glassdoor on some of our most important generative AI initiatives. We are excited about the release of the WhyLabs AI Control Platform, which enables us to run LLM applications in a secure and reliable manner,” said Malathi Sankar, Director, Machine Learning Engineering, Platform and Search at Glassdoor. “The combination of continuous monitoring, powerful guardrails, and the ability to fine-tune the LLM-powered applications is the kind of innovation we are looking for in bringing generative AI capabilities to production.”
Built upon the company’s observability tools that are used and trusted by many Fortune 500 companies, the new platform offers unprecedented real-time control over AI applications. This enables engineering, security, and business teams to prevent critical risks presented by generative AI. Organizations leveraging the WhyLabs AI Control Platform for generative AI applications see the following outcomes:
- Prevention of Security Threats: Users can detect bad actors and misuse of externally-facing chatbots and Q&A applications.
- Prevention of Unsafe User Experiences: Users can detect false, misleading, inappropriate, and toxic outputs.
- Out-of-the-Box and Customizable Threat Detection Rules: Curated rule sets enable users to run the most advanced threat detectors in minutes. Beyond that, users can customize threat detectors, continuously tune them on new examples, and customize system messages that steer the application toward safe behavior.
- Fast and Collaborative Investigations: Users can identify root causes of issues and develop improvement strategies by analyzing detected threat trends and annotated application traces.
- Drive Continuous Improvements: Users can improve model evaluation through high-quality datasets built from application interactions captured and annotated by the safeguards.
These capabilities are available today in the WhyLabs AI Control Platform alongside the existing solutions for predictive model health. Thousands of AI developers from organizations like Block, Regions Bank, and Glassdoor rely on WhyLabs to prevent issues in production ML models and ensure high-quality customer experience in AI-powered applications.
Source: BusinessWire