Monday, November 25, 2024

Protect AI Acquires Laiyer AI to Secure Large Language Models (LLMs)

Related stories

Deep Instinct Expands Zero-Day Security to Amazon S3

Deep Instinct, the zero-day data security company built on...

Foxit Unveils AI Assistant in Admin Console

Foxit, a leading provider of innovative PDF and eSignature...

Instabase Names Junie Dinda CMO

Instabase, a leading applied artificial intelligence (AI) solution for...
spot_imgspot_img

Laiyer AI’s LLM Guard extends Protect AI’s offerings with advanced LLM security capabilities; Open source tool has been downloaded over 2.5 million times per month

Protect AI, the leading artificial intelligence (AI) and machine learning (ML) security company, announced it has acquired Laiyer AI. With the acquisition, Protect AI will be offering a commercial version of Laiyer AI’s open source LLM Guard with expanded features, capabilities, and integrations within the Protect AI platform. LLM Guard is freely available today, and an industry leading open-source project for protecting large language models (LLMs) against security threats, misuse and prompt injection attacks, while also providing tools to manage risk and compliance needs.

OpenAI’s GPT-4 and other LLMs are revolutionizing AI, and excel at understanding and generating human language. Their adoption spans various sectors, including customer service, healthcare, and content creation, driving the market’s growth from USD 11.3 billion in 2023 to an expected USD 51.8 billion by 2028, according to multiple industry analysts. This growth, fueled by the demand for applications like chatbots and virtual assistants, positions LLMs as key tools for businesses seeking to leverage textual data for competitive advantage. However, security and misuse concerns are limiting wider adoption among major companies.

“Protect AI is thrilled to announce the acquisition of Laiyer AI’s team and product suite, which significantly enhances our leading AI and ML security platform. These new capabilities will empower our customers in automotive, energy, manufacturing, life sciences, financial services, and government sectors to develop safe, secure GenAI applications,” said Ian Swanson, CEO of Protect AI. “Our industry-leading platform now boasts advanced features and filters for governing LLM prompts and responses, elevating the end-user experience and reaffirming our commitment to safeguarding Generative AI applications,”

Also Read: Introducing ClickUp Brain: The First AI Neural Network for Work

In 2023, the OWASP Top 10 for LLM Applications spotlighted the unique security risks associated with deploying Large Language Models that business leaders should understand. Key risks include prompt injections, training data poisoning, and supply chain vulnerabilities. A notable concern is Prompt Injection Vulnerabilities, where attackers can manipulate LLMs through crafted inputs, leading to data exposure or decision manipulation. These attacks can be direct, via the LLM’s input, or indirect, through tainted data sources, and often bypass detection due to the implicit trust in LLM outputs. With upcoming regulations on LLMs, it’s vital to safeguard against such malicious activities and harmful responses to maintain corporate integrity and security.

Laiyer AI LLM Guard is a groundbreaking security solution for addressing the challenges associated with deploying LLMs. Unlike many closed-source, untested options prevalent in the market, LLM Guard offers a transparent, open-source alternative that bolsters confidence in deploying LLMs at an enterprise scale. This innovative tool is designed to enhance the security of LLM interactions, supporting both proprietary and third-party models.

LLM Guard’s core features include the detection, redaction, and sanitization of inputs and outputs from LLMs, effectively mitigating risks such as prompt injections to personal data leaks. These features are integral to preserving LLM functionality while safeguarding against malicious attacks and misuse. Moreover, LLM Guard integrates seamlessly with existing security workflows, offering observability tools like logging and metrics. This positions Laiyer AI at the forefront of providing essential security solutions, enabling developers and security teams to deploy LLM applications securely and effectively.

“There’s a clear need in the market for a solution that can secure LLM use-cases from start to finish, including when they scale into production. By joining forces with Protect AI, we are extending Protect AI’s products with LLM security capabilities to deliver the industry’s most comprehensive end-to-end AI Security platform,” said Neal Swaelens and Oleksandr Yaremchuk, Co-founders of Laiyer AI.

LLM Guard exemplifies price/performance leadership in the enterprise security sector for LLMs. The innovative solution balances latency, cost, and accuracy, boasting an impressive scale of adoption with over 13,000 library downloads and 2.5 million downloads of its proprietary models on HuggingFace in just 30 days. LLM Guard’s performance is enhanced by a 3x reduction in CPU inference latency, enabling the use of cost-effective CPU instances instead of expensive GPUs without compromising accuracy. LLM guard is a leader in the field, reinforced by its status as the default security scanner for Langchain and several other leading global enterprises.

The integration of Laiyer AI reinforces Protect AI’s status as the premier platform in AI security and MLSecOps. Protect AI offers unmatched capabilities, enabling enterprises to build, deploy, and manage AI applications that are not only secure and compliant but also operationally efficient.

SOURCE: BusinessWire

Subscribe

- Never miss a story with notifications


    Latest stories

    spot_img