Site icon AIT365

CyberArk Launches Open-Source Tool to Prevent AI Jailbreaks

CyberArk

CyberArk, the global leader in identity security, announced the launch of FuzzyAI, a cutting-edge open-source framework that has jailbroken every major tested AI model. FuzzyAI helps organizations identify and address AI model vulnerabilities, like guardrail bypassing and harmful output generation, in cloud-hosted and in-house AI models. To understand first-hand how organizations can adopt AI while mitigating cyber risks, Black Hat Europe 2024 attendees can explore the tool’s capabilities and applications.

Why FuzzyAI?

AI models are transforming industries with innovative applications in customer interactions, internal process improvements and automation. Internal usage of these models also presents new security challenges for which most organizations are unprepared.

Also Read: Torq Enhances AI with New Multi-Agent Security Framework

FuzzyAI helps solve some of these challenges by offering organizations a systematic approach to testing AI models against various adversarial inputs, uncovering potential weak points in their security systems and making AI development and deployment safer. At the heart of FuzzyAI is a powerful fuzzer – a tool that reveals software defects and vulnerabilities – capable of exposing vulnerabilities found via more than ten distinct attack techniques, from bypassing ethical filters to exposing hidden system prompts. Key features of FuzzyAI include:

“The launch of FuzzyAI underlines CyberArk’s commitment to AI security and helps organizations take a significant step forward in addressing the security issues inherent in the evolving landscape of AI model usage,” said Peretz Regev, Chief Product Officer at CyberArk. “Developed by CyberArk Labs, FuzzyAI has demonstrated the ability to jailbreak every major tested AI model. FuzzyAI empowers organizations and researchers to identify weaknesses and actively fortify their AI systems against emerging threats.”

Source: Businesswire

Exit mobile version