Protect AI Announces Guardian, A Secure Gateway To Enforce ML Model Security


Industry-leading AI security platform now scans and blocks risks in widely deployed open-source models from Hugging Face and other public ML model repositories

Protect AI, the artificial intelligence (AI) and machine learning (ML) security company, announced Guardian, an industry-first secure gateway that enables organizations to enforce security policies on ML models and prevent malicious code from entering their environments. Guardian is based on ModelScan, an open-source tool from Protect AI that scans machine learning models to determine whether they contain unsafe code. Guardian brings together the best of Protect AI’s open-source offering, adds enterprise-level enforcement and management of model security, and extends coverage with proprietary scanning capabilities.

The growing democratization of Artificial Intelligence and Machine Learning (AI/ML) is largely driven by the accessibility of open-source ‘Foundational Models’ on platforms like Hugging Face. These models, downloaded millions of times monthly, are vital for powering a wide range of AI applications. However, this trend also introduces security risks, as the open exchange of files on these repositories can lead to the unintended spread of malicious software among users.

“ML models are new types of assets in an organization’s infrastructure, yet they are not scanned for viruses and malicious code with the same rigor as even a PDF file before they are used,” said Ian Swanson, CEO of Protect AI. “There are thousands of models downloaded millions of times from Hugging Face on a monthly basis, and these models can contain dangerous code. Guardian enables customers to take back control over open-source model security.”

The security posture of openly shared machine learning models puts an enterprise at critical risk of a Model Serialization attack. This occurs when malicious code is added to the contents of a model during serialization (saving), before distribution – creating a modern version of the Trojan Horse. Once embedded in a model, this unseen code can be executed to steal data and credentials, poison data, and much more. These risks are prevalent in models hosted in large repositories such as Hugging Face.
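The mechanics are straightforward to demonstrate. Many common model formats are built on Python’s pickle serialization (PyTorch’s default .pt/.pth files, for example), and pickle lets any object specify code to run at load time via __reduce__. The following is a minimal, self-contained sketch of this attack class; the payload and file name are illustrative, not taken from the announcement or any real incident:

```python
import os
import pickle

# Minimal sketch of a model serialization attack: pickle lets any object
# define __reduce__, which tells the unpickler what to call at load time.
# A model file tampered with this way executes attacker-chosen code the
# moment it is deserialized -- no inference call required.
class MaliciousPayload:
    def __reduce__(self):
        # Illustrative stand-in for real malware (data exfiltration,
        # credential theft, etc. would go here).
        return (os.system, ("echo arbitrary code ran at load time",))

# "Saving" the tampered model artifact...
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# ...and merely loading it triggers execution during deserialization.
with open("model.pkl", "rb") as f:
    pickle.load(f)  # the embedded command runs here
```

This is why scanning a model before it is ever loaded matters: by the time a conventional tool could observe the malicious behavior, the code has already run.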


Last year, Protect AI launched ModelScan, an open-source tool that scans AI/ML models for potential attacks, helping secure systems against supply chain attacks. Since then, Protect AI has used ModelScan to evaluate over 400,000 models hosted on Hugging Face to identify unsafe models, and it refreshes this knowledge base nightly. To date, over 3,300 models have been found capable of executing rogue code. These models continue to be downloaded and deployed into ML environments without the security tools needed to scan them for risks prior to adoption.
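For teams that want to try the open-source layer themselves, ModelScan is installable with pip install modelscan and runs against a local file or directory. The sketch below shows one way to wire it into a pre-deployment check in Python; the -p flag follows the project’s documentation, while the file name and the exit-code gating are assumptions to verify against https://github.com/protectai/modelscan:

```python
# Hedged sketch: run ModelScan (pip install modelscan) on a downloaded
# artifact before allowing it into an ML environment. The file name is
# hypothetical; check the project docs for current flags and exit codes.
import subprocess
import sys

result = subprocess.run(
    ["modelscan", "-p", "downloaded_model.pkl"],  # -p / --path: file or directory to scan
    capture_output=True,
    text=True,
)
print(result.stdout)  # human-readable report of any unsafe operators found

# The project documents a nonzero exit code when issues are found; treating
# that as a gate lets a CI job block the artifact (assumption to verify).
if result.returncode != 0:
    sys.exit("model flagged by ModelScan; blocking deployment")
```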

Unlike other open-source alternatives, Protect AI’s Guardian acts as a secure gateway, bridging ML development and deployment processes that use Hugging Face and other model repositories. It uses proprietary vulnerability scanners, including a specialized scanner for Keras lambda layers, to proactively scan open-source models for malicious code, ensuring that only secure, policy-compliant models enter organizational networks. With advanced access-control features and dashboards, Guardian gives security teams control over model entry and comprehensive insight into model origins, creators, and licensing. Guardian also integrates seamlessly with existing security frameworks and complements Protect AI’s Radar for extensive AI/ML threat surface visibility across organizations.
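Keras lambda layers are singled out because a Lambda layer serializes an arbitrary Python function into the saved model, and that code runs when the model is rebuilt at load time. Below is a minimal sketch of this risk surface using Keras’s own safe_mode mitigation; the file name is illustrative, and the behavior described reflects recent Keras versions, where safe_mode=True is the default:

```python
# Hedged sketch of why Keras Lambda layers warrant a dedicated scanner:
# the layer embeds arbitrary Python code in the model file, and that code
# executes when the model is deserialized. Names here are illustrative.
import keras
from keras import layers

model = keras.Sequential([
    keras.Input(shape=(4,)),
    # Benign here, but an attacker can embed any callable the same way.
    layers.Lambda(lambda x: x * 2.0),
])
model.save("untrusted.keras")

# Recent Keras versions default to safe_mode=True, which refuses to
# deserialize the embedded code rather than silently executing it.
try:
    keras.models.load_model("untrusted.keras", safe_mode=True)
except Exception as err:  # Keras raises on lambda deserialization in safe mode
    print("blocked:", err)

# safe_mode=False would execute the stored bytecode on load -- exactly the
# class of risk a gateway scanner inspects before a model enters the network.
```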

Guardian strengthens Protect AI’s leading position in AI security and MLSecOps, adding essential capabilities to its comprehensive platform. Recognized for its deep expertise in AI and ML model security, Protect AI enables enterprises to develop, deploy, and manage secure, compliant, and operationally efficient AI applications by providing the ability to see, know, and manage security risks across enterprise AI environments. Protect AI is committed to leading the charge toward a safer AI-powered world and pioneering the adoption of MLSecOps practices. Contact Protect AI to learn more about Guardian and other Protect AI offerings.

SOURCE: BusinessWire
