Friday, November 22, 2024

HiddenLayer Uncovers Critical Security Flaw on Hugging Face

HiddenLayer, the leading security provider for artificial intelligence (AI) models and assets, has exposed a significant vulnerability on Hugging Face, a popular platform where AI developers share open-source code, models, and data to kick-start their artificial intelligence projects. The exposure affects every organization and individual hosting AI models on the platform that has had a model converted to the Safetensors format.

Hugging Face’s widely used SFconvertbot, designed to convert insecure machine learning model formats to the more secure Safetensors format, has inadvertently become a vector for potential security breaches. Prominent companies such as Google and Microsoft, which host a combined 905 models on their public-facing Hugging Face profiles, have relied on the Safetensors bot to enhance the security of their models, trusting and accepting its recommendations.
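The risk the bot was built to address is well documented: pickle-based model formats, such as PyTorch's default checkpoints, can execute arbitrary code the moment they are deserialized, whereas Safetensors stores only raw tensor data and a JSON header. The minimal Python sketch below, which uses a hypothetical file name and leaves the safetensors call commented out, illustrates the difference:

    import os
    import pickle

    # A malicious "model" file: pickle-based formats let an attacker run
    # arbitrary code at deserialization time via __reduce__.
    class Payload:
        def __reduce__(self):
            return (os.system, ("echo arbitrary code executed on load",))

    # "model.bin" is a hypothetical file name for illustration.
    with open("model.bin", "wb") as f:
        pickle.dump(Payload(), f)

    # Merely loading the file triggers the payload; no model code is ever called.
    with open("model.bin", "rb") as f:
        pickle.load(f)

    # Safetensors, by contrast, stores only raw tensor bytes plus a JSON
    # header, so loading it cannot execute code:
    #   from safetensors.torch import load_file
    #   tensors = load_file("model.safetensors")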

However, HiddenLayer research has revealed that malicious actors can exploit the Safetensors conversion process to submit pull requests containing malicious code or backdoored models to any company or individual with a public repository on the platform. Moreover, any user who enters their access token to convert a private repository risks having that token stolen and, with it, their private model repositories and datasets exposed. Unlike conventional code review processes, identifying and mitigating these malicious changes is exceptionally challenging and time-consuming for affected companies. The simplicity of the method the HiddenLayer team used to achieve this exploit is detailed in their blog post “Silent Sabotage: Hijacking Safetensors Conversion on Hugging Face.”
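Independent of HiddenLayer's findings, one defensive measure available to model consumers is to pin downloads to a known-good commit, so that a pull request merged later cannot silently change the artifact being fetched. A minimal sketch using the huggingface_hub library follows; the repository ID and commit hash are placeholders:

    from huggingface_hub import hf_hub_download

    # Pin the download to an exact commit SHA so a pull request merged later
    # (malicious or otherwise) cannot silently alter the file you receive.
    path = hf_hub_download(
        repo_id="some-org/some-model",  # hypothetical repository
        filename="model.safetensors",
        revision="abc123def4567890abc123def4567890abc123de",  # known-good commit SHA
    )
    print(f"Downloaded pinned artifact to {path}")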

Chris “Tito” Sestito, Co-Founder and CEO of HiddenLayer, emphasized the wider impact of the vulnerability: “This vulnerability extends beyond any single company hosting a model. The compromise of the conversion service has the potential to rapidly affect the millions of users who rely on these models to kick-start their AI projects, creating a full supply chain issue. Users of the Hugging Face platform place trust not only in the models hosted there but also in the reputable companies behind them, such as Google and Microsoft, making them all the more susceptible to this type of attack.”

Among the top 10 most downloaded models from Google and Microsoft combined, those that had accepted the merge from the Safetensors bot accounted for a staggering 16,342,855 downloads in the last month. While this is only a small subset of the 500,000+ models hosted on Hugging Face, they reach an enormous number of users. The bot itself has made over 42,657 pull requests to repositories on the site to date, any one of which could have been compromised.

The exposure of this vulnerability underscores the urgent need for organizations to implement more stringent security protocols for AI technologies. With the rapid adoption of AI outpacing the implementation of proper security measures, companies such as HiddenLayer are offering solutions to address these vulnerabilities. HiddenLayer’s AISec Platform provides a comprehensive suite of products designed to safeguard ML models against adversarial attacks, vulnerabilities, and malicious code injections, offering organizations defense against emerging threats to AI.

SOURCE: PRNewswire
