HiddenLayer SAI Team Unveils ShadowLogic

HiddenLayer, a leader in security for AI solutions, announces a groundbreaking discovery by its SAI team: ShadowLogic, a novel technique for creating surreptitious backdoors in neural network models. The method allows adversaries to implant codeless backdoors into models of any modality by manipulating the model’s computational graph, posing a significant threat to AI supply chains. Exploiting this vulnerability in a generative AI model, for example, could let an attacker alter the facts the model presents, accelerating the spread of disinformation.

ShadowLogic poses a significant threat because the backdoors it creates persist through fine-tuning, allowing compromised foundation models to trigger attacker-defined behaviors in downstream applications whenever they receive specific inputs. Such capabilities raise the stakes for AI systems: a model responsible for quality assurance in manufacturing, for instance, could be made to let defective products pass inspection, potentially endangering consumers. This underscores the urgent need for stronger security measures.

“Computational graph-based backdoors are a critical concern in the modern AI threat landscape,” said Tom Bonner, VP of Research at HiddenLayer. “With ShadowLogic, we are unveiling a method that not only bypasses traditional security controls but also enhances the sophistication of potential attacks on machine learning models.”

While existing techniques for implanting malicious behavior often require extensive access to training data or produce backdoors that break when the model is modified, ShadowLogic simplifies the process. The technique allows codeless logic backdoors to be implanted directly into pre-trained models, enabling highly targeted attacks with unprecedented ease.
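
The release describes ShadowLogic only at a high level. As a rough illustration of the general idea of a graph-level backdoor, the sketch below hand-builds a tiny ONNX computational graph whose normal behavior is overridden by an attacker-chosen output whenever a specific trigger value appears in the input. The choice of ONNX, the trigger value, the payload, and all tensor names are assumptions made for this example and are not details of HiddenLayer's actual research.

```python
# Illustrative sketch only: a toy graph-level backdoor built with the public
# `onnx` / `onnxruntime` Python APIs. Trigger, payload, and tensor names are
# hypothetical; this does not reproduce HiddenLayer's ShadowLogic technique.
import numpy as np
import onnx
import onnxruntime as ort
from onnx import TensorProto, helper

# --- Stand-in for the original, benign model: y = x * w ---
w      = helper.make_tensor("w", TensorProto.FLOAT, [4], [2.0] * 4)
benign = helper.make_node("Mul", ["x", "w"], ["y_benign"], name="benign_compute")

# --- Backdoor expressed purely as standard graph operators (no executable code) ---
trigger = helper.make_tensor("trigger", TensorProto.FLOAT, [1], [1337.0])    # hypothetical trigger value
payload = helper.make_tensor("payload", TensorProto.FLOAT, [4], [-1.0] * 4)  # attacker-chosen output
half    = helper.make_tensor("half",    TensorProto.FLOAT, [],  [0.5])

detect  = helper.make_node("Equal",     ["x", "trigger"],              ["hit_mask"], name="bd_detect")
to_f    = helper.make_node("Cast",      ["hit_mask"],                  ["hit_f"],    name="bd_cast", to=TensorProto.FLOAT)
any_hit = helper.make_node("ReduceMax", ["hit_f"],                     ["hit_max"],  name="bd_any", keepdims=0)
fired   = helper.make_node("Greater",   ["hit_max", "half"],           ["fired"],    name="bd_fired")
select  = helper.make_node("Where",     ["fired", "payload", "y_benign"], ["y"],     name="bd_select")

graph = helper.make_graph(
    [benign, detect, to_f, any_hit, fired, select],
    "backdoored_graph",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [4])],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [4])],
    initializer=[w, trigger, payload, half],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 17)])
onnx.checker.check_model(model)

sess = ort.InferenceSession(model.SerializeToString())
print(sess.run(None, {"x": np.array([1, 2, 3, 4], dtype=np.float32)})[0])     # normal output: [2. 4. 6. 8.]
print(sess.run(None, {"x": np.array([1337, 2, 3, 4], dtype=np.float32)})[0])  # payload: [-1. -1. -1. -1.]

# Weight-only fine-tuning (simulated here by swapping the weight initializer)
# does not remove the implant, because the trigger logic lives in the graph
# structure rather than in any weight tensor.
for i, init in enumerate(model.graph.initializer):
    if init.name == "w":
        model.graph.initializer[i].CopyFrom(
            helper.make_tensor("w", TensorProto.FLOAT, [4], [3.5] * 4))
sess = ort.InferenceSession(model.SerializeToString())
print(sess.run(None, {"x": np.array([1337, 2, 3, 4], dtype=np.float32)})[0])  # still [-1. -1. -1. -1.]
```

Because such an implant consists only of ordinary graph operators and carries no executable code or unusual weights, it can survive weight-level updates such as fine-tuning and may go unnoticed by tooling that inspects weights alone, mirroring the persistence and evasion properties the release describes.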

SOURCE: PRNewswire