Thursday, July 24, 2025

Mindgard’s free tool lifts the lid on unknown and undetected AI cyber risks

Related stories

Gathr.ai Launches Data Warehouse Intelligence Feature

Gathr.ai has introduced Data Warehouse Intelligence, a powerful new...

AWS & Meta Partner to Accelerate AI Innovation

Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company,...

Daylight Launches AI-Driven MDR with Human Expertise

Daylight Security announces that it has emerged from stealth...

OpenText Launches Cloud Editions 25.3 to Advance AI, Cloud, Security

OpenText has unveiled its latest release, Cloud Editions (CE)...

Dante Omics AI Brings Genomics to the AI Era With GPUs

Advanced NVIDIA GPU Integration Transforms Genome and Multiomics Alignment,...
spot_imgspot_img

Mindgard, the market-leading AI cybersecurity platform, launched Mindgard’s AI Security Labs, a free online tool for engineers to perform red teaming by evaluating the cyber risk to AI systems, including large language models (LLMs) like ChatGPT. As well as derisking a wide range of AI deployment scenarios, the tool marks a major advance in the cyber threat educational tools available to engineers.

As enterprises rapidly develop or adopt AI to gain competitive advantage, they are exposed to new attack vectors that conventional security tools cannot address. Even using so-called foundation models, such as ChatGPT, exposes them to risks as there have been no automated processes available until now to test the possible impacts of attacks.

Mindgard’s AI Security Labs lifts the lid on exposure to ML attacks faced by model developers and user organisations alike. These risks are currently predominantly undetected due to the complexity of identification and the lack of the specialised skills needed. Current AI penetration tests – if they happen at all – require months of programming and testing by hard-to-find and highly expensive teams. Even where they do happen, any subsequent change in the AI stack, model or underlying data necessitates a completely new test. As a result, senior management is often completely unaware of the likely impact of any disruption.

Also Read: AgileBlue Automates SecOps and SOAR with Sapphire AI

Mindgard’s free AI Security Labs automates the threat discovery process, providing repeatable AI security testing and reliable risk assessment in minutes rather than months. It allows engineers to select from a range of attacks against popular AI models, datasets and frameworks to assess potential vulnerabilities. The results provide insight on what is the current “art of the possible” in AI attacks and the likelihood of evasion, IP theft, data leakage, and model copying threats.

“Most organisations are flying blind when deploying AI, with no way to perform red teaming against emerging cyber risks,” said Dr. Peter Garraghan, CEO/CTO of Mindgard and Professor at Lancaster University.”Until now, there has been nowhere for technical teams to learn about the real threats to AI security. We created this free tool to empower engineers on the front lines of AI adoption with the education and capabilities to properly evaluate the attack surface.”

After the phenomenally fast adoption of LLMs following the launch of ChatGPT 3.5, a range of possible attacks on AI systems has been starting to emerge. Data poisoning has been observed, where chatbots have been made to swear or produce anomalous results. Data extraction is another threat, where an LLM reveals, for example, the sensitive data on which it was trained. And copying entire AI/ML models is increasingly common: Mindgard’s AI security researchers demonstrated this by copying ChatGPT 3.5 in two days at a cost of just $50.

SOURCE: Mindgard

Subscribe

- Never miss a story with notifications


    Latest stories

    spot_img