Site icon AIT365

CrowdStrike Launches AI Red Team to Secure Against Emerging Threats

CrowdStrike

CrowdStrike launched CrowdStrike AI Red Team Services, reinforcing its leadership in protecting the infrastructure, systems and models driving the AI revolution.  Leveraging CrowdStrike’s world-class threat intelligence and elite-expertise in real-world adversary tactics, these specialized services proactively identify and help mitigate vulnerabilities in AI systems, including Large Language Models (LLMs), so organizations can drive secure AI innovation with confidence.

As organizations adopt AI at a rapid pace, new threats such as model tampering, data poisoning, sensitive data exposure and more, increasingly target AI applications and their underlying data. The compromise of AI systems, including LLMs, can result in a breach of confidentiality, reduced model effectiveness and increased susceptibility to adversarial manipulation. Announced at Fal.Con Europe, CrowdStrike’s inaugural premier user conference in the region, CrowdStrike AI Red Team Services provide organizations with comprehensive security assessments for AI systems, including LLMs and their integrations, to identify vulnerabilities and misconfigurations that could lead to data breaches, unauthorized code execution, or application manipulation. Through advanced red team exercises, penetration testing, and targeted assessments, combined with Falcon platform innovations like Falcon Cloud Security AI-SPM and Falcon Data Protection, CrowdStrike remains at the forefront of AI security.

Also Read: GreyNoise Intelligence Discovers Zero-Day Vulnerabilities in Live Streaming Cameras with the Help of AI

Key features of the service include:

“AI is revolutionizing industries, while also opening new doors for cyberattacks,” said Tom Etheridge, chief global services officer, CrowdStrike. “CrowdStrike leads the way in protecting organizations as they embrace emerging technologies and drive innovation. Our new AI Red Team Services identify and help to neutralize potential attack vectors before adversaries can strike, ensuring AI systems remain secure and resilient against sophisticated attacks.”

SOURCE: CrowStrike

Exit mobile version