Tuesday, November 5, 2024

Sysdig Launches AI Workload Security to Mitigate Active AI Risk


New capability helps companies gain visibility into their AI workloads, identify active risk and suspicious activity in real time, and ensure compliance with emerging AI guidelines

Sysdig, the leader in cloud security powered by runtime insights, announced the launch of AI Workload Security to identify and manage active risk associated with AI environments. The newest addition to the company’s cloud-native application protection platform (CNAPP) is designed to help security teams see and understand their AI environments, identify suspicious activity on workloads that contain AI packages, and fix issues fast.

“The addition of AI Workload Security to the Sysdig CNAPP comes in response to widespread demand for a solution that empowers the secure adoption of AI so that companies can harness its power and accelerate business. With AI Workload Security, organizations can understand their AI infrastructure and identify active risks such as workloads containing in-use AI packages that are publicly exposed and have exploitable vulnerabilities. AI workloads are a prime target of attack for bad actors, and AI Workload Security allows defenders to detect suspicious activity within these workloads and address the most imminent threats to their AI models and training data,” said Knox Anderson, SVP of Product Management at Sysdig.

Kubernetes has become the deployment platform of choice for AI. Containerized workloads are ephemeral, however, which makes securing data and mitigating active risk in them inherently difficult. Understanding malicious activity and the runtime events that may lead to a breach of sensitive training data requires a solution with real-time, runtime visibility. The Sysdig CNAPP is rooted in open source Falco, the standard for threat detection in the cloud, and is designed for runtime security in cloud-native environments such as Kubernetes clusters, whether those workloads run in the cloud or on premises.
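
To make the idea of runtime visibility concrete, the toy Python sketch below polls running processes and flags any whose command line references a common AI library. It assumes the third-party psutil package and is purely illustrative of the concept; production tools such as Falco observe kernel-level system calls in real time rather than polling process tables like this.

    # Toy illustration only: flag running processes whose command line
    # mentions a well-known AI library. Real runtime security tools such
    # as Falco instrument kernel syscalls instead of polling like this.
    import psutil  # assumed installed: pip install psutil

    AI_MARKERS = ("openai", "transformers", "tensorflow", "torch", "anthropic")

    def find_ai_processes():
        flagged = []
        for proc in psutil.process_iter(["pid", "name", "cmdline"]):
            cmdline = " ".join(proc.info.get("cmdline") or []).lower()
            if any(marker in cmdline for marker in AI_MARKERS):
                flagged.append((proc.info["pid"], proc.info["name"]))
        return flagged

    if __name__ == "__main__":
        for pid, name in find_ai_processes():
            print(f"AI-related process: pid={pid} name={name}")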

With the introduction of real-time AI Workload Security, Sysdig helps companies immediately identify and prioritize workloads in their environment that contain leading AI engines and software packages such as OpenAI, Hugging Face, TensorFlow, and Anthropic. By understanding where AI workloads are running, Sysdig enables organizations to manage and control their AI usage, whether that usage is officially sanctioned or deployed without proper approval. Sysdig also simplifies triage and reduces response times by fully integrating real-time AI Workload Security with the company’s unified risk findings feature, giving security teams a single, correlated view of risks and events and a more efficient workflow to prioritize, investigate, and remediate active AI risks.
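
As a rough sketch of what identifying workloads that contain AI packages can mean in practice, the snippet below inventories the Python distributions installed in an environment and reports any that match a short, hypothetical list of AI package names. It is not Sysdig's implementation, which works against running workloads at scale; it only illustrates the package-inventory idea.

    # Illustrative sketch: list installed Python distributions and flag
    # well-known AI packages (this package list is a hypothetical sample).
    from importlib import metadata

    AI_PACKAGES = {"openai", "transformers", "tensorflow", "nltk", "anthropic", "torch"}

    def installed_ai_packages():
        found = {}
        for dist in metadata.distributions():
            name = (dist.metadata["Name"] or "").lower()
            if name in AI_PACKAGES:
                found[name] = dist.version
        return found

    if __name__ == "__main__":
        for name, version in sorted(installed_ai_packages().items()):
            print(f"{name}=={version}")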


Widespread AI Adoption Brings Growing Public Exposure

Of all generative AI workloads currently deployed, Sysdig found that 34% are publicly exposed. Public exposure, meaning a workload is reachable from the internet or another untrusted network without appropriate security measures in place, puts the sensitive data used by generative AI models at urgent risk. In addition to increasing the likelihood of security breaches and data leaks, public exposure also opens the door to regulatory compliance challenges.
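
For readers who want a feel for what publicly exposed can look like in a Kubernetes environment, the sketch below uses the official kubernetes Python client to list Services that route traffic from outside the cluster (type LoadBalancer or NodePort). It assumes access to a cluster via a local kubeconfig and is only a coarse proxy for the exposure signal described above, not Sysdig's detection method.

    # Rough illustration, assuming the official kubernetes Python client
    # and a reachable cluster: find Services that expose workloads outside
    # the cluster, a coarse proxy for public exposure.
    from kubernetes import client, config

    def externally_exposed_services():
        config.load_kube_config()  # inside a pod, use config.load_incluster_config()
        v1 = client.CoreV1Api()
        exposed = []
        for svc in v1.list_service_for_all_namespaces().items:
            if svc.spec.type in ("LoadBalancer", "NodePort"):
                exposed.append((svc.metadata.namespace, svc.metadata.name, svc.spec.type))
        return exposed

    if __name__ == "__main__":
        for ns, name, svc_type in externally_exposed_services():
            print(f"{ns}/{name} is reachable from outside the cluster ({svc_type})")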

Today’s announcement is timely given the increasingly rapid pursuit of AI deployment, as well as growing concern with the security of these models and the data used to train them. A recent Cloud Security Alliance survey concluded that over half of organizations, 55%, are planning to implement generative AI solutions this year. Sysdig also found that, since December, the deployment of OpenAI packages has nearly tripled. Of the generative AI packages currently deployed, OpenAI makes up 28%, followed by Hugging Face’s Transformers at 19%, Natural Language Toolkit (NLTK) at 18%, TensorFlow at 11%, and Anthropic at less than 1%.

The introduction of AI Workload Security also aligns with forthcoming guidelines and increasing pressure to audit and regulate AI, as proposed in the Biden Administration’s October 2023 executive order and the subsequent recommendations from the National Telecommunications and Information Administration (NTIA) in March 2024. By highlighting public exposure, exploitable vulnerabilities, and runtime events, Sysdig AI Workload Security helps organizations across industries fix issues fast ahead of imminent AI regulation.

“Without adequate runtime insights, AI workloads expose organizations to undue risk. Threat actors can exploit vulnerabilities in running packages to access sensitive training data or modify AI requests and responses,” Anderson said. “Organizations must establish enhanced security controls and runtime detections tailored to these unique challenges, and Sysdig helps customers address these ethical concerns and blind spots so that they can reap all of the benefits of efficiency and speed that generative AI offers.”

Source: BusinessWire
