
Protect AI Named to 2023 Fortune Cyber 60 List

Startup led by former AWS and Oracle AI executives helps organizations secure their ML models and AI applications end to end

Protect AI, the artificial intelligence (AI) and machine learning (ML) security company, announced it has been named to the inaugural edition of the Fortune Cyber 60 List, which recognizes the top 60 cybersecurity companies in the world.

Protect AI is profiled in the Fortune Cyber 60 report and was selected for helping organizations see, know, and manage security risk in ML systems and AI applications end to end, so they can defend against AI-specific security vulnerabilities, data breaches, and emerging threats.

“Being named to the Fortune Cyber 60 List in our first year of operation, alongside some of the largest security vendors in the industry, is a tremendous honor and speaks to the importance of the problem we are addressing in securing AI,” said Ian Swanson, co-founder and CEO of Protect AI. “AI/ML is being adopted by organizations in virtually every industry, and at breakneck speed, but without appropriate security measures and controls. Protect AI is helping customers understand the unique threat posed by AI and ML, and providing the visibility, testing, and remediation capabilities to address those risks.”

Protect AI Products
To help customers build safer AI and manage security risks in their ML environments, Protect AI offers the following products, services, and resources:

Protect AI Radar, the industry’s most advanced offering for securing and managing AI risk, provides complete visibility into the ML attack surface and identifies risks and threats in ML systems and AI applications. With integrated security checks, Radar creates an ML Bill of Materials (MLBOM) that gives visibility and auditability into the components of an ML pipeline. Radar also includes an advanced policy engine to enable efficient risk management across regulatory, technical, operational, and reputational domains.
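To make the MLBOM idea concrete, the snippet below is a minimal Python sketch of what such a bill of materials might record. The field names and structure are illustrative assumptions, not Radar’s actual schema.

    # Hypothetical sketch of an ML Bill of Materials (MLBOM); field names are
    # illustrative assumptions, not Protect AI Radar's actual schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Component:
        name: str       # e.g. a model, dataset, or library
        kind: str       # "model", "dataset", "library", ...
        version: str    # version string or content hash for provenance
        source: str     # where the component was obtained
        sha256: str     # integrity checksum of the artifact

    @dataclass
    class MLBOM:
        pipeline: str
        components: List[Component] = field(default_factory=list)

    bom = MLBOM(pipeline="fraud-detection")
    bom.components.append(Component(
        name="resnet50-finetuned", kind="model", version="1.3.0",
        source="s3://models/resnet50.pt", sha256="ab12..."))

Recording components this way means an auditor can diff two MLBOMs to see exactly which parts of a pipeline changed between releases.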


Protect AI ModelScan is an open source project that scans models to determine whether they contain unsafe code. It is the first model scanning tool to support multiple model formats, including H5, Pickle, and SavedModel. This protects projects that use PyTorch, TensorFlow, Keras, scikit-learn, and XGBoost, with more formats on the way.
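To see the class of risk a model scanner targets: Python’s pickle format can embed code that executes the moment a file is loaded. The self-contained demonstration below is not ModelScan itself; it simply shows why loading an untrusted pickle “model” is dangerous.

    import os
    import pickle

    # pickle calls __reduce__ when serializing; the callable it returns is
    # invoked automatically during pickle.loads(), so a "model" file can run
    # arbitrary commands at load time.
    class EvilModel:
        def __reduce__(self):
            # Harmless demo payload; an attacker could run any command here.
            return (os.system, ("echo payload executed during model load",))

    payload = pickle.dumps(EvilModel())
    pickle.loads(payload)  # prints the message: code ran on deserialization

A scanner can flag files like this by statically inspecting the serialized bytes rather than loading them.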

Protect AI NB Defenseis a free, open source tool that addresses vulnerabilities in Jupyter Notebooks, which are a core component in machine learning experiments. Data scientists use Jupyter Notebooks to develop ML workloads, as well as generate and share documents that contain code, visualizations, and text. Since notebooks are often shared across teams, organizations, and sometimes uploaded to publicly accessible code repositories, they represent an entry point for attackers seeking to exploit an organization’s ML pipeline.
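As a rough illustration of the kind of check a notebook scanner performs: a .ipynb file is JSON, so a scanner can walk its cells and flag strings that look like leaked credentials. The sketch below is a simplified illustration, not NB Defense’s actual detection logic, and its patterns are illustrative assumptions.

    import json
    import re

    # Simplified sketch of a notebook secret check; patterns and logic are
    # illustrative assumptions, not NB Defense's actual detection rules.
    SECRET_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
        re.compile(r"(?i)(api[_-]?key|token|password)\s*=\s*['\"][^'\"]+['\"]"),
    ]

    def scan_notebook(path):
        with open(path) as f:
            nb = json.load(f)
        findings = []
        for index, cell in enumerate(nb.get("cells", [])):
            text = "".join(cell.get("source", []))
            for pattern in SECRET_PATTERNS:
                if pattern.search(text):
                    findings.append((index, pattern.pattern))
        return findings

    # Example usage (assumes a local notebook file named experiment.ipynb):
    # print(scan_notebook("experiment.ipynb"))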

huntr is the world’s first AI/ML bug bounty program, providing a single place for security researchers to submit vulnerabilities found in open source AI tools and frameworks. This helps ensure the security and stability of AI/ML applications, and provides the world with critical intelligence on AI vulnerabilities and how to fix them.

MLSecOps.com is an online community dedicated to advancing the field of Machine Learning Security Operations (MLSecOps). It features original weekly podcasts, learning resources, hybrid events, and a Slack community. By engaging thought leaders and subject matter experts in categories such as ML Supply Chain Vulnerability, Model Provenance, GRC, Trusted AI (Bias, Fairness, and Explainability), and Adversarial ML, MLSecOps.com helps members and visitors build awareness of and proficiency in MLSecOps.

SOURCE: BusinessWire
