Friday, July 25, 2025

Using AI Responsibly: U.S. Leads Efforts to Develop ISO/IEC 42001, Artificial Intelligence Management System Standard


A new international standard provides guidance for organizations of all kinds to use artificial intelligence (AI) systems responsibly: ISO/IEC 42001, Artificial intelligence – Management system, developed by the International Organization for Standardization / International Electrotechnical Commission (ISO/IEC) Joint Technical Committee (JTC) 1, Information technology, Subcommittee (SC) 42, Artificial Intelligence. The U.S. has a leading role in JTC 1, with the American National Standards Institute (ANSI), the U.S. member body to ISO, serving as secretariat.

While AI is gaining traction across all sectors that utilize information technology, it poses risks to organizations that necessitate careful governance mechanisms. ISO/IEC 42001:2023 addresses the complexities of the technology, providing a framework for managing AI's risks and opportunities while supporting its responsible use. The standard specifies requirements and provides guidance for establishing, implementing, maintaining, and continually improving an AI management system within the context of any organization—regardless of size, type, and nature—that provides or uses products or services that utilize AI systems.


The standard centers on a "Plan-Do-Check-Act" approach of establishing, implementing, maintaining, and continually improving an AI management system; this cycle supports improved quality, security, traceability, transparency, and reliability of AI applications. Ultimately, the goal of the standard is to help organizations achieve the maximum benefits from AI while reassuring stakeholders that systems incorporating AI are being developed, governed, and used responsibly.
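To make the Plan-Do-Check-Act cycle concrete, here is a minimal illustrative sketch of how an organization might track one iteration in code. The class and field names below are hypothetical examples, not terms defined by ISO/IEC 42001 itself:

```python
from dataclasses import dataclass, field

# Illustrative only: the names below are hypothetical and are not
# defined by ISO/IEC 42001; they simply mirror the four PDCA stages.

@dataclass
class AIManagementCycle:
    """Records one Plan-Do-Check-Act iteration for an AI management system."""
    objectives: list = field(default_factory=list)  # Plan: risk/opportunity objectives
    controls: list = field(default_factory=list)    # Do: controls put in place
    findings: list = field(default_factory=list)    # Check: review/audit findings
    actions: list = field(default_factory=list)     # Act: corrective actions

    def plan(self, objective: str) -> None:
        self.objectives.append(objective)

    def do(self, control: str) -> None:
        self.controls.append(control)

    def check(self, finding: str) -> None:
        self.findings.append(finding)

    def act(self, action: str) -> None:
        self.actions.append(action)


# Example iteration: improving traceability of AI training data.
cycle = AIManagementCycle()
cycle.plan("Ensure traceability of model training data")
cycle.do("Enable data-lineage logging for all training pipelines")
cycle.check("Two pipelines found without lineage records")
cycle.act("Extend lineage logging to the remaining pipelines")
```

In practice, each "Act" outcome feeds the next cycle's "Plan" stage, which is how the standard's requirement for continual improvement is realized.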

“ISO/IEC 42001:2023 is a first-of-its-kind AI international standard that will enable certification, increase consumer confidence in AI systems, and enable broad responsible adoption of AI,” said Wael William Diab, chair of SC 42. “This novel approach takes the proven management systems approach and adapts it to AI. The standard is broadly applicable across a wide variety of application domains and will help unlock the societal benefits of AI while simultaneously addressing ethical and trustworthy concerns.”

The standard was developed by stakeholders representing diverse interests, including representatives of the public and private sectors, regulators, technology experts, researchers, academia, and more. SC 42 comprises 63 countries, more than one third of them developing nations. Many members of SC 42 have stakeholders that will be predominantly users of AI—contributing to a balanced standards development environment that included both users and developers of the technology.

SOURCE: PRNewswire
