In support of efforts to create safe and trustworthy artificial intelligence (AI), NIST is establishing the U.S. Artificial Intelligence Safety Institute (USAISI). To support this Institute, NIST has created the U.S. AI Safety Institute Consortium. The Consortium brings together more than 200 organizations to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world. This will help ready the U.S. to address the capabilities of the next generation of AI models or systems, from frontier models to new applications and approaches, with appropriate risk management strategies.
Building upon its long track record of working with the private and public sectors and its history of delivering reliable, practical measurement- and standards-based solutions, NIST works through the AISIC with research collaborators who can support this vital undertaking. Specifically, the Consortium will:
- Establish a knowledge and data sharing space for AI stakeholders
- Engage in collaborative and interdisciplinary research and development through the performance of the Research Plan
- Prioritize research and evaluation requirements and approaches that may allow for a more complete and effective understanding of AI’s impacts on society and the U.S. economy
- Identify and recommend approaches to facilitate the cooperative development and transfer of technology and data between and among Consortium Members
- Identify mechanisms to streamline input from federal agencies on topics within their direct purviews
- Enable assessment and evaluation of test systems and prototypes to inform future AI measurement efforts
To create a lasting approach for continued joint research and development, the work of the Consortium will be open and transparent, providing a hub where interested parties can work together to build and mature a measurement science for trustworthy and responsible AI.
Consortium members' contributions will support one of the following areas:
- Develop new guidelines, tools, methods, protocols and best practices to facilitate the evolution of industry standards for developing or deploying AI in safe, secure, and trustworthy ways
- Develop guidance and benchmarks for identifying and evaluating AI capabilities, with a focus on capabilities that could potentially cause harm
- Develop approaches to incorporate secure-development practices for generative AI, including special considerations for dual-use foundation models, such as guidance related to assessing and managing the safety, security, and trustworthiness of models and to privacy-preserving machine learning
- Develop and ensure the availability of testing environments
- Develop guidance, methods, skills and practices for successful red-teaming and privacy-preserving machine learning
- Develop guidance and tools for authenticating digital content
- Develop guidance and criteria for AI workforce skills, including risk identification and management, test, evaluation, validation, and verification (TEVV), and domain-specific expertise
- Explore the complexities at the intersection of society and technology, including the science of how humans make sense of and engage with AI in different contexts
- Develop guidance for understanding and managing the interdependencies between and among AI actors along the lifecycle
NIST received more than 600 Letters of Interest from organizations across the AI stakeholder community and the United States. As of February 8, 2024, the Consortium includes more than 200 member companies and organizations.