HITRUST, the information risk management, standards, and certification body, has published a comprehensive strategy for the secure and sustainable use of AI. The strategy encompasses a series of elements critical to delivering trustworthy AI. The resulting HITRUST AI Assurance Program makes risk management a foundational consideration in the newly updated version 11.2 of the HITRUST CSF. HITRUST also announced forthcoming AI risk management guidance for AI systems, the use of inheritance to support shared responsibility for AI, and an approach to industry collaboration, all as part of the AI Assurance Program.
AI, and more specifically Generative AI, made popular by OpenAI’s ChatGPT, is unleashing a wave of technological innovation with transformative economic and societal potential. Goldman Sachs Research predicts that Generative AI could raise global GDP by 7% over the next 10 years. Organizations are eager to transform their operations and boost productivity across business functions, from customer relationship management (CRM) to software development, to unlock new layers of value through a growing set of enterprise AI use cases. However, any disruptive new technology also brings new risks, and Generative AI is no different.
AI foundation models now available from cloud service providers and other leading organizations allow enterprises to scale AI across industry use cases and specific business needs. But the opaque nature of these deep neural networks introduces data privacy and security challenges that must be met with transparency and accountability. It is critical for organizations offering AI solutions to understand their responsibilities and to ensure that they have reliable assurances from their service and solution providers.
The HITRUST AI Assurance Program builds on a common, reliable, and proven approach to security assurance, one that allows organizations implementing AI models and services to understand the associated risks and to reliably demonstrate their adherence to AI risk management principles with the same transparency, consistency, accuracy, and quality available through all HITRUST Assurance reports.
“Risk management, security, and assurance for AI systems require that the organizations contributing to a system understand the risks across it and agree on how they will secure it together,” said Robert Booker, Chief Strategy Officer, HITRUST. “Trustworthy AI requires an understanding of how controls are implemented and shared by all parties, along with a practical, scalable, recognized, and proven approach for an AI system to inherit the right controls from its service providers. We are building AI assurances on a proven system that will provide the needed scalability and inspire confidence from all relying parties, including regulators, that care about a trustworthy foundation for AI implementations.”
Organizations can deploy Generative AI large language models (LLMs) through a variety of methods, including self-hosting LLMs on-premises or delivering or accessing an LLM through a service provider. Each method comes with differences in how LLMs can be built, trained, and tuned, as well as different shared responsibilities for managing security and data privacy risks, as the sketch below illustrates.
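As a purely illustrative sketch (not part of the HITRUST announcement), the following Python snippet contrasts the two access patterns. The endpoint URLs, request and response shapes, and environment variable are hypothetical assumptions; real provider APIs differ. The shared-responsibility split, however, is the same: with self-hosting, the deployer owns every control, while a provider-hosted endpoint lets infrastructure and model controls be inherited from the provider's assurances.

```python
import os
import requests

# Hypothetical endpoints for illustration only; real APIs differ.
SELF_HOSTED_URL = "http://llm.internal.example:8000/v1/completions"  # on-premises: you manage the model, patching, and data controls
PROVIDER_URL = "https://api.provider.example/v1/completions"         # provider-hosted: the provider manages the model and infrastructure

def complete(url: str, prompt: str, api_key: str | None = None) -> str:
    """Send a completion request to an LLM endpoint (hypothetical request/response shape)."""
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    resp = requests.post(
        url,
        json={"prompt": prompt, "max_tokens": 128},
        headers=headers,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response field

# Self-hosted: all security and privacy controls are the deployer's responsibility.
print(complete(SELF_HOSTED_URL, "Summarize our incident-response policy."))

# Provider-hosted: infrastructure and model controls can be inherited from the
# provider's assurances (e.g., an existing certification); data handling remains shared.
print(complete(PROVIDER_URL, "Summarize our incident-response policy.",
               api_key=os.environ.get("PROVIDER_API_KEY")))
```

In the self-hosted case, every control in the stack falls to the deploying organization; in the provider-hosted case, responsibility for the model and infrastructure can be documented and inherited, which is the pattern the HITRUST inheritance model formalizes.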
Cloud service providers are building AI on their cloud foundations and already help thousands of organizations achieve HITRUST certification more quickly through the hundreds of Shared Responsibility and Inheritance control requests they receive daily. This gives their customers the benefit of importing and inheriting the strong controls and assurances provided by those providers’ existing HITRUST certifications. Adding AI to the HITRUST CSF extends this proven approach, helping organizations also provide assurances around their use of and reliance on AI.
Microsoft Azure OpenAI Service supports HITRUST’s maintenance of the CSF and enables accelerated mapping of the CSF to new regulations, data protection laws, and standards. This in turn supports the Microsoft Global Healthcare Compliance Scale Program, enabling solution providers to streamline compliance for accelerated solution adoption and time-to-value.
AI systems comprise the system using or consuming AI technologies, the organizations providing the AI service, and, in many cases, additional data providers supporting the machine learning system and large language model underpinning it. Understanding the context of the overall system on which AI is delivered and consumed is critical, as is partnering with high-quality AI service providers that supply clear, objective, and understandable documentation of their AI risks and how those risks, including security, are managed in their services. When a provider is committed to an approach that supports inheritance and shared responsibility, users of AI services can leverage that provider’s capabilities as part of the overarching risk management and security program accompanying their AI deployments, increasing the efficiency and trustworthiness of their systems.
“AI has tremendous societal potential, and the cyber risks that security leaders manage every day extend to AI. Objective security assurance approaches such as the HITRUST CSF and HITRUST certification reports assess the security foundation that should underpin AI implementations,” says Omar Khawaja, Field CISO of Databricks. “Databricks is excited to work with HITRUST to build on this important foundation and to significantly reduce the complexity of risk management and security for AI implementations across all industries.”
SOURCE: PRNewswire