Wednesday, January 21, 2026

HackerOne Establishes Industry Standard for AI Testing with Good Faith AI Research Safe Harbor

HackerOne, the global leader in continuous threat exposure management, announced the Good Faith AI Research Safe Harbor, the first framework of its kind to offer specific legal protection and a clear right to conduct AI testing. As AI is increasingly embedded in mission-critical products, unclear legal risk has hindered responsible AI research. The new safe harbor framework removes this obstacle, enabling AI-related vulnerabilities in those products to be identified and patched far more quickly.

Building on HackerOne’s widely adopted Gold Standard Safe Harbor introduced in 2022, the Good Faith AI Research Safe Harbor extends standardized protections from traditional software to include the unique testing behaviors associated with AI systems. Together, the two frameworks offer a comprehensive blueprint for how organizations should authorize, support, and safeguard research into both conventional and AI-powered environments.

AI testing often includes activities that do not fit within conventional vulnerability disclosure constructs, creating legal ambiguity that can slow discovery and elevate risk. The Good Faith AI Research Safe Harbor addresses this challenge by defining what constitutes Good Faith AI Research and explicitly authorizing responsible testing of AI technologies.

“AI testing breaks down when expectations are unclear,” said Ilona Cohen, Chief Legal and Policy Officer at HackerOne. “Organizations want their AI systems tested, but researchers need confidence that doing the right thing won’t put them at risk. The Good Faith AI Research Safe Harbor provides clear, standardized authorization for AI research, removing uncertainty on both sides.”

Organizations that adopt the Good Faith AI Research Safe Harbor agree to officially recognize ethical AI research as authorized activity. Key commitments include refraining from legal action against researchers acting in good faith, offering limited exemptions from restrictive terms of service, and providing support should third parties pursue claims related to authorized research. The safe harbor applies to AI systems owned or controlled by the adopting entity and is structured to enable collaborative, responsible disclosure.

“AI security is ultimately about trust,” said Kara Sprague, CEO of HackerOne. “If AI systems aren’t tested under real-world conditions, trust erodes quickly. By extending safe harbor protections to AI research, HackerOne is defining how responsible testing should work in the AI era. This is how organizations find problems earlier, work productively with researchers, and deploy AI with confidence.”

The Good Faith AI Research Safe Harbor is available to HackerOne customers as a standalone adoption option and can be used alongside the Gold Standard Safe Harbor. Organizations that adopt the framework signal to the research community that AI testing is welcomed, authorized, and protected, driving higher-quality engagement and stronger security outcomes.

This new initiative reinforces HackerOne’s leadership in shaping the intersection of security, trust, and authorization in the AI era, and establishes clear expectations for organizations and researchers navigating the future of AI systems.
