Sunday, December 22, 2024

Bugcrowd Launches AI Bias Assessment Offering for LLM Applications

Related stories

Doc.com Expands AI developments to Revolutionize Healthcare Access

Doc.com, a pioneering healthcare technology company, proudly announces the development...

Amesite Announces AI-Powered NurseMagic™ Growth in Marketing Reach to Key Markets

Amesite Inc., creator of the AI-powered NurseMagic™ app, announces...

Quantiphi Joins AWS Generative AI Partner Innovation Alliance

Quantiphi, an AI-first digital engineering company, has been named...
spot_imgspot_img

Bugcrowd, the leader in crowdsourced security, announced the availability of AI Bias Assessments as part of its AI Safety and Security Solutions portfolio on the Bugcrowd Platform. AI Bias Assessment taps the power of the crowd to help enterprises and government agencies adopt Large Language Model (LLM) applications safely, efficiently,  and confidently.

LLM applications run on algorithmic models that are trained on huge sets of data. Even when that training data is curated by humans, which it often is not, the application can easily reflect “data bias” caused by stereotypes, prejudices, exclusionary language, and a range of other possible biases from the training data. Such biases can lead the model to behave in potentially unintended and harmful ways, adding considerable risk and unpredictability to LLM adoption.

Some examples of potential flaws include Representation Bias (disproportionate representation or omission of certain groups in the training data), Pre-Existing Bias (biases stemming from historical or societal prejudices present in the training data), and Algorithmic Processing Bias (biases introduced through the processing and interpretation of data by AI algorithms).

Also Read: Cado Security Joins Wiz Integrations (WIN) Platform to Enable Cloud Forensics and Incident Response

The public sector is urgently affected by this growing risk. As of March 2024, the US Government mandated its agencies to conform with AI safety guidelines – including the detection of data bias. That mandate extends to Federal contractors later in 2024.

This problem requires a new approach to security because traditional security scanners and penetration tests are unable to detect such bias. Bugcrowd AI Bias Assessments are private, reward-for-results engagements on the Bugcrowd Platform that activate trusted, third-party security researchers (aka a “crowd”) to identify and prioritize data bias flaws in LLM applications. Participants are paid based on the successful demonstration of impact, with more impactful findings earning higher payments.

The Bugcrowd Platform’s industry-first, AI-driven approach to researcher sourcing and activation, known as CrowdMatch™, allows it to build and optimize crowds with virtually any skill set, to meet virtually any risk reduction goal, including security testing and beyond.

Bugcrowd’s work with customers like the United States Department of Defense (DoD) Chief Digital and Artificial Intelligence Office (CDAO), along with our partner ConductorAI, has become a crucial proving ground for AI detection by unleashing the crowd for identifying data bias flaws,” said Dave Gerry, CEO of Bugcrowd. “We’re eager to share the lessons we’ve learned with other customers facing similar challenges.”

“ConductorAI’s partnership with Bugcrowd for the AI Bias Assessment program has been highly successful. By leveraging ConductorAI’s AI audit expertise and Bugcrowd’s crowdsourced security platform, we led the first public adversarial testing of LLM systems for bias on behalf of the DoD. This collaboration has set a solid foundation for future bias bounties, showcasing our steadfast commitment to ethical AI,” said Zach Long, Founder, ConductorAI.

SOURCE: PRNewswire

Subscribe

- Never miss a story with notifications


    Latest stories

    spot_img