Friday, November 22, 2024

LatticeFlow AI joins the US AI Safety Institute Consortium


LatticeFlow AI, the leading platform enabling artificial intelligence (AI) teams to build high-performance, secure, and reliable AI solutions, is proud to announce that it has joined the US AI Safety Institute Consortium (AISIC). LatticeFlow AI researchers will support Working Group #3, which focuses on capability assessment. In collaboration with the National Institute of Standards and Technology (NIST) and other consortium members, LatticeFlow AI will support the development of testing methods, benchmarks, and environments that help organizations operationalize the practices outlined in NIST's AI Risk Management Framework (RMF).

Dave Henry, vice president of business development at LatticeFlow AI, adds: “AI security programs are interdisciplinary in nature and require a broad range of technical and management skills to execute. The Consortium brings together diverse experts to create sustainable and innovative practices that foster trustworthy AI. We look forward to contributing our knowledge of AI model security and collaborating on new approaches for scalable assessments.”

Leaders Demand Trustworthy AI

Despite the impressive accuracy that AI models demonstrate in pilots and proof-of-concept projects, building AI solutions that operate reliably on real-world data remains an immense challenge. This concerns both the technical teams who design and deliver AI solutions and the management teams who must quantify risks and approve the use of AI solutions in critical operations. As a result, 85% of models never make it into production, and of those that do, 91% degrade over time.

“Unfortunately, high-value AI deployments are delayed due to developer delays, a lack of good data governance, and a failure to quantify the possible risks associated with the use of AI models,” says Randolph Kahn, Esq., president of Kahn Consulting Inc. “Consortia such as AISIC can produce detailed guidelines and best practices, and when combined with LatticeFlow AI technology, this can lead to a significant reduction in the time, effort, and costs needed to conduct in-depth risk assessments and harness the value of AI systems.”

The US AI Safety Institute Consortium

AISIC was created by the National Institute of Standards and Technology (NIST) of the U.S. Department of Commerce to contribute to the priority actions outlined in U.S. President Joe Biden’s executive order on the safe, secure, and trustworthy development and use of AI. This includes developing guidelines for expert teams, technical assessment standards, and risk management practices, among other key elements of developing trustworthy AI and ensuring its responsible use.

Also Read: Prompt Security launches industry’s first open-source fuzzer tool for GenAI Application Vulnerability Assessment

LatticeFlow AI Contributions to AI Safety and Reliability

LatticeFlow AI’s association with AISIC follows a series of contributions made by the LatticeFlow AI team toward the goals set forth by AISIC and President Joe Biden’s executive order. Since 2020, LatticeFlow AI has been an invited technical contributor to key AI security initiatives within the International Organization for Standardization, where it has actively contributed to the working group focused on the development of standards for trustworthy AI. Earlier this year, LatticeFlow AI hosted renowned AI leaders at the World Economic Forum’s AI House, bringing together key AI figures including Gary Marcus (NYU), Apostol Vassilev (NIST), Kai Zenner (European Parliament), Matthias Bossardt (KPMG), and Thomas Stauner (BMW), among other industry experts, to delve into the latest developments in AI and discuss responsible AI adoption.

From standards to practice: first AI assessment for a major Swiss bank

Beyond developing AI guidelines and standards, LatticeFlow AI this year announced the first AI technical assessment for Migros Bank, a large Swiss bank, demonstrating how these AI standards are used in practice to mitigate risks and ensure regulatory compliance for business-critical AI solutions. The results of this assessment, along with a concrete plan for implementing AI governance and technical assessments of AI in businesses and governments, were presented at a dedicated event organized at the ETH AI Center.

LatticeFlow AI Engagement with NIST and the US AI Safety Institute Consortium

With its contributions to AISIC, LatticeFlow AI will continue its commitment to helping U.S. government agencies such as the U.S. Army ensure the security and reliability of critical AI systems. Last year, White House officials announced that LatticeFlow AI had won first place in the red teaming category of the AI Privacy Prize Challenge. Subsequently, the company announced a strategic expansion into the United States and a three-year strategic engagement with the U.S. Army to build next-generation resilient AI solutions for critical defense use cases.

“Joining AISIC will allow us to accelerate and expand our impact by aligning our efforts to ensure AI trust and safety with global AI leaders, businesses, and governments,” says Petar Tsankov, co-founder and CEO of LatticeFlow AI.

Source: BusinessWire
