Tuesday, December 3, 2024

HYAS Infosec Groundbreaking Research on AI-Generated Malware Contributes to the AI Act, Other AI Policies and Regulations


Provides AI Regulation Initiatives with Deep Insight into the Potential Harms of Fully Autonomous and Intelligent Malware and Helps Advance Cybersecurity Protections Against AI-Driven Threats

HYAS Infosec, an adversary infrastructure platform provider that offers unparalleled visibility into and protection against all kinds of malware and attacks, is pleased to share that research from HYAS Labs, the research arm of HYAS, is being cited and utilized by contributors to and framers of the European Union's AI Act.

The AI Act is widely viewed as a cornerstone initiative that is helping shape the trajectory of AI governance, with the United States’ policies and considerations soon to follow.

AI Act researchers and framers assert that the Act reflects a specific conception of AI systems, viewing them as non-autonomous statistical software whose potential harms stem primarily from their datasets. The researchers view the concept of "intended purpose," which draws on product safety principles, as a fitting paradigm, and one that has significantly influenced the Act's initial provisions and regulatory approach.

However, these researchers also see a substantial gap in the AI Act concerning AI systems devoid of an intended purpose, a category that encompasses General-Purpose AI Systems (GPAIS) and foundation models.

HYAS' work on AI-generated malware — specifically BlackMamba, as well as its more sophisticated and fully autonomous cousin, EyeSpy — is helping advance the understanding of AI systems devoid of an intended purpose, including GPAIS and the unique challenges they pose to cybersecurity.


HYAS research is proving important both for the development of proposed policies and for addressing the real-world challenges posed by fully autonomous and intelligent malware, a rising dilemma that cannot be solved by policy alone.

HYAS is providing researchers with tangible examples of GPAIS gone rogue. BlackMamba, the proof of concept cited in the research paper "General Purpose AI systems in the AI Act: trying to fit a square peg into a round hole," by Claire Boine and David Rolnick, exploited a large language model to synthesize polymorphic keylogger functionality on the fly and dynamically modify its benign code at runtime — all without any command-and-control infrastructure to deliver or verify the malicious keylogger functionality.

EyeSpy, the more advanced (and more dangerous) proof of concept from HYAS Labs, is a fully autonomous AI-synthesized malware that uses artificial intelligence to make informed decisions to conduct cyberattacks and continuously morph to avoid detection. The challenges posed by an entity such as EyeSpy — capable of autonomously assessing its environment, selecting its target and tactics of choice, strategizing, and self-correcting until successful, all while dynamically evading detection — were highlighted at the recent Cyber Security Expo 2023 in presentations such as "The Red Queen's Gambit: Cybersecurity Challenges in the Age of AI."

In response to the nuanced challenges posed by GPAIS, the EU Parliament has proactively proposed provisions within the AI Act to regulate these complex models. The significance of these proposed measures cannot be overstated and will help to further refine the AI Act and sustain its continued usefulness in the dynamic landscape of AI technologies.

HYAS CEO David Ratner said: "The industry as a whole must prepare for a new generation of threats. Cybersecurity and cyber defense must have the appropriate visibility into the digital exhaust and meta information thrown off by fully autonomous and dynamic malware to ensure operational resiliency and business continuity."

SOURCE: BusinessWire
