Arm Holdings and Meta Platforms have deepened their strategic collaboration to deliver efficient artificial intelligence (AI) across the entire computing continuum, from milliwatt-scale consumer devices to megawatt-scale data centers. The partnership is set to bring more capable and energy-efficient AI experiences to more than 3 billion people worldwide.
Shared Vision for Scalable AI
The new partnership builds on years of collaboration between Meta and Arm, combining Arm's power-efficient computing with Meta's AI-driven products and infrastructure. Together, the companies aim to improve AI performance and scalability across platforms and workloads.
“From the moments that happen on our platforms to the hardware we design, AI is changing how humans connect and build. Collaborating with Arm allows us to drive that innovation efficiently to the over 3 billion people using Meta’s technologies and apps,” said Santosh Janardhan, Head of Infrastructure at Meta.
Rene Haas, CEO of Arm, emphasized, “AI’s next era will be defined by delivering efficiency at scale. Partnering with Meta, we’re uniting Arm’s performance-per-watt leadership with Meta’s AI innovation to bring smarter, more efficient intelligence everywhere from milliwatts to megawatts.”
Optimizing AI Infrastructure
Meta’s AI-based ranking and recommendation products, which power discovery and personalization across Facebook and Instagram, will run on Arm’s Neoverse-based data center platforms. These platforms deliver higher performance and lower power consumption than legacy x86 systems, giving Meta performance-per-watt gains along with scalability at hyperscale.
The partnership also includes tuning Meta’s AI infrastructure software stack, including compilers, libraries, and major AI frameworks, for Arm architectures. Joint tuning of open-source components such as Facebook GEneral Matrix Multiplication (FBGEMM) and PyTorch using Arm’s vector extensions and performance libraries has yielded measurable gains in inference efficiency and throughput. These optimizations are being contributed back to the open-source community to benefit the global AI ecosystem.
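To make the framework-tuning idea concrete, here is a minimal sketch of architecture-aware backend selection, the kind of dispatch such optimizations rely on. The helper function `pick_quantized_engine` is hypothetical (not part of the announcement); the backend names follow PyTorch's existing convention, where "qnnpack" selects Arm-optimized quantized kernels built on NEON vector extensions and "fbgemm" selects FBGEMM's x86 path.

```python
import platform
from typing import Optional

# Hypothetical helper: choose a quantized-inference backend name based on
# CPU architecture. The backend names mirror PyTorch's convention
# ("qnnpack" for Arm NEON kernels, "fbgemm" for x86 AVX kernels); the
# dispatch logic here is illustrative only.
def pick_quantized_engine(machine: Optional[str] = None) -> str:
    machine = (machine or platform.machine()).lower()
    if machine.startswith(("arm", "aarch64")):
        return "qnnpack"  # Arm-optimized quantized kernels (NEON)
    return "fbgemm"       # FBGEMM's x86 path (AVX2/AVX-512)

print(pick_quantized_engine("aarch64"))  # -> qnnpack
```

In a real PyTorch deployment, the chosen name would be assigned to `torch.backends.quantized.engine` before running quantized inference, so the same model code picks up architecture-appropriate kernels on Arm and x86 hosts alike.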
Evolution of AI Software from Edge to Cloud
The collaboration also extends to optimizing AI software across the PyTorch machine-learning framework, the ExecuTorch edge-inference runtime, and the vLLM data center inference engine. One result is the optimization of ExecuTorch with Arm KleidiAI, improving efficiency on billions of devices. These open-source projects are core to Meta’s AI vision, supporting applications ranging from recommendations to conversational intelligence. Both companies plan to keep improving them, helping millions of developers worldwide run AI efficiently on Arm platforms.
Commitment to Open-Source Innovation
Arm and Meta remain committed to advancing open-source AI technologies. Their joint work on optimizing AI software and infrastructure is intended to spur innovation and collaboration across the global developer community, making AI advances accessible and useful to everyone.