Friday, November 22, 2024

Google Cloud and Hugging Face Announce Strategic Partnership to Accelerate Generative AI and ML Development


Google Cloud and Hugging Face announced a new strategic partnership that will allow developers to use Google Cloud's infrastructure for all Hugging Face services, and will enable training and serving of Hugging Face models on Google Cloud.

The partnership advances Hugging Face’s mission to democratize AI and furthers Google Cloud’s support for open source AI ecosystem development. With this partnership, Google Cloud becomes a strategic cloud partner for Hugging Face, and a preferred destination for Hugging Face training and inference workloads. Developers will be able to easily utilize Google Cloud’s AI-optimized infrastructure including compute, tensor processing units (TPUs), and graphics processing units (GPUs) to train and serve open models and build new generative AI applications.

Google Cloud and Hugging Face will partner closely to help developers train and serve large AI models more quickly and cost-effectively on Google Cloud, including:

  • Giving developers a way to train, tune, and serve Hugging Face models with Vertex AI in just a few clicks from the Hugging Face platform, so they can easily utilize Google Cloud’s purpose-built, end-to-end MLOps services to build new gen AI applications.
  • Supporting Google Kubernetes Engine (GKE) deployments, so developers on Hugging Face can also train, tune, and serve their workloads with “do it yourself” infrastructure and scale models using Hugging Face-specific Deep Learning Containers on GKE.
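The "do it yourself" GKE path described above would, in practice, amount to a standard Kubernetes deployment of a Hugging Face serving container. The manifest below is an illustrative sketch only: the container image URI, model ID, and resource settings are placeholders chosen for the example, not names confirmed by the announcement.

```yaml
# Illustrative sketch: serving an open model on GKE with a
# Hugging Face text-generation serving container.
# The image URI and model ID are placeholders, not confirmed product names.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hf-model-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hf-model-server
  template:
    metadata:
      labels:
        app: hf-model-server
    spec:
      containers:
        - name: server
          image: REGISTRY/huggingface-text-generation-inference:latest  # placeholder image
          env:
            - name: MODEL_ID
              value: "mistralai/Mistral-7B-Instruct-v0.1"  # example open model
          ports:
            - containerPort: 8080
          resources:
            limits:
              nvidia.com/gpu: 1  # schedule onto a GPU (or TPU) node pool
```

Scaling the model under load would then be a matter of adjusting `replicas` or attaching a HorizontalPodAutoscaler, which is the kind of self-managed control the GKE option is aimed at.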


  • Providing more open source developers with access to Cloud TPU v5e, which offers up to 2.5x more performance per dollar and up to 1.7x lower latency for inference compared to previous versions.
  • Adding future support for A3 VMs, powered by NVIDIA’s H100 Tensor Core GPUs, which offer 3x faster training and 10x greater networking bandwidth compared to the prior generation.
  • Utilizing Google Cloud Marketplace to provide simple management and billing for the Hugging Face managed platform, including Inference Endpoints, Spaces, AutoTrain, and others.

“Google Cloud and Hugging Face share a vision for making generative AI more accessible and impactful for developers,” said Thomas Kurian, CEO at Google Cloud. “This partnership ensures that developers on Hugging Face will have access to Google Cloud’s purpose-built AI platform, Vertex AI, along with our secure infrastructure, which can accelerate the next generation of AI services and applications.”

“From the original Transformers paper to T5 and the Vision Transformer, Google has been at the forefront of AI progress and the open science movement,” said Clement Delangue, CEO of Hugging Face. “With this new partnership, we will make it easy for Hugging Face users and Google Cloud customers to leverage the latest open models together with leading optimized AI infrastructure and tools from Google Cloud, including Vertex AI and TPUs, to meaningfully advance developers’ ability to build their own AI models.”

SOURCE: PRNewswire
