
Cloudflare Powers Hyper-Local AI Inference with NVIDIA Accelerated Computing


Businesses can now access Cloudflare’s global data center network for affordable and secure AI inference to power leading-edge applications anywhere

Cloudflare, Inc., the leading connectivity cloud company, announced that its global network will deploy NVIDIA GPUs at the edge, combined with NVIDIA Ethernet switches, putting AI inference compute power close to users around the globe. The deployment will also feature NVIDIA's full-stack inference software, including NVIDIA TensorRT-LLM and NVIDIA Triton Inference Server, to further accelerate the performance of AI applications, including large language models.
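The announcement does not describe the serving interface in detail, but Triton Inference Server speaks the standard KServe-v2 HTTP protocol, so a client request typically looks like the following sketch. The endpoint URL, model name, and tensor layout below are hypothetical placeholders, not details from the release:

```ts
// Minimal sketch: calling a Triton Inference Server over its standard
// KServe-v2 HTTP API. The URL, model name, and tensor shape are
// hypothetical placeholders, not details from the announcement.
const TRITON_URL = "http://triton.example.internal:8000"; // assumption

interface InferTensor {
  name: string;
  shape: number[];
  datatype: string; // e.g. "FP32", "INT64"
  data: number[];
}

async function tritonInfer(
  model: string,
  inputs: InferTensor[],
): Promise<InferTensor[]> {
  // POST /v2/models/{model}/infer is the KServe-v2 inference route.
  const res = await fetch(`${TRITON_URL}/v2/models/${model}/infer`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ inputs }),
  });
  if (!res.ok) throw new Error(`Triton returned HTTP ${res.status}`);
  const body = (await res.json()) as { outputs: InferTensor[] };
  return body.outputs;
}

// Example call with a single 1x4 FP32 input tensor (names are illustrative).
const outputs = await tritonInfer("text_encoder", [
  { name: "input__0", shape: [1, 4], datatype: "FP32", data: [0.1, 0.2, 0.3, 0.4] },
]);
console.log(outputs[0].data);
```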

Starting today, all Cloudflare customers can access local compute power to deliver AI applications and services on fast, more compliant infrastructure. With this announcement, organizations will be able to run AI workloads at scale, and pay for compute power as needed, for the first time through Cloudflare.

AI inference is how the end user experiences AI, and it is set to dominate AI workloads. Organizations have great demand for GPUs. With data centers in over 300 cities across the world, Cloudflare can deliver fast experiences to users while meeting global compliance regulations.

Cloudflare will make it possible for any organization globally to start deploying AI models — powered by NVIDIA GPUs, networking and inference software — without having to worry about managing, scaling, optimizing, or securing deployments.
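The release does not name a specific developer surface, but Cloudflare's Workers AI product, announced in the same period, illustrates what "no deployments to manage" looks like in practice: inference is invoked through a binding inside a Worker. A minimal sketch, assuming an AI binding configured in wrangler.toml and a model identifier from Cloudflare's published catalog:

```ts
// Minimal sketch of serverless inference from a Cloudflare Worker via the
// Workers AI binding. Assumes an `AI` binding configured in wrangler.toml;
// the model identifier is illustrative, taken from Cloudflare's catalog.
// Types such as `Ai` come from the @cloudflare/workers-types package.
export interface Env {
  AI: Ai;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { prompt } = await request.json<{ prompt: string }>();
    // The request is served by GPUs in a nearby Cloudflare data center;
    // there is no GPU fleet for the caller to provision or scale.
    const result = await env.AI.run("@cf/meta/llama-2-7b-chat-int8", { prompt });
    return Response.json(result);
  },
};
```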


“AI inference on a network is going to be the sweet spot for many businesses: private data stays close to wherever users physically are, while still being extremely cost-effective to run because it’s nearby,” said Matthew Prince, CEO and co-founder, Cloudflare. “With NVIDIA’s state-of-the-art GPU technology on our global network, we’re making AI inference — that was previously out of reach for many customers — accessible and affordable globally.”

“NVIDIA’s inference platform is critical to powering the next wave of generative AI applications,” said Ian Buck, Vice President of Hyperscale and HPC at NVIDIA. “With NVIDIA GPUs and NVIDIA AI software available on Cloudflare, businesses will be able to create responsive new customer experiences and drive innovation across every industry.”

Cloudflare is making generative AI inference accessible globally, without up-front costs. By deploying NVIDIA GPUs across its global edge network, Cloudflare now provides:

  • Low-latency generative AI experiences for every end user, with NVIDIA GPUs available for inference tasks in over 100 cities by the end of 2023, and nearly everywhere Cloudflare’s network extends by the end of 2024.
  • Access to compute power near wherever customer data resides, to help customers anticipate potential compliance and regulatory requirements that are likely to arise.
  • Affordable, pay-as-you-go compute power at scale, to ensure every business can access the latest AI innovation without investing massive funds upfront to reserve GPUs that may go unused (a request-level sketch follows below).
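That pay-per-request model also extends beyond Workers: the same models can be invoked over Cloudflare's public REST API. A hedged sketch, where the account ID and API token are placeholders and /ai/run/ is Cloudflare's documented Workers AI route:

```ts
// Sketch of pay-per-request inference via Cloudflare's public REST API.
// ACCOUNT_ID and API_TOKEN are placeholders you would supply yourself;
// /ai/run/{model} is the documented Workers AI REST route.
const ACCOUNT_ID = "<your-account-id>";
const API_TOKEN = "<your-api-token>";

async function runModel(model: string, payload: unknown): Promise<unknown> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/ai/run/${model}`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(payload),
    },
  );
  if (!res.ok) throw new Error(`Cloudflare API returned HTTP ${res.status}`);
  return res.json();
}

// Each call is billed as it happens; no GPUs are reserved in advance.
const answer = await runModel("@cf/meta/llama-2-7b-chat-int8", {
  prompt: "Summarize edge inference in one sentence.",
});
console.log(answer);
```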

SOURCE: BusinessWire
