
Snowflake and NVIDIA Power Customized AI Applications for Customers and Partners


Bringing together the industry’s leading AI-powered applications, models, and hardware so customers can deliver enterprise AI across their businesses with ease, efficiency, and trust

Snowflake, the AI Data Cloud company, announced at Snowflake Summit 2024 a new collaboration with NVIDIA that customers and partners can harness to build customized AI data applications in Snowflake, powered by NVIDIA AI.

With this latest collaboration, Snowflake has adopted NVIDIA AI Enterprise software to integrate NeMo Retriever microservices into Snowflake Cortex AI, Snowflake’s fully managed large language model (LLM) and vector search service. This will enable organizations to seamlessly connect custom models to diverse business data and deliver highly accurate responses. In addition, Snowflake Arctic, the most open, enterprise-grade LLM, is now fully supported with NVIDIA TensorRT-LLM software, providing users with highly optimized performance. Arctic is also now available as an NVIDIA NIM inference microservice, allowing more developers to access Arctic’s efficient intelligence.
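The retrieval step that NeMo Retriever performs at enterprise scale, matching a user query against embedded business data before a model answers, can be illustrated in miniature with plain cosine-similarity search. This is a conceptual toy sketch only, not Snowflake's or NVIDIA's API; the bag-of-words embedding below is a stand-in for a real trained embedding model.

```python
import math
from collections import Counter

def embed(text, vocab):
    """Toy bag-of-words embedding; a real system would use a trained model."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, top_k=1):
    """Return the top_k documents most similar to the query."""
    vocab = sorted({w for d in docs + [query] for w in d.lower().split()})
    qv = embed(query, vocab)
    ranked = sorted(docs, key=lambda d: cosine(embed(d, vocab), qv), reverse=True)
    return ranked[:top_k]

docs = [
    "quarterly revenue grew in the retail segment",
    "the data warehouse stores customer orders",
    "employee onboarding checklist for new hires",
]
print(retrieve("which segment grew revenue", docs))
# → ['quarterly revenue grew in the retail segment']
```

In a retrieval-augmented pipeline, the retrieved passages would then be supplied to the LLM as context, which is what lets a general-purpose model give answers grounded in an organization's own data.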

As enterprises look for ways to further unlock the power of AI across their teams, there’s an increasing need to apply data to drive customization. Through Snowflake’s collaboration with NVIDIA, organizations can rapidly create bespoke, use-case-specific AI solutions, enabling businesses across industries to realize the potential of enterprise AI.

“Pairing NVIDIA’s full stack accelerated computing and software with Snowflake’s state-of-the-art AI capabilities in Cortex AI is game-changing,” said Sridhar Ramaswamy, CEO, Snowflake. “Together, we are unlocking a new era of AI where customers from every industry and every skill level can build custom AI applications on their enterprise data with ease, efficiency, and trust.”

“Data is the essential raw material of the AI industrial revolution,” said Jensen Huang, founder and CEO, NVIDIA. “Together, NVIDIA and Snowflake will help enterprises refine their proprietary business data and transform it into valuable generative AI.”


Snowflake Cortex AI + NVIDIA AI Enterprise Software

Snowflake and NVIDIA are collaborating to integrate key technologies of the NVIDIA AI Enterprise software platform – such as NeMo Retriever – into Cortex AI, so business users can efficiently build and leverage bespoke AI-powered applications that maximize their AI investments.

NVIDIA AI Enterprise software capabilities to be offered in Cortex AI include NVIDIA NeMo Retriever microservices for connecting custom models to diverse business data.

In addition, NVIDIA NIM inference microservices – a set of pre-built AI containers and part of NVIDIA AI Enterprise – can be deployed right within Snowflake as a native app powered by Snowpark Container Services, enabling organizations to easily deploy a series of foundation models directly in their Snowflake accounts.

Quantiphi, an AI-first digital engineering company and ‘Elite’ tier partner with both Snowflake and NVIDIA, is one of the many AI providers building Snowflake Native Apps using Snowpark Container Services. These apps run within a customer’s Snowflake account to help ensure data remains protected, while delivering faster time-to-value. Quantiphi’s Native Apps target specific business personas to accelerate their industry use cases and day-to-day operations: baioniq, a generative AI platform for boosting knowledge worker productivity, and Dociphi, an AI-led intelligent document processing platform for the banking, financial services, and insurance industries. Both were developed using the NVIDIA NeMo framework and will be available on Snowflake Marketplace for users to deploy without leaving their Snowflake environment.

Expanded Support for Snowflake Arctic

The state-of-the-art Snowflake Arctic LLM, launched in April 2024 and trained on NVIDIA H100 Tensor Core GPUs, is available as an NVIDIA NIM so users can get started with Arctic in seconds. The Arctic NIM hosted by NVIDIA is live on the NVIDIA API catalog for developer access using free credits, and will be offered as a downloadable NIM, giving users even more choice to deploy the most open enterprise LLM available on their preferred infrastructure.
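NIM microservices expose an OpenAI-compatible chat-completions interface, so getting started with a hosted Arctic NIM amounts to posting a standard request. The sketch below only builds and prints the request payload; the endpoint URL and model identifier are placeholder assumptions (check the NVIDIA API catalog for the actual values), and the commented-out call would require an API key.

```python
import json

# Placeholder values -- assumptions, not confirmed identifiers; consult
# the NVIDIA API catalog entry for Arctic for the real endpoint and model id.
NIM_ENDPOINT = "https://integrate.api.nvidia.com/v1/chat/completions"
MODEL_ID = "snowflake/arctic"

def build_chat_request(prompt, max_tokens=256, temperature=0.2):
    """Build an OpenAI-style chat-completions payload for a NIM endpoint."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_chat_request("Summarize last quarter's sales data.")
print(json.dumps(payload, indent=2))

# To actually send the request (requires an NVIDIA API catalog key):
# import urllib.request
# req = urllib.request.Request(
#     NIM_ENDPOINT,
#     data=json.dumps(payload).encode(),
#     headers={"Authorization": "Bearer <YOUR_API_KEY>",
#              "Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

Because the interface follows the widely adopted chat-completions shape, the same payload works whether the NIM is hosted in the API catalog or downloaded and run on a user's own infrastructure, only the endpoint URL changes.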

Source: Businesswire
