Saturday, December 28, 2024

Kinetica Launches Quick Start for SQL-GPT


Customers can be up and running with Language to SQL on their enterprise data within one hour for free

Kinetica, the real-time database for analytics and generative AI, announced the availability of a Quick Start for deploying natural language to SQL on enterprise data. The Quick Start is aimed at organizations that want to run ad-hoc analysis on real-time, structured data using an LLM that accurately and securely converts natural language to SQL and returns fast, conversational answers. The offering makes it fast and easy to load structured data, optimize the SQL-GPT Large Language Model (LLM), and begin asking questions of the data in natural language. The announcement follows a series of GenAI innovations that began last May, when Kinetica became the first analytic database to incorporate natural-language querying into SQL.
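As an illustration only, the sketch below shows the general language-to-SQL round trip that such a Quick Start packages: a model receives the table schema as context, turns a natural-language question into SQL, and the generated query is executed against the database. The generate_sql stub stands in for a fine-tuned model and sqlite3 stands in for the target database; both are assumptions made for the example, not Kinetica's API.

```python
import sqlite3

# Minimal, self-contained sketch of a language-to-SQL round trip.
# The LLM call is stubbed so the example runs end to end; names here
# are illustrative, not Kinetica's actual interfaces.

SCHEMA = """
CREATE TABLE trips (
    pickup_time TEXT,
    distance_km REAL,
    fare_usd    REAL
);
"""

def generate_sql(question: str, schema: str) -> str:
    """Stand-in for the LLM: given a question and the table schema, return SQL.
    A real deployment would send both to a model tuned on the customer's data."""
    return "SELECT AVG(fare_usd) FROM trips WHERE distance_km > 10;"

def answer(conn: sqlite3.Connection, question: str):
    sql = generate_sql(question, SCHEMA)       # natural language -> SQL
    return sql, conn.execute(sql).fetchall()   # SQL -> conversational answer

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
conn.executemany("INSERT INTO trips VALUES (?, ?, ?)",
                 [("2024-12-01", 12.5, 31.0), ("2024-12-02", 3.2, 9.5)])

sql, rows = answer(conn, "What is the average fare for trips longer than 10 km?")
print(sql)   # the generated query
print(rows)  # [(31.0,)]
```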

“We’re thrilled to introduce Kinetica’s groundbreaking Quick Start for SQL-GPT, enabling organizations to seamlessly harness the power of Language to SQL on their enterprise data in just one hour,” said Phil Darringer, VP of Product, Kinetica. “With our fine-tuned LLM tailored to each customer’s data and our commitment to guaranteed accuracy and speed, we’re revolutionizing enterprise data analytics with generative AI.”


The Kinetica database converts natural language queries to SQL and returns answers within seconds, even for complex and previously unseen questions. Kinetica also converges multiple modes of analytics, such as time series, spatial, graph, and machine learning, which broadens the types of questions that can be answered. What makes conversational query possible is Kinetica's use of native vectorization, which leverages NVIDIA GPUs and modern CPUs. NVIDIA GPUs are the compute paradigm behind every major AI breakthrough this century and are now extending into data management and ad-hoc analytics. In a vectorized query engine, data is stored in fixed-size blocks called vectors, and query operations are performed on these vectors in parallel rather than on individual data elements. This allows the query engine to process multiple data elements simultaneously, resulting in radically faster query execution on a smaller compute footprint.
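To make the execution model concrete, the short sketch below contrasts row-at-a-time processing with vectorized, block-at-a-time processing using NumPy. It illustrates the general technique described above, not Kinetica's engine; a GPU engine applies the same idea with far wider parallelism.

```python
import numpy as np

# Illustrative contrast between row-at-a-time and vectorized (block-at-a-time)
# query execution over a simple filter-and-aggregate query.

rng = np.random.default_rng(0)
fares = rng.uniform(5.0, 80.0, size=100_000)      # one column of a table
distances = rng.uniform(0.5, 30.0, size=100_000)  # another column

def scalar_avg_fare(fares, distances, min_km):
    """Row-at-a-time: predicate and aggregate evaluated one element at a time."""
    total, count = 0.0, 0
    for f, d in zip(fares, distances):
        if d > min_km:
            total += f
            count += 1
    return total / count

def vectorized_avg_fare(fares, distances, min_km):
    """Vectorized: the same predicate and aggregate applied to whole blocks of
    values in single operations, letting SIMD units or GPU cores work in parallel."""
    mask = distances > min_km      # one comparison over the entire vector
    return fares[mask].mean()      # one masked aggregation

print(scalar_avg_fare(fares, distances, 10.0))
print(vectorized_avg_fare(fares, distances, 10.0))  # matching result, far fewer dispatched operations
```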

SOURCE: GlobeNewswire
