Monday, July 1, 2024

Kinetica Launches Quick Start for SQL-GPT


Customers can be up and running with Language to SQL on their enterprise data within one hour for free

Kinetica, the real-time database for analytics and generative AI, announced the availability of a Quick Start for deploying natural language to SQL on enterprise data. The Quick Start is aimed at organizations that want ad-hoc analysis of real-time, structured data using an LLM that accurately and securely converts natural language to SQL and returns fast, conversational answers. The offering makes it easy to load structured data, optimize the SQL-GPT Large Language Model (LLM), and begin asking questions of the data in natural language. The announcement follows a series of GenAI innovations that began last May, when Kinetica became the first analytic database to incorporate natural-language-to-SQL capabilities.
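As a rough illustration of the workflow the Quick Start automates, the sketch below shows the general shape of a language-to-SQL round trip: send a plain-English question to a translation endpoint, receive generated SQL, and execute it against the database. The host, the endpoint paths, and the request fields here are hypothetical placeholders for illustration only, not Kinetica's documented API; the Quick Start documentation covers the actual calls.

```python
import requests  # illustrative REST round trip; endpoints below are assumptions, not Kinetica's API

KINETICA_URL = "http://localhost:9191"   # placeholder host and port (assumption)
QUESTION = "Which five products had the highest revenue last quarter?"

# 1. Ask a hypothetical language-to-SQL endpoint to translate the question.
resp = requests.post(
    f"{KINETICA_URL}/sqlgpt/translate",                      # illustrative path
    json={"question": QUESTION, "context": "sales_demo"},    # illustrative fields
    timeout=30,
)
generated_sql = resp.json()["sql"]
print("Generated SQL:", generated_sql)

# 2. Execute the generated SQL statement (endpoint and payload also illustrative).
result = requests.post(
    f"{KINETICA_URL}/execute/sql",
    json={"statement": generated_sql, "limit": 10},
    timeout=30,
).json()

# 3. Surface the rows as the conversational answer.
for row in result.get("records", []):
    print(row)
```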

“We’re thrilled to introduce Kinetica’s groundbreaking Quick Start for SQL-GPT, enabling organizations to seamlessly harness the power of Language to SQL on their enterprise data in just one hour,” said Phil Darringer, VP of Product, Kinetica. “With our fine-tuned LLM tailored to each customer’s data and our commitment to guaranteed accuracy and speed, we’re revolutionizing enterprise data analytics with generative AI.”


The Kinetica database converts natural language queries to SQL and returns answers within seconds, even for complex or previously unseen questions. Kinetica also converges multiple modes of analytics, such as time series, spatial, graph, and machine learning, broadening the types of questions that can be answered. Conversational query at this speed is made possible by native vectorization that leverages NVIDIA GPUs and modern CPUs. NVIDIA GPUs have powered every major AI breakthrough this century and are now extending into data management and ad-hoc analytics. In a vectorized query engine, data is stored in fixed-size blocks called vectors, and query operations are performed on these vectors in parallel rather than on individual data elements. This allows the query engine to process many data elements simultaneously, resulting in radically faster query execution on a smaller compute footprint.
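To make the vectorized-execution idea concrete, the toy comparison below contrasts row-at-a-time filtering with a batch (vector) operation in NumPy. It is only a conceptual analogy, assuming a single numeric column and a simple predicate; it does not reflect Kinetica's internal engine or its GPU kernels.

```python
import time
import numpy as np

# A column of one million sale amounts (synthetic data for illustration).
rng = np.random.default_rng(0)
amounts = rng.uniform(0, 1000, size=1_000_000)

# Row-at-a-time: apply the predicate to each value individually,
# like a naive tuple-at-a-time query engine.
start = time.perf_counter()
total_scalar = 0.0
for value in amounts:
    if value > 500:               # predicate evaluated one element at a time
        total_scalar += value
scalar_time = time.perf_counter() - start

# Vectorized: apply the predicate and the aggregation to the whole block at once.
start = time.perf_counter()
mask = amounts > 500              # one comparison over the entire vector
total_vector = float(amounts[mask].sum())
vector_time = time.perf_counter() - start

print(f"row-at-a-time: {scalar_time:.3f}s  total={total_scalar:,.0f}")
print(f"vectorized:    {vector_time:.3f}s  total={total_vector:,.0f}")
```

On typical hardware the vectorized path runs orders of magnitude faster, which is the same effect, at much larger scale, that a vectorized engine gets from processing fixed-size blocks in parallel on GPUs and modern CPUs.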

SOURCE: GlobeNewswire
