Pinecone, a leading vector database company, has launched Pinecone serverless into general availability. The state-of-the-art vector database, designed to make generative artificial intelligence (AI) accurate, fast, and scalable, is now ready for mission-critical workloads.
“Businesses are already building delightful and knowledgeable AI products with Pinecone,” said Edo Liberty, founder and CEO of Pinecone. “After making these products work in the lab, developers want to launch these products to thousands or millions of users. This makes considerations like operating costs, performance at scale, high availability and support, and security matter a lot. This is where Pinecone serverless shines, and why it’s the most trusted vector database for production applications.”
Confidently moving forward with AI
Pinecone serverless has been battle-tested through rapid adoption over four months in public preview. More than 20,000 organizations have used it to date. Large, critical workloads with billions of vectors are also running with select customers, contributing to the collective 12 billion embeddings already indexed on the new architecture. Serverless users, large and small, include organizations like Gong, Help Scout, New Relic, Notion, TaskUs, and You.com. With Pinecone serverless, these organizations are eliminating significant operational overhead, reducing costs by up to 50x, and building more accurate AI applications at scale.
Making AI knowledgeable
Pinecone research shows that the most effective method to improve the quality of generative AI results and reduce hallucinations (unintended, false, or misleading information presented as fact) is to use a vector database for Retrieval-Augmented Generation (RAG). A detailed study from AI consulting services firm Prolego supports the finding that RAG significantly improves the performance of large language models (LLMs). For example, with sufficient data, GPT-4 with RAG reduces the frequency of unhelpful answers by 50% on the “faithfulness” metric compared with GPT-4 alone, even on information the LLM was trained on. Moreover, the more data becomes available for context retrieval, the more accurate the results become.
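For readers unfamiliar with the pattern, the sketch below shows how RAG is typically wired up against a vector database: embed the question, retrieve the most similar records, and hand them to the LLM as grounding context. The index name, embedding function, and LLM call are illustrative placeholders, not details from this announcement.

```python
# Minimal RAG sketch (illustrative): retrieve grounding context from a
# Pinecone index, then ask the LLM to answer from that context only.
# "knowledge-base", embed(), and llm() are placeholders supplied by the caller.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("knowledge-base")  # hypothetical index of document embeddings

def answer(question: str, embed, llm) -> str:
    # 1. Embed the question and fetch the most similar records.
    results = index.query(vector=embed(question), top_k=5, include_metadata=True)
    context = "\n".join(match.metadata["text"] for match in results.matches)

    # 2. Ground the LLM in the retrieved context to curb hallucinations.
    prompt = (
        "Answer using only the context below. If the answer is not in the context, "
        f"say you don't know.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)
```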
Making AI easy and affordable with the best database architecture
Pinecone serverless is architected from the ground up to provide low-latency, always-fresh vector search over unrestricted data sizes at low cost, making generative AI easily accessible.
The separation of reads from writes, and of storage from compute, significantly reduces costs for workloads of all types and sizes. First-of-their-kind indexing and retrieval algorithms enable fast, memory-efficient vector search from object storage without sacrificing retrieval quality.
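As a rough illustration of how this architecture surfaces to developers, the sketch below creates a serverless index with the Pinecone Python client; the index name, dimension, cloud, and region are example values, not specifics from this announcement.

```python
# Sketch: creating a serverless index with the Pinecone Python client.
# The name, dimension, cloud, and region below are example values.
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

pc.create_index(
    name="example-index",
    dimension=1536,   # must match the embedding model you use
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),  # serverless: no pods to size
)
```

Because the serverless spec names only a cloud and region rather than pod counts or sizes, capacity planning is handled by the service rather than by the developer.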
Introducing Private Endpoints
Security, privacy, and compliance are paramount for businesses as they fuel artificial intelligence with more and more data. Today, Pinecone is unveiling Private Endpoints in public preview to help customers meet these demands, along with governance and regulatory compliance requirements.
Private Endpoints support direct and secure data plane connectivity from an organization’s virtual private cloud (VPC) to their Pinecone index over AWS PrivateLink, an Amazon Web Services (AWS) offering that provides private connectivity between VPCs, supported AWS services, and on-premises networks without exposing traffic to the public Internet.
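The consumer side of an AWS PrivateLink connection is an interface VPC endpoint in the customer's own VPC. The boto3 sketch below shows that generic step under assumed values; the VPC, subnet, and security group IDs, and in particular the endpoint service name for a given Pinecone project, are placeholders that would come from your AWS account and from Pinecone.

```python
# Sketch: creating the interface VPC endpoint that backs a Private Endpoint.
# All identifiers below are placeholders; the real endpoint service name for a
# Pinecone project is provided by Pinecone and is not specified in this release.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-EXAMPLE",  # placeholder
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```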
Building with the AI Stack
To make building AI applications as simple as possible, Pinecone serverless is launching with a growing number of partner integrations. Companies in Pinecone’s recently announced partner program can now let their users seamlessly connect to and use Pinecone directly inside their own coding environments. These companies include Anyscale, AWS, Confluent, LangChain, Mistral, Monte Carlo, Nexla, Pulumi, Qwak, Together.ai, Vectorize, and Unstructured. Pinecone is also working with service integrator partners like phData to help joint customers onboard to Pinecone serverless.
Source: PRNewsWire