Saturday, November 23, 2024

Pinecone reinvents the vector database to let companies build knowledgeable AI

Breakthrough serverless architecture delivers up to 50x cost reduction, opening the path to dramatically better GenAI applications.

Pinecone, the leading vector database company, announced a revolutionary vector database that lets companies build more knowledgeable AI applications: Pinecone Serverless. Multiple innovations, including a first-of-its-kind architecture and a truly serverless experience, deliver up to 50x cost reductions and eliminate infrastructure hassles, allowing companies to bring remarkably better GenAI applications to market faster.

One of the keys to building successful GenAI applications is providing large amounts of data on demand to the Large Language Models (LLMs) inside them. Research from Pinecone found that simply making more data available for context retrieval reduces the frequency of unhelpful answers from GPT-4 by 50%, even on information it was trained on. The effect is even greater for questions about private company data. The research also found that the same level of answer quality can be achieved with other LLMs, as long as enough data is made available. This means companies can significantly improve the quality of their GenAI applications, and retain a choice of LLMs, just by making more data (or “knowledge”) available to the LLM. Yet storing and searching through sufficient amounts of vector data on demand can be prohibitively expensive even with a purpose-built vector database, and practically impossible using relational or NoSQL databases.
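To make the retrieval pattern concrete, the sketch below illustrates the context-retrieval loop the research describes. It assumes the Pinecone v3+ Python client, the OpenAI Python client, and a hypothetical pre-populated index named "company-knowledge"; it is an illustration of the approach, not Pinecone's reference implementation.

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                        # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("company-knowledge")           # hypothetical, pre-populated index


def answer(question: str) -> str:
    # Embed the question so it can be compared against the stored vectors.
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding

    # Retrieve the most relevant records to ground the LLM's answer.
    results = index.query(vector=embedding, top_k=5, include_metadata=True)
    context = "\n".join(match.metadata["text"] for match in results.matches)

    # Ask the LLM to answer using only the retrieved context.
    completion = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content


print(answer("What is our refund policy?"))
```

The more records the index holds, and the more of them are retrieved into the prompt, the more grounded the model's answer can be, which is the effect the research above measures.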

Pinecone Serverless is an industry-changing vector database that lets companies add practically unlimited knowledge to their GenAI applications. Because it is truly serverless, it completely eliminates the need for developers to provision or manage infrastructure, letting them build GenAI applications more easily and bring them to market much faster. As a result, developers with use cases of any size can build more reliable, effective, and impactful GenAI applications with any LLM of their choice, pointing to an imminent wave of GenAI applications reaching the market. That wave has already started, with companies like Notion, CS Disco, Gong, and over a hundred others using Pinecone Serverless.
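As an illustration of that hands-off workflow, the following sketch, assuming the Pinecone v3+ Python client and a hypothetical index name, shows that standing up a serverless index takes only a name, a vector dimension, and a cloud region: there are no pods, clusters, or capacity settings to manage.

```python
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")

# Creating a serverless index: no nodes, pods, or replicas to size or manage.
pc.create_index(
    name="genai-knowledge",                          # hypothetical index name
    dimension=1536,                                  # must match your embedding model
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)

index = pc.Index("genai-knowledge")
print(index.describe_index_stats())                  # ready to upsert and query
```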

“To make our newest Notion AI products available to tens of millions of users worldwide we needed to support RAG over billions of documents while meeting strict performance, security, cost, and operational requirements,” said Akshay Kothari, Co-Founder of Notion. “This simply wouldn’t be possible without Pinecone.”

Key innovations in the breakthrough architecture of Pinecone Serverless include:

  • Separation of reads, writes, and storage significantly reduces costs for all types and sizes of workloads.
  • Industry-first architecture with vector clustering on top of blob storage provides low-latency, always-fresh vector search over practically unlimited data sizes at a low cost.
  • Industry-first indexing and retrieval algorithms built from scratch to enable fast and memory-efficient vector search from blob storage without sacrificing retrieval quality.
  • Multi-tenant compute layer provides powerful and efficient retrieval for thousands of users, on demand, as illustrated in the sketch after this list. This enables a serverless experience in which developers don’t need to provision, manage, or even think about infrastructure, as well as usage-based billing that lets companies pay only for what they use.
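The sketch below illustrates the multi-tenant, usage-based model from a developer's point of view, assuming the Pinecone v3+ Python client, a hypothetical index named "genai-knowledge", and hypothetical tenant namespaces; it shows how the API is used, not how the architecture is implemented internally.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("genai-knowledge")              # hypothetical serverless index

# Each tenant's data lives in its own namespace on the same index;
# billing is based on the reads, writes, and storage actually used.
index.upsert(
    vectors=[{
        "id": "doc-1",
        "values": [0.1] * 1536,                  # placeholder embedding
        "metadata": {"text": "Tenant A's first document."},
    }],
    namespace="tenant-a",
)

# Queries are scoped to one tenant's namespace and served on demand.
results = index.query(
    vector=[0.1] * 1536,
    top_k=3,
    namespace="tenant-a",
    include_metadata=True,
)
for match in results.matches:
    print(match.id, match.score, match.metadata["text"])
```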

“From the beginning, our mission has been to help every developer build remarkably better applications through the magic of vector search,” said Edo Liberty, Founder & CEO of Pinecone. “After creating the first and today’s most popular vector database, we’re taking another leap forward in making the vector database even more affordable and completely hassle-free.”

To extend the ease of use that made Pinecone a developer favorite, Pinecone Serverless is launching with integrations to other best-in-class solutions in the GenAI technology stack, including Anthropic, Anyscale, Cohere, Confluent, Langchain, Pulumi, Vercel, and others to be announced soon.
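As one hedged example of what these integrations can look like in practice, the sketch below assumes the langchain-pinecone and langchain-openai packages and a hypothetical existing index, and exposes a Pinecone Serverless index as a LangChain vector store.

```python
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

# Wrap an existing Pinecone Serverless index as a LangChain vector store.
vectorstore = PineconeVectorStore.from_existing_index(
    index_name="genai-knowledge",                    # hypothetical index name
    embedding=OpenAIEmbeddings(model="text-embedding-3-small"),
)

# Retrieve the most similar documents for use in a RAG chain.
docs = vectorstore.similarity_search("What is our refund policy?", k=4)
for doc in docs:
    print(doc.page_content)
```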

“Vercel’s mission is to help the world ship the best products, and in the age of GenAI that requires Pinecone as the vector database component,” said Guillermo Rauch, CEO and Founder of Vercel. “That’s why we are announcing that all Vercel users can now add Pinecone Serverless to their applications in just a few clicks, with more exciting capabilities to come.”

“We’ve seen tremendous demand from our customers to connect Confluent to Pinecone in order to fuel real-time GenAI applications,” said Jay Kreps, CEO of Confluent. “Our Pinecone Sink Connector (Preview) allows organizations to send continuously enriched data streams from across the business to Pinecone so developers can build and scale real-time GenAI applications faster.”

SOURCE: PRNewswire
