Tuesday, November 5, 2024

Pinecone working with AWS to solve Generative AI hallucination challenges

Integration with Amazon Bedrock can help enterprises overcome the greatest challenge in bringing reliable GenAI applications to market

Pinecone, the vector database company providing long-term memory for artificial intelligence (AI), announced an integration with Amazon Bedrock, a fully managed service from Amazon Web Services (AWS) for building GenAI applications. With the integration, customers can drastically reduce hallucinations and bring Generative AI (GenAI) applications such as chatbots, assistants, and agents to market faster.

The Pinecone vector database is a key component of the AI tech stack, helping companies solve one of the biggest challenges in deploying GenAI solutions — hallucinations — by allowing them to store, search, and find the most relevant and up-to-date information from company data and send that context to Large Language Models (LLMs) with every query. This workflow is called Retrieval Augmented Generation (RAG), and with Pinecone it helps search and GenAI applications deliver relevant, accurate, and fast responses to end users.
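As an illustration only (not code from the announcement), the sketch below shows the retrieval half of that RAG workflow using the Pinecone Python client; the API key, the "company-docs" index name, and the embed() helper are assumptions standing in for whatever embedding model and index a team already uses.

```python
# Minimal RAG retrieval sketch, assuming a pre-populated Pinecone index named
# "company-docs" and a hypothetical embed() helper that uses the same embedding
# model the documents were indexed with. Keys and names are placeholders.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("company-docs")  # hypothetical index name


def retrieve_context(question: str, top_k: int = 5) -> str:
    """Return the most relevant stored document chunks for a user question."""
    query_vector = embed(question)  # assumed embedding helper, not shown here
    results = index.query(vector=query_vector, top_k=top_k, include_metadata=True)
    # Join the original text stored alongside each matching vector.
    return "\n\n".join(match.metadata["text"] for match in results.matches)


def build_prompt(question: str) -> str:
    """Ground the LLM prompt in retrieved company data (the RAG step)."""
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{retrieve_context(question)}\n\n"
        f"Question: {question}"
    )
```

The grounded prompt is then sent to whichever LLM the application uses; the next sketch shows that step with Bedrock.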

Amazon Bedrock is a serverless platform that lets users select and customize the right models for their needs, then integrate and deploy them using popular AWS services such as Amazon SageMaker.
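Continuing the sketch above, this is one plausible (not official) way to pass that grounded prompt to a Bedrock-hosted model with the boto3 runtime client; the region and model ID are placeholders rather than details from the announcement.

```python
# Hedged sketch: send a RAG-grounded prompt to a foundation model through the
# Amazon Bedrock runtime API (boto3 Converse). Region and model ID are placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


def generate_answer(prompt: str) -> str:
    """Call the chosen Bedrock model and return its text response."""
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]


# Example flow, using build_prompt() from the previous sketch:
# answer = generate_answer(build_prompt("What is our refund policy?"))
```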

Pinecone’s integration with Amazon Bedrock allows developers to quickly and effortlessly build streamlined, factual GenAI applications that combine Pinecone’s ease of use, performance, cost-efficiency, and scalability with their LLM of choice. Pinecone’s enterprise-grade security and its availability on the AWS Marketplace allow developers in enterprises to bring these GenAI solutions to market significantly faster.

“We’ve already seen a large number of AWS customers adopting Pinecone,” said Edo Liberty, Founder & CEO of Pinecone. “This integration opens the doors to even more developers who need to ship reliable and scalable GenAI applications… yesterday.”

“With generative AI, customers have the ability to reimagine their applications, create entirely new customer experiences, and improve overall productivity,” said Atul Deo, general manager, Amazon Bedrock at AWS. “Latest personalization techniques like Retrieval Augmented Generation (RAG) have the ability to deliver more accurate generative AI responses that make the most of pre-existing knowledge but can also process and consolidate that knowledge to create unique, context-aware answers, instructions, or explanations in human-like language rather than just summarizing the retrieved data. This integration of Amazon Bedrock and Pinecone will help customers streamline their generative AI application development process by helping deliver relevant responses.”

“We have AI applications in AWS and tens of billions of vector embeddings in Pinecone,” said Samee Zahid, Director of Engineering, Chipper Cash. “Connecting the two in a simple, serverless API is a game-changer for our development velocity.”

SOURCE: PRNewswire
