Friday, July 11, 2025

Arize AI Unveils Prompt Engineering and Retrieval Tracing Workflows For LLM Troubleshooting


Arize AI, a market leader in machine learning observability, debuted industry-first capabilities for troubleshooting large language models at Google Cloud Next ’23.

Arize’s new prompt engineering workflows, including a new prompt playground, enable teams to identify prompt templates that need improvement, iterate on them in real time, and verify improved LLM outputs.

Prompt analysis is an important component in troubleshooting an LLM’s performance. Often, LLM performance can be improved simply by testing different prompt templates, or iterating on one to achieve better responses.

With these new workflows, teams can:

  • Uncover responses with poor user feedback or evaluation scores
  • Identify the template associated with poor responses
  • Iterate on the existing prompt template
  • Compare responses across prompt templates in a prompt playground
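The workflow above can be sketched in plain Python. The record shapes, the feedback threshold, and the `call_llm` stub below are illustrative assumptions, not part of Arize's product or API:

```python
# Minimal sketch of the prompt-iteration loop: flag poorly rated responses,
# trace them back to their template, and re-run a question through candidate
# templates (the "playground" step). All names here are hypothetical.

FEEDBACK_THRESHOLD = 0.5  # assumed cutoff: responses scoring below are "poor"

def find_poor_responses(records, threshold=FEEDBACK_THRESHOLD):
    """Return records whose user-feedback score falls below the threshold."""
    return [r for r in records if r["feedback"] < threshold]

def templates_to_review(poor_records):
    """Identify which prompt templates the poor responses came from."""
    return sorted({r["template_id"] for r in poor_records})

def compare_templates(question, templates, call_llm):
    """Run the same question through each candidate template side by side."""
    return {tid: call_llm(tpl.format(question=question))
            for tid, tpl in templates.items()}

records = [
    {"template_id": "v1", "feedback": 0.2, "response": "..."},
    {"template_id": "v1", "feedback": 0.9, "response": "..."},
    {"template_id": "v2", "feedback": 0.8, "response": "..."},
]
poor = find_poor_responses(records)
print(templates_to_review(poor))  # templates implicated by poor feedback
```

In a real system the records would come from production traces and `call_llm` would wrap an actual model call; the point is only that each step in the list maps to a simple query over traced data.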


Arize is also launching additional search and retrieval workflows to help teams using retrieval-augmented generation (RAG) troubleshoot where and how retrieval needs to be improved. These workflows help teams identify where they may need to add context to their knowledge base (or vector database), detect when retrieval failed to surface the most relevant information, and ultimately understand why their LLM may have hallucinated or generated sub-optimal responses.
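The core retrieval check this describes can be sketched as scoring each retrieved chunk against the query embedding and flagging low-similarity retrievals as likely knowledge-base gaps. The toy vectors and the 0.7 threshold below are assumptions for illustration, not Arize's implementation:

```python
# Sketch of a RAG retrieval check: compare the query embedding against the
# embeddings of retrieved chunks and flag weak matches, i.e. cases where the
# retriever likely lacked relevant context in the knowledge base.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def flag_weak_retrievals(query_vec, retrieved, threshold=0.7):
    """Return ids of chunks whose similarity to the query falls below the
    threshold -- candidates for missing context in the knowledge base."""
    return [cid for cid, vec in retrieved.items()
            if cosine_similarity(query_vec, vec) < threshold]

query = [1.0, 0.0, 0.0]
retrieved = {
    "chunk_a": [0.9, 0.1, 0.0],  # closely aligned with the query
    "chunk_b": [0.0, 1.0, 0.0],  # orthogonal: an off-topic retrieval
}
print(flag_weak_retrievals(query, retrieved))  # → ['chunk_b']
```

A run of weak retrievals for related queries suggests the knowledge base is missing documents on that topic, which is the diagnostic signal the new workflows surface.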

“Building LLM-powered systems that responsibly work in the real world is still too difficult,” said Aparna Dhinakaran, Co-Founder and Chief Product Officer of Arize. “These industry-first prompt engineering and RAG workflows will help teams get to value and resolve issues faster, ultimately improving outcomes and proving the value of generative AI and foundation models across industries.”

Arize AI is a machine learning observability platform that helps ML teams deliver and maintain more successful AI in production. Arize’s automated model monitoring and observability platform allows ML teams to quickly detect issues when they emerge, troubleshoot why they happened, and improve overall model performance across structured data, image models, and large language models. Arize is a remote-first company with headquarters in Berkeley, CA.

SOURCE: PRNewswire
