RunLLM, the AI‑first technical support platform, announced the general availability of RunLLM v2, a complete rebuild of its AI Support Engineer designed to deliver unmatched flexibility, reasoning, and control for enterprise support operations.
“Super excited to launch RunLLM v2 today!” said Vikram Sreekanti, Co‑founder & CEO, RunLLM. “When we started building expert systems on complex products two years ago, we had no idea it would turn into an AI Support Engineer. We started by focusing on delivering the highest‑quality answers possible, and that earned us the trust of teams at Databricks, Sourcegraph, Monte Carlo Data, and Corelight. But high‑quality answers are only one part of a good AI Support Engineer. Since our last launch, we’ve rebuilt RunLLM from the ground up, introducing a new agentic planner, redesigning our UI, and giving our customers fine‑grained control over agent workflows. Today, we’re focused on the new agentic features, because, well, it’s 2025 and you have to have agents. Jokes aside, the post below outlines in full detail how we’ve rethought RunLLM’s core inference process, how we show our work, and what we’ve built for log and telemetry analysis. Check it out!”
Key Features in RunLLM v2
Agentic Planning & Reasoning
A powerful new planner enables RunLLM to reason step by step, use tools, ask clarifying questions, search knowledge bases, and analyze logs and telemetry, supporting fully agent-style workflows.
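RunLLM has not published the planner's internals, but the general shape of an agent-style planning loop can be sketched as plan, act, observe, repeat. The Python below is an illustrative assumption about that pattern only; the tool names (search_kb, analyze_logs, ask_user) and the heuristic planner are hypothetical stand-ins, not RunLLM's implementation.

```python
# Illustrative plan-act loop; tool names and planner logic are assumptions
# for exposition, not RunLLM's actual agentic planner.
from typing import Callable

# Hypothetical tools the planner can invoke.
TOOLS: dict[str, Callable[[str], str]] = {
    "search_kb": lambda q: f"top knowledge-base hits for: {q}",
    "analyze_logs": lambda q: f"error clusters in telemetry for: {q}",
    "ask_user": lambda q: f"clarifying question about: {q}",
}

def plan_next_step(question: str, history: list[str]) -> str:
    """Stand-in for the LLM planner: a real system would prompt a model
    with the question and the steps taken so far to choose an action."""
    if not history:
        return "search_kb"
    if "logs" in question.lower() and len(history) == 1:
        return "analyze_logs"
    return "ask_user"

def run_agent(question: str, max_steps: int = 3) -> list[str]:
    """Step-by-step loop: plan an action, run the tool, record the result."""
    history: list[str] = []
    for _ in range(max_steps):
        tool = plan_next_step(question, history)
        history.append(f"{tool} -> {TOOLS[tool](question)}")
    return history

for step in run_agent("Pods crash-looping; logs show OOMKilled"):
    print(step)
```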
Redesigned Multi‑Agent UI
A modern UI now supports creating, managing, and inspecting multiple agents—each tailored to specific team needs (support, success, sales)—with precise behavior and data control.
Python SDK for Workflow Control
Developers can now script and customize support flows: the Python SDK lets stakeholders define response types, escalation triggers, and operational logic with full control.
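The announcement does not detail the SDK's API surface, so the sketch below is a hypothetical illustration of what scripted escalation logic can look like. Every name in it (ResponseType, Ticket, route) is an assumption for illustration, not the actual RunLLM SDK.

```python
# Hypothetical sketch of SDK-style workflow control. All names below are
# illustrative assumptions, not the actual RunLLM SDK.
from dataclasses import dataclass
from enum import Enum

class ResponseType(Enum):
    ANSWER = "answer"      # reply directly with a generated answer
    CLARIFY = "clarify"    # ask the user a clarifying question first
    ESCALATE = "escalate"  # hand the ticket to a human support engineer

@dataclass
class Ticket:
    question: str
    confidence: float  # agent's confidence in its drafted answer, 0.0-1.0
    severity: str      # e.g. "low", "normal", "high"

def route(ticket: Ticket) -> ResponseType:
    """Example escalation trigger: high-severity or low-confidence tickets
    go to a human; middling confidence prompts a clarifying question."""
    if ticket.severity == "high" or ticket.confidence < 0.6:
        return ResponseType.ESCALATE
    if ticket.confidence < 0.8:
        return ResponseType.CLARIFY
    return ResponseType.ANSWER

print(route(Ticket("Why is my deployment failing?", confidence=0.45, severity="high")))
# -> ResponseType.ESCALATE
```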
Why This Matters
RunLLM v2 was designed to address core pain points in technical support: scalability, intelligence, multi‑agent flexibility, and deep observability. The new platform is purpose‑built to act as a collaborative teammate rather than a simple Q&A bot.
RunLLM’s original product quickly earned trust with sophisticated customers via its high‑quality AI responses. v2 extends that foundation by embedding agentic capabilities and tighter enterprise control.
Customer Impact & Early Results
Early adopters have reported strong results:
- RunLLM handles 99% of community questions for vLLM deployments.
- At DataHub, it delivered $1 million in engineering time savings.
- Arize AI reported a 50% reduction in support workload.
RunLLM v2 was built to feel like one of your best support engineers—anticipating issues, validating code, offering alternative solutions, and escalating intelligently as needed.
Source: RunLLM