As the global race to deploy large-scale artificial intelligence intensifies, Keysight Technologies, Inc., a technology company that helps enterprises, service providers, and governments accelerate innovation to connect and secure the world, has announced the Keysight AI Inference Builder (KAI Inference Builder), an emulation and analytics solution designed to test and optimize inference-focused AI infrastructure under high-concurrency conditions.
The industry's shift from AI training to real-world inference has exposed a major challenge: synthetic testing often fails to predict how infrastructure will behave under complex real-world conditions. KAI Inference Builder addresses this by accurately replicating inference patterns and industry-specific models, giving AI cloud providers, hardware vendors, and application developers the data they need to scale with confidence.
Bridging the Gap Between Simulation and Reality
Modern AI systems are no longer defined by raw compute power alone; they also depend on the seamless integration of networking, security, and processing layers. KAI Inference Builder promises full-stack performance measurement, from the initial request to the final response.
Key features of the new platform include:
Application-Specific Benchmarking: Moving beyond generic testing, the platform emulates Large Language Model (LLM) architectures tailored for specific verticals like healthcare and finance.
Subsystem Isolation: Engineers can perform client-only emulation to pinpoint exactly where bottlenecks occur—whether in the compute, network, or security layers—preventing overprovisioning and reducing operational costs.
Scalable Real-World Workloads: The solution validates full-stack deployments under real-world stress, ensuring that “AI factories” can handle high-diversity workloads without performance degradation.
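To illustrate the general idea behind these features, the sketch below shows a minimal high-concurrency load test: many emulated clients fire inference-style requests at an endpoint while per-request latency percentiles and aggregate throughput are recorded. This is a hypothetical illustration of the underlying technique only, not the KAI Inference Builder API; the `fake_inference_request` stub stands in for a real LLM serving endpoint.

```python
# Hypothetical sketch of concurrent inference load testing.
# Not the KAI Inference Builder API; all names here are illustrative.
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def fake_inference_request(prompt_tokens: int) -> dict:
    """Stand-in for a real LLM endpoint: latency grows with prompt size."""
    start = time.perf_counter()
    time.sleep(0.001 + prompt_tokens * 1e-5)  # simulated model latency
    return {"latency_s": time.perf_counter() - start}


def run_load_test(concurrency: int, requests: int) -> dict:
    """Emulate `requests` clients spread across `concurrency` workers
    and summarize latency percentiles plus overall throughput."""
    prompts = [random.randint(50, 500) for _ in range(requests)]
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(fake_inference_request, prompts))
    elapsed = time.perf_counter() - t0
    latencies = sorted(r["latency_s"] for r in results)
    return {
        "p50_s": statistics.median(latencies),
        "p99_s": latencies[int(0.99 * (len(latencies) - 1))],
        "throughput_rps": requests / elapsed,
    }


if __name__ == "__main__":
    report = run_load_test(concurrency=8, requests=100)
    print(report)
```

Running the same test at several concurrency levels and comparing the p99 latency curve is how a client-only harness like this can localize saturation points before hardware is overprovisioned.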
Integration with NVIDIA DSX Air
A highlight of the launch is the platform’s turnkey integration with the NVIDIA DSX Air digital twin environment. This collaboration allows operators to model and optimize AI data center architectures in a simulated environment before a single piece of hardware is installed in a rack.
Amit Katz, Vice President of Ethernet Switch at NVIDIA, noted the importance of this synergy: “The integration of KAI Inference Builder with NVIDIA DSX Air provides the essential environment needed to eliminate performance volatility, and enables NVIDIA AI Factory partners and customers to emulate real inference workloads and preemptively resolve bottlenecks, ensuring optimized AI services reach the market faster.”
Driving the “Inference Era”
The launch of KAI Inference Builder expands the Keysight Artificial Intelligence (KAI) portfolio, reinforcing the company’s role in accelerating the lifecycle of AI innovation. By providing deeper visibility into the full-stack performance of inference engines, Keysight helps organizations mitigate the risks of costly post-deployment rework.
Ram Periakaruppan, Vice President and General Manager, Network Test and Security Solutions at Keysight, emphasized the strategic value of the platform: “KAI Inference Builder closes that gap by recreating realistic inference workload patterns and modeling industry-specific usage patterns to validate AI infrastructure, applications, and data center deployments. The platform gives AI cloud providers, hardware vendors, and application developers a scalable solution for measuring, validating, and optimizing real-world inference performance.”
Keysight is currently showcasing the KAI Inference Builder at NVIDIA GTC, demonstrating live how the platform models AI data center infrastructure and performance within digital twin environments.