Wednesday, September 18, 2024

F5 & Intel Partner to Enhance AI Security and Delivery

Enhancing the protection and performance of enterprise AI inference solutions with F5 NGINX Plus, Intel OpenVINO, and Intel IPUs

F5 announced it is bringing robust application security and delivery capabilities to AI deployments powered by Intel. This new joint solution combines industry-leading security and traffic management from F5’s NGINX Plus offering with the cutting-edge optimization and performance of the Intel Distribution of OpenVINO toolkit and Infrastructure Processing Units (IPUs) to deliver superior protection, scalability, and performance for advanced AI inference.

As organizations increasingly adopt AI to power intelligent applications and workflows, efficient and secure AI inference becomes critical. This need is addressed by combining the OpenVINO toolkit—which optimizes and accelerates AI model inference—with F5 NGINX Plus, providing robust traffic management and security.

The OpenVINO toolkit simplifies the optimization of models from almost any framework to enable a write-once, deploy-anywhere approach. This toolkit is essential for developers aiming to create scalable and efficient AI solutions with minimal code changes.
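To illustrate that workflow, here is a minimal sketch using the OpenVINO Runtime Python API; the model file name, target device, and input shape are placeholders chosen for the example, not details from the announcement.

```python
# Minimal OpenVINO inference sketch; the model path and input shape below are placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()

# Read a model previously converted to OpenVINO IR (ONNX and other formats can also be read directly).
model = core.read_model("model.xml")

# Compile for a target device; the same code can target CPU, GPU, or other supported hardware.
compiled_model = core.compile_model(model, device_name="CPU")

# Run inference on a dummy input matching the model's expected layout.
input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
results = compiled_model([input_tensor])

print(results[compiled_model.output(0)].shape)
```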

F5 NGINX Plus enhances the security and reliability of these AI models. Acting as a reverse proxy, NGINX Plus manages traffic, ensures high availability, and provides active health checks. It also facilitates SSL termination and mTLS encryption, safeguarding communications between applications and AI models without compromising performance.
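As a rough sketch of that pattern, the configuration below terminates TLS and proxies requests to a pool of model-server backends with active health checks; the upstream addresses, hostname, and certificate paths are illustrative placeholders, and the health_check directive assumes NGINX Plus rather than open source NGINX.

```nginx
# Illustrative NGINX Plus reverse-proxy sketch; addresses, hostname, and certificate paths are placeholders.
upstream ovms_backend {
    zone ovms_backend 64k;            # shared-memory zone required for active health checks
    server 10.0.0.11:9000;            # OpenVINO Model Server instance
    server 10.0.0.12:9000;            # OpenVINO Model Server instance
}

server {
    listen 443 ssl;
    server_name inference.example.com;

    # TLS termination for clients calling the inference endpoint
    ssl_certificate     /etc/nginx/certs/inference.crt;
    ssl_certificate_key /etc/nginx/certs/inference.key;

    location / {
        proxy_pass http://ovms_backend;
        health_check interval=5 fails=3 passes=2;   # active health checks (NGINX Plus only)
    }
}
```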

To further boost performance, Intel IPUs offload infrastructure services from the host CPU, freeing those resources for AI model servers. By handling infrastructure tasks efficiently, the IPUs improve the scalability and performance of both NGINX Plus and OpenVINO Model Server (OVMS) instances.

This integrated solution is particularly beneficial for edge applications, such as video analytics and IoT, where low latency and high performance are crucial. By running NGINX Plus on the Intel IPU, the solution helps ensure rapid and reliable responses, making it ideal for content delivery networks and distributed microservices deployments.

“Teaming up with Intel empowers us to push the boundaries of AI deployment. This collaboration highlights our commitment to driving innovation and delivers a secure, reliable, and scalable AI inference solution that will enable enterprises to securely deliver AI services at speed. Our combined solution ensures that organizations can harness the power of AI with superior performance and security,” said Kunal Anand, Chief Technology Officer at F5.

“Leveraging the cutting-edge infrastructure acceleration of Intel IPUs and the OpenVINO toolkit alongside F5 NGINX Plus can help enable enterprises to realize innovative AI inference solutions with improved simplicity, security, and performance at scale for multiple vertical markets and workloads,” said Pere Monclus, Chief Technology Officer, Network and Edge Group of Intel.

Source: Businesswire
