The edge-native software platform simplifies development, deployment and management of edge AI applications.
What’s New: At MWC 2024, Intel announced its new Edge Platform, a modular, open software platform enabling enterprises to develop, deploy, run, secure, and manage edge and AI applications at scale with cloud-like simplicity. Together, these capabilities will accelerate how quickly enterprises can scale their deployments, contributing to improved total cost of ownership (TCO).
“The edge is the next frontier of digital transformation, being further fueled by AI. We are building on our strong customer base in the market and consolidating our years of software initiatives to the next level in delivering a complete edge-native platform, which is needed to enable infrastructure, applications and efficient AI deployments at scale. Our modular platform is exactly that, driving optimal edge infrastructure performance and streamlining application management for enterprises, giving them both improved competitiveness and improved total cost of ownership.”
– Pallavi Mahajan, Intel corporate vice president and general manager of Network and Edge Group Software
Why It Matters: The amount of compute happening at the edge is growing fast because that is where data is generated. In addition, many edge computing deployments are incorporating AI. At the edge, businesses need to automate for many reasons: to achieve pricing competitiveness, to relieve the effects of labor shortages, to expand innovation, to add efficiency, to improve time to market and to deliver new services.
However, working at the edge is often complex and challenging for a variety of reasons:
- Difficulty building performant edge AI solutions with high return on investment (ROI) across a range of use cases in a specific industry.
- The diversity of hardware, software and even power requirements at the edge.
- Lack of secure and cost-effective methods to move and utilize high data volumes required by AI at the edge while maintaining low latency.
- Increasingly complex operations management of distributed edge devices and applications at scale.
Use cases – with examples including defect detection and preventive maintenance in industrial facilities, frictionless checkout and inventory management in retail, and traffic management and emergency safety in smart cities and transportation – typically require advanced networking and AI analytics at the edge, with low latency, locality and cost requirements that must meet stringent real-world needs. It is also common to mix on-premises analytics with some AI processing aggregated in the cloud to manage globally distributed deployments. These hybrid AI scenarios require a software platform built to handle them.
While custom solutions to these challenges are available today, they are often built on closed systems and specialized hardware, which makes integrating legacy systems and adding new use cases both costly and time-consuming.
How Intel’s Edge Platform Empowers Enterprises: The open, modular platform will enable ready-made solutions across industries. It leverages Intel’s edge experience and broad ecosystem to make the most in-demand edge use cases available, so enterprises can purchase a complete solution or build their own within existing environments. Enterprise developers can build edge-native AI applications on new or existing infrastructure, and they can manage their edge solutions end to end for their specific use cases.
The platform provides infrastructure management and AI application development capabilities that can integrate into existing software stacks via open standards.
About Edge-Native Infrastructure: The platform’s edge infrastructure includes the built-in OpenVINO™ AI inference runtime for edge AI as well as secure, policy-based automation of IT and OT management tasks. Intel’s OpenVINO has evolved over the past five years to help developers optimize applications for low latency and low power specifically at the edge, enabling standard hardware that is already deployed to run AI applications efficiently without costly upgrades or refactoring.
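To illustrate the kind of workflow the OpenVINO runtime supports, here is a minimal inference sketch in Python. The model file, device choice and input shape are placeholders for illustration, not details from the announcement.

```python
# Minimal OpenVINO inference sketch (assumes openvino >= 2023.x and a model
# already exported to Intermediate Representation; "model.xml" and the input
# array below are placeholders).
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")          # load the IR model
compiled = core.compile_model(model, "CPU")   # target already-deployed CPU hardware;
                                              # "GPU" or "AUTO" work the same way

# Placeholder input matching the model's expected shape, e.g. a 224x224 RGB image batch.
input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)

result = compiled([input_tensor])             # run inference
output = result[compiled.output(0)]           # fetch the first output tensor
print(output.shape)
```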
The platform has a single dashboard that enables IT and DevOps personnel to provision, onboard and manage a fleet of edge nodes, including edge servers, industrial controls, human-machine interface (HMI) devices and others. This is accomplished securely and remotely with zero touch, across day 0/1/2 operations.
Furthermore, closed-loop automation enables operators to leverage policies and observability to trigger business logic from operational alerts at the edge, optimizing operations across the network and improving TCO.
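The announcement does not describe the interfaces behind this closed-loop automation, but the general pattern can be sketched. Every name in the Python below is hypothetical and only illustrates the flow from observability alert to policy evaluation to triggered business logic.

```python
# Hypothetical illustration of policy-driven closed-loop automation at the edge.
# None of these names come from Intel's Edge Platform; they only show the pattern:
# observability alert -> policy evaluation -> automated remediation action.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Alert:
    node_id: str
    metric: str
    value: float

@dataclass
class Policy:
    metric: str
    threshold: float
    action: Callable[[Alert], None]   # business logic to trigger on a breach

def restart_workload(alert: Alert) -> None:
    # Placeholder remediation; a real platform would call its own management API here.
    print(f"Restarting workload on {alert.node_id} ({alert.metric}={alert.value})")

POLICIES = [Policy(metric="inference_latency_ms", threshold=250.0, action=restart_workload)]

def handle_alert(alert: Alert) -> None:
    """Evaluate an incoming alert against registered policies and trigger actions."""
    for policy in POLICIES:
        if alert.metric == policy.metric and alert.value > policy.threshold:
            policy.action(alert)

handle_alert(Alert(node_id="edge-node-01", metric="inference_latency_ms", value=310.0))
```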
Deep awareness of heterogeneous hardware provides best-in-class capabilities for allocating resources with optimal efficiency, along with zero-trust security features co-developed for Intel architecture.
About Edge AI + Applications Capabilities: The platform will provide enterprise developers with access to powerful AI capabilities and tools, including:
- Finely tunable application orchestration for remotely placing latency-sensitive workloads on exactly the right device for best application performance.
- Powerful low-code to high-code AI model and application development with hybrid AI capabilities from the edge to the cloud.
- A range of horizontal edge services, such as data annotation services that leverage Intel® Geti™ to build AI models, along with vertical, industry-specific edge services that use video and time-series information to improve results in common industrial use cases, as well as digital twin capabilities to track and manage environments.
About Intel’s Role in Proven Partner Ecosystem: Intel’s Edge Platform will come to market with industry leaders and broad ecosystem support that includes Amazon Web Services, Lenovo, L&T Technology Services, Red Hat, SAP, Vericast, Verizon Business and Wipro.
SOURCE: BusinessWire