At its “AI Everywhere” launch in New York City today, Intel introduced an unmatched portfolio of AI products to enable customers’ AI solutions everywhere — across the data center, cloud, network, edge and PC.
Highlights include:
- The Intel® Core™ Ultra mobile processor family, the first built on the Intel 4 process technology and the first to benefit from the company’s largest architectural shift in 40 years, delivers Intel’s most power-efficient client processor and ushers in the age of the AI PC.
- The 5th Gen Intel® Xeon® processor family is built with AI acceleration in every core, bringing leaps in AI and overall performance and lowering total cost of ownership (TCO).
- Intel CEO Pat Gelsinger showed for the first time an Intel® Gaudi®3 AI accelerator, arriving on schedule next year.
“AI innovation is poised to raise the digital economy’s impact up to as much as one-third of global gross domestic product,” Gelsinger said. “Intel is developing the technologies and solutions that empower customers to seamlessly integrate and effectively run AI in all their applications — in the cloud and, increasingly, locally at the PC and edge, where data is generated and used.”
Gelsinger showcased Intel’s expansive AI footprint, spanning cloud and enterprise servers to networks, volume clients and ubiquitous edge environments. He also reinforced that Intel is on track to deliver five new process technology nodes in four years.
“Intel is on a mission to bring AI everywhere through exceptionally engineered platforms, secure solutions and support for open ecosystems. Our AI portfolio gets even stronger with today’s launch of Intel Core Ultra ushering in the age of the AI PC and AI-accelerated 5th Gen Xeon for the enterprise,” Gelsinger said.
Intel Core Ultra Powers AI PC and New Applications
Intel Core Ultra represents the company’s largest architectural shift in 40 years and launches the AI PC generation with innovation on all fronts: CPU compute, graphics, power, battery life and profound new AI features. The AI PC represents the largest transformation of the PC experience in 20 years, since Intel® Centrino® untethered laptops and let them connect to Wi-Fi from anywhere.
Intel Core Ultra features Intel’s first client on-chip AI accelerator — the neural processing unit, or NPU — to enable a new level of power-efficient AI acceleration with 2.5x better power efficiency than the previous generation. Its world-class GPU and CPU are each also capable of accelerating AI workloads.
Just as important, Intel is partnering with more than 100 software vendors to bring several hundred AI-boosted applications to the PC market — a wide array of highly creative, productive and fun applications that will change the PC experience. For consumer and commercial customers, this means a larger and more extensive set of AI-enhanced applications will run great on Intel Core Ultra, particularly compared with competing platforms. For example, content creators working in Adobe Premiere Pro will enjoy 40% better performance versus the competition.
Intel Core Ultra-based AI PCs are available now from select U.S. retailers for the holiday season. Over the next year, Intel Core Ultra will bring AI to more than 230 designs from laptop and PC makers worldwide. AI PCs are expected to comprise 80% of the PC market by 2028 and will bring new tools to the way we work, learn and create.
New Xeon Brings More Powerful AI to the Data Center, Cloud, Network and Edge
The 5th Gen Intel Xeon processor family, also introduced today, brings a significant leap in performance and efficiency: Compared with the previous generation of Xeon, these processors deliver an average 21% gain in general compute performance and 36% higher average performance per watt across a range of customer workloads. Customers following a typical five-year refresh cycle and upgrading from even older generations can reduce their TCO by up to 77%.
Xeon is the only mainstream data center processor with built-in AI acceleration, and the new 5th Gen Xeon delivers up to 42% higher inference and fine-tuning performance on models as large as 20 billion parameters. It’s also the only CPU with a consistent and ever-improving set of MLPerf training and inference benchmark results.
Xeon’s built-in AI accelerators, together with optimized software and enhanced telemetry capabilities, enable more manageable and efficient deployments of demanding network and edge workloads for communication service providers, content delivery networks and broad vertical markets, including retail, healthcare and manufacturing.
During today’s event, IBM announced that 5th Gen Intel Xeon processors achieved up to 2.7x better query throughput on its watsonx.data platform compared with previous-generation Xeon processors during testing. Google Cloud, which will deploy 5th Gen Xeon next year, noted that Palo Alto Networks experienced a 2x performance boost in its threat detection deep learning models by using built-in acceleration in 4th Gen Xeon through Google Cloud. And indie game studio Gallium Studios turned to Numenta’s AI platform running on Xeon processors to improve inference performance by 6.5x over a GPU-based cloud instance, reducing cost and latency in its AI-based game, Proxi.
This kind of performance unlocks new possibilities for advanced AI – not only in the data center and cloud, but across the world’s networks and edge applications.
AI Acceleration and Solutions Everywhere Developers Need It
Both Intel Core Ultra and 5th Gen Xeon will find their way into places you might not expect. Imagine a restaurant that guides your menu choices based on your budget and dietary needs; a manufacturing floor that catches quality and safety issues at the source; an ultrasound that sees what human eyes might miss; a power grid that manages electricity with careful precision.
These edge computing use cases represent the fastest-growing segment of computing — projected to surge to a $445 billion global market by the end of the decade — within which AI is the fastest-growing workload. In that market, edge and client devices are driving 1.4x more demand for inference than the data center.
In many cases, customers will employ a mix of AI solutions. Take Zoom, which runs AI workloads on Intel Core-based client systems and Intel Xeon-based cloud solutions within its all-in-one communications and collaboration platform to deliver the best user experience and cost. Zoom uses AI to suppress the neighbor’s barking dog, blur your cluttered home office, and generate meeting summaries and follow-up emails.
To make AI hardware technologies as accessible and easy to use as possible, Intel builds optimizations into the AI frameworks developers use (like PyTorch and TensorFlow) and offers foundational libraries (through oneAPI) to make software portable and highly performant across different types of hardware.
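For illustration only, here is a minimal sketch of what that framework-level optimization can look like to a developer, assuming the Intel® Extension for PyTorch (the `intel_extension_for_pytorch` package) is installed; the toy model and tensor shapes are placeholders, not an Intel-provided example:

```python
import torch
import intel_extension_for_pytorch as ipex  # pip install intel-extension-for-pytorch

# Toy stand-in model; any nn.Module placed in eval mode is handled the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

# Apply Intel's operator and graph optimizations and use bfloat16 so inference
# can map onto the AI acceleration built into recent Intel CPUs.
model = ipex.optimize(model, dtype=torch.bfloat16)

x = torch.randn(32, 128)  # dummy batch of inputs
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    logits = model(x)

print(logits.shape)  # torch.Size([32, 10])
```

Aside from the single `ipex.optimize` call, this is standard PyTorch code, which is the portability point made above: the model definition and inference loop do not change when the underlying hardware does.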
Advanced developer tools, including Intel’s oneAPI and OpenVINO toolkit, help developers harness hardware acceleration for AI workloads and solutions and quickly build, optimize and deploy AI models across a wide variety of inference targets.
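As a minimal sketch of that deployment flow with the OpenVINO™ runtime (the `openvino` Python package, recent releases), the model path, device choice and dummy input below are placeholders rather than part of the announcement:

```python
import numpy as np
import openvino as ov  # pip install openvino

core = ov.Core()

# Placeholder path: an OpenVINO IR model produced by the model conversion tools.
model = core.read_model("model.xml")

# Choose an inference target such as "CPU", "GPU" or "NPU", or let "AUTO"
# pick an available device on the system.
compiled = core.compile_model(model, device_name="AUTO")

# Run inference on dummy data; assumes a single input with a static shape.
input_shape = compiled.input(0).shape
dummy_input = np.random.rand(*input_shape).astype(np.float32)
result = compiled(dummy_input)[compiled.output(0)]

print(result.shape)
```

The same script can target the CPU, integrated GPU or NPU by changing only the `device_name` string, which is the “build once, deploy across inference targets” idea described above.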
Sneak Peek: Intel Gaudi3 AI Accelerator
Wrapping up the event, Gelsinger provided an update on Intel Gaudi3, coming next year. He showed for the first time the next-generation AI accelerator for deep learning and large-scale generative AI models. Intel has seen a rapid expansion of its Gaudi pipeline due to growing and proven performance advantages combined with highly competitive TCO and pricing. With increasing demand for generative AI solutions, Intel expects to capture a larger portion of the accelerator market in 2024 with its suite of AI accelerators led by Gaudi.
SOURCE: BusinessWire