Intel and Google Partner to Advance AI, Improve Performance and Energy Efficiency

Source: Intel

On the 10th, Intel and Google announced a multi-year collaboration to advance next-generation AI and cloud infrastructure. The companies stated that the partnership will strengthen the critical roles of CPUs and custom Infrastructure Processing Units (IPUs) in deploying large-scale, heterogeneous AI systems.

Intel stated, "As AI adoption accelerates, infrastructure is becoming increasingly complex, leading to a greater reliance on CPUs for orchestration, data processing, and system-level performance." Through this collaboration, Intel and Google will work together across multiple generations of Intel Xeon processors to improve performance, energy efficiency, and total cost of ownership throughout Google's global infrastructure.

Google Cloud has consistently integrated Intel Xeon processors across its workload-optimized instances, including C4 and N4 instances powered by the latest Intel Xeon 6 processors. These platforms support a wide range of workloads, from large-scale AI training orchestration to latency-sensitive inference and general-purpose computing.

At the same time, Intel and Google are expanding their joint development of ASIC-based IPUs. These programmable accelerators are designed to offload networking, storage, and security functions from the host CPU, thereby increasing utilization, improving efficiency, and enabling more predictable performance across hyperscale AI environments.

IPUs are a core component of modern data center architecture, taking over infrastructure tasks previously managed by CPUs and freeing host compute capacity. This allows cloud providers to scale more effectively without increasing system complexity. Together, Xeon CPUs and IPUs form a tightly integrated platform that balances general-purpose computing with purpose-built infrastructure acceleration, enabling more efficient, flexible, and scalable AI systems.

Lip-Bu Tan, CEO of Intel, stated, "AI is reshaping how infrastructure is built and scaled. Scaling AI requires more than just accelerators; a balanced system is essential. CPUs and IPUs play a critical role in delivering the performance, efficiency, and flexibility that modern AI workloads demand."

Amin Vahdat, Vice President and General Manager of Systems and Services Infrastructure at Google Cloud, said, "CPU and infrastructure acceleration remain core elements of AI systems, from training orchestration to inference and deployment. Intel has been a trusted partner for nearly 20 years, and Intel’s Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency requirements of our workloads."

The companies stated that this expanded collaboration reflects their shared commitment to advancing the open and scalable infrastructure required for the AI era. By combining general-purpose computing with purpose-built acceleration, Intel and Google are pursuing a balanced approach to AI system design that increases utilization, reduces complexity, and enables efficient scaling.

Through this partnership, the two companies plan to strengthen the foundation for next-generation AI-based cloud services and support ongoing innovation for businesses, developers, and users worldwide.

AI-translated from Korean by NC AI for timely global news. The Korean original prevails, and foreign quotes may vary from exact original wording.
