
Meta Platforms has agreed a multiyear deal that will see it deploy “millions” of Nvidia processors to power its next wave of AI infrastructure, further deepening ties between the two companies.
The agreement spans Meta’s on-premises and cloud environments, targeting hyperscale data centres designed to handle both AI training and inference workloads.
Meta CEO Mark Zuckerberg said the company plans “to build leading-edge clusters” using Nvidia’s next-generation platforms “to deliver personal superintelligence to everyone in the world”.
The deal covers Nvidia’s current Blackwell generation of AI accelerators and its upcoming Vera Rubin AI platform. Meta will also increase deployment of Nvidia’s Arm-based Grace CPUs across production data centres, aiming to improve performance per watt.
The move marks the first large-scale rollout focused solely on the chip maker’s Grace platform, with further expansion expected through next-generation Vera CPUs from 2027.
In addition, Meta will integrate Nvidia’s advanced networking equipment across its infrastructure, supporting higher efficiency and throughput for large AI workloads. The social media giant will also adopt Nvidia’s Confidential Computing technology, designed to secure data and applications, to enable AI features on its WhatsApp platform while maintaining data confidentiality.
Billions
According to Bloomberg, Meta already accounts for about 9 per cent of Nvidia’s revenue and is the chip maker’s second-largest customer, spending roughly $19 billion on its products in the last fiscal year.
The deepened partnership comes as Meta races to expand its AI investment, building out large-scale data centres and exploring in-house chip development.
It has pledged to invest as much as $135 billion in AI infrastructure this year. The Financial Times reported the deal with Nvidia is worth billions of dollars.
Source: Mobile World Live
Image Credit: Nvidia