
Meta Platforms recently expanded its existing partnership with Broadcom to jointly develop multiple generations of custom AI chips. The chips are designed to power AI across the social media giant’s wide portfolio of apps and services.
Meta explained it will work with Broadcom on chip design, advanced packaging and networking to build out the “massive computing foundation” needed to deliver real-time AI experiences to billions of people.
It has committed to an initial deployment of 1 gigawatt of Broadcom-based Meta Training and Inference Accelerators (MTIA), with plans to deploy multiple gigawatts of the chips in the future.
The work will build on Broadcom’s XPU platform for creating custom AI accelerators, while also using the chip company’s Ethernet technologies to enable high-bandwidth networking across Meta’s expanding AI compute clusters.
Meta said it has pursued a strategy of developing multiple generations of MTIA chips, matching workloads to the right processor “to achieve the best mix of performance and total cost of ownership”.
It recently announced plans to build four new generations of MTIA chips within the next two years, supporting ranking, recommendation and generative AI workloads.
Meta CEO Mark Zuckerberg claimed the partnership will give the company “greater performance and efficiency for everything we’re building”.
Source: Mobile World Live