
Meta prepares for custom AI chips

Meta Platforms is deepening its push into custom in-house silicon, unveiling four new chip generations to power its AI workloads more efficiently.

The Facebook parent’s MTIA 300, MTIA 400, MTIA 450 and MTIA 500 form part of a plan to reduce its reliance on outside chipmakers while tailoring its silicon to its unique workloads.

In 2023, Meta Platforms detailed its Meta Training and Inference Accelerator (MTIA), a family of custom-built silicon chips.

The MTIA 300 is in production for ranking and recommendation training. The MTIA 400, MTIA 450 and MTIA 500 are designed to handle all workloads, but Meta Platforms stated it would primarily use the chips for generative AI inference in production in the near future and into 2027.

It explained the silicon’s modular design enables the latest chips to plug directly into existing rack infrastructure, speeding deployment.

The company’s MTIA strategy centres on rapid, iterative development, releasing new chips every six months or less by employing modular, reusable designs.

On an earnings call in January 2026, CEO Mark Zuckerberg explained the company would extend its custom silicon efforts to training workloads for ranking and recommendations this year.

Meta Platforms bought massive quantities of chips from Nvidia and AMD this year as it steps up its AI ambitions, but Zuckerberg has long maintained that developing its own silicon delivers cost efficiencies and better performance on its systems than off-the-shelf alternatives.

The social media giant expects capex this year of between $115 billion and $135 billion.

Reaction

Analyst Jack Gold told Mobile World Live Meta Platforms’ latest chips are not much of a surprise and noted hyperscalers including AWS, Google and Microsoft are also developing their own silicon.

“Meta and the others can optimise these custom chips for their specific implementations with optimised connectivity, power management and software stacks”, he explained.

“Designing their own also gets them away from a dependence on proprietary CUDA, which Nvidia owns. And finally, designing their own chips means they don’t have to pay the overly inflated costs for many of the Nvidia chips that are in so much demand. That can save them a lot of money in TCO and operating power”.

Source: Mobile World Live

Image Credit: Meta AI

