Meta and Broadcom say they are expanding their partnership to build more of Meta’s custom AI chips, called MTIA. Broadcom says the first phase already tops one gigawatt and that the companies plan to keep building across multiple chip generations through 2029.
That may sound like a mouthful, but the simple version is this: Meta wants more chips built specifically for its own AI workloads. Custom chips can run a company's AI tasks faster and more efficiently than general-purpose hardware alone. The scale here matters too. This is not a tiny lab test. It is a long-term plan to support far more AI computing.
Why this matters
AI is expensive. It eats chips, power, cooling, and money at an alarming rate. If Meta can use custom chips to lower the cost of training and running its systems, that could make future AI features across WhatsApp, Instagram, Threads, and other products cheaper to operate.
Think of it like a delivery company building its own trucks instead of renting whatever is left on the lot. It is harder at first, but the long-term math can get better if the trucks fit the job.
What to watch next
The promise is efficiency. The risk is scale. Big custom infrastructure projects can look great on slides and get much messier in the real world. Over the next year, the key question is whether Meta's custom silicon actually delivers better performance for the money.
Bottom line: This is really a cost-and-control story. If Meta can run more AI on chips built for its own needs, it could make future AI features cheaper and easier for the company to scale.