Meta Platforms Inc., the parent company of Facebook, Instagram, and WhatsApp, is investing heavily in artificial intelligence (AI) infrastructure. The company has projected total 2025 expenses of $114 billion to $119 billion, including up to $65 billion in capital expenditure driven largely by AI-related projects such as custom chip development.

As part of its Meta Training and Inference Accelerator (MTIA) series, Meta has developed a new AI training chip in partnership with Taiwan Semiconductor Manufacturing Company (TSMC). As a dedicated accelerator built solely for AI tasks, the chip is expected to be more power-efficient than the general-purpose GPUs typically used for AI workloads. Recent progress includes the completion of the chip’s “tape-out” (the stage at which a finalized design is sent to the fabrication plant) and the start of a limited test deployment.

Meta’s custom silicon effort has faced setbacks, including the cancellation of earlier chip designs. However, the company successfully introduced its first MTIA inference chip last year, which now powers the recommendation systems behind Facebook and Instagram. Meta aims to begin using its own chips for AI training by 2026, with plans to eventually extend them to generative AI products such as chatbots.

Despite these efforts, Meta remains one of Nvidia’s largest customers, relying heavily on its GPUs for both training and inference across its platforms. At the same time, questions have arisen about whether ever-larger language models will keep delivering gains commensurate with their cost, prompting researchers to explore alternative approaches to improving computational efficiency.