In a research paper published on Monday, Apple disclosed that it used Google's Tensor Processing Units (TPUs) to train two core components of its upcoming AI infrastructure, a departure from industry leader Nvidia's offerings.
The decision is significant because Nvidia dominates the AI processor market: its chips, together with offerings from cloud providers such as Google and Amazon, account for roughly 80% of the market. Notably, Apple's research paper makes no mention of Nvidia hardware, though it does not explicitly state that the company decided against Nvidia's products.
Apple employed two variants of Google's TPUs, TPUv5p and TPUv4, to train its models. For the AI models that will run on iPhones and other devices, Apple used 2,048 TPUv5p chips. For its server AI model, Apple deployed 8,192 TPUv4 processors. These TPUs were organized into large clusters, demonstrating the scalability of Google's cloud infrastructure.
Nvidia has traditionally been the go-to choice for AI training, thanks to its powerful GPUs. Google's TPUs offer an alternative, however, particularly when accessed through the Google Cloud Platform, which makes them readily available for large-scale AI workloads.
Apple’s use of Google’s infrastructure is not just about hardware. The company also leverages Google’s software ecosystem, optimizing performance and efficiency in training its advanced AI models.
Unlike Nvidia, which sells its chips as standalone products, Google offers access to TPUs only through its cloud platform. Customers must therefore build their software within the Google Cloud environment to use the chips.
Apple recently released a portion of its "Apple Intelligence" suite to beta testers, suggesting a wider release is imminent. The development underscores the growing importance of AI in the tech landscape and the evolving strategies companies are adopting to harness the technology.