Review of AI Chips from Nvidia, AMD, Google, and Tesla
This article reviews the AI chips developed by major players in the technology industry: Nvidia, AMD, Google, and Tesla. With advances in its Dojo 3 chip, Tesla could plausibly surpass AMD in both AI chip performance and production volume.
The top line of Tesla's Abundance slide highlights AI compute, alongside its initiatives for Optimus, Robotaxi, and Full Self-Driving (FSD). This suggests that progress on the Dojo 2 and Dojo 3 training chips is crucial for refining FSD and for training the Optimus robot. Tesla is also rumored to be developing AI5 and AI6 inference chips for use in Optimus and Robotaxi.
In 2024, AMD is estimated to have shipped roughly 300,000 to 400,000 Instinct MI300 chips, generating approximately $5 billion in revenue.
The average selling price (ASP) for these chips can be calculated as follows:
$5 billion ÷ 300,000 units = ~$16,667 per chip
$5 billion ÷ 400,000 units = ~$12,500 per chip
Looking ahead, AMD expects to boost sales to approximately 500,000 AI chips in 2025, which could generate around $7.5 billion in revenue.
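A short Python sketch makes the implied ASP explicit for both years; it is illustrative only and simply reuses the shipment and revenue estimates above.

# Implied average selling price (ASP) from estimated revenue and unit volume.
def implied_asp(revenue_usd, units):
    return revenue_usd / units

# 2024: ~$5B on 300,000-400,000 MI300 units
asp_2024_low  = implied_asp(5e9, 400_000)   # ~$12,500 per chip
asp_2024_high = implied_asp(5e9, 300_000)   # ~$16,667 per chip

# 2025: ~$7.5B on ~500,000 units (AMD's projection)
asp_2025 = implied_asp(7.5e9, 500_000)      # ~$15,000 per chip

print(f"2024 ASP range: ${asp_2024_low:,.0f} - ${asp_2024_high:,.0f}")
print(f"2025 implied ASP: ${asp_2025:,.0f}")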
Nvidia AI Chips in 2025
Nvidia’s growing data center revenue provides insight into its AI chip sales. Analysts project Nvidia will achieve around $110.36 billion in data center revenue for 2024. With continued growth in the AI sector, Nvidia’s revenue could rise to approximately $120 billion in 2025.
The total number of chips sold will depend on the ASP of Nvidia's offerings. The H100 GPUs are priced between $20,000 and $40,000 each. Assuming an average ASP of $30,000, we can estimate that:
$120 billion ÷ $30,000 = 4 million chips.
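As a rough sketch, the unit estimate is just projected revenue divided by an assumed blended ASP; the $30,000 figure is an assumption, since the real mix spans several products at different prices.

# Estimated Nvidia AI chip units from projected data center revenue.
revenue_2025 = 120e9     # ~$120B projected data center revenue
asp_assumed  = 30_000    # assumed blended ASP (H100s list at ~$20k-$40k)

units = revenue_2025 / asp_assumed
print(f"Estimated units in 2025: {units/1e6:.1f} million chips")  # ~4.0 million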
Google TPUs in 2025
Google's Tensor Processing Units (TPUs) are designed primarily for internal use, so production and installation figures refer to deployment in Google's data centers rather than external sales. In 2024, an estimated 3.45 million AI ASIC cloud accelerators will be shipped globally, with Google holding a dominant 74% market share, equating to about 2.55 million TPUs. If the market grows 20% in 2025, total shipments could rise to 4.14 million units, with Google contributing around 3.06 million TPUs if it maintains that share.
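A minimal sketch of that projection; the 20% growth rate and constant 74% share are assumptions carried over from the figures above.

# Projected Google TPU deployments from total AI ASIC accelerator shipments.
total_2024   = 3.45e6   # estimated AI ASIC cloud accelerators shipped in 2024
google_share = 0.74     # Google's assumed market share
growth_2025  = 0.20     # assumed year-over-year market growth

tpus_2024  = total_2024 * google_share        # ~2.55 million TPUs
total_2025 = total_2024 * (1 + growth_2025)   # ~4.14 million units
tpus_2025  = total_2025 * google_share        # ~3.06 million TPUs

print(f"2024 TPUs: {tpus_2024/1e6:.2f}M, 2025 TPUs: {tpus_2025/1e6:.2f}M")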
Current TPU performance metrics show the TPU v4 offering 275 teraFLOPs (bfloat16), while the TPU v5e delivers 197 teraFLOPs (bfloat16). Looking ahead, the TPU v6, planned for release in 2025, is anticipated to reach roughly 400 teraFLOPs per chip (bfloat16).
Tesla Dojo 2 in 2025
Tesla's Dojo 2 AI training chip is expected to ramp production by late 2025. Tesla has indicated that its Dojo 1 chips, which deliver 367 teraFLOPs each, amount to roughly 5% of the compute of 50,000 to 100,000 Nvidia H100s, implying a deployed base of about 15,000 to 30,000 Dojo 1 chips.
Reports suggest that Tesla invests about $500 million annually in its Dojo supercomputers. Assuming a cost of $10,000 per chip, that budget equates to:
$500 million ÷ $10,000 = 50,000 chips.
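A quick check of that figure, assuming the full $500 million goes to chips at $10,000 each; both are assumptions, since Tesla does not break out this spending.

# Implied Dojo chip volume from Tesla's reported annual Dojo spending.
dojo_budget   = 500e6    # reported annual Dojo investment
cost_per_chip = 10_000   # assumed cost per chip

chips_per_year = dojo_budget / cost_per_chip
print(f"Implied Dojo chips per year: {chips_per_year:,.0f}")  # 50,000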
Dojo 2 is projected to deliver roughly ten times the performance of Dojo 1, which works out to about 3-4 petaFLOPs per chip and could outperform Nvidia's H100.
Looking ahead to 2026, Tesla plans to produce the Dojo 3 chip. If it delivers another tenfold increase in performance, it could reach around 40 petaFLOPs, putting it roughly on par with Nvidia's B300 and making it competitive on cost-performance with Nvidia's Rubin chips.
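Illustrating the scaling argument; the tenfold generational jumps are projections rather than confirmed specifications, using the Dojo 1 figure stated above.

# Projected Dojo per-chip performance under assumed 10x generational gains.
dojo1_tflops = 367                 # Dojo 1, as stated above
dojo2_tflops = dojo1_tflops * 10   # ~3.7 petaFLOPs, in the 3-4 PFLOPs range
dojo3_tflops = dojo2_tflops * 10   # ~37 petaFLOPs, roughly "around 40 petaFLOPs"

print(f"Dojo 2: ~{dojo2_tflops/1000:.1f} PFLOPs, Dojo 3: ~{dojo3_tflops/1000:.0f} PFLOPs")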
As Tesla and another company, most likely xAI, prepare to build massive AI data centers aimed at deploying a million chips by 2026, a successful rollout of the Dojo 3 chip could lift Tesla to second place in the AI chip market, ahead of AMD.
AI, Chips, Technology, Market