
New comments from Elon Musk and supply chain reports out of Korea provide fresh details on the production plans and context for Tesla’s next-generation AI5 chip.
According to reports, Samsung’s new foundry in Taylor, Texas, will begin critical equipment testing in March as it prepares to ramp into mass production of AI5 in the second half of 2026.
Tesla has said AI5 will debut in late 2026 in a small number of units, with high-volume manufacturing targeted for 2027.
Musk also indicated that AI5 is central to pushing Full Self-Driving forward.
Solving AI5 was Existential
While the current AI4 computer is producing strong results with FSD v14, Musk acknowledged that the company's future effectively depends on the success of AI4's successor chip.
Solving AI5 was existential to Tesla, which is why I had to focus both the teams on that chip and I've personally spent every Saturday for several months working on it.
This will be a very capable chip. Roughly Hopper class as single SoC and Blackwell as dual, but it costs…
— Elon Musk (@elonmusk) January 19, 2026
The urgency likely stems from the compute density needed for true unsupervised autonomy and scaling Optimus, Tesla’s humanoid robot. Although Musk and Tesla’s leadership remain committed to bringing FSD Unsupervised to existing AI4 hardware, the additional processing headroom from AI5 is expected to be highly valuable.
Tesla is also proceeding with AI4 in the near term: the company plans to build production Cybercab vehicles this year equipped with AI4 chips rather than AI5, and it’s unclear whether those vehicles will be upgradeable to AI5.
Comparable to NVIDIA’s $30K Chip
For the first time, Musk quantified AI5’s capability against a widely used benchmark: NVIDIA’s H100 (Hopper) data center GPU.
A single-SoC AI5 is expected to deliver performance roughly equivalent to one NVIDIA H100, while a dual AI5 configuration would be comparable to an NVIDIA B100/B200 Blackwell.
For context, the NVIDIA H100 is a 700-watt, roughly $30,000 server processor intended for chilled data centers. Tesla says it can achieve similar inference performance for its FSD workload in a module that fits behind the glovebox, runs from a vehicle’s low-voltage system, and costs far less.
If borne out, this would mean future Tesla Cybercabs and consumer vehicles carry a supercomputer-class capability rivaling many of today’s powerful AI server nodes.
Made in Texas
With the AI5 architecture finalized and heading to fabrication, Tesla's silicon team is already looking ahead, signaling a "return" to Dojo, including Dojo 3 (AI6).
AI5 is intended to power vehicles (and will likely be the last major in-vehicle chip), while Dojo underpins the server side and training infrastructure. The implication: the runtime hardware is nearly ready, and focus is shifting back to the systems that train those neural networks.