At the TERAFAB launch event in Austin, Elon Musk outlined a silicon roadmap intended to serve the combined needs of Tesla, xAI, and SpaceX over the coming decade. While AI5 and AI6 drew most of the attention as unified brains for millions of Robotaxis and Optimus humanoids, a third, highly specialized chip also appeared on the presentation slide and in the Lunar Mass Driver video: D3.

D3, short for Dojo 3, marks a redirection of Tesla's custom silicon program: rather than competing with NVIDIA in terrestrial supercomputers, the Dojo architecture is now aimed at running the majority of humanity's computation in space.

The Pivot from Earth to Orbit

The Dojo effort began with the D1 chip and the Dojo supercomputer, built as an Earth-based video-training cluster for FSD (Full Self-Driving). When the D2 chip later entered mass production, it delivered substantial performance gains.

As the architectures for FSD and the Optimus robot increasingly converged under the upcoming AI5 and AI6 processors, and as xAI assembled massive, GW-scale clusters using commercially available GPUs, many industry observers concluded that Dojo was effectively finished.

The TERAFAB presentation demonstrated that the program isn’t ending; it has shifted to address a larger constraint: Earth’s limited electricity supply for the accelerating AI wave.

Globally, only about 100 to 200 gigawatts of compute capacity are added per year, restricted by local grids, cooling, and physical real estate. Achieving a terawatt, and eventually a petawatt, requires moving hardware off-world. That is where D3 comes in.
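
The scale gap is easy to see with back-of-the-envelope arithmetic. A minimal sketch, using the article's rough ~100 to 200 GW/year figure (midpoint assumed here) to show how long terawatt- and petawatt-class buildouts would take at the current terrestrial pace:

```python
# Back-of-the-envelope buildout timelines.
# The ~150 GW/year figure is the midpoint of the article's rough
# 100-200 GW/year estimate; all numbers are illustrative.
annual_add_gw = 150          # assumed midpoint of global annual compute additions
terawatt_gw = 1_000          # 1 TW expressed in GW
petawatt_gw = 1_000_000      # 1 PW expressed in GW

years_to_tw = terawatt_gw / annual_add_gw
years_to_pw = petawatt_gw / annual_add_gw

print(f"~{years_to_tw:.1f} years to add 1 TW at the current pace")
print(f"~{years_to_pw:,.0f} years to add 1 PW at the current pace")
```

Even under generous assumptions, a single terawatt takes the better part of a decade of global buildout, and a petawatt is out of reach on the ground entirely, which is the constraint the orbital plan is meant to sidestep.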

Unconstrained by Earthly Limits

D3 differs from AI5 and other ground-based processors because it is designed for space’s harsh but enabling conditions.

On Earth, chip design must devote major effort to power delivery and heat dissipation. In space, waste heat can be radiated directly into the vacuum and there is no dependence on a fragile local power grid, so D3 can be a far higher-power chip that safely operates at temperatures beyond those of typical terrestrial processors.

The architecture is also heavily radiation-hardened. Outside Earth’s magnetic field, intense cosmic radiation can induce bit flips and catastrophic hardware failures in standard silicon. D3 is built from the ground up to operate reliably in that environment.

The Economics of Space Data Centers

Why place D3 in orbit? Musk predicted that within a few years it will actually be cheaper to launch chips into space than to construct a traditional data center on the ground.

The plan couples D3 with SpaceX’s heavy-lift capability. D3 processors will be installed in large, 100-kilowatt orbital server racks—AI Sat Minis—each weighing roughly one ton, with Starship deploying them to orbit.
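
The rack figures invite a quick sanity check on launch cadence. A rough sketch, using the article's 100 kW and ~1 ton per AI Sat Mini; the ~100 t Starship payload-to-orbit figure is an assumed round number, not something stated in the presentation:

```python
# Rough launch arithmetic for the AI Sat Mini concept.
rack_power_kw = 100          # per-rack power, from the article
rack_mass_t = 1.0            # per-rack mass in tons, from the article
starship_payload_t = 100.0   # ASSUMPTION: round-number Starship payload to orbit

racks_per_launch = int(starship_payload_t // rack_mass_t)
power_per_launch_mw = racks_per_launch * rack_power_kw / 1_000
launches_per_tw = 1_000_000 / power_per_launch_mw   # 1 TW = 1,000,000 MW

print(f"{racks_per_launch} racks/launch, ~{power_per_launch_mw:.0f} MW per launch")
print(f"~{launches_per_tw:,.0f} launches for a terawatt of orbital compute")
```

Under these assumptions a terawatt constellation implies on the order of a hundred thousand Starship flights, which is why the economics hinge on launch cost falling as steeply as Musk predicts.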

Once in orbit, operating costs drop sharply. Positioned for continuous, uninterrupted sunlight, these satellites do not require heavy, expensive battery backups to keep systems running. Space-based solar arrays are also significantly cheaper to manufacture than terrestrial panels because they do not need heavy glass or thick aluminum framing to withstand weather and gravity.

Founding a Galactic Civilization

D3 serves as a key link between Tesla’s AI objectives and SpaceX’s interplanetary aims.

While AI6 will steer vehicles and power physical labor through Optimus, D3 will provide the unseen backbone. Large constellations of D3-powered AI Sat Minis operating in orbit will take on the immense compute needed to scale xAI’s intelligence, support a Martian internet, and ultimately help guide expansion into deep space.