Tesla's unified AI for robotaxis and robots
At ICCV, Tesla's AI chief described how the company's camera-only FSD system uses end-to-end neural networks, trained on enormous volumes of fleet data, to drive its vehicles. The company intends to use the same brain for both robotaxis and Optimus robots.
The specifics:
In Tesla's vision-only approach, multi-camera data from millions of cars feeds a single neural network that outputs driving commands directly.
The system also performs real-time 3D scene reconstruction to map the environment while driving, and exposes "auxiliary heads" that reveal its reasoning process.
A learned world simulator generates rare, hazardous situations, such as a pedestrian swerving into traffic, that could otherwise take years to capture naturally on the road.
For robotaxis, Tesla plans to scale the same neural architecture worldwide and transfer it directly to the Optimus humanoid.
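The architecture described above, one shared network with a main driving head plus auxiliary heads, can be sketched in miniature. This is purely illustrative: the camera count, feature sizes, and head outputs are assumptions, and the randomly initialized weights stand in for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8 cameras, 64 features per camera, a 128-dim fused state.
NUM_CAMERAS, FEAT_DIM, FUSED_DIM = 8, 64, 128

# Random weights stand in for a trained shared backbone and its two heads.
W_fuse = rng.normal(scale=0.01, size=(NUM_CAMERAS * FEAT_DIM, FUSED_DIM))
W_drive = rng.normal(scale=0.01, size=(FUSED_DIM, 2))   # main head: [steering, acceleration]
W_depth = rng.normal(scale=0.01, size=(FUSED_DIM, 32))  # auxiliary head: coarse scene depth

def forward(camera_features: np.ndarray):
    """Fuse per-camera features, then decode with a driving head and an auxiliary head."""
    fused = np.tanh(camera_features.reshape(-1) @ W_fuse)  # shared representation
    driving_command = fused @ W_drive   # direct control outputs, end to end
    depth_estimate = fused @ W_depth    # exposes what the network "sees", aiding interpretability
    return driving_command, depth_estimate

# One synthetic frame of per-camera features.
frame = rng.normal(size=(NUM_CAMERAS, FEAT_DIM))
cmd, depth = forward(frame)
print(cmd.shape, depth.shape)  # (2,) (32,)
```

Because both heads read the same fused state, the auxiliary output doubles as a training signal and a window into the representation the driving head relies on.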
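The point of a world simulator is to oversample hazardous events the fleet almost never records. A minimal sketch of that idea, with made-up scenario names and occurrence rates, not Tesla's actual catalogue:

```python
import random

random.seed(42)

# Hypothetical scenarios with assumed real-world frequencies per driven mile.
SCENARIOS = {
    "normal_cruise": 0.96,
    "hard_brake_ahead": 0.03,
    "pedestrian_swerves_into_traffic": 0.0001,  # could take years to see naturally
}

def sample_training_batch(n: int, rare_boost: float = 1000.0) -> list[str]:
    """Sample scenarios with rare events heavily oversampled, mimicking a
    simulator that manufactures edge cases instead of waiting for them."""
    names = list(SCENARIOS)
    weights = [p * (rare_boost if p < 0.01 else 1.0) for p in SCENARIOS.values()]
    return random.choices(names, weights=weights, k=n)

batch = sample_training_batch(10_000)
# The rare hazard appears hundreds of times instead of roughly once.
print(batch.count("pedestrian_swerves_into_traffic"))
```

The design question this raises mirrors the article's closing caveat: the model only learns the edge cases the simulator knows to generate.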
With a single vision-only brain trained on millions of real-world drives, Tesla could ship one model that powers robotaxis today and humanoids tomorrow. But if camera-only perception and simulated edge cases miss safety-critical failure modes, every Optimus that enters a factory inherits those blind spots.