Tesla is accelerating its AI ambitions

Tesla has unveiled a new $300 million AI supercomputer, Oppenheimer analyst Rick Schafer said, noting that it will use up to 10,000 Nvidia H100 GPUs.

Nvidia dominates the hardware and software stack used for AI computing, and Tesla relies on Nvidia products to develop its self-driving systems.

Tesla is dramatically increasing its computing power to develop fully self-driving technology faster. The company plans to spend more than $2 billion in 2023, and another $2 billion in 2024, on training its fully autonomous driving technology.

Self-driving technology represents significant added value for electric vehicle manufacturers. Musk's vision is to turn every Tesla into a self-driving taxi at the touch of a button, with customers paying an upfront or annual fee to use the technology.

The new system will help Tesla process the data collected from its vehicles and accelerate the development of full self-driving features. The company's ambition is to make its electric vehicles fully autonomous, and it is investing heavily in AI infrastructure to achieve that goal.

Tesla has been promoting fully self-driving capabilities since 2016, but so far it has offered only driver-assistance systems, such as Autopilot, that require a human driver to keep their hands on the wheel.

The company's CEO, Elon Musk, isn't afraid to spend money to achieve the goal of fully autonomous driving.

Last month, Tesla announced that it would invest $1 billion to build its Dojo supercomputer by the end of 2024 to accelerate the development of its self-driving project.

The Dojo supercomputer houses its chips in large 15-kilowatt training tiles, six of which make up a Dojo V1 system. Each tile contains D1 chips designed by Tesla and manufactured by TSMC.

Tesla still uses thousands of GPUs in its infrastructure. In 2021, the automaker deployed a supercomputer with 720 nodes, each equipped with eight of Nvidia's then-flagship A100 accelerators, for a total of 5,760 GPUs.

That A100-based cluster provides up to 1.8 exaflops of FP16 performance for AI applications. Musk has previously said, "If Nvidia gives us enough GPUs, we probably won't need Dojo."
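The 1.8-exaflop figure follows directly from the node count. A quick sanity check, assuming Nvidia's published peak of 312 TFLOPS of dense FP16 per A100 (that per-GPU figure is our assumption, not from the article):

```python
# Back-of-the-envelope check of the A100 cluster's 1.8-exaflop FP16 figure.
nodes = 720
gpus_per_node = 8
fp16_tflops_per_gpu = 312  # assumed A100 dense FP16 peak (Nvidia datasheet)

total_gpus = nodes * gpus_per_node                         # 5,760 GPUs
total_exaflops = total_gpus * fp16_tflops_per_gpu / 1e6    # TFLOPS -> exaflops

print(total_gpus)                 # 5760
print(round(total_exaflops, 2))   # 1.8
```

The product, 5,760 GPUs at 312 TFLOPS each, lands almost exactly on the reported 1.8 exaflops.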

The new supercomputer uses Nvidia's latest-generation H100 GPUs: 1,250 nodes of 8 GPUs each, for 10,000 GPUs in total, delivering roughly 39.5 exaflops of FP8 performance for AI applications.
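The same arithmetic works for the H100 cluster, assuming Nvidia's peak FP8 figure of 3,958 TFLOPS per H100 with sparsity (a per-GPU assumption on our part; the small gap versus the quoted 39.5 exaflops is presumably rounding):

```python
# Rough check of the H100 cluster's ~39.5-exaflop FP8 figure.
nodes = 1250
gpus_per_node = 8
fp8_tflops_per_gpu = 3958  # assumed H100 FP8 peak with sparsity (Nvidia datasheet)

total_gpus = nodes * gpus_per_node                        # 10,000 GPUs
total_exaflops = total_gpus * fp8_tflops_per_gpu / 1e6    # TFLOPS -> exaflops

print(total_gpus)                 # 10000
print(round(total_exaflops, 1))   # 39.6
```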

According to the company, the system supports a storage capacity of more than 200 petabytes. Rather than renting GPUs from a cloud provider such as Microsoft or Google, Tesla houses the entire system in its own facilities.

Tesla may consider expanding its data centers to accommodate the additional capacity. The automaker earlier this month announced the hiring of a data center engineering program manager who will lead the design and overall engineering of Tesla's first data center.


