Elon Has Been Dropping Hints About Real-World AI — Here’s What That Could Mean
For a few years now, I’ve had a question written down that I told myself we should ask Elon if we ever get to interview him (we’re available any time, btw 😉): Where is the robot that builds the robot?
This article will explain that question and our recently renewed interest in this topic. For a quick introduction, the headline question is based on a few very specific puzzle pieces: the first is the methodology Tesla uses to solve self-driving, the second is Elon’s ambition for the machine that builds the machine, and the third is his desire to get humanity to Mars. Our renewed interest comes from Elon’s recent subtle hints about real-world AI in a couple of his tweets.
First, let’s start with Autopilot. As Elon has described it multiple times, a human is just two cameras (the eyes) placed on a gimbal (the neck) with an onboard computer (the brain), and yet we are capable of driving a vehicle with so few accidents that insurance can cover the difference, which makes the complex world of transportation we have today possible. A Tesla vehicle has cameras and multiple other sensors on all sides, so it should theoretically be able to do the same job far better than two cameras on a gimbal. The biggest problem, however, is that AI just hasn’t been able to master an understanding of the real world surrounding it. This is one of the hardest problems in AI, and a solution to it could change the world in many more ways than fully autonomous driving alone. That is what Elon has recently been hinting at.
Tesla’s approach is the key
Rather than training an AI to think like a person or a driver, competitors like Waymo train an AI by first driving it down a road many, many times, a process by which they teach the car how to drive down that road. After that, it is able to do it on its own. It will generally not work on streets it has not been trained on and is, by comparison, less able to deal with unexpected situations. In some ways, the technologies and unique methodologies for training AI that Tesla is developing are what matter more than anything. Most journalists will be compelled to point out the amount of data a neural net was trained on, but in reality, when you have as many vehicles on the road as Tesla has, you don’t save all that driving footage; you simply can’t.
As Andrej Karpathy has pointed out in numerous presentations, the Autopilot team chooses an area to improve upon, like STOP signs, and then tasks all vehicles with sending back clips of what they think might be stop signs. That data is collected for a while, trained upon, and in the end deleted. The team then moves on to the next task. If they see some very good and rare corner cases, they might save those, but that is likely a rather small percentage of what they receive. Using this strategy, step by step, they teach the AI new things. Sort of like an assembly line, they keep addressing the weakest link.
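The campaign loop described above resembles what machine-learning practitioners call active learning or a “data engine.” Here is a minimal sketch of the idea; every function name, metric, and number is a hypothetical illustration, not Tesla’s actual internal system:

```python
import random

# A toy sketch of the "data engine" loop: pick the weakest category,
# collect fleet clips for it, retrain, keep rare corner cases, delete the rest.
# All names and numbers are invented for illustration.

def find_weakest_category(metrics):
    """Pick the detection category with the lowest reliability score."""
    return min(metrics, key=metrics.get)

def fleet_collect_clips(category, n_clips):
    """Stand-in for the fleet uploading clips it thinks match the category."""
    return [f"{category}_clip_{i}" for i in range(n_clips)]

def train_on(clips, metrics, category):
    """Stand-in for retraining: the targeted category's reliability improves."""
    metrics[category] = min(0.999, metrics[category] + 0.05)

def is_rare_corner_case(clip):
    """Stand-in filter: keep roughly 1% of clips as rare corner cases."""
    return random.random() < 0.01

metrics = {"stop_signs": 0.90, "traffic_cones": 0.95, "cut_ins": 0.85}
archive = []

for _ in range(5):  # each iteration = one improvement campaign
    target = find_weakest_category(metrics)
    clips = fleet_collect_clips(target, n_clips=1000)
    train_on(clips, metrics, target)
    archive += [c for c in clips if is_rare_corner_case(c)]
    clips.clear()  # the bulk of the footage is deleted after training
```

The key property is that each campaign automatically targets whatever the current weakest link is, so no human has to decide what footage to keep long-term.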
The most important thing is to reduce the amount of time needed to train these specific aspects and situations the vehicles will encounter. When Dojo eventually comes online, that again will be kicked up a notch, but a supercomputer alone won’t make as much of an impact without an automated improvement process.
To paraphrase the words of Andrej Karpathy and Elon Musk, they are trying to automate as much of the process as possible and that is precisely why the core Autopilot team only has 10–20 people. Coincidentally, when CleanTechnica was on tour at the Tesla factory in Fremont, from about 10 meters away, we saw Elon meeting with that team of approximately 15 people.
The more the process is automated, the faster the learning cycle and the faster the rate at which the AI improves. There are two important constraints on this problem: the number of cars on the road that can collect data and, once the fleet is large enough, processing power. Below are 3 examples of what this does or could look like. Do you collect enough footage of STOP signs that:
- The footage can be processed by the computer within a reasonable amount of time, improving reliability enough that it’s no longer the weakest link, so you can move on to the next task?
Tesla is developing a NN training computer called Dojo to process truly vast amounts of video data. It’s a beast! Please consider joining our AI or computer/chip teams if this sounds interesting.
— Elon Musk (@elonmusk) August 14, 2020
- Collect as much footage as your entire fleet can give you over a longer period of time and process that in a reasonable amount of time with the Dojo supercomputer?
- Collect as much data as you do now but process it much faster than before and move on to the next aspect faster?
- A middle ground between the last two.
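The tradeoff between those options can be framed as a back-of-envelope throughput calculation. Every number below is invented purely for illustration; none come from Tesla:

```python
# Toy model of one improvement campaign: time to collect footage plus
# time to train on it. All figures are made up for illustration.

FLEET_SIZE = 1_000_000       # hypothetical number of cars able to upload clips
CLIPS_PER_CAR_PER_DAY = 0.1  # only a small fraction of cars see the target case daily

def campaign_days(clips, cluster_clips_per_day):
    """Total days for a campaign = collection time + training time."""
    collection_days = clips / (FLEET_SIZE * CLIPS_PER_CAR_PER_DAY)
    training_days = clips / cluster_clips_per_day
    return collection_days + training_days

# Option: modest footage on a modest (hypothetical) training cluster
small = campaign_days(clips=500_000, cluster_clips_per_day=100_000)

# Option: 10x the footage on a Dojo-class cluster that trains 50x faster
large = campaign_days(clips=5_000_000, cluster_clips_per_day=5_000_000)
```

With these toy numbers, the small campaign takes 10 days split evenly between collecting and training, while the large one takes 51 days, almost all of it collection. The point is that a faster supercomputer shifts the bottleneck from compute back to fleet data collection, which matches Elon’s comment below that Dojo helps but is not the constraint.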
Dojo isn’t needed, but will make self-driving better. It isn’t enough to be safer than human drivers, Autopilot ultimately needs to be more than 10 times safer than human drivers.
— Elon Musk (@elonmusk) January 1, 2021
This falls perfectly in line with what Elon has said about Dojo not being a constraint on reaching Level 4 or 5 autonomy, but something that can supercharge the process. The weakest link in the process is actually the human who still has to do something. However, if you just keep expanding the team, there will be no end to it. Automation is the only way to get a rate of improvement fast enough to eventually reach the kind of reliability needed for Level 4 and 5 autonomy.
Elon’s hints about real-world AI
Now we get to the really exciting part. Elon recently published some tweets that imply something I have been wondering about for a very long time and that bring us back to the question of the robot that builds the robot. Here are the tweets:
FSD beta build V8.1 normally drives me around with no interventions. Next version is a big step change beyond that. Tesla is solving a major part of real-world AI. This is not widely known.
— Elon Musk (@elonmusk) March 3, 2021
We’re upgrading all NNs to surround video, using subnets on focal areas (vs equal compute on all uncropped pixels) & many other things, so more time needed to write & validate software. Maybe something next week.
This is evolving into solving a big part of physical world AI.
— Elon Musk (@elonmusk) February 24, 2021
Real-world AI, physical-world AI. What Elon means is the step where we go from object recognition to truly understanding the surrounding world. Ideally, we would like to place more robots in warehouses, factories, and many other places. Those environments are a very different real world from the streets a car drives on. The same can be said about a hypothetical robot maid/butler at home or a robot chef in a kitchen. Those are all real-world AI problems, and automating the AI’s learning process is the only realistic solution if you want to go beyond basic functionality without a team hand-holding the AI every step of the way. In almost all of those applications, collecting a ton of data is exceedingly difficult because you don’t have millions of people volunteering to babysit an AI the way they do with Tesla’s Autopilot.
If someone could fill an empty warehouse with robot chef arm prototypes and kitchens and have them all learn day and night automatically, continually improving and getting a sellable product in a couple of years, investors would throw so much money at them that the developers would drown in it. The less human input is required, the more scalable it becomes.
Yeah, we will open Dojo for training as a web service once we work out the bugs
— Elon Musk (@elonmusk) September 20, 2020
Everything Tesla has learned and developed for FSD could one day be used to train all kinds of real-world AI. It’s funny, because some people are still skeptical about whether FSD will ever work and whether a Tesla FSD Robotaxi future is already priced into the current stock valuation. But what if I told you that Tesla could be the key to the real-world AI problem? If Tesla continues to automate this process, the real-world AI market might have to brace for a major disruption. As for the tweet above, it depends on how you interpret it. If our theory is correct, what Tesla could offer might be a lot more than just raw processing power. In some ways, this would work like Google Cloud’s AutoML (automated machine learning), but instead of a small machine-learning service that is good for learning something useful from a bunch of images, it would be a toolkit to train an AI for operation in the real world, whether that be a kitchen or a warehouse delivery system.
It’s not the robot, it’s the AI
We have by now all seen the amazing things that the walking robots of Boston Dynamics are capable of doing, whether it be with four legs or two legs. In fact, SpaceX recently used one to inspect the Starship SN10 crash site. The advancement there is that Boston Dynamics managed to crack the problem of walking like animals, staying balanced, not falling over, and more recently, being able to understand where they can walk and how to overcome obstacles.
However, when it comes to actually understanding their surroundings and interacting with them, these robots are not all that smart and are in desperate need of training. They might also lack the onboard processing power needed, but the AI is the main problem: nobody has truly cracked real-world AI, which in essence is a learning problem. Solving it for robots could make unbelievable new things possible.
The robot that builds the robot
Tesla has been working for years on the machine that builds the machine, which in essence is a highly automated factory, but if we get to a point that a robot can build another robot, then exponential growth becomes possible. Unfortunately, direct robot multiplication in such a fashion is still a very long way off. However, if a group of robots can be programmed to build a factory on the moon or on an asteroid from local raw materials with minimal required input from Earth, a factory that can build more robots, suddenly massive Kardashev 1* scale projects become possible. In some ways, it’s like the von Neumann probe*, but now in the form of worker robots able to turn the raw materials of the solar system into whatever we need.
*The Kardashev scale is a measure of how advanced a civilization is. Right now we are Type 0; a Type 1 civilization is technically capable of using and storing all of the energy available on its planet.
*A von Neumann probe is the idea of a self-replicating spacecraft that, upon reaching a new solar system, uses local raw materials to replicate and then sends the replicas to nearby stars to do the same. Even with conventional slow space travel, such probes could completely explore our galaxy in just half a million years. When applied to construction, self-replication (or even regular replication) enables exponential growth and unbelievable megaprojects.
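To illustrate why self-replication changes the math so dramatically, here is a toy calculation. The seed count and replication rate are invented purely for illustration:

```python
# Toy illustration of exponential growth from self-replicating builders
# versus linear output from a fixed factory. All numbers are invented.

def robots_after(years, seed=100, copies_per_robot_per_year=1):
    """Population if every robot builds `copies_per_robot_per_year` robots each year."""
    population = seed
    for _ in range(years):
        population += population * copies_per_robot_per_year
    return population

# A seed of 100 self-replicating robots, doubling yearly:
exponential = robots_after(14)        # 100 * 2**14 = 1,638,400 robots in 14 years

# A fixed factory shipping 1,000 robots per year would need
# over 1,600 years to build the same number:
linear_years = exponential / 1_000
```

Even with these modest made-up rates, replication reaches a million-robot workforce in under two decades, which is why the von Neumann idea makes Kardashev-scale projects conceivable at all.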
For years I have wondered why Elon doesn’t have a company like Boston Dynamics focused on the robot that builds the robot. Assuming he is working towards this, though, he may have foreseen that the larger issue was not the hardware but the AI that will power it, in which case he found the most profitable and realistic way to fund the development of the methodologies needed to crack real-world AI. Still, I really wonder how much of that was actually part of the plan when he decided to pursue Autopilot back in 2012/2013.
How real-world AI could fit into Elon’s grand plan
Not needed, but will be very useful on Mars due to light speed latency
— Elon Musk (@elonmusk) February 2, 2020
Elon Musk’s ambition, as he has stated a few times now, is to preserve the existence of consciousness. This can be broken down into 3 sub-goals and the various technologies needed to achieve them. The easiest to define is making life multiplanetary, using SpaceX’s rapidly reusable rockets to build a civilization on Mars. The second is to preserve the Earth and humanity on it; that is Tesla’s job, transitioning the world to renewable energy and making sure this planet remains habitable. The last has to do with AI taking over.
OpenAI was originally intended to prevent AI from extinguishing consciousness. Instead of being the savior we need, though, it may have become the very villain it was supposed to save us from. Quite a lot of irony there; what is it that Elon keeps saying? Fate loves irony? The other side of that coin, founded at around the same time, is Neuralink, which Elon started so that we can join AI if we can’t beat it, allowing consciousness to at least remain relevant next to AI minds.
The Boring Company and Starlink can both be attributed to helping Tesla and SpaceX achieve their respective goals. Starlink by providing funding, and The Boring Company by helping avoid traffic on Earth and possibly by making places for us to live underground on Mars, because all the radiation makes the surface hazardous.
Nonetheless, if we can solve the issue of real-world AI, then we can make robots do a lot of the dirty work. Populating Mars without them would take a lot longer. Once again, robots and real-world AI can supercharge progress.