SAE Clarifies Autonomous Driving Levels
SAE International (previously known as the Society of Automotive Engineers) recently released a new version of its J3016 standards, widely known as the levels of driving automation. The goal? To clarify the differences between the levels, and to make it clear that Levels 0 through 2 are not autonomous.
Chart Did Not Change In Latest Release
Before we get into the document itself, let’s take a quick look at how the visual chart has evolved over the years. The latest version of the chart itself hasn’t changed since 2019.
Here’s the current visual chart (download a PDF and/or get more details here):
The chart has changed a lot since 2014, when it was first released, though. Here’s that original chart:
The older versions seem aimed more at engineers and others involved in developing autonomous driving systems, but as journalists, automakers, and others spread the terminology, SAE made the chart friendlier and more understandable to the average driver. The terminology is simpler, and the later iterations are very, very clear about who’s in charge of safety (driver or car).
They make it very clear that Levels 0-2 are only for “driver support,” while Levels 3-5 are “automated driving.” Who the driver is, what a driver must do (if anything), and who maintains safety are all much easier to see in the newest versions of the chart.
What Was Recently Updated
Within the J3016 document itself, they did make some important changes.
First, they clarified the difference between Levels 3 and 4. From what I read in the full document, they made it clearer that it’s largely a difference in fallback. A Level 3 system has to hand control back to a driver if it cannot continue, whether due to a malfunction or to leaving the conditions in which it’s supposed to work. Unlike Level 2, though, Level 3 handoffs aren’t immediate, giving an inattentive driver some time to get ready for takeover.
Level 4, on the other hand, doesn’t ever fall back to a driver, warning or no warning. It has to be able to take care of itself within its intended operating conditions and do something other than ask for a driver if it encounters a problem.
They added two new defined terms: “remote assistance” and “remote driving.”
They define “remote assistance” as guidance given to an AV in situations where it is unsure what to do on its own. A person providing remote assistance doesn’t take over the driving task, but can tell the vehicle’s systems where to go if the situation has left the system unable to make that kind of decision on its own. The overall goal here is to keep the trip going.
“Remote driving,” on the other hand, is when a human takes over the whole driving task. This means a human is ready to take over as the fallback, but isn’t in the vehicle itself. SAE makes it clear that remote driving is not a real form of automated driving.
Another thing they’re doing is calling Level 1 and 2 systems “driver support” systems. This helps avoid confusion and keeps people from thinking that these systems are automated.
Finally, they added a definition for “failure mitigation strategy.” It’s pretty broad. It could include a vehicle stopping in place with its hazard lights on if a driver doesn’t respond to a takeover request, or if an automated L4 or L5 system malfunctions and can’t continue driving. It can also include safer mitigation strategies, like pulling over using a simpler backup system of some kind.
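To make the fallback and failure-mitigation distinctions concrete, here’s a toy sketch in Python. To be clear, J3016 defines terminology, not code; the function name and return strings below are entirely hypothetical, invented only to illustrate who handles a failure at each level as described above.

```python
def fallback_responsibility(level: int) -> str:
    """Toy model of who handles a failure at each SAE level.

    Hypothetical helper for illustration only; the J3016 standard
    defines terminology, not any API like this.
    """
    if not 0 <= level <= 5:
        raise ValueError("SAE levels run from 0 to 5")
    if level <= 2:
        # Driver support: the driver is always responsible and
        # must intervene immediately.
        return "driver, immediately"
    if level == 3:
        # Level 3 warns the driver and gives an inattentive
        # driver some time to take over.
        return "driver, after a warning period"
    # Levels 4 and 5 never hand back to a driver; the system
    # executes its own failure mitigation strategy (e.g., stop
    # with hazard lights, or pull over via a backup system).
    return "system's own failure mitigation strategy"
```

The key point the sketch captures is that the branch for Levels 4 and 5 never returns control to a human in the driver’s seat.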
Terms Still Avoided
While not new, the document still has terms it avoids using because they are imprecise, potentially misleading, or even deceptive in public use.
They avoid the term “autonomous” because it has been used in different ways in popular language, state laws, and in the industry. The lack of a clear definition makes it tough for them to use it, because they wouldn’t be giving much precision to the reader. They also avoid using “autonomous vehicle” because they would prefer to refer to the task being automated instead of the vehicle itself. This allows for clarity of whether a vehicle is merely equipped with such features, or whether such features are active.
They also avoid “self-driving” because it is commonly used to refer to systems with different capabilities, not just Levels 3-5. Once again, they want to be precise and avoid confusion in the document.
It’s important to keep in mind that the J3016 document isn’t really meant for public consumption. By that I don’t mean that the document is hidden or contains secrets that they’re keeping from the rest of us, but it is meant for people working in the industry. Obviously, anyone can read it (they don’t even charge for it), but they don’t tailor anything but the graphic for the broader public.
While they don’t specifically mention things like Autopilot or Full Self Driving, it seems pretty clear that they are following the rest of the industry in avoiding “autonowashing.” They’re making it increasingly clear that “driver support features” are there to support the driver, not replace them. They don’t want the industry or the public to confuse support systems with automated systems.
This is important because such confusion could lead to increased abuse of support features.
As I’ve argued in the past, support features with safety lockouts to keep the driver in the loop show that people abusing them must know they’re doing something wrong. The seatbelt sensor, Tesla’s “wheel nag,” etc., are all there to keep people from doing stupid things like climbing out of the driver’s seat, and they require a conscious decision to bypass.
On the other hand, some people take this to mean that the systems are ready to go, and that it’s only government or overly cautious corporations that keep people from fully enjoying them. They don’t realize that while Autopilot works pretty damned well, it isn’t perfect and will screw things up from time to time. It also isn’t meant to do everything the way FSD eventually will.
Avoiding confusion and being clear about these systems is important, and it’s up to all of us to be clear about them.