As the United States moves toward automated vehicles, many cars on the road are more automated than before, even if not fully self-driving. Both Tesla and Volvo, for example, sell models with an automated feature, called Autopilot and Pilot Assist, respectively, that lets drivers set the car to drive itself, although the driver can override what the car is doing.
It’s one step beyond existing cruise controls, you might say.
Except that they may not be fully safe. A Tesla on Autopilot recently hit a fire truck stopped on a busy southern California freeway. The fire truck had been called to the scene of an accident and was stationary; the Tesla was traveling at 65 miles per hour on impact. Fortunately, no one was behind the fire truck, or there could have been injuries, if not fatalities. The driver said the vehicle was on Autopilot.
Designed to Sense Moving Vehicles, not Stationary Ones
Does this mean that autonomy in a vehicle is not a safe thing? Well, not really.
As Wired points out, the driver's manuals of both Teslas and Volvos with these features warn that the driver must intervene for stationary objects and that the vehicles cannot detect all obstacles. In this case, it was the driver who had not followed best safety practices, not the car.
Wired goes on to explain that autonomous vehicle engineers don’t design the cars to react to stationary objects by braking. If they did, the cars would be squealing to a stop for the many non-moving objects on roads, like speed limit signs and city limit signs, to name only two.
Instead, engineers rely on radar, which is well suited to tracking moving objects. The cars can then slow down or speed up to match surrounding traffic while ignoring stationary returns.
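The filtering described above can be sketched in a few lines. This is a hypothetical illustration, not any manufacturer's actual code: the `RadarReturn` class, the function name, and the 1 m/s threshold are all assumptions made for the example. The key idea is that a stopped object approaches at exactly the car's own speed, so its ground speed works out to zero and it gets discarded as roadside clutter.

```python
# Hypothetical sketch of why a radar-based cruise system can ignore a
# stopped fire truck. Names and threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RadarReturn:
    distance_m: float          # range to the object, in meters
    relative_speed_mps: float  # object's speed relative to our car

def moving_targets(returns, ego_speed_mps, threshold_mps=1.0):
    """Keep only returns that appear to be moving relative to the road.

    A stationary object closes at exactly -ego_speed, so its ground
    speed (relative + ego) is near zero and it is discarded, just
    like a speed limit sign would be.
    """
    return [r for r in returns
            if abs(r.relative_speed_mps + ego_speed_mps) > threshold_mps]

# At 65 mph (~29 m/s), a stopped truck and a road sign look identical:
ego = 29.0
returns = [
    RadarReturn(80.0, -29.0),  # stopped fire truck: ground speed 0
    RadarReturn(40.0, -29.0),  # speed limit sign: ground speed 0
    RadarReturn(60.0, -5.0),   # slower car ahead, doing about 24 m/s
]
tracked = moving_targets(returns, ego)
print(len(tracked))  # 1 -- only the slower car is tracked
```

Under this simplified filter, the stopped fire truck is indistinguishable from a road sign, which is exactly the failure mode the Wired piece describes.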
Because of this, the potential safety benefits of autonomous vehicles, like eliminating accidents caused by speeding or drunk driving, are still likely to be realized once fully self-driving vehicles become a reality.
Interim Stages Between Human Drivers and Self-Driving Vehicles
But the accident does show that the popular perception of a clean split between human-driven cars and self-driving cars, with no steps in between, is not how the evolution is actually happening.
In fact, the National Highway Traffic Safety Administration and the Society of Automotive Engineers have defined six levels of driving automation, according to USA Today.
At Level 0, a vehicle is driven entirely by a human. At Level 5, the vehicle drives itself.
The Tesla that crashed was likely operating at autonomy Level 2 or below. At these levels, the driver can always intervene, should intervene in many instances, and is fully liable in case of an accident.
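The six levels can be summarized in a small lookup table. The level names below follow the SAE's published taxonomy (J3016); the "driver must monitor" flag is a simplification added for illustration, not legal guidance, and the helper function is a made-up name for this sketch.

```python
# SAE driving-automation levels, paired with a simplified flag for
# whether a human driver is still responsible for monitoring the road.
# The names are standard; the monitoring flag is illustrative only.

SAE_LEVELS = {
    0: ("No Automation", True),
    1: ("Driver Assistance", True),
    2: ("Partial Automation", True),       # e.g. Autopilot, Pilot Assist
    3: ("Conditional Automation", False),  # driver takes over on request
    4: ("High Automation", False),
    5: ("Full Automation", False),
}

def driver_must_monitor(level):
    """Return True if a human is expected to watch the road at this level."""
    return SAE_LEVELS[level][1]

# A Level 2 car still expects the human to watch for stopped vehicles:
print(driver_must_monitor(2))  # True
print(driver_must_monitor(5))  # False
```

The break between Levels 2 and 3 is the one that matters in the fire truck crash: at Level 2, spotting a stationary obstacle is still the human's job.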
But at some point, regulators are likely to approve cars at Levels 3, 4, and 5. The higher up the scale a vehicle sits, the more liability may rest with the car. It is unclear whether self-driving cars will always build in a human override, especially since many safety forecasts assume the cars will override poor human judgment, which remains a leading cause of traffic accidents.
Perhaps the solution is to publicize that a more autonomous car is not the same as a fully autonomous car, and that today's systems may not brake for stationary objects.