The shuttle wasn’t at fault, but it might have been able to avert the accident.
Instead, within hours, the project was greeted with the worst possible headline: “Self-driving shuttle bus in crash on first day.”
The headline, along with many others like it, was technically accurate but somewhat misleading. Officials say the other vehicle—a semi truck that bumped into the shuttle while backing up—was at fault. And it was such a low-speed collision that no one was injured. The accident “merely dented the plastic panels on the front of the shuttle,” according to Jeff Zurschmeide of Digital Trends.
So the implication of a lot of headlines—that the shuttle malfunctioned and caused a serious crash—is totally wrong.
What’s not clear is whether the shuttle could have done more to prevent the collision.
Zurschmeide happened to be on the shuttle when the crash occurred, and he explained just what happened:
On the other side, the shuttle did exactly what it was programmed to do, and that’s a critical point. The self-driving program didn’t account for the vehicle in front unexpectedly backing up. We had about 20 feet of empty street behind us (I looked) and most human drivers would have thrown the car into reverse and used some of that space to get away from the truck. Or at least leaned on the horn and made our presence harder to miss. The shuttle didn’t have those responses in its program.
But Keolis, the company operating the self-driving shuttle, told reporter Pete Bigelow the opposite:
2. Shuttle is capable of reversing directions. It didn’t in this case, because cars were stopped behind it. It was essentially boxed in, says Chris Barker, VP of new mobility at Keolis.
— Pete Bigelow (@PeterCBigelow) November 9, 2017
The police report, due out next week, might help to clear things up.
Either way, a clear lesson here is that self-driving vehicles need to be able to do more than just avoid causing accidents. They also need to be programmed to take the kind of common-sense steps human drivers would take to prevent accidents even when they’re technically the fault of another driver.
A 2015 study looking at the first 1.2 million miles of self-driving car data found that the vehicles (which at the time mostly meant Google cars, though Audi and Delphi also had some cars on the road) actually seemed to get into accidents more often than human-driven cars. The accidents were mostly minor fender-benders, and all of them were the fault of the other vehicle. Even a dramatic rollover of one of Uber’s self-driving research vehicles earlier this year was attributed to driver error in the other car.
One possible explanation for those 2015 findings is that human drivers routinely fail to report minor accidents of this type. But another possibility is that these early self-driving cars drive in ways that human drivers can find confusing or unexpected—like slowing down at a stop light earlier than a human driver would. If self-driving cars are legally at fault in no accidents but still drive in a way that leads to other drivers hitting them at an elevated rate, that becomes a problem in its own right.
The good news is that these kinds of accidents rarely, if ever, lead to human injuries. So while this is certainly something that Navya and other driverless vehicle companies should work on, it’s not something to freak out about. It shouldn’t be difficult to train a driverless car to notice an impending accident and take action—like backing up or moving forward—to prevent it. Someone at each driverless car company just needs to do the work.
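The “common-sense steps” Zurschmeide describes (reverse into the empty space if there is any, otherwise lean on the horn) amount to a fairly simple decision rule. Here is a minimal sketch of that rule in Python; the `Situation` fields, function name, and thresholds are invented for illustration and are not drawn from Navya’s or Keolis’s actual software:

```python
from dataclasses import dataclass

@dataclass
class Situation:
    front_gap_m: float        # distance to the vehicle ahead, in meters
    front_closing_mps: float  # how fast that gap is shrinking (positive = approaching)
    rear_gap_m: float         # clear street behind the shuttle, in meters

def evasive_action(s: Situation, min_safe_gap_m: float = 3.0) -> str:
    """Choose a response to a vehicle unexpectedly closing in from the front.

    Illustrative only: real systems would fuse sensor data, check for
    pedestrians and cross traffic, and obey many more constraints.
    """
    threat = s.front_closing_mps > 0 and s.front_gap_m < min_safe_gap_m
    if not threat:
        return "hold"          # no imminent contact; stay put
    if s.rear_gap_m > min_safe_gap_m:
        return "reverse"       # use the empty street behind us
    return "sound_horn"        # boxed in: at least make our presence known
```

In the Las Vegas incident, the shuttle reportedly had roughly 20 feet of clear street behind it, which under a rule like this would have triggered the “reverse” branch rather than simply braking and waiting.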