State Of Autonomy: March Recap
Every month, I recap the news articles I’ve consumed around autonomous vehicles, calling out the highlights and keeping track of market projections. This is also your chance, dear readers, to nominate a topic for discussion in the following month.
Well now, I assume you’re all here for the train wreck. You can find plenty of garbage rhetoric out there on your own, so I’ll take a few paragraphs to cover the situation with as little spin as possible.
Order of events:
- An Arizona pedestrian attempted to walk her bike across a multi-lane road outside of a crosswalk on a clear night.
- Uber’s autonomous suite failed to classify the pedestrian as a legitimate obstacle, and so the system performed no evasive maneuver.
- Uber’s safety driver was passively operating the vehicle (it was running in autonomous mode) while partially distracted from the road, and failed to recognize the pedestrian in time to take control and affect the outcome.
- The vehicle collided with the pedestrian, who later died as a direct result.
The technical question we all have remains unanswered: how is it possible that Uber’s software didn’t pick this up? This should have been a textbook case of autonomous technology outperforming human drivers, and the failure of Uber’s suite is a tragic embarrassment to the tech’s reputation. As a result, we saw a few companies distance themselves from Uber by stating their tech either wasn’t part of the problem (Velodyne, Aptiv) or would have handled the problem had it been involved (Waymo, Mobileye).
On the heels of the collision, Uber immediately suspended testing, Toyota declared it would suspend testing, NVIDIA claimed several days later that it had suspended testing, and strangest of all, the city of Boston demanded that nuTonomy suspend testing there. Remember: none of these other companies use Uber’s driving software, so pulling your own public testing because of an unrelated company’s tech issues is like… well, I said I wouldn’t spin anything here, so create your own analogy. To be fair, it’s possible — though unlikely — that the investigation could point to some industry-wide shared piece of hardware, like Velodyne’s LIDAR unit, as the point of failure. Of course, none of these other parties could know that yet, so the point stands.
There is much speculation over what legal outcomes may unfold, and that too is unknown — although Uber has already settled with the victim’s family. But, barring any groundbreaking judgment, what actually happened here from a legal perspective was that an inattentive driver, working for a corporation, struck a pedestrian outside of a crosswalk. We have well-established laws and processes for reacting to all of that. It’s not a very sexy story when you write it that way, of course… but humans struggle with facts. Stories make a lot more sense.
Like many (most?) man-made catastrophes, a convergence of poor decisions, both systemic and opportunistic, led to a disastrous outcome. The failure of the self-driving suite to process or act successfully is notable, but mostly as fuel for what ought to be much broader reflection on how far cars have come, and how far our sense of personal responsibility has not.
One can certainly sympathize with this safety driver, who was exposed to a highly automated vehicle and failed to maintain total vigilance as a result; it’s even possible part of her job description involved looking away from the road with regularity. That said, the divide between driving software and human drivers implied by the media is imagined, not real. This vehicle, likely less advanced than a Waymo self-driving vehicle, was only somewhat more advanced than a Tesla, which in turn is more advanced than a Lexus, which is more advanced than a Ford, which is more advanced than a Ford from 5 years ago, which is more advanced than a Ford from 15 years ago… if you’re looking for a line to draw, you won’t find a border. You’ll find a timeline. Ignoring this reality will create a lot of embarrassing hypocrisy among conventional drivers, carmakers, and lawmakers.
To that end, my posts next month will speak to this broader conversation: 1) why making comparisons between humans and robots is complicated, and 2) what the widely adopted SAE Autonomy Levels mean and why we should question them.
This Month’s Highlights:
- Uber Vehicle Kills Pedestrian While Operating In Autonomous Mode
- Waymo To Introduce Jaguar I-Pace Into Robotaxi Fleet, Ramping Up To 20,000 Units In 2020
- Toyota Investing $2.8B In Autonomous Vehicle R&D Venture
- CHJ Automotive (“Tesla Of China”) Lands $473MM Investment To Work With Didi Chuxing (“Uber Of China”) On EV Production, Self-Driving Tech
- Magna And Lyft Collaborating To Produce Autonomous Mobility Solution
- Starsky Robotics Will Begin Unmanned Freight Hauls In 2018
- Beijing OKs Public AV Testing For Baidu
- UK Govt To Plan For 2021 Commercialization Of AVs
- Tokyo Trials Driverless Mail Trucks With Aim To Deploy In 2020
- German Researchers Are Latest To Attempt Developing A “Driver’s License” For Autonomous Vehicles
- Tesla Driver Dies After Vehicle Crashes With Autopilot Engaged
Market Predictions (No Major Change):
Coming In April:
- Why Is It So Hard To Compare Man And Machine?
- Your suggestion? Send a tweet to Mitch Turck