California’s mandate requiring self-driving tech companies to report disengagement rates — the instances in which a safety driver is forced to take control of the vehicle during public testing — has become the de facto statistic for measuring industry progress and competitiveness. Flawed as this metric may be, the alternative is to have no statistical frame of reference at all. Surely, something is better than nothing… right?
No, not quite right. Optimizing an error-mitigation metric in an environment as dynamic and ill-defined as “public roads” is a recipe for statistical success at the expense of a viable product. Here are some of the tactics companies could pursue, intentionally or otherwise, to build a worse autonomous car with a better disengagement rate.
The disengagement rate’s denominator is miles traveled, which implies that AV firms ought to cover as much ground as possible, with as little interference as possible, to improve their numbers. Where might one find relatively predictable, high-speed miles waiting to be racked up? Highways.
By definition, highways are limited-access roads with high margins for error, making them horrendous testbeds for a technology that must also operate in obstacle-ridden, split-second environments — but prime real estate for beating a disengagement benchmark.
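The arithmetic behind this tactic is worth making explicit. The following is a toy illustration with hypothetical numbers (the function name and figures are invented for this sketch, not drawn from any company’s report): two fleets perform identically in the city, but the one that pads its log with near-flawless highway miles reports a rate an order of magnitude better.

```python
def disengagement_rate(disengagements: int, miles: float) -> float:
    """Disengagements per 1,000 miles -- the shape of the reported metric."""
    return disengagements / miles * 1000

# Both fleets perform identically in the city:
# 50 disengagements over 5,000 urban miles.
urban_only = disengagement_rate(50, 5_000)

# The second fleet adds 45,000 easy highway miles,
# picking up only 2 extra disengagements along the way.
padded = disengagement_rate(50 + 2, 5_000 + 45_000)

print(f"urban-only fleet:     {urban_only:.2f} per 1,000 miles")  # 10.00
print(f"highway-padded fleet: {padded:.2f} per 1,000 miles")      # 1.04
```

Same urban driving, same urban failures — yet the padded fleet looks roughly ten times safer on paper, which is exactly the distortion the metric invites.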
This is all to say nothing of the many other road-going variables companies can control to better their disengagement results: re-routing to avoid construction, limiting tests to fair-weather conditions, and attempting to re-run the same scenarios ad nauseam, to name a few.
AV testing in California is ramping up to the point that the public has become keenly aware of the technology and its habits. Could this be an opportunity to optimize disengagement rates? Certainly.
One might find more success testing in an area like Palo Alto, where it behooves residents to play nice with the local tech ecosystem, than in a tourism-heavy town where curiosity leads to edge cases. Or, research might reveal that commuters are too aggressive to handle, and so tests should be conducted during off-peak hours. It goes without saying that public trials near college campuses, bars and nightlife are a no-no if the goal is to minimize unpredictable scenarios.
This would all be fine should consumer-grade driverless technology never go near tourists, campuses, commuters or social scenes — but that’s precisely where its market fit lies.
As long as humans are behind the wheel of self-driving tech — literally and figuratively — there can be no standard for disengagement. Most obviously, an in-car safety driver’s choice to disengage autonomous mode is a personal and circumstantial one: employing driver teams who are less likely to grab the controls is an effective route to decreasing disengagements, but the Elaine Herzberg fatality is a natural result of such a decision.
Worth noting, then, that such decisions about disengagement do not fall solely on the drivers. Each company working on this technology has its own strategy and perspective on risk versus reward. Apple initially reported all disengagements (including planned ones), which artificially inflated its figures. Uber previously ran a single-driver test team with dubious monitoring practices. By contrast, Karl Iagnemma stakes out a position in Aptiv’s recently filed report that bodes poorly for the company’s disengagement statistics, despite resonating with nearly everyone who might cross the path of an Aptiv vehicle.
The goal of public testing is to improve the fitness and reliability of autonomous vehicles through real-world exposure. But with no means of standardization or human comparison, disengagement rates are a distraction at best, and a detriment at worst. Sometimes, nothing is better than something.
Originally published on Forbes.com.