A.I. and the Problem of Context

Machines are notoriously awful at classification—but so are humans

Mitch Turck
Dec 3, 2018

Classification is the brain fuel that powers behavior. People often prefer to call it by more grandiose names: identification, judgment, prejudice, conclusion. But whatever the label, human brains rarely take a conscious step without first employing it.

Unsurprisingly, such a deep-seated mental operation has proven difficult to pass down to our inventions, A.I. and self-driving cars among them. A machine's worldview, its contextual frame of reference, is so difficult to build out to human scale that we find these systems classifying pedestrians as bicycles and advertisements as pedestrians. Perhaps practical A.I. isn't ready for the real world after all.

Or perhaps the real world is not as easily classified as we’d like to believe.

Consider the example of a curbside advertisement depicting a person, and ask a relatively simple question: Is this a pedestrian?

An algorithm without a strong worldview may indeed ignore the context and classify this as a pedestrian when it is clearly just a print advertisement of a pedestrian. Except, it isn’t.
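To make that failure mode concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration (the Detection fields, the 0.8 threshold, the two rules are hypothetical, not how any real perception system is built); it only shows the difference between judging appearance alone and letting contextual cues override the label.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    looks_like_person: float   # appearance score in [0, 1]
    is_moving: bool            # contextual cue: has it moved between frames?
    on_flat_panel: bool        # contextual cue: is it part of a flat printed surface?


def classify_by_appearance(d: Detection) -> str:
    """Context-free rule: anything that looks enough like a person is a pedestrian."""
    return "pedestrian" if d.looks_like_person > 0.8 else "other"


def classify_with_context(d: Detection) -> str:
    """Same appearance rule, but contextual cues can override the label."""
    if d.looks_like_person > 0.8 and not d.is_moving and d.on_flat_panel:
        return "advertisement"
    return classify_by_appearance(d)


# A curbside ad: looks very much like a person, never moves, sits on a flat panel.
curbside_ad = Detection(looks_like_person=0.95, is_moving=False, on_flat_panel=True)

print(classify_by_appearance(curbside_ad))  # "pedestrian"
print(classify_with_context(curbside_ad))   # "advertisement"
```

Even the "fixed" answer here is only as good as the hand-picked cues behind it, which is exactly the point: whether "advertisement" is really the right label is the question the rest of this argument turns on.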
