LOGBOOK LOG-182
EXPLORING · CREATIVITY · ARTIFICIAL-INTELLIGENCE · AUTONOMOUS-VEHICLES · TECHNOLOGY · TACIT-KNOWLEDGE · SYSTEMS-ENGINEERING · INNOVATION · LABOR

Waymo & the Rise of Robotaxis — To Build a Driver

The Central Argument

What Waymo is really building is not a car that drives itself. It is a complete synthetic cognition of a human task so deeply embodied, so saturated with tacit knowledge, that most engineers initially assumed it could be reduced to software within a few years. The central argument threading through this account is that autonomous driving forced its pioneers to confront a brutal epistemological problem: you cannot write rules for what humans do intuitively. The driver is not following an algorithm. The driver is perceiving, predicting, improvising, and negotiating — all simultaneously, all in real time, all against a backdrop of infinite environmental variation. To build a driver is, in a strange way, to build a mind for a narrow slice of the physical world.

Why This Problem Needed a New Frame

The context here matters enormously. Waymo emerged from Google X, which was itself a bet-everything-on-moonshots culture. The vehicles of the Sebastian Thrun-era DARPA Grand Challenges were celebrated as proof that the problem was basically solved. That optimism curdled slowly, then suddenly. The early highway demos worked because highways are comparatively sterile environments — lanes are clearly marked, behavior is predictable, and the social negotiation between drivers is minimal. The moment the car entered an uncontrolled urban intersection, it encountered something qualitatively different: not a harder version of the same problem, but a different kind of problem entirely.

This is the context that makes the Waymo story necessary to understand. The company was not simply iterating on prior engineering. It was being forced, repeatedly, by reality itself, to abandon the assumption that driving is a rule-following task. Every edge case the car encountered — the cyclist gesturing to proceed, the construction worker waving vehicles through a red light, the child chasing a ball from behind a parked truck — was a reminder that human driving is a continuous act of social and physical interpretation. Rules cannot anticipate interpretation. That gap is where Waymo had to live.

The Depth of the Engineering Insight

The key insight that this account keeps circling is the relationship between data scale and genuine understanding. Waymo accumulated billions of simulated miles. But simulation has a fundamental ceiling: it can only model what its designers already know to model. The adversarial value of real-world miles is precisely that reality generates scenarios no simulation engineer would have thought to include. The pedestrian who jaywalks while looking at their phone but pauses mid-street, neither completing the crossing nor retreating — a human driver reads that body language in milliseconds. Building a system that can do the same requires not just sensor data, but a learned model of human intention and micro-behavior.
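The rule-writing failure can be made concrete with a toy sketch. Everything below is invented for illustration — the class, the thresholds, the category names are hypothetical, not anything from Waymo's stack. The point is that a hand-written rule handles the clean cases and then confidently mislabels exactly the ambiguous case described above: a pedestrian paused mid-street, who is neither "crossing" nor really "waiting" at a curb.

```python
from dataclasses import dataclass


@dataclass
class PedestrianTrack:
    """Hypothetical observed state of a tracked pedestrian."""
    lateral_offset_m: float         # distance from the curb into the roadway
    speed_mps: float                # instantaneous walking speed
    heading_toward_far_curb: bool   # coarse heading estimate


def rule_based_crossing_intent(track: PedestrianTrack) -> str:
    """Naive hand-written rule: moving toward the far curb means crossing."""
    if track.speed_mps > 0.5 and track.heading_toward_far_curb:
        return "crossing"
    if track.speed_mps > 0.5:
        return "returning"
    return "waiting"


# The case from the text: paused mid-street, phone in hand, 2 m into the lane.
paused = PedestrianTrack(lateral_offset_m=2.0, speed_mps=0.1,
                         heading_toward_far_curb=True)
print(rule_based_crossing_intent(paused))  # prints "waiting"
```

The rule returns "waiting", a label the planner would read as "stationary and safe at the curb" — but this person is standing in the lane. No additional threshold fixes this; the missing ingredient is a learned model of intent, which is the paragraph's point.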

This is where the hardware and software complexity become genuinely humbling. Lidar, radar, and cameras are not redundant backups of each other; they are complementary modalities solving different parts of the perception problem. Lidar gives you precise 3D geometry independent of ambient light. Radar gives you direct velocity measurement and keeps working in rain and fog. Cameras give you semantic richness — reading signs, interpreting painted road markings, distinguishing a plastic bag from a boulder. Fusing these modalities under real-time latency constraints is itself a formidable systems engineering challenge. And that is before you reach the prediction and planning layers, which must model not just where other vehicles are but where they are likely to be in four seconds given their current trajectory and the apparent intentions of everyone nearby.
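The four-second horizon mentioned above can be grounded with the simplest possible motion model: a constant-velocity rollout. This is a deliberately naive baseline sketch, not Waymo's planner; real prediction stacks layer learned intent and interaction models on top of exactly this kind of kinematic prior, and the function name and step size here are my own choices.

```python
import numpy as np


def predict_constant_velocity(pos, vel, horizon_s=4.0, dt=0.5):
    """Roll a tracked object forward assuming it holds its current velocity.

    pos, vel: 2-vectors (x, y) in meters and meters/second.
    Returns an (n_steps, 2) array of predicted positions at each future step.
    """
    steps = np.arange(dt, horizon_s + dt, dt)   # 0.5 s, 1.0 s, ..., 4.0 s
    return np.asarray(pos) + np.outer(steps, np.asarray(vel))


# A vehicle at the origin doing 10 m/s along x:
future = predict_constant_velocity(pos=[0.0, 0.0], vel=[10.0, 0.0])
print(future[-1])  # position after 4 s: [40.  0.]
```

The gap between this baseline and reality is the whole prediction problem: a constant-velocity model is exact for a car cruising a highway lane and badly wrong for a driver about to brake for the child behind the parked truck. Closing that gap is what requires the learned models of intention the essay keeps returning to.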

Connections to Adjacent Fields

The parallels to other domains are genuinely instructive here. Autonomous driving is, at its epistemological core, a problem in tacit knowledge — the kind of knowing that Michael Polanyi described as “we know more than we can tell.” Driving expertise is exactly this sort of knowledge. Experienced drivers cannot fully articulate their decision rules because their decisions emerge from pattern recognition below the threshold of conscious reasoning. The challenge of encoding that into a machine is the same challenge that defeated early expert systems in medicine and law: symbolic AI could not capture what practitioners actually did, because practitioners themselves did not know in explicit terms what they did.

There are also resonances with labor economics and the theory of automation. The conventional assumption, reinforced by decades of industrial robotics, was that physical, routine tasks are easier to automate than cognitive ones. Driving appeared physical and routine. It proved to be neither. This inversion has real implications for how we think about which jobs are genuinely automatable — and it suggests that the folk taxonomy of skilled versus unskilled labor is poorly calibrated to the actual cognitive demands embedded in physical work.

Why It Matters

I keep returning to the timescale question. Waymo has been at this for roughly fifteen years, with resources most companies will never see, and the product is only now approaching something like commercial viability in geofenced urban areas. That is a sobering data point for any technology optimist who treats autonomy as a near-term default outcome. The honest reading of the Waymo story is that building a driver was harder than building a search engine, harder than building a social network, possibly harder than anything consumer technology had previously attempted. Not because the engineering was unsophisticated, but because the target — human perceptual and social competence in an unstructured environment — turned out to be extraordinarily deep.

What the story ultimately illuminates is the value of not retreating from hard problems when early timelines collapse. The companies that abandoned the space when 2020 came and went without full autonomy missed something important: the difficulty was the lesson. Waymo stayed precisely because the problem kept getting more interesting.