The second AI winter arrived not with a dramatic report or a single damning verdict, but with a slow, grinding realization that the expert systems revolution was not going to deliver what it had promised. Between 1987 and 1993, the AI industry contracted sharply. Companies failed. Funding evaporated. Researchers learned, for the second time, the cost of overpromising.

The Collapse

The unraveling began with the Lisp machine market. Symbolics, the crown jewel of the AI hardware industry, had been founded in 1980 and quickly became one of the most celebrated technology companies in the world. Its specialized computers were the workhorses of AI research — elegant machines designed from the ground up to run Lisp, the language of AI.

But general-purpose workstations were getting faster and cheaper at a ferocious pace. By the late 1980s, Sun and Hewlett-Packard workstations could run Lisp programs nearly as fast as Symbolics machines, at a fraction of the price. Why pay $100,000 for a dedicated Lisp machine when a $10,000 workstation could do the same job?

Symbolics' revenue collapsed. The company filed for bankruptcy in 1993. Lisp Machines Inc. had already failed. Texas Instruments exited the market. The specialized hardware that had defined the AI industry disappeared almost overnight.

The software side of the AI industry suffered a parallel decline. Expert system shells — the tools used to build expert systems — had been sold with extravagant promises. Vendors claimed that business users could build their own expert systems without programming expertise. In practice, building a useful expert system still required months of knowledge engineering by skilled specialists. Many companies that had bought expert system tools never built anything useful with them.

The consulting firms that had sprung up to build custom expert systems found that their projects took longer, cost more, and delivered less than their sales teams had promised. Projects that were supposed to take months dragged on for years. Systems that worked in prototypes failed when exposed to real-world data. The gap between the demo and the deployment was vast, and clients were not willing to keep paying to cross it.

The Damage

Between 1987 and 1993, the commercial AI industry lost approximately three-quarters of its value. Hundreds of AI startups failed. The major AI companies — Symbolics, IntelliCorp, Teknowledge — either went bankrupt or shrank to a fraction of their former size.

The cultural damage was worse than the financial damage. "Artificial intelligence" became a toxic label. Companies that were doing genuine AI work rebranded their products. Expert systems became "business rule engines." Pattern recognition became "analytics." Machine learning became "data mining." Natural language processing became "text processing." Anything to avoid the two letters A-I.

This rebranding was not merely cosmetic. The term "artificial intelligence" had become so associated with broken promises that using it in a grant application or a business proposal was a liability. Researchers who continued working on AI problems often described their work in other terms to avoid the stigma.

The academic job market in AI contracted painfully. Departments that had hired aggressively during the boom now had more AI faculty than they could support. Young researchers were advised to position their work as machine learning, or computational linguistics, or robotics — anything but AI.

What Went Wrong This Time

The second AI winter had different immediate causes from the first, but the underlying pattern was the same: the technology could not deliver what had been promised.

The knowledge engineering bottleneck proved intractable. Building expert systems required extracting knowledge from human experts and encoding it in rules. This was slow, expensive, and error-prone. The dream of rapid knowledge capture — of sitting down with an expert for a few days and emerging with a working system — never materialized. Large expert systems took years to build and required constant maintenance.

Brittleness remained unsolved. Expert systems worked within their prescribed boundaries and failed outside them. Real-world problems had a frustrating tendency to fall outside those boundaries. A system designed to diagnose equipment failures would encounter a failure mode that no one had thought to program rules for. A configuration system would receive an order that did not match any of its rule patterns.

Integration was harder than expected. An expert system, no matter how accurate, was useless if it could not be integrated into existing business workflows. Many systems that performed brilliantly in isolation could not exchange data with existing software, could not handle the volume of real-world transactions, or required input formats that were impractical for actual users.

The common-sense problem persisted. The central challenge of symbolic AI — how to give machines the vast background knowledge that humans take for granted — remained unsolved. Doug Lenat's Cyc project, launched in 1984, attempted to solve this problem by manually encoding millions of common-sense facts and rules. Decades later, Cyc had accumulated millions of assertions but still could not match the common-sense reasoning of a child. The project demonstrated, more than anything, the staggering depth of human common-sense knowledge.

The Survivors

Not everything died in the second AI winter. Several lines of research not only survived but grew stronger, laying the groundwork for the eventual AI renaissance.

Machine learning quietly gathered momentum. Researchers developed new algorithms that could learn patterns from data without being explicitly programmed. Decision trees, which learned to classify data by asking a series of questions, became popular in business applications. They were practical, interpretable, and they worked.
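The mechanics of "asking a series of questions" can be sketched in a few lines. This toy tree builder greedily picks the attribute whose split leaves the purest groups of labels, then recurses; the dataset, attribute names, and entropy-based split rule are illustrative inventions, not drawn from any system of the period.

```python
# Toy decision tree: pick the question (attribute) whose answer best
# separates the labels, then recurse on each answer's subgroup.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def build_tree(rows, attrs):
    labels = [r["label"] for r in rows]
    if len(set(labels)) == 1 or not attrs:           # pure node, or no questions left
        return Counter(labels).most_common(1)[0][0]  # leaf: majority label
    def split_score(a):  # weighted entropy of the children after splitting on a
        return sum(
            len(sub) / len(rows) * entropy([r["label"] for r in sub])
            for v in set(r[a] for r in rows)
            for sub in [[r for r in rows if r[a] == v]]
        )
    best = min(attrs, key=split_score)
    return {"attr": best,
            "branches": {v: build_tree([r for r in rows if r[best] == v],
                                       [a for a in attrs if a != best])
                         for v in set(r[best] for r in rows)}}

def predict(tree, row):
    while isinstance(tree, dict):                    # walk down, one question per level
        tree = tree["branches"][row[tree["attr"]]]
    return tree

data = [  # hypothetical "approve credit?" records
    {"income": "high", "debt": "low",  "label": "yes"},
    {"income": "high", "debt": "high", "label": "no"},
    {"income": "low",  "debt": "low",  "label": "no"},
    {"income": "low",  "debt": "high", "label": "no"},
]
tree = build_tree(data, ["income", "debt"])
print(predict(tree, {"income": "high", "debt": "low"}))  # -> yes
```

The resulting tree is exactly the kind of artifact the text describes as interpretable: each internal node is a human-readable question, and a prediction is a short chain of answers.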

Neural networks made a comeback. In 1986, David Rumelhart, Geoffrey Hinton, and Ronald Williams published a paper describing the backpropagation algorithm — a method for training multi-layer neural networks. This directly addressed the limitation that Minsky and Papert had identified: multi-layer networks could learn functions like XOR that single-layer perceptrons could not. Backpropagation gave neural networks a second life.
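The idea can be sketched on XOR itself, the function a single-layer perceptron cannot learn. This is a minimal illustration, assuming a 2-4-1 sigmoid network trained on squared error; the layer sizes, learning rate, and epoch count are arbitrary choices for the sketch, not details from the 1986 paper.

```python
# Backpropagation sketch: a tiny two-layer sigmoid network on XOR.
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]                       # XOR truth table
H = 4                                  # hidden units
w1 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, Y))

loss_before = loss()
for _ in range(5000):
    for x, t in zip(X, Y):
        h, y = forward(x)
        d2 = (y - t) * y * (1 - y)     # output delta: squared error through a sigmoid
        # hidden deltas: propagate the output delta back through w2 and the sigmoid
        d1 = [d2 * w2[j] * h[j] * (1 - h[j]) for j in range(H)]
        for j in range(H):
            w2[j] -= lr * d2 * h[j]
            w1[j][0] -= lr * d1[j] * x[0]
            w1[j][1] -= lr * d1[j] * x[1]
            b1[j] -= lr * d1[j]
        b2 -= lr * d2
loss_after = loss()

print(loss_before, "->", loss_after)   # the error shrinks as the network learns
```

The key step is the "hidden deltas" line: credit for the output error is passed backward through the weights, which is precisely what a single-layer perceptron has no mechanism for.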

The renewed interest in neural networks was partly inspired by a 1982 paper by John Hopfield, a physicist who showed that networks of simple units could exhibit interesting collective behavior — including the ability to store and retrieve memories. Hopfield's work attracted attention from physicists and engineers who brought new mathematical tools to the study of neural networks.
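The memory-retrieval behavior can be sketched with a toy network: patterns are stored in symmetric Hebbian weights, and a corrupted cue settles back toward the nearest stored pattern. The two 8-unit patterns below are invented for the example, and the synchronous update rule is one simple variant of the idea.

```python
# Toy Hopfield-style associative memory over states of +1/-1 units.
def hebbian_weights(patterns):
    n = len(patterns[0])
    # W[i][j] accumulates p[i]*p[j] over stored patterns; no self-connections
    return [[0 if i == j else sum(p[i] * p[j] for p in patterns)
             for j in range(n)] for i in range(n)]

def recall(W, state, steps=5):
    n = len(state)
    s = list(state)
    for _ in range(steps):  # repeated synchronous updates toward a fixed point
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

p = [1, 1, 1, 1, -1, -1, -1, -1]   # one stored "memory"
q = [1, -1, 1, -1, 1, -1, 1, -1]   # a second, orthogonal memory
W = hebbian_weights([p, q])

cue = list(p)
cue[0] = -cue[0]                    # corrupt one bit of p
print(recall(W, cue) == p)          # the network restores the stored pattern
```

Retrieval here is collective: no single weight stores the pattern, yet the network as a whole falls back into it, which is the behavior that drew physicists to these models.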

Statistical methods began to infiltrate natural language processing. Instead of trying to encode the rules of grammar explicitly, researchers started using statistical models that learned patterns from large collections of text. These approaches were less elegant than symbolic methods, but they were more robust and scalable. The shift from rules to statistics in language processing would prove to be one of the most important transitions in AI history.
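The rules-to-statistics shift can be illustrated with the simplest such model, a bigram: count adjacent word pairs in a corpus and estimate the probability of the next word given the current one from relative frequencies, with no grammar rules at all. The corpus here is a made-up toy.

```python
# Toy bigram language model: pure counting, no hand-coded grammar.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

pair_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    pair_counts[prev][nxt] += 1

def prob(nxt, prev):
    """Estimate P(nxt | prev) as a relative frequency."""
    total = sum(pair_counts[prev].values())
    return pair_counts[prev][nxt] / total if total else 0.0

print(prob("cat", "the"))   # -> 2/3: "the" is followed by "cat" twice, "mat" once
```

Nothing in this model knows what a noun is, yet with enough text the counts capture regularities that hand-written rules struggled to enumerate, which is why the approach scaled where symbolic grammars did not.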

Robotics continued to advance, particularly in industrial applications where the environment was controlled and the tasks were well-defined. Robot arms on factory floors did not need general intelligence. They needed precision, speed, and reliability — qualities that engineering could deliver without solving the hard problems of AI.

The Cultural Shift

The second AI winter catalyzed an important cultural shift within the research community. The grand ambitions of the golden age — building machines that could truly think, that could match human intelligence across the board — gave way to a more pragmatic approach.

Researchers began to focus on specific, well-defined problems rather than general intelligence. Instead of asking "How can we make a machine think?" they asked "How can we make a machine recognize handwritten digits?" or "How can we make a machine filter spam email?" These problems were modest, but they were tractable. Progress could be measured, benchmarked, and compared.

This shift toward measurable, specific goals would prove crucial. It transformed AI from a field driven by philosophical ambition into one driven by empirical results. And it set the stage for the methodological approach — careful experimentation, standardized benchmarks, reproducible results — that would eventually power the machine learning revolution.

The View from the Valley

By the early 1990s, AI was at its lowest ebb. The commercial industry was in ruins. The name itself was radioactive. The grand vision of thinking machines seemed further away than ever.

But beneath the surface, the conditions for a renaissance were forming. Computers were getting dramatically faster and cheaper, following Moore's Law with relentless predictability. The internet was emerging, creating vast new sources of digital data. Machine learning algorithms were improving. Neural networks had a viable training method. Statistical approaches to language were producing better results than symbolic methods.

The next wave of AI would look nothing like the expert systems era. It would not be driven by hand-coded rules or knowledge engineering. It would be driven by data, by statistical learning, and by the realization that if you gave a learning algorithm enough examples, it could discover patterns that no human programmer could have anticipated.

The quiet revolution was about to begin.