In the early 1980s, artificial intelligence came back from the dead. The resurrection was not driven by a theoretical breakthrough or a charismatic visionary. It was driven by something more prosaic: money. Corporations discovered that AI could solve real business problems, and they were willing to pay handsomely for it.

The technology that fueled this revival was the expert system — a program that encoded the knowledge of human specialists and used it to make decisions, diagnose problems, or configure complex products. Expert systems were narrow, practical, and profitable. They were also the vehicle for AI's second great wave of hype, and ultimately, its second winter.

The Rise of Expert Systems

The idea behind expert systems was straightforward. Human experts in fields like medicine, geology, and engineering carry enormous amounts of specialized knowledge in their heads. This knowledge is valuable but fragile — it retires when the expert retires, and it cannot be in two places at once. What if you could capture that knowledge in software and make it available to anyone, anywhere, anytime?

The approach was called "knowledge engineering." A trained interviewer — the knowledge engineer — would sit with a domain expert for weeks or months, extracting their decision-making rules and encoding them in software. The result was a system of IF-THEN rules: if the patient has a fever AND the white blood cell count is elevated AND the culture shows gram-positive cocci, THEN the likely diagnosis is staphylococcal infection, with a certainty factor of 0.7 (MYCIN-style systems used certainty factors, a confidence measure deliberately distinct from formal probability).
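The mechanics of such a rule base are simple enough to sketch. The following is a minimal illustration of forward chaining with certainty factors — the rule contents and the combination scheme (rule certainty times the weakest antecedent) are simplified stand-ins, not MYCIN's actual rules or its full certainty-factor calculus:

```python
# A toy rule base: (antecedent facts, conclusion, certainty factor).
# These rules are illustrative, not taken from any real system.
rules = [
    ({"fever", "elevated_wbc", "gram_positive_cocci"},
     "staph_infection", 0.7),
    ({"staph_infection"}, "recommend_antibiotic", 0.9),
]

def forward_chain(facts):
    """Repeatedly fire rules whose antecedents are all known,
    until no new conclusions can be drawn."""
    certainty = {f: 1.0 for f in facts}  # observed facts are certain
    changed = True
    while changed:
        changed = False
        for conditions, conclusion, cf in rules:
            if conditions <= certainty.keys() and conclusion not in certainty:
                # Simplified scheme: conclusion certainty is the rule's CF
                # scaled by the weakest supporting antecedent.
                certainty[conclusion] = cf * min(certainty[c] for c in conditions)
                changed = True
    return certainty

result = forward_chain({"fever", "elevated_wbc", "gram_positive_cocci"})
# result["staph_infection"] is 0.7; "recommend_antibiotic" follows at 0.63
```

The appeal is obvious from the code: each rule is a self-contained, human-readable statement, and the inference engine is generic, so in principle you could swap in a new rule base for a new domain.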

MYCIN, developed at Stanford in the mid-1970s, was the prototype. It contained about 600 rules for diagnosing bacterial infections and recommending antibiotics. In controlled tests, MYCIN performed as well as infectious disease specialists and significantly better than general practitioners. It was a genuine achievement — a program that could match human experts in a real-world medical task.

But MYCIN itself was never deployed clinically. Legal and ethical concerns about computer-assisted diagnosis, combined with the practical difficulty of integrating it into hospital workflows, kept it in the laboratory. Its importance was as a proof of concept: expert systems could work.

R1/XCON and the Corporate Gold Rush

The system that transformed expert systems from a research curiosity into a business phenomenon was R1, later renamed XCON, developed at Carnegie Mellon University for Digital Equipment Corporation (DEC).

DEC sold minicomputers — complex machines that could be configured in thousands of different ways. Each order required a human expert to verify that the customer's configuration would actually work — that all the components were compatible, that the power supply was adequate, that the cables would reach, and hundreds of other technical details. DEC employed a small team of experts to do this work, and they were perpetually overwhelmed. Incorrect configurations cost the company millions in returns and customer dissatisfaction.

XCON automated this process. By 1986, it had over 10,000 rules and could configure orders with 95-98% accuracy. DEC estimated that XCON saved the company $40 million per year — an enormous return on a relatively modest investment.

XCON's success was the spark that ignited a corporate AI boom. If a rule-based system could save one company $40 million, what could it do across an entire industry? Corporations rushed to build their own expert systems. Consulting firms emerged to help them. AI startups proliferated.

The Business Boom

The numbers were staggering. The AI industry, which barely existed in 1980, was generating over $1 billion in annual revenue by 1985. Companies like IntelliCorp, Teknowledge, Applied Intelligence Systems, and dozens of others sold expert system tools, consulting services, and custom development.

DuPont reportedly built over 100 expert systems across its operations. American Express used expert systems to detect credit card fraud. Schlumberger used them to interpret oil well data. General Electric used them to diagnose locomotive engine problems.

A new class of hardware appeared: Lisp machines. These were specialized computers designed to run Lisp, the programming language that dominated AI research. Companies like Symbolics, Lisp Machines Inc., and Texas Instruments built and sold these machines at premium prices — $50,000 to $100,000 each. Symbolics, at its peak, was one of the hottest technology companies in the world.

Governments joined the frenzy. Japan launched its Fifth Generation Computer Project in 1982, a $400 million effort to build massively parallel computers that could perform AI tasks. The project aimed to leapfrog Western computing and establish Japan as the world leader in AI. Its announcement sent shockwaves through the American and European technology establishments.

In response, the UK launched the Alvey Programme, and DARPA created the Strategic Computing Initiative, each pouring hundreds of millions of dollars into AI research. The first AI winter was emphatically over.

What Expert Systems Could Actually Do

At their best, expert systems were genuinely useful. They could:

Preserve expertise — When a veteran engineer retired, their decades of knowledge did not have to leave with them. It could be encoded in an expert system and made available to their successors.

Democratize knowledge — A small-town doctor in rural America could access the diagnostic reasoning of a specialist at a major teaching hospital. A junior engineer could benefit from the hard-won knowledge of a senior colleague.

Ensure consistency — Human experts are subject to fatigue, distraction, and shifting judgment. An expert system applied the same rules every time, producing reliable and repeatable results.

Handle complexity — For problems involving many interacting variables — like configuring a complex computer system — expert systems could track details that overwhelmed human working memory.

These were real benefits, and they justified genuine commercial investment. The problem was not that expert systems were useless. The problem was that their limitations were severe, and the hype was already outrunning the technology.

The Hidden Fragility

Expert systems had fundamental weaknesses that became apparent as companies tried to deploy them more widely.

The knowledge acquisition bottleneck was the most crippling. Extracting knowledge from human experts was extraordinarily time-consuming and difficult. Experts often could not articulate their own reasoning. They used intuition, pattern recognition, and unconscious knowledge that they could not put into words. A knowledge engineer might spend months interviewing an expert and still miss crucial rules.

Brittleness was equally serious. Expert systems worked well within their defined domain but failed catastrophically at the edges. A medical diagnosis system that knew about bacterial infections would be useless — and potentially dangerous — if presented with a viral infection it had no rules for. Unlike human experts, who can recognize when they are out of their depth and seek help, expert systems had no awareness of their own limitations.

Maintenance was a nightmare. As rules accumulated, expert systems became increasingly difficult to modify. Adding a new rule could interact unpredictably with existing rules, producing incorrect results. XCON's 10,000 rules required a dedicated team of engineers to maintain and update. The system that was supposed to eliminate the dependence on scarce experts created a dependence on scarce expert system maintainers.

Common sense was absent. Expert systems knew everything they had been told about their narrow domain and nothing else. A medical system might correctly diagnose a rare disease but would not know that patients generally prefer treatments that do not kill them. A configuration system might produce a technically valid computer setup that no human would ever want. The common-sense knowledge that humans bring to every decision — the vast background of understanding about how the world works — could not be programmed into rules, because no one had ever written it all down. No one could.

The Fifth Generation Fizzle

Japan's Fifth Generation Computer Project, which had seemed so threatening in 1982, provides a cautionary tale about top-down AI development.

The project aimed to build computers that could perform inference — logical reasoning — at unprecedented speeds. The idea was that if you could process enough rules fast enough, you could achieve something approaching general intelligence.

Ten years and $400 million later, the project had produced some interesting computer science but nothing resembling the intelligent systems it had promised. The fundamental problem was the same one that plagued all symbolic AI: you can process rules as fast as you like, but if you do not have the right rules — if you cannot capture the full complexity of human knowledge in logical form — speed alone will not produce intelligence.

The Fifth Generation Project was quietly wound down in the early 1990s. Its failure took some of the urgency out of the Western AI arms race and contributed to the growing sense that something was fundamentally wrong with the symbolic approach.

The Seeds of the Second Winter

By the mid-1980s, the cracks in the expert systems boom were widening. Companies that had invested heavily in expert systems were discovering that the technology was harder to deploy and maintain than the vendors had promised. The Lisp machine market collapsed as general-purpose workstations from Sun Microsystems and others became powerful enough to run AI software at a fraction of the cost.

The pattern was distressingly familiar. Once again, AI had made promises it could not keep. Once again, impressive demonstrations in controlled settings had failed to translate into robust, deployable systems. Once again, the gap between the vision and the reality was about to trigger a painful reckoning.

The second AI winter was approaching, and it would be, in some ways, more damaging than the first — because this time, real companies had real money on the line.