In the summer of 1956, a group of researchers gathered at Dartmouth College in Hanover, New Hampshire, for a workshop that would give a new field its name, its agenda, and its audacious confidence. They had received funding for what their proposal described as a study based on "the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
That single sentence — ambitious to the point of hubris — launched the field of artificial intelligence.
The Proposal
The workshop was the brainchild of four men, each of whom would leave a deep mark on the field.
John McCarthy was a young mathematician at Dartmouth. Restless, brilliant, and impatient with incrementalism, McCarthy wanted to build machines that could truly think — not merely calculate, but reason, learn, and solve problems the way humans do. He coined the term "artificial intelligence" for the proposal, a name that stuck despite its audacity. McCarthy later said he chose the term partly because he wanted a name that would not be associated with any existing field. He got his wish.
Marvin Minsky was a mathematician, trained at Harvard and Princeton, who had built one of the first neural network learning machines, the SNARC (Stochastic Neural Analog Reinforcement Calculator), as a graduate student in 1951. Minsky was a polymath whose interests ranged from mathematics to music to psychology. He would become one of AI's most influential thinkers and its most quotable provocateur.
Nathaniel Rochester was a senior engineer at IBM who had designed the architecture of IBM's first commercial scientific computer, the 701. He brought the perspective of someone who actually built computing hardware and understood its capabilities and limits.
Claude Shannon was already legendary. His 1948 paper "A Mathematical Theory of Communication" had founded the field of information theory. Shannon had shown that all information — text, images, sound, anything — could be represented as sequences of binary digits. This insight was foundational. If intelligence involves processing information, and all information can be reduced to bits, then in principle a machine that processes bits could process anything an intelligent being processes.
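To make Shannon's point concrete, here is a small and deliberately anachronistic sketch in modern Python (nothing like this existed in 1956, and the UTF-8 encoding shown is today's convention, chosen only for convenience): a string of text becomes a sequence of bits and comes back unchanged.

```python
# A minimal illustration of Shannon's insight: any text can be reduced to
# a sequence of binary digits and recovered from them.

def text_to_bits(message: str) -> str:
    """Encode a string as the binary digits of its UTF-8 bytes."""
    return "".join(f"{byte:08b}" for byte in message.encode("utf-8"))

def bits_to_text(bits: str) -> str:
    """Decode a string of binary digits back into text."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

bits = text_to_bits("Dartmouth 1956")
assert bits_to_text(bits) == "Dartmouth 1956"
print(bits[:32])  # the first four characters, eight bits each
```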
Together, these four wrote a proposal to the Rockefeller Foundation requesting funding for a two-month summer research project at Dartmouth. The proposal is a remarkable document — both for what it got right and for what it got spectacularly wrong.
The Audacious Conjecture
The proposal's core claim deserves to be quoted in full:
"We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."
Read that again. They proposed that a group of ten people, working for one summer, could make significant progress on natural language processing, concept formation, problem solving, and self-improvement in machines.
This was not bravado performed for a funding agency. They genuinely believed that the fundamental problems of AI could be cracked quickly. McCarthy later recalled: "We thought a group of bright people could make significant advances in many of these fields in a summer if they worked hard enough."
The Summer Workshop
The actual workshop was less dramatic than the proposal suggested. It did not run as a continuous two-month seminar. Participants came and went throughout the summer, and the full group was never assembled at once. Most stayed for a few weeks, and the work was more a series of informal discussions and individual projects than a coordinated research effort.
The six other core participants, joining the four organizers, were:
Allen Newell and Herbert Simon, from the Carnegie Institute of Technology (later Carnegie Mellon), arrived with something none of the others had: a working program. Their Logic Theorist, completed just before the workshop, could prove theorems of symbolic logic from Bertrand Russell and Alfred North Whitehead's Principia Mathematica. It was, by some definitions, the first artificial intelligence program.
Arthur Samuel from IBM was working on a checkers-playing program that could learn from its own experience — one of the first machine learning systems. Samuel's program would eventually beat the fourth-best checkers player in the United States.
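Samuel's program scored board positions with a weighted sum of hand-chosen features and adjusted the weights as it played. The sketch below is a heavily simplified, hypothetical rendering of that idea, not his code: the features and numbers are invented, and the update nudges each weight so that earlier evaluations agree better with later ones, an early ancestor of what is now called temporal-difference learning.

```python
# Toy sketch of learning a checkers evaluation function from self-play.
# Features and values are hypothetical; Samuel's real program used dozens
# of board features and far more elaborate bookkeeping.

def evaluate(features, weights):
    """Linear evaluation: a weighted sum of board features."""
    return sum(w * f for w, f in zip(weights, features))

def learn_from_game(positions, weights, lr=0.01):
    """Nudge weights so each position's score moves toward the next one's."""
    for current, nxt in zip(positions, positions[1:]):
        error = evaluate(nxt, weights) - evaluate(current, weights)
        for i, f in enumerate(current):
            weights[i] += lr * error * f
    return weights

# Hypothetical features: (piece advantage, king advantage, center control)
game = [(1, 0, 2), (1, 0, 1), (2, 1, 1)]  # positions from one game
print(learn_from_game(game, [1.0, 1.5, 0.2]))
```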
Ray Solomonoff was developing a theory of machine learning grounded in probability, work that would help found algorithmic information theory. It anticipated much of modern machine learning by decades.
Oliver Selfridge was working on pattern recognition — how machines could learn to identify objects in images. His "Pandemonium" model proposed that recognition could emerge from many simple processes competing to identify patterns, an idea that prefigured neural networks.
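The flavor of Pandemonium fits in a few lines. In this toy sketch (the letters and feature names are invented for illustration, not taken from Selfridge's actual model), each "demon" shouts in proportion to how many of its preferred features it sees in the input, and a decision demon picks the loudest.

```python
# Toy Pandemonium: competing "demons" vote for the pattern they represent.
# The feature inventories below are hypothetical.

DEMONS = {
    "A": {"apex", "left_diagonal", "right_diagonal", "crossbar"},
    "H": {"left_vertical", "right_vertical", "crossbar"},
    "L": {"left_vertical", "baseline"},
}

def recognize(observed: set) -> str:
    """Return the letter whose demon 'shouts loudest' for these features."""
    shouts = {letter: len(wanted & observed) for letter, wanted in DEMONS.items()}
    return max(shouts, key=shouts.get)

print(recognize({"left_vertical", "right_vertical", "crossbar"}))  # "H"
```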
Trenchard More contributed expertise in formal languages and their properties.
What They Actually Accomplished
In terms of concrete results, the Dartmouth workshop produced surprisingly little. There was no breakthrough moment, no dramatic demonstration, no unified theory of intelligence. The participants talked, argued, demonstrated their individual projects, and went home.
But the workshop's real achievement was cultural, not technical. It accomplished three things that shaped the next several decades of research.
First, it created a community. Before Dartmouth, the people working on machine intelligence were scattered across different departments — mathematics, psychology, engineering, philosophy — with no shared vocabulary, no shared conferences, and no shared identity. After Dartmouth, they were "AI researchers," part of a recognizable field with a name and an agenda.
Second, it established an approach. The Dartmouth participants overwhelmingly favored what would later be called "symbolic AI": the idea that intelligence could be achieved by manipulating symbols according to rules, much as humans seem to manipulate concepts and words when they reason (a toy sketch of this style follows the third point below). This approach would dominate AI research for the next thirty years.
Third, it set expectations. The confidence of the Dartmouth proposal — the belief that the fundamental problems of intelligence could be solved in a summer — established a pattern of optimism that would characterize AI research for decades. This optimism would drive extraordinary energy and funding, but it would also set the field up for painful disappointments when reality proved more stubborn than the vision.
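Here is the promised sketch of the symbolic style: a generic toy, not any participant's actual system. Knowledge is a set of symbols, rules map premises to conclusions, and "reasoning" is nothing more than applying the rules until nothing new follows.

```python
# Toy forward-chaining inference in the symbolic style.
# Facts and rules are invented for illustration.

RULES = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal", "mortals_die"}, "socrates_dies"),
]

def forward_chain(facts: set) -> set:
    """Apply every rule whose premises hold until no new facts appear."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"socrates_is_human", "mortals_die"}))
```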
The Road Not Taken
There is one aspect of the Dartmouth workshop that, in retrospect, was fateful. The participants largely dismissed the approach that would ultimately prove most successful: learning from data.
Frank Rosenblatt, who was not at Dartmouth but was working at Cornell, was developing the Perceptron, a simple neural network that could learn to classify patterns by adjusting its connections based on examples. Arthur Samuel's checkers program was already demonstrating that machines could improve through experience. Ray Solomonoff's theoretical work pointed toward statistical learning.
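The perceptron's learning rule is simple enough to state in full. The sketch below is the standard textbook formulation, not Rosenblatt's original hardware: each misclassified example nudges the weights toward the correct answer, and for linearly separable data the process provably converges.

```python
# Textbook perceptron learning rule on a toy, linearly separable problem.

def train_perceptron(examples, n_features, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label) pairs with labels +1 or -1."""
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified (or on the boundary)
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# OR-like toy data: output is +1 when either input is 1
data = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train_perceptron(data, n_features=2))
```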
But the dominant voices at Dartmouth — McCarthy, Minsky, Newell, Simon — favored symbolic approaches: programming machines with rules, logic, and explicit representations of knowledge. They believed intelligence was fundamentally about reasoning with symbols, not about learning from data.
This was not an unreasonable position. In 1956, computers had tiny memories and slow processors, data was scarce, and the mathematical theory of learning was undeveloped. Symbolic approaches produced immediate, impressive results. Neural networks and statistical methods seemed crude and limited by comparison.
But the long-term consequences were significant. The symbolic AI paradigm dominated funding and prestige for decades, while statistical and neural approaches were marginalized. The researchers who worked on learning from data — approaches that would eventually produce deep learning and large language models — spent years in the wilderness, underfunded and underappreciated.
The full consequences of this fork in the road would not become clear for half a century. But the seeds of both AI's greatest triumphs and its most painful winters were planted at Dartmouth in the summer of 1956.
The Name That Stuck
Of all the decisions made at Dartmouth, perhaps the most consequential was the simplest: what to call the field.
McCarthy's choice of "artificial intelligence" was controversial from the start. Some participants preferred "complex information processing" or "automata studies" — names that were more precise and less provocative. But McCarthy understood that names matter, and he wanted a name that would capture imaginations.
"Artificial intelligence" did exactly that. It promised something thrilling and slightly dangerous. It implied that machines could genuinely think, not just calculate. It drew media attention and public fascination. And it set up a tension that persists to this day: the gap between what the name promises and what the technology actually delivers.
Every AI winter — every period of disillusionment and funding collapse — has been, in part, a consequence of that ambitious name. When you call your field "artificial intelligence," people expect intelligence. When the machines fall short of that expectation, disappointment follows.
But the name also attracted brilliant people, enormous funding, and intense public interest. For better and worse, "artificial intelligence" is what John McCarthy called it in 1956, and artificial intelligence is what we call it still.