The first time most people encounter AI in healthcare, it is through a symptom checker. You type your symptoms into an app, and it tells you what might be wrong. These tools have been around for years, but the latest generation — powered by large language models — is dramatically better than what came before. They are also dramatically more dangerous if you use them wrong.

This chapter covers the full landscape of AI diagnostics: from the clinical-grade imaging systems that doctors use, to the consumer symptom checkers on your phone, to the emerging field of early disease detection. We will look at what works, what does not, and how to be a smart user of these tools.

AI in Medical Imaging

Medical imaging is where AI diagnostics is most mature and most impressive. AI systems have been trained to read X-rays, CT scans, MRIs, mammograms, retinal scans, and pathology slides, and on specific tasks they match or exceed human specialists.

The way these systems work is conceptually straightforward. Researchers take hundreds of thousands or millions of medical images that have been labeled by expert clinicians — this scan shows a tumor, this one does not — and train a deep learning model to recognize the patterns that distinguish normal from abnormal. Once trained, the model can analyze a new image and flag areas of concern.
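The learn-from-labeled-examples workflow can be sketched in miniature. The following is a toy illustration, not a real imaging system: instead of scans and a deep network, it uses short lists of numbers as "images" and a simple nearest-prototype rule. The data values and function names are invented for illustration, but the shape of the process — train on expert-labeled examples, then score a new image — is the one described above.

```python
# Toy sketch of supervised image classification. "Images" here are lists
# of pixel intensities and labels come from an expert reader. A real
# system would use a deep network trained on millions of scans.

def centroid(images):
    """Average each pixel position across a set of images."""
    n = len(images)
    return [sum(px) / n for px in zip(*images)]

def train(labeled_images):
    """Learn one prototype image per label (normal / abnormal)."""
    by_label = {}
    for image, label in labeled_images:
        by_label.setdefault(label, []).append(image)
    return {label: centroid(imgs) for label, imgs in by_label.items()}

def predict(model, image):
    """Assign the label whose prototype the new image is closest to."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], image))

# Invented expert-labeled training data: a bright middle region
# stands in for "this scan shows a tumor".
training_set = [
    ([0.1, 0.2, 0.1, 0.1], "normal"),
    ([0.2, 0.1, 0.2, 0.1], "normal"),
    ([0.1, 0.9, 0.8, 0.1], "abnormal"),
    ([0.2, 0.8, 0.9, 0.2], "abnormal"),
]
model = train(training_set)
print(predict(model, [0.1, 0.85, 0.9, 0.1]))  # flags the suspicious scan
```

The point of the sketch is the division of labor: the expert labels carry the medical knowledge, and the model only learns to reproduce the pattern that separates one label from the other.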

In radiology, AI tools are being used as a "second reader." A radiologist reviews a scan and makes their assessment, and then the AI provides its own assessment. If the two disagree, the radiologist takes a closer look. This approach catches things that a tired or rushed radiologist might miss, without replacing the human entirely.
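At its core, the second-reader pattern is a disagreement check, and it can be stated in a few lines. This is a schematic sketch of the workflow logic, not code from any real radiology product; the function name and labels are illustrative.

```python
# Sketch of the "second reader" pattern: the AI never overrules the
# radiologist; it only escalates scans where the two assessments disagree.

def second_reader(radiologist_finding, ai_finding):
    """Return the workflow action for one scan.

    Both inputs are "normal" or "abnormal". Agreement is accepted;
    disagreement sends the scan back for a closer human look.
    """
    if radiologist_finding == ai_finding:
        return "accept: " + radiologist_finding
    return "re-review: radiologist and AI disagree"

print(second_reader("normal", "normal"))    # accept: normal
print(second_reader("normal", "abnormal"))  # escalated for re-review
```

Notice that the human stays in the loop either way: the AI's only power in this design is to trigger a second look.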

For diabetic retinopathy — a leading cause of blindness — the FDA has approved AI systems that can screen patients without a specialist present. A technician takes a photo of the patient's retina, the AI analyzes it, and the result comes back in minutes. This is particularly valuable in areas where eye specialists are scarce.

In pathology, AI systems can analyze tissue samples at a microscopic level, identifying cancer cells and grading tumors. Some studies have shown these systems catching cancers that human pathologists missed on initial review.

What Makes Imaging AI Work Well

Several factors explain why AI performs so well in medical imaging.

The task is well-defined. Looking at a medical image and identifying abnormalities is fundamentally a pattern recognition problem, and pattern recognition is what deep learning does best. There is usually a clear ground truth — the patient either has the condition or does not — which makes it possible to train and evaluate models rigorously.

The data is visual and structured. Medical images follow standardized formats and protocols. An X-ray of a chest looks roughly similar whether it was taken in Tokyo or Toronto. This consistency makes it easier for models to generalize across different hospitals and patient populations, though generalizing across sites and scanners remains an active challenge.

There is a clear clinical need. Radiologists are in short supply globally, and the volume of medical imaging is growing faster than the workforce. AI helps bridge this gap by handling routine screenings and flagging images that need expert attention.

Symptom Checkers: The Good and the Dangerous

Now let us talk about the AI tools you are more likely to use yourself: symptom checkers.

The old generation of symptom checkers worked by matching your symptoms against a database of conditions using branching decision trees. They were often frustratingly generic and tended to either send you into a panic (every headache was a brain tumor) or dismiss serious symptoms.

The new generation, powered by large language models, is significantly better at understanding nuance. You can describe your symptoms in natural language — "I have had a dull ache behind my left eye for three days, and it gets worse when I bend over" — and the AI can ask follow-up questions, consider multiple possibilities, and provide a more thoughtful differential diagnosis.

But here is where you need to be careful.

These tools are not diagnostic. They do not have access to your medical history, your lab results, your physical exam findings, or your imaging. They are making probabilistic guesses based on the text you provide. They will always give you an answer, even when the right answer is "I do not know" or "you need to see a doctor immediately."

Large language models are also prone to hallucination in medical contexts. They might cite studies that do not exist, recommend medications at wrong dosages, or fail to recognize symptom combinations that indicate an emergency. They sound confident regardless of whether they are right.

How to Use Symptom Checkers Wisely

Despite these limitations, AI symptom checkers can be genuinely useful if you approach them correctly.

Use them as a starting point, not an endpoint. If you have symptoms and you want to understand what they might mean before calling your doctor, a symptom checker can help you organize your thoughts and know what questions to ask. But it should never be the final word.

Be specific and honest with your inputs. The quality of the output depends entirely on the quality of the input. Include relevant details: your age, sex, relevant medical history, when symptoms started, what makes them better or worse, and any medications you are taking.

Pay attention to red flags. If a symptom checker says your symptoms could indicate something serious, take that seriously even if it seems unlikely. It is better to make an unnecessary doctor's visit than to dismiss a warning that turns out to be real.

Use multiple sources. Do not rely on a single symptom checker. Try two or three, and compare the results. If they all flag the same concern, that is worth paying attention to.

Never use a symptom checker in an emergency. If you are experiencing chest pain, difficulty breathing, severe bleeding, sudden weakness on one side of your body, or any other emergency symptoms, call emergency services immediately. An AI chatbot is not going to save your life in an acute emergency — trained paramedics and emergency physicians will.

Early Detection and Screening

One of the most promising applications of AI in diagnostics is early disease detection — identifying conditions before symptoms appear, when treatment is most effective.

AI models are being developed to detect early signs of cancer, cardiovascular disease, neurodegenerative conditions, and other diseases from routine data. Examples currently in development or early deployment include: detecting lung cancer from low-dose CT scans earlier and more accurately than current methods; identifying early signs of Alzheimer's disease from speech patterns and cognitive tests; predicting cardiovascular events from electrocardiogram data that appears normal to human readers; and screening for colorectal cancer with AI-enhanced colonoscopy that spots polyps a human operator might miss.

The common thread is that AI can sometimes detect subtle patterns in data that are invisible to human observers. A cardiologist looking at an ECG sees the obvious waveforms. An AI model, trained on millions of ECGs and their outcomes, might detect tiny variations in the signal that correlate with future heart problems.

The Limitations of AI Diagnostics

For all its promise, AI diagnostics has real limitations that you should understand.

Dataset bias is a serious problem. If an AI system was trained primarily on images from one demographic group, it may perform poorly on others. Several studies have shown that dermatology AI — systems designed to identify skin conditions from photos — performs significantly worse on darker skin tones because the training data was disproportionately sourced from lighter-skinned populations.

Overdiagnosis is a risk. AI systems can be so sensitive that they flag abnormalities that would never have caused problems. This can lead to unnecessary biopsies, surgeries, and anxiety. In breast cancer screening, for example, there is an ongoing debate about whether more sensitive detection always leads to better outcomes, or whether it sometimes leads to treating cancers that would never have become dangerous.
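The overdiagnosis tradeoff largely comes down to where a screening threshold is set. Here is a toy illustration with entirely made-up risk scores: lowering the cutoff catches more true disease (higher sensitivity), but at the cost of flagging more healthy patients.

```python
# Toy illustration of the sensitivity / overdiagnosis tradeoff.
# Each patient is (model risk score, true status); all values invented.
patients = [
    (0.95, "disease"), (0.80, "disease"), (0.40, "disease"),
    (0.70, "healthy"), (0.30, "healthy"), (0.20, "healthy"),
    (0.10, "healthy"), (0.05, "healthy"),
]

def screen(threshold):
    """Return (sensitivity, false positives) at a given score cutoff."""
    flagged = [(s, t) for s, t in patients if s >= threshold]
    true_pos = sum(1 for _, t in flagged if t == "disease")
    false_pos = sum(1 for _, t in flagged if t == "healthy")
    total_disease = sum(1 for _, t in patients if t == "disease")
    return true_pos / total_disease, false_pos

# A strict cutoff misses one cancer; a lax cutoff finds all three
# cancers but also flags two healthy patients for follow-up.
print(screen(0.75))  # catches 2 of 3, no false alarms
print(screen(0.25))  # catches 3 of 3, plus 2 false alarms
```

There is no threshold in this toy data that catches every cancer without also flagging someone healthy, and that is the dilemma in miniature: "more sensitive" is not automatically "better" once you count the biopsies and anxiety on the other side of the ledger.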

Context matters, and AI often lacks it. A shadow on a chest X-ray might be a tumor, or it might be a rib fracture that the patient forgot to mention, or an artifact from the imaging equipment. A human radiologist can call the patient, check their history, and use clinical judgment. An AI model sees only the image.

Validation gaps are common. Many AI diagnostic tools have been tested in controlled research settings but not in the messy reality of everyday clinical practice. A model that performs brilliantly on a curated research dataset may struggle when deployed in a busy hospital with different equipment, different patient populations, and overworked staff.

The Bottom Line

AI diagnostics is real, it is improving rapidly, and it is already saving lives in some clinical settings. For you as an individual, the most practical applications right now are AI-enhanced symptom checkers and screening tools — and the most important skill is knowing how to use them wisely.

Trust the technology enough to listen to it. Distrust it enough to verify what it tells you with a qualified healthcare professional. And remember that a diagnosis — whether it comes from a clinician or an AI — is the beginning of a conversation with your doctor, not the end of one.