What Can Go Wrong
AI creates new risks. Some are technical. Some are ethical. Some are regulatory. All are your responsibility.
This chapter covers what can go wrong and how to manage it.
The Risk Landscape
Technical Risks
Model failure: AI doesn't work as expected. Accuracy degrades. Edge cases aren't handled.
Data problems: Bad data, biased data, stale data, or data breaches.
System failures: Outages, performance problems, integration breaks.
Security vulnerabilities: AI systems can be attacked in novel ways.
Business Risks
Failed projects: Investment without return. Most AI projects fail.
Vendor dependency: Critical capabilities controlled by others.
Talent loss: Key people leave, taking knowledge with them.
Competitive risk: Falling behind or betting on the wrong approach.
Ethical Risks
Bias and discrimination: AI perpetuating or amplifying unfairness.
Privacy violations: Using data in ways customers don't expect or want.
Deception: AI presenting as human, generating misleading content.
Harm: AI decisions that hurt individuals or communities.
Regulatory Risks
Compliance failures: Violating current regulations.
Regulatory change: New regulations invalidating current approaches.
Enforcement: Regulatory attention to AI is increasing globally.
Bias and Fairness
The Bias Problem
AI learns patterns from data. If the data reflects historical bias, AI learns that bias.
Examples:
- Hiring AI that discriminates against women (trained on biased hiring history)
- Lending AI that discriminates against minorities (trained on biased lending history)
- Facial recognition that works poorly on darker skin (trained on unrepresentative data)
- Medical AI that works better for some populations than others
This isn't theoretical. All of these have happened.
Why Bias Happens
Biased training data: Historical decisions reflect historical biases.
Unrepresentative data: Training data doesn't reflect the population AI serves.
Proxy discrimination: AI uses features that correlate with protected characteristics.
Feedback loops: Biased AI creates data that reinforces bias.
Managing Bias
Assess training data: Is it representative? Does it reflect historical discrimination?
Test for disparate impact: Does the AI perform differently for different groups?
Monitor in production: Bias can emerge or shift over time.
Human oversight: Keep humans in the loop for consequential decisions.
Document decisions: Know what data was used and why.
Audit regularly: Independent review of AI fairness.
Fairness Is Not Simple
Different definitions of fairness can conflict. "Treating everyone the same" can be unfair if starting positions differ. "Equalizing outcomes" can require treating people differently.
There's no mathematically perfect answer. This is a values decision requiring judgment.
Privacy and Data Protection
The Data Problem
AI is hungry for data. This creates privacy risks:
- Collecting more data than necessary
- Using data for purposes beyond what was consented
- Retaining data longer than needed
- Sharing data inappropriately
- Failing to protect data from breaches
Regulatory Landscape
GDPR (Europe): Strict consent requirements, right to explanation for automated decisions, data minimization, purpose limitation.
CCPA/CPRA (California): Consumer rights over personal data.
Industry regulations: Healthcare (HIPAA), finance (GLBA), etc.
Emerging AI regulations: EU AI Act, US state laws, and others emerging.
Privacy Best Practices
Data minimization: Collect only what you need. Use only what you need.
Purpose limitation: Use data for stated purposes. Get consent for new uses.
Transparency: Tell people how their data is used.
Security: Protect data appropriately.
Retention limits: Don't keep data forever.
Access controls: Limit who can access sensitive data.
Documentation: Know what data you have and why.
Specific AI Considerations
Training data:
- Do you have rights to use this data for training?
- Does it include personal information?
- Can individuals be re-identified?
Model outputs:
- Can outputs reveal information about training data?
- Do generated outputs raise privacy concerns?
Third-party AI:
- What happens to data sent to AI vendors?
- Is data used to train their models?
- Where is data processed?
AI-Specific Regulations
EU AI Act
The most comprehensive AI regulation, taking effect in stages:
Prohibited AI:
- Social scoring
- Real-time biometric identification in public spaces
- Manipulation techniques
- Certain predictive policing
High-risk AI (heavy requirements):
- Critical infrastructure
- Education and employment decisions
- Law enforcement
- Migration and asylum
Limited risk (transparency requirements):
- Chatbots (must disclose AI)
- Emotion recognition
- Deepfakes (must label)
US Approach
Less comprehensive federal regulation, but:
- Existing laws apply to AI (discrimination, consumer protection)
- State-level AI laws emerging
- Sector-specific guidance (healthcare, finance)
- Executive orders on AI safety
What This Means for You
Know your exposure:
- Where do you operate?
- What regulations apply?
- What's the risk of non-compliance?
Build compliance into design:
- Don't retrofit compliance
- Document decisions
- Build for auditability
Stay current:
- Regulation is evolving rapidly
- Monitor developments
- Engage with regulatory guidance
Security Risks
AI-Specific Vulnerabilities
Adversarial attacks: Inputs designed to fool AI. An image tweaked to make AI misclassify. Text crafted to bypass content filters.
Data poisoning: Corrupting training data to compromise the model.
Model theft: Extracting proprietary models through API access.
Prompt injection: Malicious instructions hidden in inputs to manipulate AI behavior.
Security Practices
Treat models as assets: Models can be valuable and sensitive. Protect them.
Secure training pipelines: Control access to training data and model development.
Monitor for adversarial inputs: Watch for unusual patterns that might indicate attacks.
Input validation: Don't trust inputs. Validate and sanitize.
Access controls: Limit who can access AI systems and data.
Incident response: Plan for AI-specific security incidents.
Governance Structures
AI Governance Committee
A cross-functional body to oversee AI:
Composition:
- Executive sponsor
- Legal/compliance
- IT/security
- Business stakeholders
- Ethics representative
- Technical leadership
Responsibilities:
- Approve high-risk AI use cases
- Set policies and standards
- Review incidents and problems
- Monitor regulatory developments
- Balance innovation and risk
Policies You Need
AI use policy: When is AI appropriate? What approvals are needed?
Data governance policy: How is data managed for AI?
Vendor policy: Requirements for AI vendors.
Ethics policy: Values and principles for AI use.
Testing and validation policy: How is AI validated before deployment?
Monitoring policy: How are AI systems monitored in production?
Incident response policy: What happens when AI fails or causes harm?
Documentation Requirements
For significant AI systems, document:
- Purpose and intended use
- Training data sources and characteristics
- Performance metrics and testing results
- Known limitations
- Bias testing results
- Human oversight mechanisms
- Monitoring approach
- Change history
Documentation enables accountability, auditing, and improvement.
Managing AI Failures
When AI Fails
AI will fail. Errors will happen. The question is how you respond.
Immediate response:
- Assess impact
- Mitigate harm
- Communicate appropriately
- Preserve evidence
Root cause analysis:
- What happened?
- Why?
- How did it get past controls?
- What's the systemic issue?
Remediation:
- Fix the immediate problem
- Address root causes
- Update processes
- Prevent recurrence
Communication
Internal communication:
- Inform affected parties
- Brief leadership
- Document for compliance
External communication:
- Legal review
- Appropriate disclosure
- Consider regulatory notification
- Manage reputation
Building a Risk-Aware Culture
Principles
Risk management enables innovation: Done right, governance creates confidence to move faster.
Everyone owns risk: Not just compliance's job.
Speak up: Problems caught early are cheaper.
No surprises: Escalate when uncertain.
Learn from failures: Blame creates hiding; learning creates improvement.
Behaviors
Leaders:
- Ask about risks explicitly
- Reward problem-finding
- Make time for governance
- Model ethical behavior
Teams:
- Consider risks in design
- Test for edge cases
- Document decisions
- Escalate concerns
Starting Point
If you're building AI governance:
- Inventory: What AI do you have or plan?
- Risk assess: What are the risks of each?
- Gaps: What policies and processes are missing?
- Prioritize: Address highest risks first
- Build: Create governance structures and policies
- Embed: Make governance part of normal process
- Monitor: Review and improve continuously
Perfect governance isn't the goal. Appropriate governance is. Match rigor to risk.