
2024 AI Darwin Award Winners

“The Incident That Made These Awards Necessary”

⚠️ A Special Note About This Award

The AI Darwin Awards were formally established in 2025, which created an interesting dilemma: what do you do when the most significant example of AI overconfidence predates your awards by several months?

We considered pretending 2024 didn't happen. We contemplated creating elaborate temporal paradoxes. But ultimately, some incidents are too important to ignore because of calendar logistics.

This case doesn't just “win” the 2024 AI Darwin Awards—it's the reason these awards exist at all.

This was the watershed moment demonstrating that AI overconfidence had transcended technical failure and entered the realm of catastrophic human consequence, making these awards not just relevant but urgent.

🏆 The Founding Case

IDF Lavender AI Targeting System

“The Algorithm of Destruction”
The Incident That Made These Awards Necessary

The Israeli military deployed “Lavender,” an AI system that rated Gaza's 2.3 million residents on a 1-100 militant likelihood scale, achieving what they celebrated as 90% accuracy—which meant approximately 3,700 misidentifications for every 37,000 people flagged. Combined with “The Gospel” (building targeting) and “Where's Daddy?” (individual tracking), this unholy trinity of automation replaced the “human bottleneck” of careful verification with 20-second rubber-stamp reviews. The system would then track targets to their family homes and authorise bombings—typically at night when everyone was sleeping—with pre-approved civilian casualty ratios of up to 100 deaths per senior target. Thousands of civilians, predominantly women and children, were killed based on algorithmic assessments, with entire families eliminated in their homes—sometimes when the intended target wasn't even present.
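For readers who want to check the arithmetic themselves, here is a minimal back-of-the-envelope sketch in Python using only the figures reported above. The variable names are illustrative and nothing here models the actual system, whose internals are not public:

```python
# Back-of-the-envelope check of the figures quoted above.
# All numbers are the reported ones, not independent data.

flagged = 37_000   # people flagged as suspected militants (reported)
accuracy = 0.90    # the celebrated accuracy rate (reported)

# A 90% accuracy rate implies a 10% error rate across everyone flagged.
misidentified = flagged * (1 - accuracy)
print(f"Expected misidentifications: {misidentified:,.0f}")
# -> Expected misidentifications: 3,700
```

The point of the exercise is that "90% accurate" sounds reassuring as a percentage and horrifying as a head count: the same error rate that would be acceptable in a spam filter translates, at this scale, into thousands of wrongly targeted people.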

Why This Case Defines the AI Darwin Awards:

  • Catastrophic Overconfidence: Treating 90% accuracy as “good enough” for life-and-death decisions
  • Systematic Failure: Human oversight reduced to 20-second “rubber stamp” reviews
  • Irreversible Consequences: Thousands of civilians killed based on algorithmic misidentifications
  • Mathematical Hubris: 10% error rate across tens of thousands of targets = mass tragedy

View the Complete Scorecard