AI Darwin Awards

The full scorecard of failure

Deloitte Citation Chaos

Full scoring breakdown and rationale

Folly 75
Submitting a $440,000 report filled with hallucinated legal citations is a legendary level of incompetence that undermines the credibility of the entire firm.
Arrogance 70
Using generative AI for a high-stakes government report without verifying its output demonstrates legendary arrogance and a dereliction of professional duty.
Impact 60
The scandal became a national incident, forcing a partial refund and raising serious questions about the use of AI by major government contractors.
Lethality 20
The report was intended to inform welfare policy, where flawed decisions can lead to significant indirect harm and economic distress for vulnerable populations.
Base Score 60.25
Bonuses +5
  • The firm used an unreliable AI to write a report about the consequences of Australia's disastrously unreliable 'Robodebt' automated welfare system.
Penalties -5
  • Deloitte initially declined to comment on its AI use, only admitting to it after being publicly called out and forced to issue a revised report.
Final Score 60.25
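The scoring arithmetic above can be sketched in a few lines. Note that the weighting that turns the four category scores into the base score is not published here, so the base is taken as given; the function name and signature below are illustrative assumptions, not an official formula.

```python
def final_score(base: float, bonuses: list[float], penalties: list[float]) -> float:
    """Apply flat bonus and penalty adjustments to a given base score."""
    return base + sum(bonuses) - sum(penalties)

# Deloitte Citation Chaos: the +5 recursive-irony bonus and the -5
# belated-admission penalty cancel, so the final score equals the base.
print(final_score(60.25, bonuses=[5], penalties=[5]))  # 60.25
```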
Deloitte achieved a perfect recursive failure by using a hallucinating AI to analyze a government scandal caused by a hallucinating algorithm. Their $440,000 report on the Robodebt affair was itself a work of fiction, citing non-existent court cases and imaginary academic papers. The initial denial, followed by a quiet admission and a partial refund, completes the lifecycle of corporate AI hubris. This incident is a masterwork of irony, proving that you cannot solve the problem of unreliable automation by adding another layer of unreliable automation.

Failure Fingerprint

Final Score: 60.25