"Not all intelligence can be artificial now, so if we make a mistake, the consequences are no longer simply located within an institution or a national culture"
About this Quote
Thompson’s line lands like a warning shot at the exact moment “AI” starts functioning as a cultural alibi. By insisting “not all intelligence can be artificial now,” he’s puncturing the techno-utopian fantasy that cognition is a clean engineering problem. The phrase “can be” matters: it’s not just a technical limit, it’s an ethical one. Human judgment, embodied experience, and moral accountability still sit stubbornly outside the machine, which means you can’t outsource responsibility and call it progress.
The second clause is where the stakes expand. Mistakes used to be containable: a bureaucracy botches a policy, a nation misreads a crisis, an institution pays the price (or at least can be pressured to). Thompson suggests we’ve crossed into a reality where error propagates through networked systems that don’t respect borders or organizational walls. When intelligence is distributed (part human, part machine, part infrastructure), the blast radius of failure becomes planetary. A biased model isn’t only a bad “tool”; it’s a cultural force multiplier. A misaligned automated decision isn’t merely a corporate hiccup; it can rewire access to work, healthcare, and security at scale.
Subtextually, he’s diagnosing a shift in where power lives. Institutions used to be the obvious locus of accountability; now intelligence is braided into platforms, supply chains, and invisible feedback loops. The intent isn’t to fearmonger about AI so much as to demand a new moral geography: if consequences aren’t local anymore, our ethics can’t be either.
Quote Details
| Topic | Artificial Intelligence |
|---|---|