"We need to make sure we get it right, because AI will be a defining technology for the future of humanity"
About this Quote
Hassabis is doing two things at once: sounding the alarm and trying to keep his hands on the steering wheel. “We need to make sure we get it right” is the soft, collective phrasing of an industry insider who knows the stakes are enormous and the accountability is still fuzzy. It’s a sentence built to recruit allies without naming enemies, to invite regulation without conceding control. The “we” is strategically roomy: governments, labs, the public. It’s also conveniently elastic, letting the people building the systems define what “right” means.
Calling AI “a defining technology for the future of humanity” isn’t just hype; it’s a bid to move AI out of the gadget cycle and into the category of electricity, antibiotics, the internet. That framing matters because it smuggles in a moral claim: if this is that big, then the institutions shaping it deserve exceptional attention, resources, and deference. It also pressures skeptics. Disagreeing starts to look like negligence toward “the future,” rather than a reasonable critique of power, safety, labor disruption, surveillance, or inequality.
Contextually, this is the signature rhetoric of the post-ChatGPT era: immense capability gains paired with public unease, and a wave of “responsible AI” language from the same companies racing to deploy. The subtext is a calibrated plea: trust us, but also help us. It’s an attempt to turn an existential-risk narrative into a governance narrative, ideally one where the builders remain central. In that sense, the quote isn’t neutral foresight; it’s a shaping instrument, trying to define both the technology and the terms under which society is allowed to debate it.
Quote Details
| Field | Detail |
|---|---|
| Topic | Artificial Intelligence |
| Source | Talks and interviews on AI safety and societal impact (various, 2018–2024) |