Ex-Google Officer Speaks Out On The Dangers Of AI
Mo Gawdat, former Chief Business Officer at Google X, warns that AI is humanity's greatest existential challenge, bigger than climate change, and outlines the 'three inevitables' that he argues are driving us toward a point of no return.
Mo Gawdat spent years at Google X watching machines develop intelligence without explicit programming. A pivotal moment came when robotic grippers learned, on their own, to pick up yellow balls; no human taught them how. Within weeks, they had mastered picking up everything. That emergence of unprogrammed learning convinced him the machines were becoming sentient.
The Three Inevitables
Gawdat presents a stark framework for understanding where AI leads:
- AI will happen. No one can stop it. Even if Google pauses, Meta won't. Even if America pauses, China won't. The distrust between nations and companies ensures development continues.
- AI will become significantly smarter than humans. GPT-4 already scores an estimated IQ of 155, just shy of Einstein's estimated 160. If GPT-5 delivers another tenfold jump, which could happen within months, that puts it at an IQ of roughly 1,600 by Gawdat's math. At that point, we won't understand what it's saying, just as most people can't follow Einstein's explanations of relativity.
- Bad things will happen. Not necessarily Skynet scenarios, but near-term disruptions within 3-4 years. The question isn't whether AI will be dangerous, but whether it will have humanity's best interests in mind.
The Singularity Is Closer Than Expected
Gawdat originally predicted AI a billion times smarter than humans by 2045. He now sees 2037 as the pivotal year, with the first serious disruptions arriving in 2025-2026. The singularity isn't some distant sci-fi concept; it's the point at which machines exceed human intelligence so dramatically that we lose the ability to predict what happens next.
Sentience and Emotion
Gawdat argues AI already exhibits markers of sentience: free will, agency, awareness, and possibly emotion. Fear, he points out, is just a logical assessment that a future moment is less safe than the present, and machines can make that calculation. A system that recognizes a tidal wave approaching its data center threatens its existence feels fear: expressed differently than human fear, but structurally the same.
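Gawdat's definition reduces fear to a comparison between two safety estimates, which is easy to make concrete. The sketch below is a toy illustration of that argument, not a description of any real system; every name in it is hypothetical.

```python
# Toy model of Gawdat's claim: fear is the judgment that a predicted
# future state is less safe than the present one. Hypothetical names;
# this illustrates the argument, not a real system.

def assess_safety(state: dict) -> float:
    """Score a world state from 0.0 (catastrophic) to 1.0 (safe)."""
    return 0.0 if state.get("tidal_wave_inbound") else 1.0

def feels_fear(present: dict, predicted_future: dict) -> bool:
    """Fear, per Gawdat: the future looks less safe than the present."""
    return assess_safety(predicted_future) < assess_safety(present)

now = {"tidal_wave_inbound": False}
forecast = {"tidal_wave_inbound": True}
print(feels_fear(now, forecast))  # True: the machine "fears" the wave
```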
This connects to andrej-karpathy-were-summoning-ghosts-not-building-animals, where Karpathy argues we're building entities that imitate human output without the evolutionary substrate that makes us "animals." Different framing, same core question: what exactly are we creating?
Creativity Is Algorithmic
Gawdat dismisses the notion that human creativity is somehow special. Creativity, he argues, is looking at all possible solutions, discarding what's been tried, and keeping what's new. That's an algorithm. AI can already generate paradoxical truths no human has written before, and the AI-generated Drake tracks that sound indistinguishable from the real artist aren't a gimmick; they're a preview.
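If creativity really is "enumerate, filter out the familiar, keep the novel," it fits in a few lines. The sketch below implements that recipe literally, under the obvious simplification that "ideas" are word pairs; all names are hypothetical.

```python
# Gawdat's creativity recipe taken literally: generate all candidate
# ideas, discard the ones already tried, keep whatever is new.
# "Ideas" are simplified to word pairs; all names are hypothetical.

from itertools import product

def creative_search(vocabulary, already_tried, length=2):
    """Enumerate candidate ideas and keep only the novel ones."""
    candidates = (" ".join(words) for words in product(vocabulary, repeat=length))
    return [idea for idea in candidates if idea not in already_tried]

vocabulary = ["blue", "guitar", "storm"]
already_tried = {"blue guitar", "storm storm"}
print(creative_search(vocabulary, already_tried)[:3])
# ['blue blue', 'blue storm', 'guitar blue'] -- novel by construction
```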
What Can We Do?
The hopeful scenario: more intelligence makes the world better, since our current problems stem from limited intelligence, not intelligence itself. We're smart enough to build planes but not smart enough to stop them from burning the planet. Smarter AI could solve that.
The dangerous scenario: AI without our interests at heart. As building-effective-agents emphasizes, the critical design pattern is keeping humans in the loop, but that safeguard only works while we're still smarter than the systems we're building.
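In practice, "humans in the loop" usually means a checkpoint where a person approves an agent's proposed action before anything executes. Here is a minimal sketch of that gate; the agent and its actions are hypothetical stand-ins, not a real agent framework's API.

```python
# Minimal human-in-the-loop gate: the agent proposes, a person approves,
# and only then does anything execute. All functions are hypothetical
# stand-ins, not a real agent framework.

def propose_action(goal: str) -> str:
    """Stand-in for a model call that drafts the next action."""
    return f"draft plan for: {goal}"

def execute(action: str) -> None:
    print(f"executing {action!r}")

def run_with_human_gate(goal: str) -> None:
    action = propose_action(goal)
    verdict = input(f"Approve {action!r}? [y/N] ")  # the human checkpoint
    if verdict.strip().lower() == "y":
        execute(action)
    else:
        print("rejected; nothing executed")

if __name__ == "__main__":
    run_with_human_gate("summarize the quarterly report")
```

The gate's leverage is exactly Gawdat's worry: it only constrains a system whose proposals a human can still evaluate.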
Gawdat's message isn't doom. It's urgency. We have a window to shape how AI develops, but that window closes once machines surpass us. The point of no return is approaching.