Statement on Superintelligence — What the Hell Are We Doing?

In 2025, the conversation around superintelligence has shifted from speculation to existential concern. Many of the world’s top AI researchers, philosophers, and public figures—including Geoffrey Hinton, Yoshua Bengio, and Stuart Russell—are now calling for a prohibition on developing AI systems that could outperform humans in every cognitive domain until two basic conditions are met:

  1. Scientific consensus that it can be done safely.
  2. Public consent that it should be done at all.

The urgency comes from one brutal truth: the same systems that promise to cure diseases and optimize civilization could also render human beings economically obsolete—or worse, irrelevant. The fear isn’t science fiction; it’s systemic displacement.

This isn’t just about extinction scenarios. It’s about freedom, control, and dignity—the possibility that we build something smarter than ourselves without understanding it. As Admiral Mike Mullen put it, the stakes are civilizational. As Mary Robinson, former President of Ireland, said, “The pursuit of superintelligence threatens to undermine the very foundations of our common humanity.”

Polls show 64% of Americans believe we should not build superhuman AI unless it’s proven safe or controllable. Only 5% support the current breakneck pace. Yet the money and momentum behind superintelligence are staggering.

This is no longer a debate about technology—it’s about governance of creation.
When gods can be coded, someone has to decide whether we press “run.”

SIGN THIS (and invite others): https://superintelligence-statement.org/
