This is a link post for The Compendium.
My colleagues and I work on the problem of catastrophic risk from AGI. We have recently put together a (mostly) complete account of this topic.
The Compendium is a bit different from most other texts on this topic in its thoroughness, its accessibility to non-technical readers, and its focus on the political, social, and ideological aspects of the current state of affairs.
Its style is far more matter-of-fact than this blog’s, but I still hope you consider giving it a read.
Below, I republish the foreword I wrote for The Compendium, written in a style more familiar to this blog’s audience:
Compendium - Foreword
A few million years ago, something very strange happened.
Through minor genetic tweaks, an ancestor of the modern chimpanzee split into a new line of species: Homo, humans. This new chimp variant was odd in several ways: it learned to stand upright, lost most of its fur, and grew a bigger brain. This bigger brain was not really all that different from that of its chimp cousins, just scaled up by a factor of about three.
If you had seen this odd, half-naked chimp with a brain three times bigger than its cousins’, and had to guess what this new chimp would do, what would you have said?
Maybe you would have expected it to be a bit better at collecting termites, or to throw rocks more accurately, or to have more complicated status hierarchies. But that 3x scaled-up chimp ends up building nuclear weapons and going to the moon. Chimps don’t go one third of the way to the moon; they go zero of the way to the moon. Humans go all the way.
We still don’t know exactly how or why this happened, but whatever it was that happened, we call the result General Intelligence. It is what has allowed our species to build the magical glowing brick that you are looking at right now, which transmits the words of another chimp descendant located halfway across the world to your eyes and brain.
This is crazy.
General Intelligence is what separates human from animal, industrial civilization from chimpanzee band. It is unlikely to be a discrete, all-or-nothing property, but it sure is suspicious that you go from “never going to the moon” to “going all the way to the moon” within a 3x difference in brain size. Things can change quickly with scale.
Our intelligence makes us the masters of the planet. The future of chimpanzees is utterly dependent on what humans want to do with them. If we want to give them infinite food, incredible medicines they can’t hope to understand, and safety from any predators, we can. If we want to keep them in zoos, or hunt them for sport, we can. If we wanted them extinct, their habitats paved over with parking lots and solar cells, we could.
This kind of relationship, of complete domination over another, is the natural balance of power between a much more intelligent creature and a less intelligent one. It’s the kind of power an adult has over a small child, or an owner over their pet. The arrangement may or may not be beneficial to the weaker party, but ultimately, the more intelligent and powerful agent decides the future. A pet doesn’t get a say in whether it gets spayed or not.
Luckily, there are no other species out there running around that might be even smarter than us.
-
But that is changing.
Currently, the future belongs to humanity, for better or for worse. The planet and stars are ours to do with as we decide. If we want to drown ourselves in pollutants and a warming climate, we can. If we want to annihilate each other in nuclear war, we can. If we want to become responsible stewards of our environment, we can. If we want to build global abundance, limitless energy, interstellar travel, transcendent art and a rule of just law, we can.
If a new, more intelligent species were to appear on Earth, humanity would surrender its choice over what future we want to make manifest. The future would be in the hands of the successor, and humanity would be relegated to a position no more admirable than the one chimpanzees inhabit today.
No such more intelligent species exists today, but one is being built.
Since its inception, the field of artificial intelligence has aspired to construct artificial minds and species as smart as, and then even smarter than, humans. If the field succeeds, and such systems are built, humanity will no longer be in control of the future; the decisions will be in the hands of the machines.
-
If you don’t do something, it doesn’t happen.
This might seem so obvious it’s barely worth bringing up. Yet, you might be surprised how often people, probably including you, don’t really believe this.
If we want the future to go well, someone needs to make it so. The default state of nature is chaos, competition, and conflict, not peace. Peace is a choice we must strive for, a delicate balance on the edge of entropy that must be lovingly and continuously maintained and strengthened. Good intentions are not enough — it demands calm, cooperative, and decisive action.
This document is a guide to what is happening with AI, and it offers a playbook for nudging the future in the direction you want it to go. It is not a solution, but a guide. A book cannot be a solution; only a person can.
What is AI? Who is building it? Why? And will it lead to a future we want? (Spoiler: no.) There are so many things happening every single day in the field of AI, not to speak of geopolitics, that it seems impossible to keep up, or to stay focused on what really matters: What kind of future do we want, for ourselves and for our children?
We must steady our focus on this, and not let ourselves be distracted by all the noise and demoralizing nihilism pelting down on us from all sides. We need to understand where we want to go, chart a path there, and then walk this path.
If we don’t do something, it doesn’t happen.
-
The default path we are on now is one of ruthless, sociopathic corporations racing to build the most intelligent, powerful AIs as fast as possible, competing with one another and vying for monopolization and control of both the market and geopolitics. Without intervention, humanity will be summarily outcompeted by such machines and relegated to irrelevancy, just as we did to our chimp cousins.
A species of intelligent beings born from the crucible of sociopathic market and military competition will not be one of loving grace, and will have far fewer qualms about paving over humanity’s habitat with solar cells and parking lots. Despite humanity’s flaws, we still have a heart, we have love, even for our chimpanzee cousins, somewhere, sometimes. Machines of hateful competition need not have such hindrances.
And then that’s…it. Story over. Humanity is no more.
No one is coming to save us. There is no benevolent watcher, no adults in the room, no superhero who will come to save the day. This is not that kind of story. The only thing necessary for the triumph of evil is for good people to do nothing. If you do nothing, evil triumphs, and that’s it.
If you want a better ending for the Human Story, you must create it. Can we forge a good, humanist future, one that is just, prosperous, and leaves humanity sailing into a beautiful twilight, wherever its final destination may lie? Yes. But if you don’t do it, it doesn’t happen.
The path we are on is one of going out with a whimper, not of humanist splendor. It is embarrassing to lose out on all of the future potential of humanity because of the shortsightedness and greed of a few. But it wouldn’t be surprising. A human story if there ever was one.
-
The ending to the Human Story isn’t decided yet, but it will be soon.
We hope you join us in writing a better ending.
- Connor Leahy, October 2024