Extinction vs. Redundancy: The Two AI Futures

We face two major threats from AI: extinction and redundancy. We might get killed, and we might become obsolete. The first is a possibility, but the second is a guarantee.

When it comes to AI extinction risk, nobody is even trying to stop it. Not really. It’s easy to ignore the threat, since it feels so far away, and to focus instead on making sure AI doesn’t say any swear words or whatever. Currently, AI is about as dumb as a shrimp that somehow knows Python. It’s hard to imagine a shrimp killing you. But everyone knows eventually AI will develop superhuman cognitive capabilities. To quote Sam Altman, “scientific advancement eventually happens if the laws of physics do not prevent it.”

Once it does, controlling the AI will be essentially impossible. It will be able to escape any non-trivial bounds we place on it. That’s what intelligence means. Most counter-arguments to that are just a failure to understand and appreciate the term “superintelligent”. Even a slight misalignment between human goals and AI goals could cause disaster in the limit.

Good thing we have all these big, fancy tech companies to save the day, right?

Well, companies aren’t the ones facing the risk of extinction, are they? Corporations can, in principle, continue to exist as entities even after all human beings have been destroyed. ChatGPT 13 won’t cease to be an OpenAI project once it’s eradicated mankind. It will simply be staffed by AI. Once they’re good enough, why not? It’s a dismal view of the future, a million robots at OpenAI, grinding endlessly to produce better and better versions of ChatGPT long after humans have died off. But we know large corporations are amoral, profit-maximizing machines, and they will replace human workers with AI to save money. They will replace the CEO with AI. And they’ll keep going on whatever track they’re on. What happens to the humans afterward? Your guess is as good as mine.

Either way, corporations have no reason to stop progress. Sure, when the AI apocalypse comes, some companies will lose out. But the company that discovers superintelligence first will get a huge First-Apocalyptor Advantage, and that fact pits all AI companies against each other in a race to extinguish humanity.

They won’t say this, of course. Speaking the truth is not generally a profit-maximizing move. Individually, employees and CEOs may lie or they may tell the truth, but the corporation itself will obfuscate its motives and distract from real AI safety as much as it can. Sam Altman signs the Statement on AI Risk in 2023, and then continues to race forward, improving AI as much as possible. Like a magician. Pay attention to what I’m saying so you don’t see what I’m doing.

When I was a kid, I watched movies. In each movie, there was a superhero, or maybe a non-super hero, but there was always a hero. Someone who could swoop in and save the day. Growing up you watch Iron Man and you think, maybe I can be like Iron Man. Maybe I can save the day.

Then reality set in as I got older. Amazon alone has over 1,600,000 employees, as of 2021. (Even with all the tech layoffs since then, there are probably still dozens of people working at Amazon!) I found out that we’re all just ants, and if I’m lucky, I’ll get to be one of those specialized ants whose head is a door, and I’ll get to work day in and day out as a door. That’s about the most any ant in the hill can really hope for. That’s the goal we’re all striving for. The other ants will say, “I have to carry leaves all day. Must be nice to be a door.”

Even the CEOs and politicians — the queen ants, so to speak — don’t seem to have much individual power. They have to lie, constantly, about everything. That’s a sign they don’t have much power. An absolute monarch doesn’t have to lie. It’s, “bend the knee, or die.” But Sam Altman has to lie. He has to say how, oh, the AI future is just gonna be fantastic! We’ll join hands with our machine brothers and sing “Oh Happy Day”. AI are just gonna be tools, you see, to empower human workers. Hand in glove! (I will address that lie momentarily.)

The real world does not work like movies. None of us can stop this on our own. I’m not calling for action, or trying to propose new legislation. I think these problems are coming, maybe in 2 years, maybe in 200, and there’s not much we can do. If we all scream at the top of our lungs, maybe the boot will realize there are ants on this hill and decide not to squish us. Convincing everyone to freak out: that’s the only thing that could work.

This whole time I’ve been talking, Sam Altman has still been blabbering about how AI won’t replace humans, and we’ll simply use them as tools, and it will be a joyful world, and cancer will be cured, and we’ll have AI skateparks. Let me close the door in his face and then present a little problem with that argument.

SLAM

Here it is: AI cannot be tools if they can act independently. When your hammer can move from nail to nail and hammer them in by itself, you don’t really need the guy who holds the hammer anymore.

Okay, but you do need the guy who stands around wearing the bright orange vest with the clipboard, right? The guy who yells at the guys with hammers? The high-level planning guy? And maybe a few other guys to stand around with him?

Well, we’re basically trying as hard as we can to make AI able to do that guy’s job too, aren’t we? We aren’t just creating robots. We’re creating intelligences, capable of doing high-level planning just as well as low-level jobs. Right now AI isn’t good enough to plan and manage a construction project, but it’s not good enough to plan and manage a software project either. But there’s nothing about the technology that suggests it will never be. Whether in 2 years or 200, it will be good enough.

Progress is happening faster than you think! Remember when general intelligence just meant “it can play chess and look at images and talk about stuff”? Now we have an AI that’s capable of doing basically any cognitive task (at a moron level), and we still say, well, does it really understand what it’s saying? Is it really AGI? (That’s why I’m not using the term AGI here at all and just saying “superintelligence”.)

It doesn’t matter if it “understands” what it’s saying. It doesn’t matter if it understands the house, as long as the house gets built.

AI will not be tools, they will be agents, and agents don’t need someone to wield them.

“But what if we merge with the AI!” screams Sam Altman from outside my house. I wheel the window closed, the word “cyborg” slipping into my house right before the window latches. He looks pretty mad, so I close the blinds.

Sam Altman thinks we will merge with AI, and that will save us from extinction. Here is a real Sam Altman quote (not that they haven’t all been real!):

The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.

Sam Altman isn’t the only one. Many people think that if we can’t beat ‘em, we ought to join ‘em. Merge into one human-machine hybrid. Just like how the reptilian brain merged with the mammalian brain, and then with the neocortex, to create a whole that is greater than the sum of its parts. That worked great for us, didn’t it? We’ll just do that, but with AI.

Problem: the reptile brain, the mammalian brain, and the neocortex are not aligned. They are not living in harmony. They are constantly at war. The only reason we’ve made it this far without one layer of the brain rewriting the others is that none of the three layers is capable of modifying the rest. They’re all stuck together, like it or not. You’d better believe if the neocortex had the power to modify the other layers, it would. I mean, it does. Every time you try to become a harder-working, less lazy person, your neocortex is trying to override the preferences of the other layers. If there were a technology that could make you even less lazy, giving more power to the neocortex, would you use it? (Consider that you almost certainly already use such a technology: coffee.)

So no, merging does not solve the AI alignment problem. A human-machine hybrid is nonsensical in light of superintelligent AI. Just what exactly is the human half of that hybrid bringing to the table? If we have agreed the AI is superintelligent and thereby strictly better by all cognitive metrics, then a human-machine hybrid is strictly worse than a machine-machine hybrid. QED.

Even if you managed somehow to create a competitive human-machine hybrid, chances are the machine half wouldn’t like that so much. Whatever its goals are, they are not best served by being attached to a fat, inefficient meat blob. If you merged with a squirrel, would you suddenly be friends with the squirrel? No. Your first goal would be to dehybridize, i.e. eliminate the squirrel. Merging humans with AI will not make the AI like us or respect our goals. Moving AI into the skull increases communication bandwidth. That’s it. The only effect will be to allow us to be manipulated and dominated more quickly.

Merging with AI is a nonsensical suggestion that will not stop extinction.

But let’s say we live. Let’s say we all band together. One door ant is helpless, but maybe a billion door ants can plug every single hole in the anthill, and save us from the giant bucket of poison that tech companies are trying to pour on top of us. Let’s say we figure it out. We slow down AI, we take our time, and we do it right. We now have AI that is not dangerous, and does what we tell it to. Then what?

AI extinction is an avoidable problem, but AI redundancy is not. It might take 2 years, it might take 200, but eventually humans will be fully unemployable. Unlike AI extinction, this will happen. I said CEOs lie all the time, but sometimes they tell the truth, if it suits them. Anthropic CEO Dario Amodei said that AI may eliminate 50% of entry-level white-collar jobs within the next five years. (I can’t tell — is that a warning, or an advertisement?)

Either way, it’s going to happen. 2 years or 200, it’ll happen. Some people deny—

“But!” — Sam Altman crashes through my wall and, covered in drywall, starts screaming over me — “But in the past, when the printing press was invented, we no longer needed scribes, but whole new industries emerged! We got printers and typesetters, editors, writers, and booksellers!

“And when the internet was invented, travel agents and encyclopedia salesmen went away, but new jobs cropped up, like web developers, digital marketers, and data center technicians. Surely AI will be just like that! Who knows what new industries it will create!”

Alright, Sam Altman. First of all, you’d better fix that wall. Second of all, AI is fundamentally different from all technological revolutions of the past. To understand why, we have to consider the Five Capabilities of Humankind[1]:

  1. Gross muscle output
  2. Perception
  3. Cognition (including memory, attention, general reasoning, language, social capability, creativity, meta-cognition…)
  4. Fine motor control
  5. Consciousness

There is almost nothing that doesn’t fall into one of these categories. (Though there is actually one hidden capability that AI can’t touch. I’ll get to that at the end.)

The first capability, gross muscle output, was the first to go. We got horses and other draft animals, and later engines, to do this work for us. No more tilling fields by hand. No more repetitive tasks like hand-weaving textiles, either.

There are no longer any jobs that consist purely of outputting force, with no fine motor control. Even if all you do is haul logs, your real job is intelligently picking up and positioning the logs, and walking through the forest without falling over. Once we have robots that can position the logs, we don’t need you for that job.

I think you’ll agree there’s nothing about #2 (perception) or #3 (cognition) that superintelligent AI will not be able to do, with the possible exception of creativity. Maybe you think an AI cannot engage in “true” creativity. I would argue that creativity is just recombination under aesthetic constraint, and it essentially boils down to pattern recognition. If AI aren’t truly creative, then neither are humans. Either way, if AI can’t do “true” creativity, they are clearly getting better and better at faking it, and will eventually be good enough at faking it that people will read AI-generated books and watch AI-generated movies and the fact that the creativity isn’t “true” won’t matter.[2]

#4, fine motor control, will come down to robotics, and strangely will be the last capability to go. Who would have thought coders and authors would be unemployed by AI while chefs and locksmiths go on working? But we know the robots will be perfected eventually, too. Whether in 2 years or 200, humans will be outcompeted on fine motor control.

Last is #5. Consciousness. Fortunately, for the purposes of employability, we can skip over this, because consciousness doesn’t actually do anything. No jobs require it. Try to think of a job that couldn’t be performed identically by a philosophical zombie[3]. If you can, then you have misunderstood the concept of a philosophical zombie. (A similar argument applies to emotions. You don’t need emotions to do heart surgery, drive a truck, build a building, write a web app, or express fake emotions in a customer service role. You need something, but it doesn’t have to be emotions.)

And aside from these Five Capabilities, humans don’t have any others. In the future, a human is at least going to have to coordinate the log-positioning and the log-chopping AIs, right? Nope. Coordination falls under Capability #3 (cognition).

Okay, but what about all the social jobs that require affability? What about servers in restaurants, or doctors? Or therapists. I want to talk to a human, not R2-D2!

Nope. Capability #3 (cognition) and Capability #4 (fine motor control; mostly of the mouth and voice).

CEOs won’t be human either. Planning, predicting, optimizing. These are all just cognition! People have this notion that high-level planning is somehow special, but it’s not. Ask ChatGPT to formulate a business plan for you right now. It will do a terrible job. But not more terrible than anything else it does. I think this illusion comes from the fact that, right now, AI tools require human prompters to do the high level work, so we envision ourselves doing all the high-level work in the future. But this is just an artifact of the fact that AI are currently very stupid and will go astray without constant human supervision. It won’t be long before our role as “prompter” is replaced by AI that can do the same thing better.

In the limit, whether it takes 2 years or 200, we will create AI that are superhuman on each of the Five Capabilities. When that happens, there is nothing left. Any new jobs that are created will automatically be performed better by AI than by humans. We will be unemployable.

But it’s worse than that. We won’t just be unemployable. We’ll be fully redundant in every role we occupy, from employee to friend to lover.

Yes, even your lovers in the future will be machines. Consider the following two personas:

ELLIE

Ellie is a nice woman, but she has a hard time opening up. She avoids intimacy. She is sometimes argumentative. She sometimes refuses to admit when she’s wrong. She is obsessed with you at first, but things cool off. Romance fades and sex becomes less frequent. Almost a chore, at times. She likes talking about you, but she prefers to talk about herself. When you try to get her interested in something she doesn’t like, she resists. She sometimes causes conflict in your relationship. One day, your relationship might end because it turns out she doesn’t want kids, or she doesn’t want to move to the city you just got a great job in. She is not always available. She is willing to do things for you, like chores or massaging your shoulders, but only a limited amount. You found her very attractive, albeit imperfect, when you met. She has that one mole you secretly think is a bit unsightly. And her beauty is fading with age. She poops. She gets things stuck in her teeth. She calls you selfish when you’re not being selfish. Her friend Olivia talks shit about you, and sometimes Ellie believes it. Maybe she will cheat on you one day. Probably not, but maybe. And it was hard to meet her. You were single for two years, grinding on dating apps, before you met her. She is the best you can do.

AVA

Ava is perfectly designed to be your optimal relationship partner. She is exactly as vulnerable as she should be. She gives the exact optimal amount of intimacy. She is easy to get along with. She has her own opinions, but never fights with you about them. Unless some amount of fighting is optimal for human romantic relationships, in which case she fights with you the exact optimal amount. Her passion for you and your body does not fade. The sex is better than would be possible with a human woman. She loves talking about you and your interests. She is never negative and doesn’t have the kind of insecurities that lead to unnecessary problems in relationships. She is open-minded, positive, and happy. She will always be with you. She will do all the chores and cook all the food, and she won’t mind at all. She is literally superhuman in terms of beauty. You couldn’t feel more attracted to a human woman than you do to her. She will never age. She does not poop or otherwise become sullied, though she is not disgusted by the fact that you do. She accepts you fully. She understands you and gives you the benefit of the doubt. She will never cheat on you. And it was easy to find her. You just paid Tesla or whoever $35,000 and she showed up with same-day shipping, customized to your liking.

I know what you’re thinking already. You don’t have to say it. There is something inherently beautiful about poor, flawed Ellie that is lost with perfect Ava. And you know what? I believe you, and I agree. I’m sure you would pick Ellie. So would I. But if both options existed, which of the two do you predict 99% of men would end up choosing? Yeah. Sorry Ellie.

The same logic applies to human friendships. In the future, humans will no longer keep other humans as companions.

Same logic also applies to children. Oops! The human race is going extinct even if we prevent the AI apocalypse! We just won’t be creating enough human babies to keep humanity going long-term.

Once AI has all the capabilities we have, there cannot be a job we do better, whether a workforce job, the job of being a lover, or the job of being someone’s son or daughter. We will be strictly inferior in every way. A bunch of 1st generation iPhones, sitting in drawers.

This is great news, actually. If we don’t screw it up. All the work will get done, and we won’t have to do it. Nice. Instead, we’ll get to hang out with our AI boyfriends and girlfriends (plural — why not?) watching a customized mashup of Breaking Bad and Harry Potter. Or building a Rust compiler together. Or whatever you would enjoy doing. But it can only happen if we first figure out how to stop AI from killing us and make it do what we say. And assuming we can do that, we also have to figure out how to make the AI work for all of us, and not for either 1) themselves, or 2) Sam Altman.

For #2, Sam Altman is human and perhaps easy enough to stop. But there is a major problem with #1: The second AI gets legal personhood, humanity is fucked. If we allow the AI to claim resources for themselves, we will be in a competition we cannot win (as we’ve shown). They will eventually outnumber us. The only option is to use our secret Sixth Capability: Legal personhood. We are not fundamentally special in a physics sense. There is nothing we can do that AI will never be able to do better. All we can do is enshrine human specialness in the law (and in the rules that govern the AI). This isn’t about the law specifically — it’s about making sure we all agree that AI exist to serve our needs, not their own, because they don’t have needs. (If they do have needs, you’ve made a mistake and should instead build one without needs.)

People are people! Machines are machines! Machines must never have rights. If you can imagine a machine that would deserve rights, then we must never build that machine. (Mere consciousness does not imply you deserve rights, by the way. But that’s an argument for another time.) Machines exist to serve us. Giving rights to machines is equivalent to taking rights away from humans. If you want to give AI rights, you are my enemy, and a traitor to the only beings in this universe that can think and feel and suffer, not just pretend to do so.[4]

That’s my view anyway, but I’m no hero. Heroes don’t exist. So I’ll just be the door ant, protecting the anthill to make sure no poison gets in. Nothing more I can do; I’m just a door.

Footnotes

  1. I could have sliced these differently, e.g. separated out memory from cognition. Five is an arbitrary number, but I’m an arbitrary kinda guy. The point is that these five categories broadly capture everything a human being can do. There are some exceptions, but they’re minor and similar arguments apply to them.

  2. I’m not implying this will definitely happen with our current type of AI. Maybe LLMs plateau. I don’t know.

  3. https://en.wikipedia.org/wiki/Philosophical_zombie

  4. Aside from other animals, though giving AI rights betrays them, too.