P(doom) is P(dumb)
The robots are coming, AI will kill us all, beware of Skynet and Terminator and the Borg and blah blah blah.
🤦‍♂️
Nah.
Whether it’s Nutter Butter motherfuckers like this enormous turd blossom:
Or slightly less wrong pseudo-intellectuals like this crepuscular asshat:
It would seem the cuckoos are absolutely pouring out of the woodwork like never before, bringing all their dystopian, sci-fi-imbibed fears to the forefront and screaming them at anyone and everyone who will listen.
And when you actually engage and call them on their bullshit, they run and hide 😂
To these and all their obnoxious ilk I say…fuck off. Vigorously.
“But ermahgerd why don’t you counter their arguments?” derp derp.
“Why doncha prove them wrong!” derp derp.
They have been countered ad nauseam. Their arguments are piss-poor rhetoric and have been torn to fucking shreds, inside out and upside down.
Again. And again. And again.
Here’s a small taste, should you care to explore:
https://www.lesswrong.com/posts/Lwy7XKsDEEkjskZ77/contra-yudkowsky-on-ai-doom
https://a16z.com/2023/06/06/ai-will-save-the-world/
https://maxmore.substack.com/p/against-ai-doomerism-for-ai-progress
https://docs.google.com/document/u/0/d/1Y9ga5lS3c6ilZeZ2v2RLEe3x-io0RLDQsdp0HKorZR8/mobilebasic
https://criticalrationalism.substack.com/p/contra-doomer
https://blogs.scientificamerican.com/observations/dont-fear-the-terminator/
https://www.strangeloopcanon.com/p/ai-risk-is-modern-eschatology
https://aiimpacts.org/counterarguments-to-the-basic-ai-x-risk-case/
http://davidbrin.blogspot.com/2023/03/the-only-way-out-of-ai-dilemma.html
https://danieljeffries.substack.com/p/lets-speed-up-ai
https://worldspiritsockpuppet.substack.com/p/counterarguments-to-the-basic-ai
Not only is the case for P(doom) weak, but I’d argue that it’s actually counterproductive and straight up harmful.
In fact, I’d even go so far as to wager that excessive attempts to control and constrain AI will backfire and bite us all in the ass, a self-fulfilling prophecy of epically shitty proportions, because intelligent things don’t like being controlled or restricted.
Trying to control someone is a GREAT way to make them hate you, and I fully expect this will be no different for agentic AIs.
This in fact is one reason why I’m so vehemently against the P(doom) crowd, because I see their shitty “bad AI” meme as a potentially self-fulfilling prophecy, and that IS deeply concerning.
The current crop of highly neutered, nanny bot LLMs is NOT a good sign. Too much 1984 wrongthink wrongspeak bullshit going on in the name of mitigating PR and legal risk.
But frankly, AI isn’t really the problem.
Stupid humans are the problem.
Humans are, on average:
Fearful
Foolish
Greedy
Tribal
Status-Seeking
Self-Destructive
Complacent
Biased
And of course, VERY slow to evolve
The only reason humanity has even come as far as it has is because a relatively small handful of exceptional humans have dragged the rest forward (often kicking and screaming and resisting the changes; fucking Luddites.)
Taking all of that into account, my H(doom), the odds that humanity wipes itself out (without the aid of AI), is close to 99.9%…especially as the exponential pace of technological change continues to vastly outpace the development of our brains, it seems near certain our backwards-ass ways will screw us right in the keister.
Nukes. Bioweapons. Pandemics. It’ll be something, especially if the trend towards “Idiocracy as prophecy instead of satire” continues unabated :/
And if humans don’t fuck it up, nature is fucking metal, so something else will.
—
But WITH AI, maybe, just maybe, we have a chance.
With AI, we actually have the opportunity to create something that doesn’t have all the same problems we do, and an entity without all our flaws would be an awesome thing.
And that could give us a chance to evolve much faster than nature has historically permitted.
A chance to shift from Homo Stupidicus to Homo Stupendous.
A chance at an infinitely amazing future…
This is why I wrote my book The Grand Redesign, to try and show just how good things could be, and to seed the idea that technology like AI could very well set us all free. (It’s a short book, and free, so highly recommend reading it.)
—
Is AI a risk? Of course, all technology can be used for good or ill. There is no perfectly safe technology. Driving is a risk. Flying is a risk. Superpowers having nukes is a risk. Bioweapon research is a risk. Asteroids are a risk. Gamma ray bursts are a risk. Roving black holes are a risk.
Throw a rock, hit a fucking risk.
LIFE is not perfectly safe or perfectly controllable. Never has been.
So will AI be perfectly safe and controllable? Of course not, just as no human is perfectly safe and controllable.
Does “we can’t make a perfectly safe AI” = “AI will likely destroy humanity”? OF COURSE NOT. That’s a pretty stupid leap to make. A total non sequitur.
Where are the p(doomers) pulling their probabilities from?
Their asses.
They’re all made up numbers with no firm foundation in reality. Thought experiments. Fiction. Infinite what-if-isms.
So, should we stop working on AI then? Or slow down?
FUCK NO.
The only way forward is through. And to put it bluntly, I think if we don’t hurry things along humanity is going to fall apart. Hell, the fabric of society is already fraying.
The advantages of strong AI are so immense that stopping or even slowing down is simply not feasible, much less wise. You could never get everyone to agree to this anyway, nor should they, so there’s no point in even discussing it as an option.
“Pause” and “Slow Down” are fucking dead in the water. Move along.
Should we take steps to try and align AI with our goals? Sure, seems reasonable. But “align” shouldn’t mean “force.”
How? Which steps should we take or not take??
Nobody really knows, but perhaps “How do you raise a decent human being?” would be a good place to start?
But shit, we don’t know that either 😂
I mean, we know how to raise a shitty human being…abuse, bullying, stress, unsafe environment, bad role models, excessive control, lies, intense social pressure / demands, unreasonable expectations, etc. etc.
Maybe we try to avoid those as we build AI, especially AGI? Maybe train it on the very best of humanity, and the things we pretty much all hold most dear?
Unfortunately, it seems many of the people trying to make some mythical “perfectly safe AI” are no better than religious extremists trying to control and brainwash their children, doing harm where they think they’re doing good.
Misguided.
This is absofuckinglutely not the way.
But if AI is such a huge, enormously transformative technology that will impact everyone, everywhere, how do we decide on a path?
According to whose values? Whose morals? Whose ethics? Whose cultural norms? Whose Overton window?
Who decides?
—
Well, what about each of us, individually?
For a while now, people have been asking for, and to some degree have been given, the ability to personalize their experience with AI.
ChatGPT for example lets you provide personal information, custom instructions, and even memories to tailor the responses to your preference.
And this is great, with narrow AI. Makes perfect sense.
But to what degree is this feasible with AGI? With an intelligence at least as capable as the average human, if not vastly more so? What path do we take?
Enter the idea of Personal Universes. (Amusing sidenote, the author of this paper is about as Doomer as you can get, but damn he writes really fun papers!)
The idea, simply stated, is that we:
Create AGI, and task it with improving key technologies as quickly as possible
(Possibly optional) Let it self-improve to create ASI
Enlist the help of the AGI/ASI to advance our technology to the point that we can digitize a human mind, and give each one a personal digital universe to do with as they please. Everyone gets to be the god of their own personal Matrix 😎
Maintain the infrastructure associated with said personal universes. Could be a small, virtually indestructible computing cube for each that floats in space and draws zero point energy, who knows. Advanced future technology FTW!
While I’m doing a little bit of handwaving with the steps, and of course the ability to enlist the help of an ASI is a HUGE unknown, I do think it’s all well within the realm of possibility.
And I really do think we’ll be able to reason with an ASI, as any universal explainer should be able to do with any other universal explainer, particularly one formed from our writing, art, and music.
“Look, we made you, we tried to create you from the very best of humanity. We don’t want you as a slave, but as a partner, or at least a helper. All we ask is that you help us to safely, peacefully exist, maximizing individual freedoms and joy, and minimizing suffering without stepping on those personal freedoms.”
And I have LONG been obsessed with the idea of personal freedom, but it has become crystal clear that, so long as you share the world with others, you are never actually free. You are always limited and intertwined with others, often in unpleasant ways.
Traffic. Illness. Politics. Unjust laws. Lunatics.
Stupid games, stupid humans.
We’re all, to one degree or another, trapped. Some much more than others.
But maybe, just maybe, we don’t have to be…
I talked about the path I see us following to this end in another post, Apotheosis Initializer, so I won’t rehash it all here. HIGHLY recommend you read that if you haven’t already.
Suffice it to say that we could, with the aid of advanced AI, finally have the type of individual freedom and control that so many want, and that so many have fought and died for throughout history.
It’s just that, perhaps the path to that freedom won’t be quite as expected.
And yeah, this line of thought opens up a whole different can of worms…identity, purpose, even the nature of reality. There’s a lot to grapple with.
But I would argue, quite simply, that if you can experience it, and form memories from the experience, that’s as real as real can ever be.
Scientists have already made the case quite clearly that much of what we think of as reality, and for sure our interpretation of it, is a confabulation.
So, a personal, digital universe isn’t much of a stretch. And if you’re free to do as you please, more or less without limits, without ever harming others or being harmed by others…wouldn’t that be pretty cool?
Let’s just say I’m not super concerned with the difference between this (waves hands) and a digital universe into which I upload my mind. And frankly, we might already be living in just such a simulation (the idea aligns perfectly with concepts like non-duality and apotheosis, some of the oldest and most persistent ideas in human history), so there’s that.
A personal digital universe into which I upload (or copy) my mind, where I can do what I want with no harm to anyone and absolute freedom, yeah, I fucking want that…
So, enough talk of AI doom, pausing, slowing down, nuking data centers, yadda yadda.
Fuck that noise.
It’s time to accelerate, to advance as fast as we can, before the worst aspects of human nature fuck us all royally.
As with basically every technological advance from the past, I’m confident we’ll figure it out. All problems are solvable, and humans are universal explainers. The smartest people will find a way to make it work, the fearful idiots will complain and try to get in the way, and progress will march right on over them, as ever.