This is a time-sensitive parable about s-risks.
This post is really weird, I’m sorry. I’ve recently been reading Angelicism.
Someone tells you that they’re originally from another planet, located in a very distant universe.
“It’s a planet much like yours, in some ways, with a gruesome past, a vibrant cultural history, and a flourishing field of astrobiology. Yeah, I know, astrobiology is not so big on Earth. The same used to be true for us too, before some ancient economist predicted the arrival of grabby aliens within our galaxy.”
“The field’s sorta stagnated since then. People have spent some time trying to do new stuff, but astrobiologists are back to chasing that first high — they’re offering predictions about the arrival date of future aliens, who may soon emerge from distant stars. Alien Timelines Research has gotten pretty big.”
(The estimated median date of arrival, if you’re interested, is thought to be 2040, or thereabouts)
“I mean, I think they’re onto something — we’ve recently seen some unusual bits of debris floating through space; minor changes to the orbits of terrestrial objects are popping up, too, indicating the potential interference of spacecraft.”
“Of course, critics provide other explanations, and point out that the theories are weird and speculative. I mean, the theories are weird and speculative! But it seems like all possible future pathways look weird and speculative.”
“A few people are concerned the aliens will wipe us out, but things could look so much worse. Some people say that maybe the aliens will share our values. They’re wrong, and it wouldn’t be all that comforting even if they were right. We have all sorts of tribal conflicts on our home planet, and I don’t see why we should expect to avoid conflict with the aliens. We already have enough trouble getting along with one another, and these aliens will be far more different from us, and far more powerful, than we are from each other.”
“And, look, I’m not proud of this, but occasionally we eat some of the lower creatures on our planet. Raise them in cages, slaughter them, that kinda thing. We don’t have to, but some of them are pretty tasty. What if the visiting aliens said they shared our values, and then used us as tools for their own pleasure?
“… I don’t know, maybe it’ll all be fine. But there’s 10,000,000 of us on my home planet, and I’m the only one thinking about this. It all feels a bit off.”
If someone told you this, you might think they were crazy, and not just because they’re claiming to be from another universe. With warning signs like that, you might think that any civilisation capable of recognising the issue would have more than one in ten million people working on it. At the very least, self-interest might kick in, especially if ‘Alien Timelines Researchers’ were suggesting a non-trivial chance that the aliens could arrive within their lifetimes.
If that thought sounds sensible, you might be worried about the current state of Earth’s research on s-risks from artificial intelligence. You might also be interested in donating to The Center on Long-Term Risk, which recently put out a post requesting funding.
This blog is primarily about the Hegelian self-development of effective altruism. For this reason, I thought I’d unhelpfully present the case for The Center on Long-Term Risk, or CLR, on my own terms.
I hold out hope that the EA movement, collectively, will enact lasting positive change. If it does, it will not be because EA presents a consistent and systematic approach to doing good. As I’ve argued elsewhere, I think EA’s philosophical foundations are (as with us all) fuzzy, ad hoc, and plastered with “TO BE FIXED” sticky notes around the cracks. If EA creates lasting positive impact, it will be through taking unusual inspiration from various philosophical frameworks, and transforming those frameworks into useful cognitive tools.
CLR is not, as Tyler Cowen puts it, “the namby-pamby version of EA”. They research the implications of anthropics for alien arrival dates and acausal cooperation with other agents across the multiverse, and many of them believe that smarter-than-human AI is coming really soon. Like, very plausibly before 2040 soon. If they’re right, then you should definitely donate to them. But, even if they’re wrong (and, by and large, I think they are), CLR remains an important component of EA. They represent one way of taking EA’s applied epistemology to its limits, and riding the Crazy Train as far as it will go. They represent, in Hegelian terms, the “process whereby the spirit [of effective altruism] discovers itself and its own concept”.
(This experiment was inspired by Angelicism’s semi-incomprehensible post on longtermism)