[Hello everyone! This email is part of a feed where I update my close friends ~weekly. You may read it, skip it, delete it, or tape it to the wall and pray to it every night.
If you want to support me: reply with a one-sentence email, even if you didn’t read the entire update. You can literally just answer with “hello whee” and I’ll be happy.]
This episode of Neil’s updates was sponsored by Human Civilisation™
"Building up to Artificial General Intelligence since 2500 BC"
I’d like to thank Human Civilisation™ for:
Supplying my:
- Water
- Food
- Shelter
Giving me access to:
- Books
- Websites
- Data
Inventing:
- Vaccines
- Transistors
- Bread
Human Civilisation™’s generous support was essential to the writing of this post. Sign up to Human Civilisation™ today: givewell.org/charities/amf
I. What I've done this past week
Published Astronomical Waste’s French translation! It’s not yet on Nick Bostrom’s website, but should be soon. Reminder: it’s a paper by Oxford philosopher Nick Bostrom about the opportunity cost we pay for every second that we are not colonizing the universe.
Ran a survey of Sciences Po Reims students. I have 20 responses, and new intuitions about how people think about AI. The average student gives roughly a 20% probability of human extinction from AI, which is higher than I expected. Asking them a bunch of questions about how well humanity could coordinate, or how much they trusted AI engineers, did not change their probability of doom. That surprised me as well!
I was accepted into the “AI Safety Fundamentals” course by BlueDotAI. It’s run by people who are doing their best to stop the Doom Machines from being built. [1]
I sent an official comment to the US government about an upcoming piece of legislation. As it turns out, government officials use regulations.gov to get advice from citizens about the potential impact of their laws. What I sent them is a short piece on why open-sourcing models is a Terribly Bad Idea.
[1] Your regular reminder that adults don’t exist. There is no official position for “decreasing existential risk”. That’s why I didn’t use the word “expert” here. This course is not run by “experts”. Unfortunately, we don’t live in a world where there’s an academy made up of people who’ve been “decreasing existential risk since 1812”. It’s all improv.
Oh and by the way, this is why I’m almost exclusively working on AI risk.
II. How you can help
Some of my fellow humans behave in ways I find irrational.
Now, the appropriate reaction to seeing other people act strangely isn't to say "Gah! I don't understand them!". When you hear about a school shooter you might go "I don't understand how anyone could be like that!", and then go about your day until you read the next headline. But while "I don't understand" successfully conveys your shock and outrage, it doesn't... help you reduce the number of school shootings? When there are school shootings—or genocides—the way you stop them from ever happening again isn't to shake your head sadly and go "how could human nature be like this?". No! It's to understand human nature and fix the damn problem. You shouldn't be proud of not understanding how a Nazi thinks.
I remember debating a communist once: I was explaining how thermonuclear bombs work from a technical perspective. He interrupted my explanation and said "yeah, but I'm against that kind of weapon". I was shocked for a second and said "me too?" He responded: "yeah well, you know a lot about them for someone who's against them!" Dude. You don't solve problems by being ignorant about them!
Anyhow. Some ways in which humans act still surprise me. So I'm asking you a question. You don't have to answer in many sentences, or give this too much thought; I value your time. But you've read this far (thanks!), so here you go:
"Are you working on the most important problem you know about?"
If not, why not? Has it simply never occurred to you to work on the most important problem you know about? Or do you know of an important problem, but have decided to solve another one instead, for reasons x, y, and z? If you haven't thought about this before, would you like to try? You could write down every problem you know about, rank them by total Badness, and then figure out how you could solve them, either by pivoting your life around one or by tackling it as a hobby. Before you actually do this: what result do you expect will come out of the exercise?
I asked this question in my survey of both Magendie and Sciences Po students. A majority answered that they were not working on the most important problem they knew about. That's interesting.
III. Post of the week
Think like reality, by Eliezer Yudkowsky. A three-minute read or so.
Ah yes, and remember to:
Ω
[That’s all folks! You’ve been reading the Neil’s Updates Times. To unsubscribe, either mark my emails as spam or declare a karate duel with me and if you win I’ll stop sending you emails. Beware though: the color of my belt is so advanced, it's not even on the visible color spectrum. Birds flap flap away as fast as they can when they see me, and they call me the Great Nightmare of the Mantis Shrimp.
I encourage you to write your own version of this, because I’m curious about what’s going on in your life. Do it, even if it’s just a few sentences, even if you send it just to me. The mathematicians won’t tell you this, but there’s a huge difference between 1 and 0.]