Welcome to the Last Days of the Anthropocene Era.
I know, it seems like only yesterday we were chipping hand axes from flint and trying to figure out how to start fires on purpose, and look at us now! We had an agricultural revolution, an industrial revolution, and now we’re on the brink of our final revolution, the one that will usher in machines that are better at planning, steering, and shaping the future than we are.
We seem to be in an awfully big hurry, and we’re probably going to botch the job and give over the universe to something coldly alien that values none of what we value. From where I sit, it looks like humans will probably go extinct within my natural lifespan.
My goal is to eke out a bit more time for humans first, if I can. If we launch the Technocene too soon, it’ll be a bleak and desolate future. Let’s not do that. Let’s take our time and get it right.
What I’m Doing About It
We’re on track to create smarter-than-human machine intelligence very soon, and by default, that superintelligence will not want what we want and will kill us all as a side effect of pursuing its own goals.
There are two main efforts to keep humanity alive: technical alignment research and comms/policy work.
Technical alignment researchers are trying to solve the actual problem. They are trying to figure out how to build superintelligence correctly, in a way that supports human flourishing.
Meanwhile, other folks are focused on communications and policy. Their goal is to prevent labs from building out-of-control superintelligent systems and ceding our future to something that does not care about us.
The two approaches work together. Comms and policy folks are trying to buy time for technical alignment research to solve the problem.
I spent 2023–2025 working in comms and policy at MIRI, building and leading their communications team and supporting Eliezer during the launch of If Anyone Builds It, Everyone Dies.
In 2026, I’m returning to my technical roots and pivoting to work on the actual problem as an independent researcher. I’m pursuing an agent-foundations-flavored approach to alignment with mentorship from John Wentworth.
I also sometimes write and speak about orienting psychologically to the likely destruction of humanity.
Learn More
While I was at MIRI I helped produce a brief statement of The Problem.
For a slightly longer but quite digestible treatment of the situation, I highly recommend reading Eliezer and Nate’s book: If Anyone Builds It, Everyone Dies.
The book is short and to the point. You will probably have more questions after you read it. Those questions are probably addressed in the online supplemental materials.
The story of how I got involved in AI x-risk work in the first place.