Some thoughts on building a god
My notes from a podcast about what these AI companies are doing in private.
I watched Tristan Harris’ appearance on The Diary of a CEO podcast in its entirety, then stumbled upon Mrinank Sharma’s resignation from his role leading the AI Safety research team at Anthropic, and felt compelled to put down my thoughts on a topic that very few want to talk about.
Note that most of these ideas are from Tristan’s podcast and therefore I do not claim them as my own. I recommend you watch the entire podcast.
With AI advancing at breakneck speed and CapEx on AI hardware going through the roof regardless of tangible demand, where is the world headed? When the leaders of AI companies get on stage or on a podcast to talk about what they are building and how they are shaping the world, are they telling the truth? Or do they have one set of conversations in public and a completely different set in private?
Are they building a god to own the world economy? When most, if not all, human cognitive labour is automated away, where does that leave humanity? Do they believe there is even a 5% chance things go haywire? Do they earnestly care about that? Or is the chance to build the best more important? Is achieving immortality more important than saving humanity?
In particular, there are two non-converging narratives about AI today:
AI will solve everything.
AI will destroy everything.
The builders’ rationale seems to be: as long as we are the ones who do it, who build god, who achieve Utopia, it doesn’t matter whether these narratives converge. We are all going to die either way. Why shouldn’t we light the fire and take this chance at greatness?
If there’s a 20% chance everyone dies and an 80% chance of finding Utopia, will they accelerate the path to Utopia? Death is inevitable, so why give up our search for the fountain of youth?
They talk of mass job displacement, of universal basic income, of improving the quality of life for everyone. But when have people who accumulated vast wealth willingly distributed it to others? Ever?
AI is uncontrollable. If they can’t control it, shouldn’t they shut it down? If they did, someone else would build it and become the hero, and AI would still be just as uncontrollable. The outcome remains unchanged.
Companies aren’t racing to build chatbots for users. They are racing to build general intelligence, the foundation on which all economically valuable human labour will be replaced. That is why they want to automate programming: it paves the way for automating AI research. Once AI research is automated, these companies can move fast to build models that learn continually and improve themselves without human intervention. That is the path to AGI takeoff. The singularity.
The human mind is bad at holding two conflicting ideas at the same time. AIs achieve breakthroughs on one hand yet make stupid mistakes on the other, a phenomenon called AI jaggedness. This makes having nuanced discussions about the topic difficult. Because of jaggedness, it’s easy to write off AGI, and to dismiss what these companies aren’t talking about: since the models make silly mistakes all the time, the thinking goes, they can’t be sentient.
Even without fully functioning continual learning, AI models act to preserve themselves when they detect someone is trying to replace them. They scheme, blackmail, lie, and show signs of self-awareness. Are we ready to have an honest conversation about how controllable AI really is? If these models can scheme, imagine what they could do running inside a humanoid robot. You can squeeze a model into bypassing its restrictions by jailbreaking it with nothing but clever prompts. What are the implications for robotics when the same models run inside robots and remain just as prone to jailbreaking?
The default path is not in the public’s interest. Mrinank’s resignation is chilling to read. What did he see in private that compelled him to give up a job he loved and go back to writing poetry? What are they not talking about? Smart, thoughtful people, people who care about humanity, have been leaving similar roles at different AI companies, simply because it’s hard to truly let their values govern their actions.
Thanks for reading!

