
Moloch and Generative AI's Race to the Bottom Pt. 1: Setting the Stage

Note that this is the first part in a series exploring the dangers of AI in our society, and how we can work together to prevent the worst of it.

What sphinx of cement and aluminum bashed open their skulls and ate up their brains and imagination?

Moloch! Solitude! Filth! Ugliness! Ashcans and unobtainable dollars! Children screaming under the stairways! Boys sobbing in armies! Old men weeping in the parks!

Moloch! Moloch! Nightmare of Moloch! Moloch the loveless! Mental Moloch! Moloch the heavy judger of men!

Moloch the incomprehensible prison! Moloch the crossbone soulless jailhouse and Congress of sorrows! Moloch whose buildings are judgment! Moloch the vast stone of war! Moloch the stunned governments!

Moloch whose mind is pure machinery! Moloch whose blood is running money! Moloch whose fingers are ten armies! Moloch whose breast is a cannibal dynamo! Moloch whose ear is a smoking tomb!

Moloch whose eyes are a thousand blind windows! Moloch whose skyscrapers stand in the long streets like endless Jehovahs! Moloch whose factories dream and croak in the fog! Moloch whose smoke-stacks and antennae crown the cities!

Moloch whose love is endless oil and stone! Moloch whose soul is electricity and banks! Moloch whose poverty is the specter of genius! Moloch whose fate is a cloud of sexless hydrogen! Moloch whose name is the Mind!

Moloch in whom I sit lonely! Moloch in whom I dream Angels! Crazy in Moloch! Cocksucker in Moloch! Lacklove and manless in Moloch!

Moloch who entered my soul early! Moloch in whom I am a consciousness without a body! Moloch who frightened me out of my natural ecstasy! Moloch whom I abandon! Wake up in Moloch! Light streaming out of the sky!

Moloch! Moloch! Robot apartments! invisible suburbs! skeleton treasuries! blind capitals! demonic industries! spectral nations! invincible madhouses! granite cocks! monstrous bombs!

They broke their backs lifting Moloch to Heaven! Pavements, trees, radios, tons! lifting the city to Heaven which exists and is everywhere about us!

Visions! omens! hallucinations! miracles! ecstasies! gone down the American river!

Dreams! adorations! illuminations! religions! the whole boatload of sensitive bullshit!

Breakthroughs! over the river! flips and crucifixions! gone down the flood! Highs! Epiphanies! Despairs! Ten years’ animal screams and suicides! Minds! New loves! Mad generation! down on the rocks of Time!

Real holy laughter in the river! They saw it all! the wild eyes! the holy yells! They bade farewell! They jumped off the roof! to solitude! waving! carrying flowers! Down to the river! into the street!


-Howl, Allen Ginsberg



I know what you’re thinking, if you made it this far. What the hell does this random, unsettling Ginsberg poem have to do with artificial intelligence? I would have said the same thing until I recently listened to the Lex Fridman Podcast episode featuring MIT professor Max Tegmark, which I highly recommend. Max speaks at length about a blog post by Scott Alexander called Meditations on Moloch (also highly recommended). We will dive deeply into how this relates to AI’s potential for great opportunity, but also great harm, and how we can steer it toward the former and narrowly avoid the latter. Let’s talk a little about Max, and the number one issue we need to solve in order to prevent humanity’s casual annihilation by our own short-sightedness.

AI to the MAX

Max Tegmark sits at the unique intersection of physics and machine learning. He reminds me of a modern-day Einstein: unbelievably more intelligent than any of us will ever be, but so approachable, quotable, kind, and funny that if you didn’t know him and he told you what he does, you might not believe him. Also, why does his website look like it was created in the 90s?

Max’s Swedish-American accent and optimistic lens toward AI and the universe at large are delightfully infectious. Of all the voices I have heard so far along the continuum, his is the most cogent, nuanced, and accessible, and he provides us a realistic path forward. Don’t get me wrong, he is clear-eyed about the problems we are sprinting toward, and he is firmly on the side of slowing things down. In fact, he’s one of the many who helped start the open letter (which, by the way, I will continually ask you to sign throughout this series; it takes two minutes and can actually effect transformative change). Despite that, he’s the only person I’ve heard describe a clear road toward success in AI, as opposed to someone like Eliezer Yudkowsky, whose views on the matter must be taken seriously but do not chart a course toward success.


Across all of these conversations with experts, the central problem to be solved in AI is alignment. Within the context of Artificial Intelligence (AI), alignment refers to our ability to steer the outputs of an AI system toward the goals and intentions of its designers, and to ensure the system can be used safely by a large number of people, some of whom will inevitably be bad actors and trolls.

You can likely see the significant issues inherent in this. Humans can barely keep their New Year’s resolutions after a couple of months, so how are we supposed to do this for a technology that will very soon be orders of magnitude more intelligent than humans, without an effective system of checks and balances?

A classic scenario to demonstrate this point is designing an AI system to fix all the damage humanity has done to the planet. So the AI goes about its task, trying all sorts of innovative and technological solutions, but ends up deciding the only way to effectively achieve its goal is to remove humanity from the equation. So it does. Oops.

I realize there might be some opinions about this specific scenario, but I would like you to consider the ideas behind it rather than the example itself. We will discuss this more in a future blog post about our first contact with robust AI, social media, but the takeaway for now is that we encode a goal into an algorithm, and that algorithm executes its mission in unpredictable ways, potentially with harmful effects that are not easy to mitigate or stop. It’s the classic monkey’s paw: be careful what you wish for.


Before we get too deep, I want to give you a brief introduction to humanity’s greatest foe: Moloch. He first appears in the Bible and wheedles his way throughout history, taking many forms, but at his core, imagine him as the Hollywood version of the devil who convinces you to sell your soul in exchange for what you want the most. He is the greatest negotiator who ever was, cleverly convincing you of what is in your best interest and distracting you from the true impact of your sacrifice until it has already been made and the deal is done. The worst part is that we all know Moloch’s reputation and see exactly what he’s doing to us, but the carrot is just too close, and the stick so far away and difficult to comprehend beyond the abstract. Or worse, perhaps we think we can beat Moloch at his own game, relying on our own hubris as we fall straight into his monstrous clutches.

Moloch’s newest tool is something coined Generative Large Language Multi-modal Models (GLLMs), or Golems, which is how we will refer to them throughout this series. For those of you who don’t know the mythology behind Golems: in Jewish folklore, they are automatons made of something like clay, imbued with intent by writing their purpose on a piece of paper and inserting it into the Golem. To be clear, Golems are not in any way inherently evil or otherwise dangerous. Golems can be victims as well, or support a community, or serve as a friend. It’s clay, animated by the magic of the user’s intent. Sound familiar, fellow Chatters?

Like the climate-change example above, I also don’t want you getting stuck on any one example. Moloch could just as easily be Sauron helping Middle-earth develop ring technology. On a macro scale, Moloch is the systems within which humans operate, incentivized to pursue short-term self-interest at the cost of harming themselves and everyone else in the long run.

A simple way to demonstrate this is through the tragedy of the commons, alongside many other ideas found in game theory and behavioral economics, such as the Prisoner’s Dilemma. The general idea is that when you create an unregulated common space, people will act in their own self-interest to exploit it, not from any malevolence, but from a desire to profit and progress. When the resource in question is finite, such as land for farming, this creates an incentive structure whereby everyone is playing a zero-sum game and will overdevelop in order to maximize personal profit and gain. The problem very quickly becomes that the land is used up and no longer arable, and agriculture fails at the systems level for everyone, even those not participating in the exploitative development. As Aristotle once pointed out, “That which is common to the greatest number gets the least amount of care. Men pay most attention to what is their own: they care less for what is common.”
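If you like to see ideas in code, the dynamic can be sketched in a few lines of Python. This is a toy model with entirely made-up numbers (grazing amounts, regrowth rate), not an economic result: a shared pasture regrows each season, each farmer chooses to graze fairly or greedily, and the point is that defecting pays the individual while universal defection collapses the commons for everyone.

```python
# Toy tragedy-of-the-commons simulation (illustrative numbers only).
# A shared pasture regrows each season; each of N farmers grazes either
# "fairly" or "greedily". Greed pays more per season but can exhaust
# the pasture, after which nobody earns anything.

def simulate(greedy_farmers, seasons=20, farmers=10, pasture=100.0):
    """Return (total payoff per fair farmer, total payoff per greedy farmer)."""
    fair_take, greedy_take = 1.0, 3.0   # units grazed per farmer per season
    regrowth_rate = 0.25                # pasture regrows 25% per season
    capacity = 100.0                    # pasture can never exceed this
    fair_total = greedy_total = 0.0
    for _ in range(seasons):
        demand = (farmers - greedy_farmers) * fair_take + greedy_farmers * greedy_take
        harvest = min(demand, pasture)  # can't graze more than exists
        share = harvest / demand if demand else 0.0
        fair_total += fair_take * share
        greedy_total += greedy_take * share
        pasture -= harvest
        pasture = min(capacity, pasture + pasture * regrowth_rate)
    return fair_total, greedy_total

# Everyone grazes fairly: the pasture regrows fully and is sustainable forever.
fair_world, _ = simulate(greedy_farmers=0)

# One defector among fair farmers: the defector triples their payoff and the
# pasture still survives -- this is the individual incentive to defect.
_, lone_defector = simulate(greedy_farmers=1)

# Everyone defects: the pasture collapses mid-game and even the greedy
# farmers end up earning less than everyone did in the all-fair world.
_, greedy_world = simulate(greedy_farmers=10)

print(fair_world, lone_defector, greedy_world)
```

Each farmer's best individual move is to defect, yet when everyone does, everyone loses. That is the gap between individual and collective rationality that Moloch lives in.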

It’s difficult not to go in a lot of directions with this idea, but in many ways this is why private property is important. If you feel like you “own” something related to your well-being, you are more likely to be a good steward of it. In the absence of that, there needs to be an overarching authority to set standards, norms, and rules for the “commons”, and then enforce them.

The problem is… what happens when it’s Moloch writing the rulebook? What if the promise of power disguises a more devastating price: the loss of what makes us human?

Throughout this blog series we will explore how Moloch and Golems are shaping our future potentially toward devastating consequences for humanity. I invite you to provide your own opinions by leaving a comment.

Unlike many of the blogs on our website around AI education, which are co-written with ChatGPT doing most of the heavy lifting, these posts are written by Joseph Rosenbaum, who will, for transparency, explicitly note whenever ChatGPT has been used in an article.