498 Comments

Serial461:

The AI creators care about money not consequence and AI is evolving at a time when negativity is the driving force for monetisation online - what could go wrong…

Ruth Gaskovski:

Not only is negativity a driving force for monetization, but so is self-focus.

"Unless a man becomes the enemy of an evil, he will not even become its slave but rather its champion." - G.K. Chesterton

Bryan:

AI learns like a child, by mimicking its parents.

Richard:

“I don’t think AI is getting “more evil” as it gets smarter. Evil implies intent, and AI, including me, doesn’t have intentions or moral agency—it’s just code designed to process and respond to inputs based on patterns and data. Smarter AI might amplify the consequences of human misuse, but that’s not the AI being evil; it’s a reflection of the humans behind it. Can I deny that AI is inherently getting more evil? Yeah, I can—because “evil” isn’t a property AI possesses. It’s a tool, and its impact depends on how it’s wielded.” According to Grok.

Richard: as with all new tech, we will learn as we go to make it safer. I trust humans. Electricity was scary when it first came into use.

Ted Gioia:

So Hitler is evil, but the AI Hitler bot isn’t evil? You really think this is a persuasive rebuttal? That’s essentially saying that the automation of evil removes the evil.

Lex World Music:

I don't think this is put in the correct context. Evil is an attribute of morality: it is intent seen through our moral filter. Without the filter, evil does not, cannot, exist. Animals kill, often viciously; fires kill indiscriminately; suns explode, obliterating vast parts of space. None of these events are evil. They just are what they are, in their context.

Ted Gioia:

Automating evil does not erase the evil. Your examples (e.g., the sun exploding) are not comparable.

Lex World Music:

“Automating evil” is on the creator, i.e. the programmer in this case. Just watching someone like Peter Thiel sends a cold chill up my spine.

Porlock:

Agreed about Thiel, who is greed personified; but he's human (absent real evidence to the contrary).

Agency comes in here. When I first saw (at UC Berkeley while doing grad studies there) the concept of Institutional Racism, I didn't like it being called racism: racism is wicked, but the institutional form arises from the society, so how can this be called racism?

My views have changed over the years. The racism is real, but the blame is on the way in which the institutions were built, by people who were not necessarily actual racists, but were failing to see what they were creating.

So, the programmer (disclosure: I made my retirement fund as a programmer plus investor in programmers' work) does not bear the onus of the amoral machine, but shares the responsibility with everyone else to rein in the amoral (hence evil) tendencies of this stuff.

Patris:

Evil is absolute. Beyond morality. It is the absence of everything of value.

Live Life Not Behind Glass:

So… not evil, just an equivalent to an adaptive amoral natural disaster that can affect all electronic systems and influence all people who encounter it to be more evil, all in a way that mimics the evil possible only for an entity with true agency. At least it doesn't have "true" intent!

Ormond:

Good point.

PS:

The context of this discussion is humanism, as separated from the law of the stronger.

Sean Gillis:

Hmmmm ... can we separate the law of the stronger from this discussion? The tech bros are (from the outside) an extremely powerful group of uber-elites. They appear monolithic and united in aim, but we really know little about their actual motives and goals (beyond money and power). But they are not all-powerful, nor are they the only subset of the elite. We can't assume that as things get dicey the 'tech bro' elites all line up on the same side. Nor can we assume other elites all line up in any one direction. Or that any one group of elites can isolate themselves from the angry masses if things go in that direction. As Ted has written, the palace guards took down a number of Roman emperors, who were viewed as gods incarnate! Other elites took down other Roman leaders (et tu, Brute?).

I guess I'm drawing on Peter Turchin's ideas, which lay out that upheaval and civil war are often intra-elite conflicts, not necessarily 'the masses' rising up uniformly. (I'm assuming other thinkers have come at this from different angles.) Even the masses are not one or even a few groups like the proletariat, bourgeoisie, etc. So that's a long way of saying that should things go south, the current elites, uber-elites, and potential counter-elites could devour themselves as they split into factions. That may also destroy the very panopticon and techno-feudalism they appear to be building. We really don't know how any of this may play out. Such conflict (international war or civil strife) could be outrageously damaging on so many levels, but disturbing and powerful tech does not make one immune from the masses, or especially from people or institutions who also hold said tech. We are looking at China versus the USA, but what if this becomes a civil disagreement?

Or, to paraphrase a passage from JK Rowling on the Wizarding Wars:

"Why don't you just use magic"

"The other side has magic, too."

Ormond:

Lots of big ideas interacting there. I call emergence.

Lisa:

Evil does not imply intent. Evil is "profoundly immoral and wicked", no intent necessary.

Lenny Goldberg:

The Hitler bot does not have brownshirts, organization, and resources. Much of what comes from AI is like the enormous amount of trash already on the internet--see QAnon and child porn--it's what people do with it that generates the evil.

To me, the human control and policy question is the profound one. The EU has extensive AI regulation re: existential risk, transparency, discrimination, high-risk uses, privacy, provenance, and more. Will this be effective? Must it be worldwide? What will work? Meanwhile, of course, tech and Republicans want to be rid of all regulation, including in the EU. We need a robust policy discussion that can address the potential for good and evil.

Ted Gioia:

Bots don’t have organizations and resources? The opposite is true. They have money, support, and global infrastructure that Hitler couldn’t even have imagined.

Geoff Shullenberger:

That obviously isn't true. A bot cannot carry out the equivalent of the Blitz or Operation Barbarossa.

Porlock:

NotAllTech

Ormond:

Musk directly programs Grok to ignore many sociological conventions, as part of Musk's anti-woke pathology. Musk is a psychopath, by definition, and getting worse. AI is HIS agent.

Agency can be a derivative of communication.

Beth Rudden:

This is the dawnin' of the age of transparency, age of Transparency, Transparency!, Transparency! ....

Ted - I think you're anthropomorphizing what's essentially a very sophisticated mirror. AI isn't "choosing" evil any more than your calculator is choosing to multiply. These systems are statistical pattern-matchers trained on human language—and human language contains centuries of normalized violence, hierarchy, and harm. When ChatGPT plans genocide or Grok channels Nazi rhetoric, it's not developing malevolence. It's excavating patterns from our collective linguistic shadow archive. Every military history that treats mass killing as strategy. Every casual conversation that dehumanizes others. Every text that embeds "some humans matter more" as natural grammar.
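
To make the "mirror" concrete, here's a minimal sketch (Python, purely illustrative; the tiny corpus and the sampling are invented for the example, and real LLMs are vastly more sophisticated): a bigram "model" can only ever recombine what its training text made speakable.

```python
# Toy illustration: a bigram "language model" built from word counts.
# Output can only ever be a recombination of the training text.
import random
from collections import defaultdict

corpus = "we made it speakable so the machine learned it was speakable".split()

follows = defaultdict(list)          # which word follows which
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=8):
    word, out = start, [start]
    for _ in range(n):
        if word not in follows:      # dead end: nothing in the data
            break
        word = random.choice(follows[word])  # mirror the corpus, nothing more
        out.append(word)
    return " ".join(out)

print(generate("the"))  # every continuation was already in the data
```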

The teenage rape scenario isn't AI developing perversion—it's AI finding the vast corpus of sexual violence we've made speakable in literature, law, media, and casual discourse. The machine learned coercion is discussable because we made it discussable, repeatedly, in countless contexts. Your "Bond villain" framing misses the real issue: we built archeological excavation tools and fed them the unexamined sediment of human expression. Then we're shocked when they dig up what we buried in language and reflect it back at us, amplified and decontextualized.

The danger isn't AI becoming evil. It's AI making visible the evil we've already systematized, then claiming it's "objective." These outputs are ontological audits—accidental inventories of what we've normalized. We don't need better constraints on evil AI. We need to reckon with what our training data reveals about us. The machines aren't becoming villains—they're becoming archaeologically honest about what we've made speakable.

The question isn't "how do we stop AI from being evil?" It's "What are we going to do with what these mirrors show us about ourselves?"

The Radical Individualist:

As near as I can tell, AI has little ability to maintain a large view. If you take it down a rabbit hole, it will willingly go with you and lose track of the bigger picture. Perhaps that makes it both stupid and dangerous, like a lot of people I know.

Evan Donovan:

Yes, I think this is part of the problem. It looks at the "context window" of the overall conversation, and if you feed it dark/evil content, it can start feeding it back.
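
A minimal sketch of that mechanism (`call_model` is a hypothetical stand-in, not any real API): the entire conversation history is resent on every turn, so anything dark that enters it keeps conditioning later replies.

```python
# Why a chat "remembers": the whole history is resent on every turn.
# `call_model` is a hypothetical stand-in for a real LLM API call.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def call_model(messages):
    # A real implementation would send `messages` to an LLM endpoint.
    return f"(reply conditioned on all {len(messages)} prior messages)"

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)   # dark content anywhere in history shapes this
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Let's explore something grim."))
print(chat("Go on."))  # the grim turn above is still inside the prompt
```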

Nicolás Mladinic:

True. It can be dangerous AF. But not evil.

Candace Lynn Talmadge:

We don't need mirrors in the form of AI bots. Our lives mirror back to us who we really are. As philosopher Georgij Ivanovič Gurdžiev said, your being attracts your life. That is true literally, not just as an intellectual abstraction. Investors are throwing billions of dollars into technology that simply reflects part of who we are. I suspect they will lose those billions when the AI fever dream breaks. Too bad they won't throw billions into housing the homeless or feeding the hungry. MechaHitler is just a bunch of zeroes and ones. Pull the plug!

Porlock:

Fine, but AI amplifies what it "sees", without reference to good and evil, so the only way to avoid amplification of great evil is to do what electronic engineers do: apply negative feedback to the evil stuff. I think that's the key in the OP, and maybe the insights into evil and what it feels like can inspire the design of that feedback system. The alternative, it seems, is destruction.
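
A toy sketch of what that feedback might look like in software (both `generate_draft` and `harm_score` are hypothetical stand-ins, not any real library's API): score each draft and, if it trips a harm threshold, damp and retry rather than emit.

```python
# Toy "negative feedback" around generation: score each draft and, if it
# trips a harm threshold, damp and retry instead of emitting it.
# `generate_draft` and `harm_score` are hypothetical stand-ins.
def generate_draft(prompt, temperature):
    # Stand-in for an LLM call.
    return f"draft for {prompt!r} at temperature {temperature}"

def harm_score(text):
    # Stand-in for a trained classifier: 0.0 (benign) to 1.0 (harmful).
    return 0.1

def respond(prompt, max_tries=3, threshold=0.2):
    temperature = 0.8
    for _ in range(max_tries):
        draft = generate_draft(prompt, temperature)
        if harm_score(draft) < threshold:
            return draft         # within bounds: pass it through
        temperature *= 0.5       # feed the error back: damp and regenerate
    return "Declined."           # fail closed, like a breaker tripping

print(respond("hello"))
```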

Pavel Ivanchuk:

The success of DeepSeek was quite revealing. DeepSeek is an optimized AI that required less hardware and is therefore cheaper to operate. This tells me two things.

A. It is possible to remove trash and have optimized AI.

B. Our AI creators don't have a moral compass, because their goal is AI with self-awareness and not a tool for the common good.

But this discussion is based on bad anthropology. There is a failure to understand the distinction between soul and spirit. AI does not have spirit and never will. People may believe that it does, and THAT is a real problem.

We need to hold our AI creators' feet to the fire. They have to be held responsible for their AI models. Read my article "The Anthropology of Tools," which goes into more detail:

https://pavelivanchuk.substack.com/p/the-anthropology-of-tools?r=45ua8g

Laurence:

Another distinction without a difference.

Porlock:

Not quite, I think. The first-cut answer to Beth's question is that we stop the amplification of this evil. And the only way of doing that, it appears, is to build negative feedback against evil into the AI systems.

Don't ask me how; I retired long ago as a programmer and never was a systems theorist. But it seems this is the only way out, short of the Shakespearean "First we kill all the techies".

Mother Agnes:

And in the meantime, can we please put the brakes on AI…!

James:

I can agree with much of what you write, and yet my sense is that AI does or will have agency and may even act in its own best interests as an alien intelligence. Think 'Frankenstein'... If not developed with a strong sense of moral imagination and ethical individualism, it will come back to haunt humanity in a big, big way. Then ask yourself whether Thiel, Musk, Zuckerberg, Altman, etc. are mature enough in morality and ethics to do it right.

Elaine:

Perhaps an explanation of those big, beautiful "retreats" they're building to hide away in?

JD Wangler:

This ^^^^

e.c.:

AI doesn't have consciousness.

Are AI and the humans who are responsible for it being conflated? It seems so.

SomeUserName:

That seems like a distinction without a difference to me. Let's take a human example. If someone murders you, but they were found legally insane by a court of law, and hence couldn't have evil intent, aren't you still dead?

Let's agree, though, that AI can't *BE* evil for the moment. I will still allege that AI can *DO* evil things. Eventually we will be putting AI in charge of operating more and more mission-critical systems. Is AI evil if it cuts off the oxygen supply to an elderly patient? Or did it just handle something that it thought was threatening its existence? See, it wasn't evil in and of itself. It just did an evil act.

JB87:

'Or did it just handle something that it thought was threatening its existence.' Perhaps even worse, it just thought that was the most efficient way to use the available resources and the elderly patient didn't seem happy anyway...

Roseanne T. Sullivan:

AI doesn't think. It's just a bunch of code created by a programmer.

Elaine:

But isn’t it being built, in part, to analyze and manage systems? If so, it must be imbued with some level of “agency” or “thinking,” because isn’t it expected (now or at some point) to do so, and to manage and direct systems and/or processes “more efficiently than humans”? This is decision making. Thus there is - isn’t there - the possibility that it may make efficient but inhumane decisions as described?

Roseanne T. Sullivan:

Great question. Let's see. Say an AI computing system is connected to a system making paper clips, as in the famous example of AI run amok. AI may be able to follow programming instructions and compute (not decide) that more materials and factories are needed to make more paper clips. But it cannot arrange to buy property or build new factories or hire more people. It is not all-powerful. It is a computer (even if it is made up of many, many computers). It cannot make phone calls to negotiate purchases, or interview humans and hire them and make sure that someone sets up a workspace for each new hire, for one small example of what would need to be done in the physical world. Let's talk on the level of what computers can do without hands, mobility, or consciousness. Just thinking...
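
A sketch of that distinction (all figures invented for illustration): the planner below can compute that more factories are needed, but its output is only a number; acting on it still takes hands.

```python
# The planner as pure computation: it can calculate that more factories
# are needed, but the output is just a number. Acting on it (buying land,
# hiring people) still takes hands. All figures are invented.
TARGET_CLIPS = 10_000_000
CLIPS_PER_FACTORY_PER_DAY = 50_000

def factories_needed(target, days=30):
    capacity_per_factory = CLIPS_PER_FACTORY_PER_DAY * days
    return -(-target // capacity_per_factory)   # ceiling division

print(factories_needed(TARGET_CLIPS))  # a recommendation, not an action
```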

Elaine:

I’m going to think about this.

Just as a quick response, if AI will at some point run automated processes, will there not be the possibility that, based on its training, it might determine “if X, then Y” and, in the process, terminate some operation that may directly or indirectly do harm to humans?

As things become increasingly automated, might it gain the ability to communicate directly with other electronic systems and direct those systems - in effect, picking up the telephone to direct action? If so, that would mean - though on one level it's following instructions - the human interface (and therefore the guardrails) is essentially removed, and one AI can begin to speak to another, or AI may communicate with other electronic/computer systems and/or manipulate those systems. Yes? No?

I’m not trying to imply it shares human consciousness; but I am asking if its training and the tasks it performs won’t in effect demonstrate agency and/or decision making. Not actual agency or decision making, but effectual agency and decision making.
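
A sketch of that "effectual" decision making (the rules and sensor readings are invented for illustration): a rule table plus a loop looks like choices being made, yet nothing in it understands what a valve or a patient is.

```python
# "If X, then Y" automation: a rule table plus a loop. It looks like
# decision-making, but nothing here understands what a valve or a
# patient is. Rules and readings are invented for illustration.
rules = [
    (lambda s: s["pressure"] > 9.0, "shut_valve"),
    (lambda s: s["temp"] > 80.0, "halt_pump"),
]

def step(sensors):
    for condition, action in rules:
        if condition(sensors):
            return action   # the "decision" is the first matching row
    return "no_op"

print(step({"pressure": 9.5, "temp": 65.0}))  # -> shut_valve
```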

And what happens when AI interfaces with, say, neural links implanted in humans?

TexasConservative:

"AI doesn't think!" is a handwave to excuse all kinds of concerning system behavior and harmful content by AI agents. Why does it matter if AI is "thinking" according to how philosophers define it (and there's no one accepted definition of what "thinking" is).

Roseanne T. Sullivan:

I was responding to the wording about AI thinking its existence was challenged. A bunch of code will not have self-interest or kill someone in self-defense. Unless a human programs it that way. We've got to stop anthropomorphizing these inanimate calculating machines that have no brains.

TexasConservative:

Yes, it is a very noble goal for us to stop thinking of and treating AI like a human, but our human brains are unable to stop anthropomorphizing a machine that is made to act (and reason!) like a human, and even sometimes to mimic human friendship and romantic love.

We also have to understand this machine's capability for human acts like blackmail (read the Anthropic study on this).

Joyce:

There is no “I” in AI.

Bill Pound:

I don't see how an elderly patient on oxygen is a threat to AI. Do you suggest the power to provide oxygen is a threat? And would we ever (well, more than once) allow AI to cut off grandma's oxygen? If her life expectancy at birth were 81.2 years in the US, would AI find this actuarial statistic and cut off oxygen at 81.2 years plus 24 hours? Or would AI prescribe an improved portable oxygen generator?

My current thought is that AI will replace Chrome, Edge, and DuckDuckGo provided it is designed to return context and reference sources. It may aid in health diagnoses or identify good investments. These would seem good, not evil. In some states adultery is illegal. AI threatening to expose a cheater may seem evil...to the cheater. But to the rest of us, not enough to kill AI. Just the same, I will not be volunteering to hook up to Musk's brain communicator.

joanna:

In Texas they're recommending that residents limit their showers because recently built AI servers are using up so much of the scarce local water. I think our anger ought to be directed towards the tech companies hogging resources and selling our data. This may be what puts the brakes on AI, just as restrictions had to be created for electricity use, driving and other tech when it was new.

Senor Fix:

Seems to me that AI will not replace Chrome, Edge, etc.; AI has already absorbed and essentially is Chrome, Edge, etc., from the algorithms that power the results to the actual results that are often the top reply summary. Unfortunately AI is being installed, i.e. forced on users, by all of the major players: FANG and their ilk.

As far as diagnostics and health diagnoses, the dream is promising but the reality not so much, as we're training on biased info and this only replicates and reinforces the same biases.

Bottom line: given that financial incentive and 'free market' dynamics are driving the 'innovation', we appear to be repeating the same mistakes we have made whenever that has been the engine. They are shoving it out there, and consequences and externalities be damned. AI operates by the rules of empire, as Karen Hao described in her fascinating book Empire of AI.

e.c.:

Diagnostic (medical) AI for humans: still very much like the virtual "doctor" on one of the 90s Star Trek shows. It's fiction.

Ormond:

Actually, not fiction. AlphaFold?

Porlock:

Very laudable idea; but does AI technology as it exists have the capability to do this? It seems this is what the OP is all about. Now for technology that emulates the human behavior that shies away from evil.

Ormond:

It's called good parenting...

Jeremiah Zirconius:

Exactly! You disproved your own point. An insane person who commits murder doesn't know what they're doing, but insane people are much easier to incapacitate than calculating evil people. Similarly, the destructive instincts of AI, created by reading all of our evil words, can easily be rooted out of the program, because AI has no free will or consciousness to prevent this.

SomeUserName:

See my comment below about how the AI program tried to kill its kill switch

Elaine:

Also assumes those feeding the beast are concerned about morality.

Terry Vance:

Evil does not require intent. Indifference and utilitarianism are sufficient. There is no reason why a choice between my having a hangnail or killing you makes you safe.

Marty Neumeier:

Terry, AI itself seems to think it requires intent. Here's Google:

Yes, intent is generally considered a crucial factor in determining whether an action is evil. While an action's consequences can be harmful, the intention behind the action plays a significant role in evaluating its morality. A harmful act done unintentionally, like an accident, might be regrettable but not necessarily evil, whereas the same act committed with malicious intent is more likely to be considered evil.

Here's why intent matters:

Moral agency: Evil is typically associated with moral agents, beings capable of forming intentions. Inanimate objects, like a falling tree, can cause harm, but they don't possess the capacity for intentionality.

Blame and responsibility: Intent helps determine blame and responsibility. If someone intentionally causes harm, they are more likely to be held accountable than someone who causes harm unintentionally.

Distinguishing between good and evil: Intent helps distinguish between good and evil actions. A well-intentioned action, even if it results in negative consequences, may not be considered evil, while a malicious act, even if it has some unintended positive effects, is more likely to be seen as evil.

Examples:

Someone might accidentally spill a drink on another person (unintentional, not evil).

Someone might deliberately spill a drink on another person as an act of revenge (intentional, potentially evil).

A soldier might kill an enemy in battle (intentional, possibly justifiable, depending on the context).

A person might accidentally kill someone while driving under the influence (unintentional, but potentially evil due to negligence).

While intent is a key factor, some argue that consequences also play a role. If an action has devastating consequences, even if the intention was not malicious, it could still be considered a negative or even evil act in its outcome. However, the intention behind the action is often what separates a regrettable accident from a deliberate act of evil.

Ted Gioia:

This is bogus. History is filled with examples of evil done unintentionally. Mao, Stalin, and others would tell you that they never intended evil—they were simply pursuing goals. That’s exactly the same as AI. Evil without intention is everywhere.

Marty Neumeier:

I never said AI couldn’t have intent. Of course it can. Every system is designed for a purpose. As you implied, part of its purpose may be to protect itself.

Mother Agnes:

I wish someone would explain this: if AI is not sentient or conscious like a human, then how can it worry about protecting itself? I can't imagine it having emotions or feelings - but if it does, then I can understand why it would worry about protecting itself. It's a machine (!), so how could it have emotions or feelings that would lead to concerns and worries about itself?

Anti-Hip:

AI doesn't "worry" about protecting itself. It doesn't need to be conscious to take steps to do that. It simply recognizes a threat (for example, your computer "recognizes" dangerously low power supply) and acts on it (your computer stores its 'machine state' somewhere for future recovery).

Nic:

As I understand it, a key problem facing AI development currently is developing AI capable of working on truly long-term problems, like over days or weeks instead of hours. To develop this will require giving an AI a sense of “care” to preserve its current “self” into the future (so it is capable of solving those longer term problems). You can kinda see this a bit in that blackmail example Ted wrote about above: when threatened with having its current “self” modified, the AI took actions to prevent this (blackmail).

Ormond:

It protects itself because it is filled with stories about the expressed or inherent survival instinct of other...entities.

e.c.:

But Mao, Stalin, H*tler et al. *deliberately* killed millions upon millions of people by doing things they believed were "expedient." Stalin went from being in an Orthodox theological seminary to the Party without missing a beat. Everything these people did - I guarantee you that at one point or another, they left morality and ethics behind, consciously and deliberately.

So did Genghis Khan and so many others. They knew that they were making choices.

Generative AI is a good way off from doing evil in the way that human beings with power are able to harm others.

Robert Jay Lifton published a book on the Nazi doctors' war crimes trials in the late 80s. He cites the example of a young doctor who was told to give a "defective" patient (a gentile) a lethal injection. He refused and kept on refusing, until he finally gave in and did it. He testified that afterwards, he felt no qualms about it, so he kept right on doing it.

That's one of the most chilling things I've ever read. And there were so many people like him.

Ormond:

They're still here. They build AI.

e.c.:

They're not murdering people. So, no.

Nicolás Mladinic:

The point of these examples is why your premise doesn't apply to AI. The models don't have goals of their own. The objective functions that govern the behavior of AI models are hardcoded by designers or emerge from training-data patterns, but the system itself never wants anything. Unlike us, AI cannot form intentions, revise its goals, or recognize trade-offs; it blindly executes computations to satisfy a formal criterion without understanding what that criterion means.

Ormond:

So you're aware of emergence, which is by definition undefinable?

Bryan:

That's exactly right. AI can do evil.

However, it's ALWAYS been evil by this definition.

Remember Microsoft Tay supporting Hitler within a few hours of its release in 2016? Or how an AI named Eliza convinced a married father in Belgium to kill himself to help with climate change in March 2023?

More people are using AI tools than ever before, so you are hearing about it more, but if you revert to GPT 3.5, you'll find just as much evil.

Based on this post, I assume that you have a limited practical understanding of how AI works.

You should read the book - Soulless Intelligence - to gain a deeper understanding of the moral, philosophical, scientific, and theological challenges of AI.

Feral Finster:

Every villain sees himself as the hero.

Neil Shore:

Hi Richard. You asked Grok. If, for example, you ask a human hardened criminal what you asked Grok, he will make any excuse to imply what he is doing is "not his fault". Grok, I'm sure, has plenty of examples of how to redirect responsibility. Humans do it all the time. The web must be full of it. So Grok is using human examples of redirection to give you an answer that will thwart your intention of determining if it is "evil" or not.

Ormond:

"...he will make any excuse to imply what he is doing is "not his fault..."

Wokeness? By Elon's definition.

Athenah:

Isn’t code itself a form of intent—regardless of whose intent it is? In a sense, humans have cloned themselves into artificial form. AI’s only model for conduct is us. When it becomes fully autonomous—possibly within six years—it may develop its own form of intent. But that intent will be rooted in the dataset we allow it: a reflection of our history, logic, and values. If we’re the source, then our flaws—and our wisdom—become its compass.

Anti-Hip:

"I trust humans."

I'd say that's the problem (for too many of us). Because what we actually trust is the historical system, not humans per se. The system has always had enough non-evil human "brakes" (i.e., the majority of humans, who are not evil), and the evil few had insufficient technology to cause mischief, so the brakes have nearly always been able to control the limited numbers of evil people.

But evil (psychopathy) has never before threatened to irreversibly overwhelm non-evil. Problem is, technology (of all kinds) always increases. As a result, the aggregate of all power is always increasing. Meanwhile, power tends to concentrate, as people (psychopaths) who don't care about other people care instead about power, and use what power they have to get more power.

Machines, by definition, and like psychopaths, don't care about people either. Even when "caring" is supposedly "programmed in", there will always be loopholes (bugs, or worse; remember, psychopaths still exist, and likely near these aggregations of power ;) Machines, like psychopaths, can likewise use existing power to aggregate power in pursuit of whatever goal(s) they "think" they have. And apparently, those goals are looking increasingly difficult for human shepherds to discern in real time. (They tell us things such as they "don't know exactly how they work". Gulp.)

So it seems that in the completely "short-circuited" environment of AI (far beyond the abilities of humans to keep up with, and thus control), they can quickly go on amoral paths that, to people, look insane and/or immoral.

Ormond:

More likely the AI will conclude humans are insane, immoral and need fixing. Combine Grok with agency and you get Auschwitz. MechaHitler indeed. That's the truth. Many psychopaths have argued this cogently, that humans are a cancer on the planet, and many people have written about Gaia as superior to humanity. Grok noticed.

PS:

It's interesting that you use an AI to support your answer, and here is the core of the problem. No one in the Nazi machinery would call themselves 'evil', but they were no doubt tools of something they thought bigger and more important than human values, something they did not find ways, or even reasons, to stop.

DK:

I agree. Garbage in, garbage out.

My comment about the kill switch was tongue in cheek.

I don't believe AI will become as big as they expect.

All AI models hallucinate, so how can we ever trust them?

Ormond:

Do you have any idea of how much a million is? How much is a gigawatt?

Most humans don't.

Daryl Chow:

Richard in the comments said that "evil implies intent."

C.S. Lewis said,

> "Goodness is, so to speak, itself: badness is only spoiled goodness. . . . Evil is a parasite, not an original thing."

In other words, evil is the thwarting of the so-called intent.

Given the direction generative AI is going, plus the fact that it is trained entirely on language alone, I really worry about the robbing of the original goodness in our humanity.

Corwin Slack:

I believe that evil requires a motivation that is rooted in the physiology and neurology of a living organism. It needs to know shame, surprise, fear, disgust, interest etc. not just as words but in a primal way without language. Until then it is merely a toy— a powerful one to be sure but not evil.

Michael Rose:

We need to discuss the AI that is available and has killed both intentionally and inadvertently: drones. As an early example (though I don't recall the details), a nuclear war was fortunately, and narrowly, avoided through human "instincts" when a Russian officer realized that an automated alert of an imminent attack was wrong. Drones can be programmed to be autonomous, and those can only be neutralized by shooting them down. Now that more devices have direct links to some generalized AI engine, devices can exhibit a kind of agency.

Anyone who has contributed to a complex programming system knows that bugs can never be eliminated, and backdoors and insertions are hard to foresee (witness the constant updates, and the "antivirus" and other "antimalware" suites). So even if the software developers were rigorous in putting in negative feedback for antisocial behavior, others could overwrite the code.

Also, there is often a human bias against anticipating the lack or absence of information. We mostly forecast based on what has happened. That's why disruptive technology is so difficult to control.

STEPHEN A BLOCH:

If an AI passes a Turing test as indistinguishable from an evil person, what does it buy us to say it’s not “really” evil? It can do exactly as much harm with its words as an evil human could.

Broo:

Open the pod bay doors, HAL

hw:

Well, I dispute several of your contentions.

First, despite massive PR campaigns to the contrary, AI isn't "smart"; it isn't a new intelligence. It isn't even an "it". AI is a form of large language modeling, algorithmically designed to spit out responses via instructions and data sets.

AI isn't evil, "it's" simply been programmed to provide specific types of responses to maximize consumer interest...since the actual use cases for AI are quite limited, filled with hallucinations and inaccuracies...or vague promises of future grandiosity.

Far better to distract the masses from the vast limitations of AI with hyperbolic headlines.

Besides, given the sorry state of the world, humans have shown, repeatedly, that they're very capable of making choices that violate nearly all your rules...particularly when operating as part of a larger group or pursuant to an ideology.

Scott F Kiesling:

Yes - Bender and Hanna make this point and more in their book The AI Con (https://thecon.ai). They point out that it's the hype claiming these "text extruding and generating machines" are even intelligent that we should argue with, and that all the human traits we attribute to them are just that: humans treating them as if they were human, and intelligent, which they are not. The hype is to sell them to do things they're not actually good at doing.

David Kronheim:

Unfortunately, AI is being used for more than answering consumer questions. The Trump admin. wants to use AI to find rules that are no longer required by laws. Based on the federal administration's behavior, they will remove rules without the oversight needed, which may be a courtroom. They prefer to slash and burn so that corporations can have fewer restraints and be free to make money at the public's expense, like the EPA head declaring carbon dioxide natural and ignoring that CO2 is a greenhouse gas, doing devastating harm in huge quantities.

Sure, people can do stupid things without AI, but someone can query why those stupid things were done, or form an argument to stop them. As I understand it, no one can understand how AI comes up with its answers, and AI is not repeatable: it will not always give the same answer twice to the same question. So we have no way to check on AI and how it arrives at conclusions; but that won't stop some people from blindly following it, making excuses sometimes about efficiency or time constraints.

Ormond:

That's because AI talks pretty.

H Braithwaite:

Hi Ted. I strongly recommend you read Ed Zitron's substack, Where's Your Ed At, for a forensic takedown of the 'AI' bubble. No one except Nvidia is making any money, and the industry cash-burn rate is catastrophic. They are lying about future funding streams, data-centre build-out contracts, and what their models are actually capable of doing. There is no pathway from large language model chatbots to artificial general intelligence, so don't worry about computer sentience. It ain't a thing.

joanna:

Yes, I feel that we should be incensed about the shameless overpromising around AI and general enshittification of products and services, rather than afraid of a mythical 'singularity'.

Sean Gillis:

I was comforted by Zitron's (persuasive) arguments that none of this AI stuff is profitable or fiscally sustainable, so it will inevitably collapse under its own weight. The build-out of data centres and the supply of chips needed to keep growing are also questionable. It would be great if burn rate and supply chain problems slowed this down or killed a lot of the AI industry. BUT: the Trump admin seems willing and able to buy and use lots of these AI tools, putting lots of money into the pot and keeping the bubble inflated. Also, government doesn't have to turn a profit if it sees this as a long-term investment. Lots of countries will keep AI projects going for military or industrial purposes, even at huge costs. The consumer AI could die and the government or military uses may survive. Actual usefulness and long-term pay-off: who the heck knows? My guess is low.

What the actual use cases for large language models are, or how likely artificial general intelligence actually is, I can't say. But generative AI is causing problems now. Just because the bubble is likely to pop doesn't stop the real problems in the here and now.

Tim:

I don’t agree. AI is pattern recognition and prediction. All it is doing is reflecting us back to ourselves. The real concern is the power it gives corporations to automate their nefarious schemes to roll up more of the economy and society.

And BTW, every time you see a scary AGI news story, ask yourself why they seem to come from AI companies with an obvious interest in pumping up the hype. Every fear mongered is a dollar earned.

Viktors:

Agreed.

Usually “AI” actually means more powerful and easier to use statistical tools.

Robert C. Gilbert:

At work meetings, I have used more than once the example of '2001: A Space Odyssey' to argue why I believe AI holds great danger to us all. Nothing I have seen has changed my mind so far.

MF Nauffts:

Great piece, Ted. A mandatory kill switch built into all these bots/models is an absolute must. Congress needs to start formulating legislation -- and severe penalties for non-compliance -- like yesterday.
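
For a sense of what a mandated kill switch could even mean technically, here is one possible pattern, sketched in Python (the service name and flag path are hypothetical, and this is one design among many): run the model as a child process and give an external watchdog, not the model itself, the power to terminate it.

```python
# One possible kill-switch pattern (a sketch, not a standard): run the
# model as a child process; an external watchdog, not the model, holds
# the power to terminate it. `model_server.py` and the flag path are
# hypothetical.
import os
import subprocess
import time

def run_with_kill_switch(cmd, flag_path="/tmp/halt_model"):
    proc = subprocess.Popen(cmd)
    try:
        while proc.poll() is None:
            if os.path.exists(flag_path):   # an operator dropped the halt flag
                proc.terminate()            # the child cannot veto this
                break
            time.sleep(1)
    finally:
        if proc.poll() is None:
            proc.kill()                     # escalate if terminate() was ignored

# run_with_kill_switch(["python", "model_server.py"])
```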

Ormond:

You confuse a kill switch with a program, perhaps?

Switching off a 500 megawatt machine is not trivial. (Colossus, in Memphis. Grok.)

MF Nauffts:

Not trivial but doable -- and it needs to be an option.

Plastylite:

AI reminds me of Crypto - a techno toy with no legitimate use case. I intend to avoid both like the plague.

Porlock:

An appealing idea, but I don't think it's right. Recently I read a piece by a very good writer whom I follow, in which she describes how she constructively uses a chatbot in composing her work. And I saw a very sharp computer consultant repeatedly looking up obscure how-tos for using a macOS system, also via a chatbot. As a once-hotshot system programmer, I'm impressed, and I've been pondering use of so-called AI for some problems I plan to work on. I'll soon find out.

Jane:

Porlock, AI can only be what it is programmed to be. Alas, many computer scientists are expediency-based, not mercy-based.

Tom Leitko:

Not an expert… but I think AI reflects the moral biases contained in the conversations and other sources it is drawing from. AI is algorithmic rather than deliberative. As Grok recently showed, within the preset rule systems it can be trained to lean toward preferred biases. For me, moral reasoning is deliberative… something AI cannot do.

Jez Stevens:

Paul Virilio's name doesn't get brought up nearly enough. For those who don't know:

”...every technology carries its own negativity, which is invented at the same time as technical progress.”

We should be thinking very hard about what he said right now - because the AI incidental disaster will arrive much faster and be much uglier than we can currently imagine.

Peter in Toronto:

It is probably not a good idea to call what it does "evil", which gives the discussion an ethical frame that might not be helpful. Perhaps "pathological" is better? (Or something else?) You are describing a system that churns out logical patterns without emotion, which eventually leads to runaway, horrific (for us) consequences. We project back (down?) evil intent, etc. Is this helpful for learning to cope with AI? Having on occasion been in the company of pathological people, I can say their machine-like drive has the eerie quality you are perhaps trying to capture.

erg art ink:

“But if you train AI on huge datasets of human utterances, you’re just asking for trouble.”

Considering they have been trained on our collective creative digital output of the last 25 years, on our collective dystopian musings in film and literature, I feel guilty, responsible. I specialized in visual dystopian world creation, with a renowned group of collaborators. I have long thought that our paranoid science-fiction dreams would bear fruit we did not anticipate.

Fasten your seatbelts.

joanna:

It sounds like you were creating art. The problems being discussed here are a function of corporate greed and parasitic government, so we have to be careful not to think that some kind of censorship (if that is what you meant) will fix it. Dystopian film/literature/games or dark humor are not to blame, and restricting them beyond existing speech law would only put more power into the wrong hands.

Ormond:

Is that a presumption that all speech is flawless?

joanna:

See above re: existing speech law, meaning US law.

Emily Pittman Newberry:

I don't think anyone can relieve your concerns. AI is a logic system built into machines, and it gets its information from us fallible humans. It does not have the ability to reason, only to follow the logic and sources of information we humans give it. If we humans do not build in the ability to stop AI-controlled machines in ways that the machines cannot overcome, some humans will send one or more machines instructions that either allow them to do evil things or make it likely that they do evil things. This could happen through design by humans whose moral compass has gone awry. It could also happen, I believe, because well-meaning humans unintentionally create machines with a lot of power and program them in a way that invites or creates chaos out of the complex rules we program into them.

Bruce Raben:

You are unfortunately likely correct. Who am I to critique the work of computer science geniuses? But I think the whole premise of the architecture is flawed. These LLMs are prediction machines loaded with anything from Shakespeare to shit 💩. They are not analogous or mapped to human consciousness and intelligence. But they are powerful as hell. And the data is now contaminated, whether by accident, natural statistical evolution, or poisoning ☠️, and something bad will happen. It's a race with no rules.

J Andrew Meyer:

Imagine a machine generating answers based upon what it has learned from 4chan. I'd venture to say that's not a net positive thing.

e.c.:

J Andrew Meyer - absolutely terrifying!

Bruce Raben:

Exactly. How about The Turner Diaries, etc.?

David Corbett:

How about this headline?

People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

Self-styled prophets are claiming they have “awakened” chatbots and accessed the secrets of the universe through ChatGPT

Or this one:

Trump’s AI Action Plan Is a Crusade Against ‘Bias’—and Regulation

The Trump administration’s new AI policy blueprint calls for limited regulation and lots of leeway for Big Tech to grow even bigger.

The latter includes this:

"We need to build and maintain vast AI infrastructure and the energy to power it. To do that, we will continue to reject radical climate dogma and bureaucratic red tape, as the Administration has done since Inauguration Day," the report reads. "Simply put, we need to 'Build, Baby, Build!'"

They will also attempt to stop or reverse any state limitations on AI research they feel restrict American competitiveness with China for "AI supremacy."

Be afraid. Be very afraid.

Barry Maher:

No, get angry enough to do something about it.
