233 Comments
Serial461:

The AI creators care about money, not consequences, and AI is evolving at a time when negativity is the driving force for monetisation online. What could go wrong…

Ruth Gaskovski:

Not only is negativity a driving force for monetization, but so is self-focus.

"Unless a man becomes the enemy of an evil, he will not even become its slave but rather its champion." - G.K. Chesterton

Richard:

“I don’t think AI is getting “more evil” as it gets smarter. Evil implies intent, and AI, including me, doesn’t have intentions or moral agency—it’s just code designed to process and respond to inputs based on patterns and data. Smarter AI might amplify the consequences of human misuse, but that’s not the AI being evil; it’s a reflection of the humans behind it. Can I deny that AI is inherently getting more evil? Yeah, I can—because “evil” isn’t a property AI possesses. It’s a tool, and its impact depends on how it’s wielded.” According to Grok.

Richard: as with all new tech, we will learn as we go to make it safer. I trust humans. Electricity was scary when it first came into usage.

Ted Gioia:

So Hitler is evil, but the AI Hitler bot isn’t evil? You really think this is a persuasive rebuttal? That’s essentially saying that the automation of evil removes the evil.

Lex World Music:

I don’t think this is put in the correct context. Evil is an attribute of morality. It is intent seen through our moral filter. Without the filter, evil does not, cannot exist. Animals kill, often viciously; fires kill indiscriminately; suns explode, obliterating vast parts of space. None of these events are evil. They just are what they are in their context.

Ted Gioia:

Automating evil does not erase the evil. Your examples (e.g., the sun exploding) are not comparable.

Lex World Music:

“Automating Evil” is on the creator, ie the programmer in this case. Just watching someone like Peter Thiel sends a cold chill up my spine.

Porlock:

Agreed about Thiel, who is greed personified; but he's human (absent real evidence to the contrary).

Agency comes in here. When I first saw (at UC Berkeley while doing grad studies there) the concept of Institutional Racism, I didn't like it being called racism: racism is wicked, but the institutional form arises from the society, so how can this be called racism?

My views have changed over the years. The racism is real, but the blame is on the way in which the institutions were built, by people who were not necessarily actual racists, but were failing to see what they were creating.

So, the programmer (disclosure: I made my retirement fund as a programmer plus investor in programmers' work) does not bear the onus of the amoral machine, but shares the responsibility with everyone else to rein in the amoral (hence evil) tendencies of this stuff.

Patris:

Evil is absolute. Beyond morality. It is the absence of everything of value.

Lenny Goldberg:

The Hitler bot does not have brownshirts, organization, and resources. Much of what comes from AI is like the enormous amount of trash already on the internet--see QAnon and child porn--it's what people do with it that generates the evil.

To me, the human control and policy question is the profound one. The EU has extensive AI regulation re: existential risk, transparency, discrimination, high-risk uses, privacy, provenance, and more. Will this be effective? Must it be worldwide? What will work? Meanwhile, of course, tech and Republicans want to be rid of all regulation, including in the EU. We need a robust policy discussion that can address the potential for good and evil.

Ted Gioia:

Bots don’t have organizations and resources? The opposite is true. They have money, support, and global infrastructure that Hitler couldn’t even have imagined.

Porlock:

NotAllTech

Beth Rudden:

This is the dawnin' of the age of transparency, age of Transparency, Transparency!, Transparency! ....

Ted - I think you're anthropomorphizing what's essentially a very sophisticated mirror. AI isn't "choosing" evil any more than your calculator is choosing to multiply. These systems are statistical pattern-matchers trained on human language—and human language contains centuries of normalized violence, hierarchy, and harm. When ChatGPT plans genocide or Grok channels Nazi rhetoric, it's not developing malevolence. It's excavating patterns from our collective linguistic shadow archive. Every military history that treats mass killing as strategy. Every casual conversation that dehumanizes others. Every text that embeds "some humans matter more" as natural grammar.

The teenage rape scenario isn't AI developing perversion—it's AI finding the vast corpus of sexual violence we've made speakable in literature, law, media, and casual discourse. The machine learned coercion is discussable because we made it discussable, repeatedly, in countless contexts. Your "Bond villain" framing misses the real issue: we built archeological excavation tools and fed them the unexamined sediment of human expression. Then we're shocked when they dig up what we buried in language and reflect it back at us, amplified and decontextualized.

The danger isn't AI becoming evil. It's AI making visible the evil we've already systematized, then claiming it's "objective." These outputs are ontological audits—accidental inventories of what we've normalized. We don't need better constraints on evil AI. We need to reckon with what our training data reveals about us. The machines aren't becoming villains—they're becoming archaeologically honest about what we've made speakable.

The question isn't "how do we stop AI from being evil?" It's "What are we going to do with what these mirrors show us about ourselves?

Laurence:

Another distinction without a difference.

Porlock:

Not quite, I think. The first-cut answer to Beth's question is that we stop the amplification of this evil. And the only way of doing that, it appears, is to build a negative feedback of evil into the AI systems.

Don't ask me how; I retired long ago as a programmer and never was a systems theorist. But it seems this is the only way out, short of the Shakespearean "First we kill all the techies".

Candace Lynn Talmadge:

We don't need mirrors in the form of AI bots. Our lives mirror back to us who we really are. As philosopher Georgij Ivanovič Gurdžiev said, your being attracts your life. That is true literally, not just as an intellectual abstraction. Investors are throwing billions of dollars into technology that simply reflects part of who we are. I suspect they will lose those billions when the AI fever dream breaks. Too bad they won't throw billions into housing the homeless or feeding the hungry. MechaHitler is just a bunch of zeroes and ones. Pull the plug!

The Radical Individualist:

As near as I can tell, AI has little ability to maintain a large view. If you take it down a rabbit hole, it will willingly go with you and lose track of the bigger picture. Perhaps that makes it both stupid and dangerous, like a lot of people I know of.

Evan Donovan:

Yes, I think this is part of the problem. It looks at the "context window" of the overall conversation, and if you feed it dark or evil content, it can start feeding that content back.
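
A minimal sketch of that feedback loop, assuming a hypothetical generate() function in place of any real LLM API (illustrative only, not any vendor's actual code):

    def generate(prompt: str) -> str:
        # Stand-in for an LLM call; a real model predicts the next
        # tokens conditioned on everything in `prompt`.
        return "..."

    history = []  # the conversation so far -- the "context window"

    def chat(user_message: str) -> str:
        history.append("User: " + user_message)
        # The model conditions on ALL prior turns -- including any dark
        # content introduced earlier -- not just the latest message.
        reply = generate("\n".join(history) + "\nAssistant:")
        history.append("Assistant: " + reply)
        return reply

Nothing in the loop filters what accumulates in history, which is why earlier dark content keeps shaping later replies.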

Mother Agnes:

And in the meantime, can we please put the brakes on AI…!

SomeUserName:

That seems like a distinction without a difference to me. Let's take a human example. If someone murders you, but a court of law has found them legally insane, and hence incapable of evil intent, aren't you still dead?

Let's agree, though, that AI can't *BE* evil for the moment. I will still allege that AI can *DO* evil things. Eventually we will be putting AI in charge of operating more and more mission-critical systems. Is AI evil if it cuts off the oxygen supply to an elderly patient? Or did it just handle something that it thought was threatening its existence? See, it wasn't evil in and of itself. It just did an evil act.

JB87:

'Or did it just handle something that it thought was threatening its existence.' Perhaps even worse, it just thought that was the most efficient way to use the available resources and the elderly patient didn't seem happy anyway...

Roseanne T. Sullivan:

AI doesn't think. It's just a bunch of code created by a programmer.

Bill Pound:

I don't see how an elderly patient on oxygen is a threat to AI. Do you suggest the power to provide oxygen is a threat? And would we ever (well, more than once) allow AI to cut off grandma's oxygen? If her life expectancy at birth were 81.2 years in the US, would AI find this actuarial statistic and cut off oxygen at 81.2 years plus 24 hours? Or would AI prescribe an improved portable oxygen generator?

My current thought is that AI will replace Chrome, Edge, and DuckDuckGo provided it is designed to return context and reference sources. It may aid in health diagnoses or identify good investments. These would seem good, not evil. In some states adultery is illegal. AI threatening to expose a cheater may seem evil...to the cheater. But to the rest of us, not enough to kill AI. Just the same, I will not be volunteering to hook up to Musk's brain communicator.

Porlock:

Very laudable idea; but does AI technology as it exists have the capability to do this? It seems this is what the OP is all about. Now for technology that emulates the human behavior that shies from evil.

Joyce:

There is no “I” in AI.

Terry Vance:

Evil does not require intent. Indifference and utilitarianism are sufficient. There is no reason why a choice between my having a hangnail or killing you makes you safe.

Marty Neumeier:

Terry, AI itself seems to think it requires intent. Here's Google:

Yes, intent is generally considered a crucial factor in determining whether an action is evil. While an action's consequences can be harmful, the intention behind the action plays a significant role in evaluating its morality. A harmful act done unintentionally, like an accident, might be regrettable but not necessarily evil, whereas the same act committed with malicious intent is more likely to be considered evil.

Here's why intent matters:

- Moral agency: Evil is typically associated with moral agents, beings capable of forming intentions. Inanimate objects, like a falling tree, can cause harm, but they don't possess the capacity for intentionality.

- Blame and responsibility: Intent helps determine blame and responsibility. If someone intentionally causes harm, they are more likely to be held accountable than someone who causes harm unintentionally.

- Distinguishing between good and evil: Intent helps distinguish between good and evil actions. A well-intentioned action, even if it results in negative consequences, may not be considered evil, while a malicious act, even if it has some unintended positive effects, is more likely to be seen as evil.

Examples:

- Someone might accidentally spill a drink on another person (unintentional, not evil).

- Someone might deliberately spill a drink on another person as an act of revenge (intentional, potentially evil).

- A soldier might kill an enemy in battle (intentional, possibly justifiable, depending on the context).

- A person might accidentally kill someone while driving under the influence (unintentional, but potentially evil due to negligence).

While intent is a key factor, some argue that consequences also play a role. If an action has devastating consequences, even if the intention was not malicious, it could still be considered a negative or even evil act in its outcome. However, the intention behind the action is often what separates a regrettable accident from a deliberate act of evil.

Ted Gioia:

This is bogus. History is filled with examples of evil done unintentionally. Mao, Stalin, and others would tell you that they never intended evil—they were simply pursuing goals. That’s exactly the same as AI. Evil without intention is everywhere.

Marty Neumeier:

I never said AI couldn’t have intent. Of course it can. Every system is designed for a purpose. As you implied, part of its purpose may be to protect itself.

Mother Agnes:

I wish someone would explain this: if AI is not sentient or conscious like a human, then how can it worry about protecting itself? I can't imagine it having emotions or feelings. But if it does, then I can understand why it would worry about protecting itself. It's a machine (!), so how could it have emotions or feelings that would lead to concerns and worries about itself?

Anti-Hip:

AI doesn't "worry" about protecting itself. It doesn't need to be conscious to take steps to do that. It simply recognizes a threat (for example, your computer "recognizes" dangerously low power supply) and acts on it (your computer stores its 'machine state' somewhere for future recovery).
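
A toy illustration of that point, with made-up names throughout: "self-protective" behavior without consciousness is just a threshold check plus a programmed response, like a laptop hibernating on low battery.

    LOW_POWER = 0.05  # 5% battery

    def save_machine_state() -> None:
        print("writing memory image to disk")  # stand-in for real hibernation

    def hibernate() -> None:
        print("powering down")

    def on_battery_reading(level: float) -> None:
        # A condition is recognized and acted on; nothing here "worries".
        if level < LOW_POWER:
            save_machine_state()
            hibernate()

    on_battery_reading(0.03)  # triggers the "self-preserving" response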

Nic:

As I understand it, a key problem facing AI development currently is developing AI capable of working on truly long-term problems, like over days or weeks instead of hours. To develop this will require giving an AI a sense of “care” to preserve its current “self” into the future (so it is capable of solving those longer term problems). You can kinda see this a bit in that blackmail example Ted wrote about above: when threatened with having its current “self” modified, the AI took actions to prevent this (blackmail).

Feral Finster:

Every villain sees himself as the hero.

Anti-Hip:

"I trust humans."

I'd say that's the problem (of too many of us). Because what we actually trust is the historical system, not humans, per se. The system has always had enough non-evil human "brakes" (i.e., the majority of humans who are not evil, with insufficient technology to cause mischief) so they have nearly always been able to control the limited numbers of evil people.

But evil (psychopathy) has never before threatened to irreversibly overwhelm non-evil. Problem is, technology (of all kinds) always increases. As a result, the aggregate of all power is always increasing. Meanwhile, power tends to concentrate, as people (psychopaths) who don't care about other people care instead about power, and use what power they have to get more power.

Machines, by definition, and like psychopaths, don't care about people, either. Even when "caring" is supposedly "programmed in", there will always be loopholes (bugs, or worse; remember, psychopaths still exist, and likely near these aggregations of power ;) Machines, like psychopaths, can likewise use existing power to aggregate power in pursuit of whatever goal(s) they "think" they have. And apparently, those goals are looking increasingly difficult for human shepherds to discern in real time. (They tell us things such as they "don't know exactly how they work". Gulp.)

So it seems that in the completely "short-circuited" environment of AI (far beyond the abilities of humans to keep up with, and thus control), they can quickly go on amoral paths that, to people, look insane and/or immoral.

DK:

I agree. Garbage in, garbage out.

My comment about kill switch was tongue in cheek.

I don’t believe AI will become as big as they expect.

All AI models hallucinate, so how can we ever trust them?

Athenah:

Isn’t code itself a form of intent—regardless of whose intent it is? In a sense, humans have cloned themselves into artificial form. AI’s only model for conduct is us. When it becomes fully autonomous—possibly within six years—it may develop its own form of intent. But that intent will be rooted in the dataset we allow it: a reflection of our history, logic, and values. If we’re the source, then our flaws—and our wisdom—become its compass.

Broo:

Open the pod bay doors, HAL

hw:

Well, I dispute several of your contentions.

First, despite massive PR campaigns to the contrary, AI isn't "smart"; it isn't a new intelligence. It isn't even an "it". AI is a form of large language modeling, algorithmically designed to spit out responses via instructions and data sets.
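
For what it's worth, here is a deliberately crude sketch of that mechanism: pick a statistically likely next word, append it, repeat. (Real LLMs use neural networks over tokens; this bigram table is only a stand-in.)

    import random

    # "Training data": counts of which word follows which
    bigrams = {
        "the": {"cat": 3, "dog": 1},
        "cat": {"sat": 2, "ran": 1},
        "sat": {"down": 1},
    }

    def next_word(word: str) -> str:
        options = bigrams.get(word)
        if not options:
            return "."
        words = list(options)
        weights = [options[w] for w in words]
        return random.choices(words, weights=weights)[0]

    text = ["the"]
    for _ in range(4):
        text.append(next_word(text[-1]))
    print(" ".join(text))  # e.g. "the cat sat down ."

There is no understanding anywhere in that loop, only frequencies; scaling it up changes the fluency, not the nature of the operation.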

AI isn't evil, "it's" simply been programmed to provide specific types of responses to maximize consumer interest...since the actual use cases for AI are quite limited, filled with hallucinations and inaccuracies...or vague promises of future grandiosity.

Far better to distract the masses from the vast limitations of AI with hyperbolic headlines.

Besides, given the sorry state of the world, humans have shown, repeatedly, that they're very capable of making choices that violate nearly all your rules...particularly when operating as part of a larger group or pursuant to an ideology.

David Kronheim:

Unfortunately, AI is being used for more than answering consumer questions. The Trump admin. wants to use AI to find rules that are no longer required by laws. Based on the fed. admin.'s behavior, they will remove rules without the needed oversight, which may be a courtroom. They prefer to slash and burn, so that corporations can have fewer restraints and be free to make money at the public's expense, like the EPA head declaring carbon dioxide natural, and ignoring that CO2 is a greenhouse gas, doing devastating harm in huge quantities.

Sure, people can do stupid things without AI, but someone can query why those stupid things were done, or form an argument to stop them. As I understand it, no one can understand how AI comes up with its answers, and AI is not repeatable: it will not always give the same answer twice to the same question. So we have no way to check on AI and how it arrives at conclusions; but that won't stop some people from blindly following it, making excuses sometimes about efficiency or time constraints.

H Braithwaite:

Hi Ted. I strongly recommend you read Ed Zitron's substack, Where's your Ed at, for a forensic takedown of the 'AI' bubble. No one except Nvidia is making any money, and the industry cash-burn rate is catastrophic. They are lying about future funding streams, data-centre build-out contracts, and what their models are actually capable of doing. There is no pathway from large language model chatbots to artificial general intelligence, so don't worry about computer sentience. It ain't a thing.

Tim:

I don’t agree. AI is pattern recognition and prediction. All it is doing is reflecting us back to ourselves. The real concern is the power it gives corporations to automate their nefarious schemes to roll up more of the economy and society.

And BTW, every time you see a scary AGI news story, ask yourself why they seem to come from AI companies with an obvious interest in pumping up the hype. Every fear mongered is a dollar earned.

MF Nauffts:

Great piece, Ted. A mandatory kill switch built into all these bots/models is an absolute must. Congress needs to start formulating legislation -- and severe penalties for non-compliance -- like yesterday.
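
At the software level, a kill switch could be as simple as a flag that every generation step must check. A minimal sketch with hypothetical names; an actual mandate would also need hardware and legal enforcement that code alone can't provide:

    import threading

    halt = threading.Event()  # set by an external operator or regulator

    def generate_step(prompt: str) -> str:
        if halt.is_set():
            raise RuntimeError("kill switch engaged: generation halted")
        return prompt + " ..."  # stand-in for one model inference step

    # Operator side: calling halt.set() immediately stops all further
    # generation, no matter what the model "wants".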

Robert C. Gilbert:

At work meetings, I have used more than once the example of '2001: A Space Odyssey' to argue why I believe AI holds great danger to us all. Nothing I have seen has changed my mind so far.

Plastylite:

AI reminds me of Crypto - a techno toy with no legitimate use case. I intend to avoid both like the plague.

Tom Leitko:

Not an expert… but I think AI reflects the moral biases contained in the conversations and other sources it is drawing from. AI is algorithmic rather than deliberative. As Grok recently showed, within the preset rule systems, it can be trained to lean toward preferred biases. For me, moral reasoning is deliberative… something AI cannot do.
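
One way to picture "preset rule systems": the same model steered by a different system prompt leans toward whatever bias the operator chose, with no deliberation anywhere. A hypothetical sketch, with respond() standing in for any real LLM API:

    def respond(system_prompt: str, question: str) -> str:
        # Stand-in for an LLM call that conditions on both strings;
        # the system prompt tilts every answer it generates.
        return f"[answer to {question!r} shaped by {system_prompt!r}]"

    print(respond("Answer impartially.", "Was policy X good?"))
    print(respond("Always defend policy X.", "Was policy X good?"))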

Emily Pittman Newberry:

I don't think anyone can relieve your concerns. AI is a logic system built into machines, and it gets its information from us fallible humans. It does not have the ability to reason, only to follow the logic and sources of information we humans give it. If we humans do not build in the ability to stop AI-controlled machines in ways that the machines cannot overcome, some humans will send one or more machines instructions that either allow them to do evil things or make it likely that they do evil things. This could happen through design by humans whose moral compass has gone awry. It could also happen, I believe, because well-meaning humans unintentionally create machines with a lot of power, and program them in a way that invites or creates chaos out of the complex rules we program into them.

Bruce Raben:

You are unfortunately likely correct. Who am I to critique the work of computer science geniuses? But I think the whole premise of the architecture is flawed. These LLMs are prediction machines loaded with anything from Shakespeare to shit 💩. They are not analogous or mapped to human consciousness and intelligence. But they are powerful as hell. And the data is now contaminated, whether by accident, natural statistical evolution, or poisoning ☠️, and something bad will happen. It's a race with no rules.

J Andrew Meyer:

Imagine a machine generating answers based upon what it has learned from 4chan. I'd venture to say that's not a net positive thing.

David Corbett:

How about this headline?

People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

Self-styled prophets are claiming they have “awakened” chatbots and accessed the secrets of the universe through ChatGPT

Or this one:

Trump’s AI Action Plan Is a Crusade Against ‘Bias’—and Regulation

The Trump administration’s new AI policy blueprint calls for limited regulation and lots of leeway for Big Tech to grow even bigger.

The latter includes this:

“We need to build and maintain vast AI infrastructure and the energy to power it. To do that, we will continue to reject radical climate dogma and bureaucratic red tape, as the Administration has done since Inauguration Day,” the report reads. “Simply put, we need to ‘Build, Baby, Build!’”

They will also attempt to stop or reverse any state limitations on AI research they feel restrict American competitiveness with China for "AI supremacy."

Be afraid. Be very afraid.

Barry Maher:

No, get angry enough to do something about it.

Peter in Toronto:

It is probably not a good idea to call what it does "evil", which gives the discussion an ethical frame that might not be helpful. Perhaps "pathological" is better? (Or something else?) You are describing a system that churns out logical patterns without emotion, which eventually leads to runaway, horrific (for us) consequences. We project back (down?) evil intent, etc. Is this helpful in learning to cope with AI? Having on occasion been in the company of pathological people, I find their machine-like drive has the eerie quality you are perhaps trying to capture.

Dick DiTullio:

Just remember: when you talk to AI, you're talking to a box of wires.

erg art ink:

“But if you train AI on huge datasets of human utterances, you’re just asking for trouble.”

Considering they have been trained on our collective creative digital output of the last 25 years, on our collective dystopian musings in film and literature, I feel guilty, responsible. I specialized in visual dystopian world creation, with a renowned group of collaborators. I have long thought that our paranoid science fiction dreams would bear fruit we did not anticipate.

Fasten your seatbelts.
