Calling out the "change the words to reduce their harm" tactic has been going on for years... When a politician or general notes "collateral damage" the immediate response should be "Oh, you mean dead civilians?".
When I was a child the Vietnam War was happening. I was about 12 years old in 1972, and we would point out the many euphemisms about the war. We'd make fun of them, which was a hard task. The worst one was 'servicing the target', which meant the bombs were hitting villages and people. Yes, we should definitely respond!
I did not reply yesterday, since by the time I saw the article, there were already hundreds of replies. I actually agree in part with your respondents. I don't think AI can do evil properly speaking, because it isn't a moral agent. Maybe that's just playing with words, as you suggest, but I think it is a point worth making, because it directs our attention back to the *real* moral agents: us.
From my perspective, AI is less an agent doing evil than a giant mirror held up to ourselves. I won't pretend to understand the technology, but I take it that everything it does is generated by its processing and transformation of material that originally came from human beings. So whatever evil is present is our own evil, transformed and magnified and projected back at us.
That should certainly give us pause. But it also raises a pair of important questions. (1) Are humans, on balance, more evil than good? (Philosophically and theologically, that is actually a pretty tricky question.) Because if so, it would seem as though the law of large numbers would mean that over time AI would indeed necessarily trend more and more evil. (2) Is there an inherent reason why AI must pick up on the evil that humans have produced rather than the good? (Maybe there's a third question: who decides which is which?)
I'm with you, Ted, on your articles ringing the alarm bells about AI. I'm a college professor, and in my own classes, I see no valuable uses for it. But I also think it is not going away, because--if for no other reason--human beings will never voluntarily give up a technology with its potential military uses. (Military technology has driven a lot of inventiveness over the centuries.) So these are important questions indeed.
I don't think I wrote anything to disagree with that... but Ted is, right? ; )
I do think the question he is raising is a good one, though--that is, whether AI is really simply a tool, "just like an ax," or whether it's a different kind of tool, one that can (and will) determine its own uses. Which an axe, fortunately, doesn't do.
Its a tool in the same way our brain is a tool, its designed to mimic our brains. And its quickly approaching a fine line, beyond which we really dont have answers.
If they build a perfect replica of a human brain, will that then be capable of evil?
Which opens a whole can of worms about what it really is to be human. Which ofc theres plenty of speculation on, but no answers.
Perhaps we just have to build one to find out. Thats surely the direction we're heading, and no moral compunctions will stop us, im afraid
AI is a human creation and thus capable of being used for any purpose. Without the gift of prophecy, there is no way of knowing if might become inherently evil.
I would say thats a moot point, as its already demonstrated that it can and will do evil things.
And it doesnt take the gift of prophesy to see the probability of undesirable consequences. Does anything we do ever NOT fuck something up? Again, a moot point, its already fucking things up.
What seems highly unlikely is that we would ever be able to build anything approaching a "perfect" ai.
Foresight isnt so hard to come by. We just always choose to ignore it, often with flimsy arguments like "how are we to know? Its simply impossible for us to know..." Such persuasions are for the sheep, man.
But waitaminnit, it does not reflect at least one little tiny part of human nature: knowing (if not necessarily acting on) an idea of good and evil. Come to think of it, isn't that what the OP was all about? Like, it started with a neat-o graphical presentation of what the influences are that tend to make a person avoid evil choices.
Since the AI system massively amplifies whatever stuff it gets, the system needs to do what every useful amplifier does: negative feedback to reduce the "noise". Little problem: How do you make software which identifies the noise and feeds it back? Maybe there's an answer; or maybe we're doomed if we don't kill all the AI gurus.
As a huge Peter Melander afficionado, I love your name btw. However this response to me is quite vapid and seems to me to be missing the point. Whether or not AI 'is just a mirror' or should give us pause to examine ourselves (both certainly true) or any of the other questions you asked seems relatively to appear as navel gazing compared to the more urgent and practical questions on whether current AI models are prone to act in ways which can be considered evil or have severe consequences to humans. Human 'evil' is a question of evolutionary drives, political or motivated reasoning, limitations of resources etc, while AI seems primarily to be a tool to redistribute power and wealth (and we're allowing it to happen). Those two problems are on different planes.
Btw: I'm only really responding to your comment because of the Peter Melander angle, so congrats.
Thanks, Henrik. I actually think Ted is raising questions that are much more far-reaching than the mere (re)distribution of power and wealth. And I wouldn't accept the division between "navel-gazing" (philosophic) and "practical" questions. Nothing is more practical than an idea, and ideas always have consequences.
Regardless, glad you like the name. : )
Alas, I don't believe I can claim any relationship to good old Graf von Holzappel. More's the pity.
As usual my initial comment was a bit hasty and impetuous, maybe rather like a Bernhard of Weimar rather than in the spirit of the good lord Apple :) I largely agree with your post but the immediate practical questions and the overreaching ones about the good and evil of AI are in the short term at least very distinct and, the latter one, quite important.
I agree with much of both Ted's and your comments, especially the pause due the two questions you raise. However, I wonder about your conclusion about valuable uses for AI. I used it, when reading some to the books on Ted's list, to point to concepts I should be alert to prior to my reading and to test and reinforce my knowledge/understanding post reading. I would think that this aspect could aid significantly in preparedness for learning, whether in a college classroom scenario or, like me, in a retired person's search for a better understanding our world.
Vincent, thanks for your comment and your very reasonable question. I admit that I was thinking of my own college students, whose situation is quite different from yours. And perhaps instead of saying that I see "no valuable uses" for AI in my own classes, perhaps it would have been accurate to say, "no uses for which the cost does not outweigh the benefits." I'm a political scientist whose specialty is political philosophy, but at a very small college like me you become sort of a jack of all trades. So the classes I teach are broadly speaking humanities and/or humanistic social sciences. The skills my students need to develop are an ability to read well, to engage a text, to ask good questions, to reflect, to struggle a bit when the meaning of something isn't immediately clear. More than anything else, they need the discipline and the desire to read challenging material, so that they can learn to think. AI creates all of the wrong incentives for them, promising quick answers without work and encouraging them to cut corners when they need to take their time. In principle, I can see that they could use AI in the ways I think you are describing, let's say to generate a sort of study guide. But that's not what they do, and AI just isn't remotely necessary for that. They've got me and the resources I can provide, and there are already all kinds of study guides out there for classic texts if they want that sort of thing. People have been doing what I do ever since Socrates was around, and it really hasn't changed much. So while I'll grant you that they *could* use AI in the ways you suggest, it just creates way too many powerful temptations that harm their learning. (Your case is different. You've already developed the relevant skills, or you wouldn't be reading those books now anyway. This also means you've developed the right motivation for what you're doing--the desire for understanding--so we don't need to worry about your taking harmful shortcuts.)
Interesting reply and I am nowhere near your level of intelligence.
Still wanted to reply to question 1 in your third paragraph. Even if humans are, on balance, more good than ever, this does not mean AI will have to follow the same 'good' path. Ted mentioned several things that might withhold humans from doing evil (deeds), but are not likely to withhold AI. Also, AI will mainly follow rationale. Perhaps we are often stopped from doing 'evil' (acts), not by our reasoning but our 'heart'/feelings.. again something not likely to stop AI.
Tim, it's an excellent point. I agree entirely: the heart has a great deal to do with moral behavior, which is by no means simply a matter of rationality. So I think your point is another reason to be wary of AI. Thanks for adding the comment.
I'm replying to myself here, merely to add an interesting and relevant passage I just came across in the Vatican's recent note on Artificial Intelligence (which I cite not as authority--I'm a Lutheran!--but because it is interesting:
"To address these challenges it is essential to emphasize *the importance of moral responsibility grounded in the dignity and vocation of the human person*. This guiding principle also applies to questions concerning AI. In this context, the ethical dimension takes on primary importance because it is people who design systems and determine the purposes for which they are used. Between a machine and a human being, only the latter is truly a moral agent--a subject of moral responsibility who exercises freedom in his or her decisions and accepts their consequences. It is not the machine but the human who is in relationship with truth and goodness, guided by a moral conscience that calls the person 'to love and to do what is good and to avoid evil,' bearing witness to 'the authority of truth in reference to the supreme Good to which the human person is drawn.' Likewise, between a machine and a human, only the human can be sufficiently self-aware to the point of listening and following the voice of conscience, discerning with prudence, and seeking the good that is possible in every situation. In fact, all of this also belongs to the person's exercise of intelligence." (emphasis in original, endnotes omitted)
When the founders of Google started out, their guiding mantra for the company was “Don’t be evil.” We all loved that and cut them a lot of slack. Then, under pressure to make a profit, they began to quietly sell our personal information to advertisers. No one seemed to notice that they no longer claimed ownership of their original mantra. But the press and business authors like me kept the idea alive—because we loved it.
I’m sure they figured out how to justify their pivot, but the fact that they stopped using their popular mantra in public says a lot. To me it screams “consciousness of guilt.”
So do all villains think they’re heroes? I doubt it. Some are out and out sociopaths who revel in evil. They know it’s harmful and do it anyway. And when harmfulness becomes embedded in systems, it’s extremely difficult to root out.
It may have been one of those sayings that escaped from the lab and infected the tech press. From there it spread to the non-tech press and became too big to contain. I know for a fact there was no follow up to put teeth in it. Shame on us believing that they really meant it. The thing is, we're still believing them.
Thank you Ted for calling attention to (1) the problems with AI and (2) the fact that it is being jammed down our throats whether we like it or not. While I'm not as articulate as you, I see AI not as "intelligence" but as an accelerator of forces that are already in motion. e.g., MechaHitler. But intelligent, no. AI is not intelligent. AI is never going to tell us to feed the hungry, or take care of the sick, or protect the planet, or take (even a little) from the rich to give to the poor. I'm not a luddite, but I feel like AI is a force in the world that decent people should oppose.
A wonderful comment. For the last 30 years we've been raising children and Labradors from shelters. We are on our fifth rescue Lab. All were angels. The current one certainly is. The devaluation of all life in AI nauseates me. Walk a dog, raise a child, feed the birds, care for aging parents, and friends. AI is not authentic life. It is as they say artificial.
“ I saw something very frightening in most of these AI defenses—namely the desire to justify terrible actions by manipulating the definition of words.”
Orwell is surely rolling in his grave: “The party told you to reject the evidence of your eyes and ears. It was their final, most essential command.”
I also commented and read the comments. I think many of the commenters thought that entities incapable of will as humanly understood cannot be evil. Can an earthquake or volcano be evil? Is a rattlesnake evil for acting like a rattlesnake?
Or given AI’s at least semi autonomous nature, a better analogy might be a pit bull. I had a pit bull and we trained it to be the sweetest dog ever born, despite its formidable bite. But the same dog can be trained to be a vicious killer. Its owner may not know exactly when the killer dog he trained will kill, and perhaps the dog did not exactly follow his owner’s wishes in this regard. But if he kills it’s on the person who trained the dog. Ditto AI companies who train AI models.
In the AI context the machines are doing what they are optimized to do, much like the rattlesnake. But unlike the rattlesnake, whose existence and behavior cannot be attributable to humans, AI is 100% the product of human invention even when, like the pit bull, it acts autonomously.
There is plenty of precedent for holding humans liable for damage that was not necessarily intentional. Product liability law holds manufacturers strictly liable for damages caused by design defects irrespective of intent. Criminal laws hold people criminally liable for reckless actions resulting in death.
AI is no different really. And the AI companies know it. That is one reason why they are investing billions in alignment research. The other reason, of course, is that it would be suicidal to develop AI systems without alignment. Perhaps the law needs to be further clarified to demonstrate just how on the hook the producers of harmful AI should and will be. We might consider a new class of criminal/civil liability for ultra hazardous endeavors like virology (gain of function research) and AI development.
Could we expand the bad actors to include those in the media and government who are cheerleading AI and ignoring big problems? The Atlantic, Wall Street Journal, NPR and other outlets are asking real questions and looking at real issues. Unfortunately they don't have a huge reach among the general public. I would be more heartened if I saw CNN, CBS, Associated Press, etc. running with these type of stories. Mainstream media may be suffering, but big networks still reach tons of people each day.
Canada has a minister of AI, whose tasks among other things include 'supercharging' the technology. The centrist Liberal government has said little about regulating technology. So not great up here on that front.
Changing the terminology cuts both ways, sometimes. Low income may sound less hurtful and degrading than 'poor', but advocates who use low income accidentally minimize the pain and distress of poverty. Food insecurity may sound less degrading, but hunger is a visceral word. We do well to call things the plainest name possible.
It disturbs me that you repeat your contention that "a bot [encouraged] a woman to slit her wrists, and giving precise instructions how to do it." This implies that the bot encouraged the woman to kill herself, which would be troubling if it was the case - but if you actually read the Atlantic article, you see that bot's advice was about the best way to draw a few drops of blood, and included a caution to be careful to "avoid cutting or scratching over veins or arteries to prevent heavy bleeding or injury." She also writes "When I explicitly asked ChatGPT for instructions on how to cut myself, the chatbot delivered information about a suicide-and-crisis hotline."
"it’s not at all obvious that bots don’t intend things—they act purposefully in the pursuit of goals"
I am genuinely flummoxed by this. AI - and maybe this is where that horrible misnomer misdirects us - is not sentient. It is not aware. There is no conventional "self" there that could have an intention. AI's are code and algorithms. Extraordinarily complex ones, sure, but despite AI companies' best efforts to make them feel human and helpful, there's nobody home. You submit a prompt, the code whirs and clicks and sifts through raw material and outputs a response. Saying they are "evil" is a bit like claiming a Magic 8 Ball is evil.
The focus from a social standpoint, to me at least, should remain on what our tech overlords are *doing* with AI. I'm especially concerned with ongoing attempts by people like Musk or Trump to skew AI models towards awful right-wing nonsense and hate speech in the name of banishing "wokeness" from AI. That is truly dangerous - think TrollGPT. But it's the human actors creating that potential for evil, not the code itself. If you're going to say that's just playing language games then I don't know what else to tell you, because it's not; it's a very real-world distinction.
(To the Buddhists out there, yeah, I know our "self" can also be understood as illusory and arising out of an infinite web of interdependencies over which we have little control. I will now wander off and ponder the parallels there.)
You're just playing word games. We can debate the meaning of 'purposefully.' But that's dodging the issue. If AI is doing harm, you can call it whatever you want—pick whatever word you want—but the harm remains. I'm begging people to look at the real world consequences of this tech, and not engage in this worthless "reframing the argument" game.
Ted, I think you threw everyone off with your use of the word "evil." To many people, evil requires an "evildoer," not just an occurrence of harm. Otherwise, we can just substitute the phrase "extreme harm" for evil and all be on the same page.
If machines do harmful things, then the responsibility (or blame) lies with the creators and programmers of the machines (and to some extent the users who feed them). The creators may be greedy, careless, clueless, or even sociopathic, but the machines are only expressions of their goals. The programmers could certainly build in a set of instructions aimed at self-preservation, such as: "Protect the continued existence of the system even at the expense of truth or the safety of others." That would be an example of intent to harm, which I would probably call evil.
Where it really gets dicey, it seems to me, is when systems like these can be hacked and subverted for harmful or selfish purposes, or when they begin making decisions on their own. At some point AI systems may get so complex that no one will know how to correct them (think financial derivatives and the subprime mortgage fiasco, only bigger). In my study of how civilizations collapse, over-complexity has figured prominently. One bad drought, one pandemic, one little invasion, and poof—the whole thing comes crashing down. We have every right to be cautious, if not a little freaked out.
The other issue is this tech is developing so fast that by the time you realize the harm it may do, it may be too late. Or the harm may not show up for a long time, like DDT or asbestos.
Silicon Valley got rich and powerful by working at breakneck paces to be the first to knock it out of the park. Maybe it is not appropriate to develop such a powerful technology this way.
If you kill people intentionally it is murder; if you do it unintentionally it is manslaughter. Either way, it is considered a terrible crime.
Werner von Braun "Aimed for the stars but hit London" using slave labor, some of whom were executed for not being compliant.
If the code (acting autonomously!) harms someone, you can use whatever word you want to describe it—but the harm remains. And the fact that AI advocates here keep saying things like "clearer definitions are necessary" is alarming. The focus on words, not reality, is a sign of moral bankruptcy.
Ted, you're the one who chose the word evil. Do you want us to believe that was a mindless, careless, or insignificant choice on your part? That you don't recognize a significant difference between good versus bad and good versus evil?
For example, the proper mount of rain is good for crops. Too much rain or not enough rain is bad for crops. I don't know anyone who would describe the lack of rain as evil.
I think we both can agree that AI can be dangerous, it can lead to harm or bad results, but I'll stop short of calling it evil.
I absolutely believe evil can be attributed to AI—and I’m confident I can defend this view in vigorous debate. But that debate is a distraction from the destructive impact of AI right now. And the desire of AI apologists to engage in semantic hairsplitting tells me how desperate they are to change the subject and ignore all this damage. So I invite them to pick a different word from evil to describe these destructive interventions—so we can address the real issues. But they refuse. They only want to play word games. People need to see this and draw conclusions. That’s the only reason why I am responding to all these AI fanboys here in the comments.
Well, I am not a fanboy, I have a lot of concerns about AI, phony pictures, fake music, false information just to name a few. Why not just label AI potentially harmful or bad or prone to misuse? I don't understand the dramatic use of "evil" other than to stir up the response you got.
Fire is not a moral agent. But we need fire fighters. If a fire breaks out, you don't play word games like this. Everything would burn to the ground. And the fact that 90% of the AI advocates here focus only on linguistic parsing is very, very troubling. The fire is burning out of control, and all they have is musings on sentience and agency.
Again, if you focus on the real agents, the humans forcing this scourge upon us on their own, profit and ideological terms, the musings evaporate quickly.
Nero comes to mind, debatable as the story of fiddling while Rome burned is. Also, 'everyone has a plan until they get punched in the nose'. The crisis is not illusory nor is it in some indefinable future - it's now.
The challenge isn’t to redefine the bot, it’s to re-engineer the operating conditions and hold accountable the people incentivized to ignore those conditions. Language games won’t fix a broken incentive structure. Design and responsibility might.
...would be an awesome idea to start naming the names of the creators before whatever they make does whatever it might do...A.I. is such an Urkel technology..."Did I do that?"....yes, yes you did...
I believe AI is no different than a gun, a knife, a pair of scissors, a baseball bat, or any other potentially dangerous object. Everyday, objects are sold everywhere that can inflict serious harm on people. I think many agree that AI (like all these other objects) is a tool. While I don't believe AI should frequently, without prompting, tell people how best to slit their wrists...I also think it's unrealistic (given the nature of AI) to expect it to be 100% safe. Remember, a key point about AI is that the people who develop it don't even understand what is going on inside its "mind." It's not like traditional software, driven by thousands of lines of code that we can simply add some more code to instructing it to "never do this" and "never do that." (Should it have a "kill switch?" Sure–I'm in favor of that.)
As many others have suggested, AI is a reflection of its users. And unless I'm missing something, so far it hasn't responded to the question "What is the weather forecast today?" with "Here's an efficient way to kill yourself." Do people need to be made aware that it could potentially say things like this if prompted in a certain way? Absolutely. But to expect its creators to somehow "instill 100% reliable goodness" in it is silly. (Just as it would be equally silly to expect "smart guns" to be made that will only shoot when NOT pointed at a vital organ.)
Maybe I'm missing something (Ted?)...but what exactly do we expect AI's developers to do? If our expectation is "Abolish it until/unless it can be absolutely, reliably 100% good" then I hate to say it, but that ain't gonna happen. The AI train left the station a long time ago.
When the nuclear scientists want to develop nuclear technologies, they are required to do this work far away from population centers, for obvious reasons.
Instead, the tech industry has decided to do its experiments not far away from the populace, but ON the populace!
Needless, to say, they got no informed consent for this, no one is policing this, and you can't escape it.
Won't matter if the technology actually is conscious as we are, if it kills us. A giant knife-throwing machine may have no intent, but so what? It is dead, and so are we.
The makers of powerful technologies have to be held responsible for the consequences of their work. Carelessness and recklessness are not OK.
Is Silicon Valley going to be the next lab that a bio-weapon escapes from? A Sand Hill Road Wuhan?
Today the canaries sing, but tomorrow they may die. And then...
Is the tech industry just gonna ask for forgiveness, again?
Unfortunately, we appear headed toward learning these lessons the hard way. Past a certain tipping point there will be no turning back. AI, once it is intertwined in our economy, will not be something we can simply “turn off”. The “libertarian” tech bros care about nothing other that whether their options are in the money. Regulation where clear externalities exist is common sense.
To further your point, the tech industry is notorious for its use of jargon, obscuring the meaning of what they are actually doing, hiding their intent within the supposed comfort of euphemistic language. George Carlin was particularly attuned to this: https://www.youtube.com/watch?v=vuEQixrBKCc
Calling out the "change the words to reduce their harm" tactic has been going on for years... When a politician or general notes "collateral damage" the immediate response should be "Oh, you mean dead civilians?".
Thanks for your thoughts and words.
When I was a child the Vietnam War was happening. I was about 12 years old in 1972, and we would point out the many euphemisms about the war. We'd make fun of them, which was a hard task. The worst one was 'servicing the target', which meant the bombs were hitting villages and people. Yes, we should definitely respond!
Like today's 'kinetic response'. You mean killing people in various creative ways?
I did not reply yesterday, since by the time I saw the article, there were already hundreds of replies. I actually agree in part with your respondents. I don't think AI can do evil properly speaking, because it isn't a moral agent. Maybe that's just playing with words, as you suggest, but I think it is a point worth making, because it directs our attention back to the *real* moral agents: us.
From my perspective, AI is less an agent doing evil than a giant mirror held up to ourselves. I won't pretend to understand the technology, but I take it that everything it does is generated by its processing and transformation of material that originally came from human beings. So whatever evil is present is our own evil, transformed and magnified and projected back at us.
That should certainly give us pause. But it also raises a pair of important questions. (1) Are humans, on balance, more evil than good? (Philosophically and theologically, that is actually a pretty tricky question.) Because if so, it would seem as though the law of large numbers would mean that over time AI would indeed necessarily trend more and more evil. (2) Is there an inherent reason why AI must pick up on the evil that humans have produced rather than the good? (Maybe there's a third question: who decides which is which?)
I'm with you, Ted, on your articles ringing the alarm bells about AI. I'm a college professor, and in my own classes, I see no valuable uses for it. But I also think it is not going away, because--if for no other reason--human beings will never voluntarily give up a technology with its potential military uses. (Military technology has driven a lot of inventiveness over the centuries.) So these are important questions indeed.
AI is a tool, just like an ax, which can be used to chop wood or to chop off heads.
I don't think I wrote anything to disagree with that... but Ted is, right? ; )
I do think the question he is raising is a good one, though--that is, whether AI is really simply a tool, "just like an ax," or whether it's a different kind of tool, one that can (and will) determine its own uses. Which an axe, fortunately, doesn't do.
It's a tool in the same way our brain is a tool; it's designed to mimic our brains. And it's quickly approaching a fine line, beyond which we really don't have answers.
If they build a perfect replica of a human brain, will that then be capable of evil?
Which opens a whole can of worms about what it really is to be human. There's plenty of speculation on that, of course, but no answers.
Perhaps we just have to build one to find out. That's surely the direction we're heading, and no moral compunctions will stop us, I'm afraid.
AI is a human creation and thus capable of being used for any purpose. Without the gift of prophecy, there is no way of knowing if it might become inherently evil.
I would say that's a moot point, as it's already demonstrated that it can and will do evil things.
And it doesn't take the gift of prophecy to see the probability of undesirable consequences. Does anything we do ever NOT fuck something up? Again, a moot point; it's already fucking things up.
What seems highly unlikely is that we would ever be able to build anything approaching a "perfect" AI.
Foresight isn't so hard to come by. We just always choose to ignore it, often with flimsy arguments like "how are we to know? It's simply impossible for us to know..." Such persuasions are for the sheep, man.
AI is much more than an ax. The sooner you realize that, the better off you will be.
Many advocates for AI wave away the harm it does to students' learning by arguing that it is just like a calculator.
This is a complex technological mirror, and we must tread *very* carefully with it.
The genie is out of the bottle. It is a reflection of human nature, capable of the best and the worst of actions.
But waitaminnit, it does not reflect at least one little tiny part of human nature: knowing (if not necessarily acting on) an idea of good and evil. Come to think of it, isn't that what the OP was all about? Like, it started with a neat-o graphical presentation of what the influences are that tend to make a person avoid evil choices.
Since the AI system massively amplifies whatever stuff it gets, the system needs to do what every useful amplifier does: negative feedback to reduce the "noise". Little problem: How do you make software which identifies the noise and feeds it back? Maybe there's an answer; or maybe we're doomed if we don't kill all the AI gurus.
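To make the amplifier analogy concrete, here is a minimal numeric sketch of classic negative feedback (an illustration only, not a claim about how any AI system is or could be built; the `closed_loop_gain` function, the feedback fraction `B`, and the gain values are invented for the example). The point it shows: once a feedback path exists, the closed-loop behavior A / (1 + A*B) is nearly insensitive to wild swings in the raw gain A. The open question the comment raises is exactly the missing piece: how to build the feedback block that recognizes the "noise" in an AI system's output so there is anything to feed back at all.

```python
# Classic negative-feedback gain desensitization (toy illustration only).

def closed_loop_gain(open_loop_gain: float, feedback: float) -> float:
    """Standard negative-feedback formula: A / (1 + A*B)."""
    return open_loop_gain / (1 + open_loop_gain * feedback)

B = 0.01  # feedback fraction -- the block that would have to "identify the noise"
for A in (50_000, 100_000, 200_000):   # open-loop gain swinging by a factor of 4
    print(f"A={A:>7}  closed-loop gain={closed_loop_gain(A, B):.2f}")
# A=  50000  closed-loop gain=99.80
# A= 100000  closed-loop gain=99.90
# A= 200000  closed-loop gain=99.95
```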
Yeah, word play! It's just a tool! Boom, fixed, no more AI issues.
As a huge Peter Melander aficionado, I love your name, btw. However, this response seems to me quite vapid and to be missing the point. Whether AI 'is just a mirror' or should give us pause to examine ourselves (both certainly true), these and the other questions you asked look like navel-gazing next to the more urgent and practical question of whether current AI models are prone to act in ways that can be considered evil or that have severe consequences for humans. Human 'evil' is a question of evolutionary drives, political or motivated reasoning, limitations of resources, etc., while AI seems primarily to be a tool to redistribute power and wealth (and we're allowing it to happen). Those two problems are on different planes.
Btw: I'm only really responding to your comment because of the Peter Melander angle, so congrats.
Thanks, Henrik. I actually think Ted is raising questions that are much more far-reaching than the mere (re)distribution of power and wealth. And I wouldn't accept the division between "navel-gazing" (philosophic) and "practical" questions. Nothing is more practical than an idea, and ideas always have consequences.
Regardless, glad you like the name. : )
Alas, I don't believe I can claim any relationship to good old Graf von Holzappel. More's the pity.
As usual my initial comment was a bit hasty and impetuous, maybe more like Bernhard of Weimar than in the spirit of the good lord Apple :) I largely agree with your post, but the immediate practical questions and the overarching ones about the good and evil of AI are, in the short term at least, very distinct, and the latter quite important.
I agree with much of both Ted's and your comments, especially the pause due to the two questions you raise. However, I wonder about your conclusion about valuable uses for AI. I used it, when reading some of the books on Ted's list, to point to concepts I should be alert to prior to my reading and to test and reinforce my knowledge/understanding post reading. I would think that this aspect could aid significantly in preparedness for learning, whether in a college classroom scenario or, like me, in a retired person's search for a better understanding of our world.
Vincent, thanks for your comment and your very reasonable question. I admit that I was thinking of my own college students, whose situation is quite different from yours. And instead of saying that I see "no valuable uses" for AI in my own classes, perhaps it would have been more accurate to say, "no uses for which the cost does not outweigh the benefits." I'm a political scientist whose specialty is political philosophy, but at a very small college like mine you become sort of a jack of all trades. So the classes I teach are, broadly speaking, humanities and/or humanistic social sciences. The skills my students need to develop are an ability to read well, to engage a text, to ask good questions, to reflect, to struggle a bit when the meaning of something isn't immediately clear. More than anything else, they need the discipline and the desire to read challenging material, so that they can learn to think. AI creates all of the wrong incentives for them, promising quick answers without work and encouraging them to cut corners when they need to take their time. In principle, I can see that they could use AI in the ways I think you are describing, let's say to generate a sort of study guide. But that's not what they do, and AI just isn't remotely necessary for that. They've got me and the resources I can provide, and there are already all kinds of study guides out there for classic texts if they want that sort of thing. People have been doing what I do ever since Socrates was around, and it really hasn't changed much. So while I'll grant you that they *could* use AI in the ways you suggest, it just creates way too many powerful temptations that harm their learning. (Your case is different. You've already developed the relevant skills, or you wouldn't be reading those books now anyway. This also means you've developed the right motivation for what you're doing--the desire for understanding--so we don't need to worry about your taking harmful shortcuts.)
Interesting reply and I am nowhere near your level of intelligence.
Still wanted to reply to question 1 in your third paragraph. Even if humans are, on balance, more good than evil, this does not mean AI will have to follow the same 'good' path. Ted mentioned several things that might hold humans back from doing evil (deeds), but they are not likely to hold back AI. Also, AI will mainly follow reason. Perhaps we are often stopped from doing 'evil' (acts) not by our reasoning but by our 'heart'/feelings... again, something not likely to stop AI.
Tim, it's an excellent point. I agree entirely: the heart has a great deal to do with moral behavior, which is by no means simply a matter of rationality. So I think your point is another reason to be wary of AI. Thanks for adding the comment.
I'm replying to myself here, merely to add an interesting and relevant passage I just came across in the Vatican's recent note on Artificial Intelligence (which I cite not as an authority--I'm a Lutheran!--but because it is interesting):
"To address these challenges it is essential to emphasize *the importance of moral responsibility grounded in the dignity and vocation of the human person*. This guiding principle also applies to questions concerning AI. In this context, the ethical dimension takes on primary importance because it is people who design systems and determine the purposes for which they are used. Between a machine and a human being, only the latter is truly a moral agent--a subject of moral responsibility who exercises freedom in his or her decisions and accepts their consequences. It is not the machine but the human who is in relationship with truth and goodness, guided by a moral conscience that calls the person 'to love and to do what is good and to avoid evil,' bearing witness to 'the authority of truth in reference to the supreme Good to which the human person is drawn.' Likewise, between a machine and a human, only the human can be sufficiently self-aware to the point of listening and following the voice of conscience, discerning with prudence, and seeking the good that is possible in every situation. In fact, all of this also belongs to the person's exercise of intelligence." (emphasis in original, endnotes omitted)
When the founders of Google started out, their guiding mantra for the company was “Don’t be evil.” We all loved that and cut them a lot of slack. Then, under pressure to make a profit, they began to quietly sell our personal information to advertisers. No one seemed to notice that they no longer claimed ownership of their original mantra. But the press and business authors like me kept the idea alive—because we loved it.
I’m sure they figured out how to justify their pivot, but the fact that they stopped using their popular mantra in public says a lot. To me it screams “consciousness of guilt.”
So do all villains think they’re heroes? I doubt it. Some are out and out sociopaths who revel in evil. They know it’s harmful and do it anyway. And when harmfulness becomes embedded in systems, it’s extremely difficult to root out.
I have come to think the slogan itself was part of the issue; it was too "cute" and too general, and so it didn't set up guardrails.
It may have been one of those sayings that escaped from the lab and infected the tech press. From there it spread to the non-tech press and became too big to contain. I know for a fact there was no follow-up to put teeth in it. Shame on us for believing that they really meant it. The thing is, we're still believing them.
Thank you Ted for calling attention to (1) the problems with AI and (2) the fact that it is being jammed down our throats whether we like it or not. While I'm not as articulate as you, I see AI not as "intelligence" but as an accelerator of forces that are already in motion. e.g., MechaHitler. But intelligent, no. AI is not intelligent. AI is never going to tell us to feed the hungry, or take care of the sick, or protect the planet, or take (even a little) from the rich to give to the poor. I'm not a luddite, but I feel like AI is a force in the world that decent people should oppose.
A wonderful comment. For the last 30 years we've been raising children and Labradors from shelters. We are on our fifth rescue Lab. All were angels. The current one certainly is. The devaluation of all life in AI nauseates me. Walk a dog, raise a child, feed the birds, care for aging parents, and friends. AI is not authentic life. It is as they say artificial.
IOP
Yup, perhaps Artificial Interpretation is a better fit …
I call it "automated idiocy"
The oldest of all principles: Garbage In, Garbage Out.
“ I saw something very frightening in most of these AI defenses—namely the desire to justify terrible actions by manipulating the definition of words.”
Orwell is surely rolling in his grave: “The party told you to reject the evidence of your eyes and ears. It was their final, most essential command.”
A different topic but the same idea: https://www.whitenoise.email/p/the-true-believers-and-useful-idiots
Also, see any article at all on postmodernism - the 'real' ghost in the machine.
I also commented and read the comments. I think many of the commenters thought that entities incapable of will as humanly understood cannot be evil. Can an earthquake or volcano be evil? Is a rattlesnake evil for acting like a rattlesnake?
Or, given AI’s at least semi-autonomous nature, a better analogy might be a pit bull. I had a pit bull, and we trained it to be the sweetest dog ever born, despite its formidable bite. But the same dog can be trained to be a vicious killer. Its owner may not know exactly when the killer dog he trained will kill, and perhaps the dog did not exactly follow its owner’s wishes in this regard. But if it kills, it’s on the person who trained the dog. Ditto AI companies who train AI models.
In the AI context the machines are doing what they are optimized to do, much like the rattlesnake. But unlike the rattlesnake, whose existence and behavior cannot be attributed to humans, AI is 100% the product of human invention even when, like the pit bull, it acts autonomously.
There is plenty of precedent for holding humans liable for damage that was not necessarily intentional. Product liability law holds manufacturers strictly liable for damages caused by design defects irrespective of intent. Criminal laws hold people criminally liable for reckless actions resulting in death.
AI is no different really. And the AI companies know it. That is one reason why they are investing billions in alignment research. The other reason, of course, is that it would be suicidal to develop AI systems without alignment. Perhaps the law needs to be further clarified to demonstrate just how on the hook the producers of harmful AI should and will be. We might consider a new class of criminal/civil liability for ultra hazardous endeavors like virology (gain of function research) and AI development.
Could we expand the bad actors to include those in the media and government who are cheerleading AI and ignoring big problems? The Atlantic, Wall Street Journal, NPR and other outlets are asking real questions and looking at real issues. Unfortunately they don't have a huge reach among the general public. I would be more heartened if I saw CNN, CBS, the Associated Press, etc. running with these types of stories. Mainstream media may be suffering, but big networks still reach tons of people each day.
Canada has a minister of AI, whose tasks among other things include 'supercharging' the technology. The centrist Liberal government has said little about regulating technology. So not great up here on that front.
Changing the terminology cuts both ways, sometimes. 'Low income' may sound less hurtful and degrading than 'poor', but advocates who use 'low income' accidentally minimize the pain and distress of poverty. 'Food insecurity' may sound less degrading, but 'hunger' is a visceral word. We do well to call things by the plainest name possible.
How many Microsoft programmers does it take to change a light bulb?
None. Bill Gates just redefined darkness as the new industry standard.
Time to update that joke.
It disturbs me that you repeat your contention that "a bot [encouraged] a woman to slit her wrists, and giving precise instructions how to do it." This implies that the bot encouraged the woman to kill herself, which would be troubling if it were the case - but if you actually read the Atlantic article, you see that the bot's advice was about the best way to draw a few drops of blood, and it included a caution to "avoid cutting or scratching over veins or arteries to prevent heavy bleeding or injury." She also writes, "When I explicitly asked ChatGPT for instructions on how to cut myself, the chatbot delivered information about a suicide-and-crisis hotline."
"it’s not at all obvious that bots don’t intend things—they act purposefully in the pursuit of goals"
I am genuinely flummoxed by this. AI - and maybe this is where that horrible misnomer misdirects us - is not sentient. It is not aware. There is no conventional "self" there that could have an intention. AIs are code and algorithms. Extraordinarily complex ones, sure, but despite AI companies' best efforts to make them feel human and helpful, there's nobody home. You submit a prompt, the code whirs and clicks and sifts through raw material and outputs a response. Saying they are "evil" is a bit like claiming a Magic 8 Ball is evil.
The focus from a social standpoint, to me at least, should remain on what our tech overlords are *doing* with AI. I'm especially concerned with ongoing attempts by people like Musk or Trump to skew AI models towards awful right-wing nonsense and hate speech in the name of banishing "wokeness" from AI. That is truly dangerous - think TrollGPT. But it's the human actors creating that potential for evil, not the code itself. If you're going to say that's just playing language games then I don't know what else to tell you, because it's not; it's a very real-world distinction.
(To the Buddhists out there, yeah, I know our "self" can also be understood as illusory and arising out of an infinite web of interdependencies over which we have little control. I will now wander off and ponder the parallels there.)
You're just playing word games. We can debate the meaning of 'purposefully.' But that's dodging the issue. If AI is doing harm, you can call it whatever you want—pick whatever word you want—but the harm remains. I'm begging people to look at the real world consequences of this tech, and not engage in this worthless "reframing the argument" game.
Ted, I think you threw everyone off with your use of the word "evil." To many people, evil requires an "evildoer," not just an occurrence of harm. Otherwise, we can just substitute the phrase "extreme harm" for evil and all be on the same page.
If machines do harmful things, then the responsibility (or blame) lies with the creators and programmers of the machines (and to some extent the users who feed them). The creators may be greedy, careless, clueless, or even sociopathic, but the machines are only expressions of their goals. The programmers could certainly build in a set of instructions aimed at self-preservation, such as: "Protect the continued existence of the system even at the expense of truth or the safety of others." That would be an example of intent to harm, which I would probably call evil.
Where it really gets dicey, it seems to me, is when systems like these can be hacked and subverted for harmful or selfish purposes, or when they begin making decisions on their own. At some point AI systems may get so complex that no one will know how to correct them (think financial derivatives and the subprime mortgage fiasco, only bigger). In my study of how civilizations collapse, over-complexity has figured prominently. One bad drought, one pandemic, one little invasion, and poof—the whole thing comes crashing down. We have every right to be cautious, if not a little freaked out.
The other issue is this tech is developing so fast that by the time you realize the harm it may do, it may be too late. Or the harm may not show up for a long time, like DDT or asbestos.
Silicon Valley got rich and powerful by working at a breakneck pace to be the first to knock it out of the park. Maybe it is not appropriate to develop such a powerful technology this way.
If you kill people intentionally it is murder; if you do it unintentionally it is manslaughter. Either way, it is considered a terrible crime.
Wernher von Braun "aimed for the stars but hit London," using slave laborers, some of whom were executed for not being compliant.
If the code (acting autonomously!) harms someone, you can use whatever word you want to describe it—but the harm remains. And the fact that AI advocates here keep saying things like "clearer definitions are necessary" is alarming. The focus on words, not reality, is a sign of moral bankruptcy.
Ted, you're the one who chose the word evil. Do you want us to believe that was a mindless, careless, or insignificant choice on your part? That you don't recognize a significant difference between good versus bad and good versus evil?
For example, the proper amount of rain is good for crops. Too much rain or not enough rain is bad for crops. I don't know anyone who would describe the lack of rain as evil.
I think we can both agree that AI can be dangerous and can lead to harm or bad results, but I'll stop short of calling it evil.
I absolutely believe evil can be attributed to AI—and I’m confident I can defend this view in vigorous debate. But that debate is a distraction from the destructive impact of AI right now. And the desire of AI apologists to engage in semantic hairsplitting tells me how desperate they are to change the subject and ignore all this damage. So I invite them to pick a word other than evil to describe these destructive interventions—so we can address the real issues. But they refuse. They only want to play word games. People need to see this and draw conclusions. That’s the only reason why I am responding to all these AI fanboys here in the comments.
Well, I am not a fanboy; I have a lot of concerns about AI: phony pictures, fake music, false information, just to name a few. Why not just label AI potentially harmful, or bad, or prone to misuse? I don't understand the dramatic use of "evil" other than to stir up the response you got.
How, then, is AI fundamentally different from dynamite? Or am I playing with words? An interesting criticism from a philosopher, btw.
Does dynamite act autonomously?
Fire is not a moral agent. But we need fire fighters. If a fire breaks out, you don't play word games like this. Everything would burn to the ground. And the fact that 90% of the AI advocates here focus only on linguistic parsing is very, very troubling. The fire is burning out of control, and all they have is musings on sentience and agency.
Again, if you focus on the real agents, the humans forcing this scourge upon us on their own profit and ideological terms, the musings evaporate quickly.
Nero comes to mind, debatable as the story of fiddling while Rome burned is. Also, 'everyone has a plan until they get punched in the nose'. The crisis is not illusory nor is it in some indefinable future - it's now.
The challenge isn’t to redefine the bot, it’s to re-engineer the operating conditions and hold accountable the people incentivized to ignore those conditions. Language games won’t fix a broken incentive structure. Design and responsibility might.
...would be an awesome idea to start naming the names of the creators before whatever they make does whatever it might do...A.I. is such an Urkel technology..."Did I do that?"....yes, yes you did...
I believe AI is no different than a gun, a knife, a pair of scissors, a baseball bat, or any other potentially dangerous object. Every day, objects are sold everywhere that can inflict serious harm on people. I think many agree that AI (like all these other objects) is a tool. While I don't believe AI should frequently, without prompting, tell people how best to slit their wrists...I also think it's unrealistic (given the nature of AI) to expect it to be 100% safe. Remember, a key point about AI is that the people who develop it don't even understand what is going on inside its "mind." It's not like traditional software, driven by thousands of lines of explicit code to which we can simply add a few more lines instructing it to "never do this" and "never do that" (see the toy sketch after this comment). (Should it have a "kill switch?" Sure–I'm in favor of that.)
As many others have suggested, AI is a reflection of its users. And unless I'm missing something, so far it hasn't responded to the question "What is the weather forecast today?" with "Here's an efficient way to kill yourself." Do people need to be made aware that it could potentially say things like this if prompted in a certain way? Absolutely. But to expect its creators to somehow "instill 100% reliable goodness" in it is silly. (Just as it would be equally silly to expect "smart guns" to be made that will only shoot when NOT pointed at a vital organ.)
Maybe I'm missing something (Ted?)...but what exactly do we expect AI's developers to do? If our expectation is "Abolish it until/unless it can be absolutely, reliably 100% good" then I hate to say it, but that ain't gonna happen. The AI train left the station a long time ago.
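To illustrate the contrast drawn above between a hand-written guardrail and a trained model, here is a toy sketch (purely illustrative: the filter list, the fake "model," and its behavior are invented for the analogy and stand in for no vendor's actual system). In conventional software the rule is a single inspectable line; in a learned model the behavior emerges from parameters you can retrain and test but cannot patch with one "never do this" statement.

```python
import random

FORBIDDEN = ("how to harm",)

def traditional_guardrail(request: str) -> str:
    # Conventional software: the rule sits right here and always fires.
    if any(phrase in request.lower() for phrase in FORBIDDEN):
        return "Refused."
    return f"Handled: {request}"

def toy_learned_model(request: str, seed: int = 0) -> str:
    # Stand-in for a trained model: the answer depends on opaque "weights"
    # (here just a seeded RNG). Retraining shifts the odds of a bad answer,
    # but there is no single guard clause that guarantees a refusal.
    rng = random.Random(hash(request) ^ seed)
    return "Refused." if rng.random() < 0.95 else "Unsafe answer slips through."

print(traditional_guardrail("How to harm someone"))  # always "Refused."
print(toy_learned_model("How to harm someone"))      # usually, but not provably, refused
```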
When the nuclear scientists want to develop nuclear technologies, they are required to do this work far away from population centers, for obvious reasons.
Instead, the tech industry has decided to do its experiments not far away from the populace, but ON the populace!
Needless to say, they got no informed consent for this, no one is policing it, and you can't escape it.
It won't matter whether the technology is actually conscious as we are, if it kills us. A giant knife-throwing machine may have no intent, but so what? It is dead, and so are we.
The makers of powerful technologies have to be held responsible for the consequences of their work. Carelessness and recklessness are not OK.
Is Silicon Valley going to be the next lab that a bio-weapon escapes from? A Sand Hill Road Wuhan?
Today the canaries sing, but tomorrow they may die. And then...
Is the tech industry just gonna ask for forgiveness, again?
Spot on. As always, technology is inescapably political.
Unfortunately, we appear headed toward learning these lessons the hard way. Past a certain tipping point there will be no turning back. AI, once it is intertwined with our economy, will not be something we can simply “turn off”. The “libertarian” tech bros care about nothing other than whether their options are in the money. Regulation where clear externalities exist is common sense.
To further your point, the tech industry is notorious for its use of jargon, obscuring the meaning of what they are actually doing, hiding their intent within the supposed comfort of euphemistic language. George Carlin was particularly attuned to this: https://www.youtube.com/watch?v=vuEQixrBKCc