Calling out the "change the words to reduce their harm" tactic has been going on for years... When a politician or general mentions "collateral damage," the immediate response should be "Oh, you mean dead civilians?"
Thanks for your thoughts and words.
"Gender Affirming Care" means mutilating genitalia of children nowadays. So let's find a good inverting euphemism for harmful AI advice. How about "proposals beneficial by unconventional standards"?
When I was a child, the Vietnam War was happening. I was about 12 years old in 1972, and we would point out the many euphemisms about the war. We'd make fun of them, which was a hard task. The worst one was 'servicing the target', which meant the bombs were hitting villages and people. Yes, we should definitely respond!
I did not reply yesterday, since by the time I saw the article, there were already hundreds of replies. I actually agree in part with your respondents. I don't think AI can do evil properly speaking, because it isn't a moral agent. Maybe that's just playing with words, as you suggest, but I think it is a point worth making, because it directs our attention back to the *real* moral agents: us.
From my perspective, AI is less an agent doing evil than a giant mirror held up to ourselves. I won't pretend to understand the technology, but I take it that everything it does is generated by its processing and transformation of material that originally came from human beings. So whatever evil is present is our own evil, transformed and magnified and projected back at us.
That should certainly give us pause. But it also raises a pair of important questions. (1) Are humans, on balance, more evil than good? (Philosophically and theologically, that is actually a pretty tricky question.) Because if so, it would seem as though the law of large numbers would mean that over time AI would indeed necessarily trend more and more evil. (2) Is there an inherent reason why AI must pick up on the evil that humans have produced rather than the good? (Maybe there's a third question: who decides which is which?)
I'm with you, Ted, on your articles ringing the alarm bells about AI. I'm a college professor, and in my own classes, I see no valuable uses for it. But I also think it is not going away, because--if for no other reason--human beings will never voluntarily give up a technology with its potential military uses. (Military technology has driven a lot of inventiveness over the centuries.) So these are important questions indeed.
AI is a tool, just like an ax, which can be used to chop wood or to chop off heads.
I don't think I wrote anything to disagree with that... but Ted is, right? ; )
I do think the question he is raising is a good one, though--that is, whether AI is really simply a tool, "just like an ax," or whether it's a different kind of tool, one that can (and will) determine its own uses. Which an axe, fortunately, doesn't do.
It's a tool in the same way our brain is a tool; it's designed to mimic our brains. And it's quickly approaching a fine line, beyond which we really don't have answers.
If they build a perfect replica of a human brain, will that then be capable of evil?
Which opens a whole can of worms about what it really is to be human. Which, of course, there's plenty of speculation on, but no answers.
Perhaps we just have to build one to find out. That's surely the direction we're heading, and no moral compunctions will stop us, I'm afraid.
AI is a human creation and thus capable of being used for any purpose. Without the gift of prophecy, there is no way of knowing if it might become inherently evil.
I would say that's a moot point, as it's already demonstrated that it can and will do evil things.
And it doesn't take the gift of prophecy to see the probability of undesirable consequences. Does anything we do ever NOT fuck something up? Again, a moot point; it's already fucking things up.
What seems highly unlikely is that we would ever be able to build anything approaching a "perfect" AI.
Foresight isn't so hard to come by. We just always choose to ignore it, often with flimsy arguments like "How are we to know? It's simply impossible for us to know..." Such persuasions are for the sheep, man.
AI is much more than an ax. The sooner you realize that, the better off you will be.
Many advocates for AI wave away the harm it does to students' learning by arguing it is just like a calculator.
This is a complex technological mirror, and we must tread *very* carefully with it.
The genie is out of the lamp. It is a reflection of human nature capable of the best and worst of actions.
As a huge Peter Melander aficionado, I love your name, btw. However, this response seems to me quite vapid and to miss the point. Whether AI 'is just a mirror' or should give us pause to examine ourselves (both certainly true), or any of the other questions you asked, comes across as navel-gazing compared to the more urgent and practical questions of whether current AI models are prone to act in ways that can be considered evil or have severe consequences for humans. Human 'evil' is a question of evolutionary drives, political or motivated reasoning, limitations of resources, etc., while AI seems primarily to be a tool to redistribute power and wealth (and we're allowing it to happen). Those two problems are on different planes.
Btw: I'm only really responding to your comment because of the Peter Melander angle, so congrats.
Thanks, Henrik. I actually think Ted is raising questions that are much more far-reaching than the mere (re)distribution of power and wealth. And I wouldn't accept the division between "navel-gazing" (philosophic) and "practical" questions. Nothing is more practical than an idea, and ideas always have consequences.
Regardless, glad you like the name. : )
Alas, I don't believe I can claim any relationship to good old Graf von Holzappel. More's the pity.
As usual, my initial comment was a bit hasty and impetuous, more in the manner of a Bernhard of Weimar than in the spirit of the good lord Apple :) I largely agree with your post, but the immediate practical questions and the overarching ones about the good and evil of AI are, in the short term at least, very distinct, and the latter quite important.
When the founders of Google started out, their guiding mantra for the company was “Don’t be evil.” We all loved that and cut them a lot of slack. Then, under pressure to make a profit, they began to quietly sell our personal information to advertisers. No one seemed to notice that they no longer claimed ownership of their original mantra. But the press and business authors like me kept the idea alive—because we loved it.
I’m sure they figured out how to justify their pivot, but the fact that they stopped using their popular mantra in public says a lot. To me it screams “consciousness of guilt.”
So do all villains think they’re heroes? I doubt it. Some are out and out sociopaths who revel in evil. They know it’s harmful and do it anyway. And when harmfulness becomes embedded in systems, it’s extremely difficult to root out.
“I saw something very frightening in most of these AI defenses—namely the desire to justify terrible actions by manipulating the definition of words.”
Orwell is surely rolling in his grave: “The party told you to reject the evidence of your eyes and ears. It was their final, most essential command.”
A different topic but the same idea: https://www.whitenoise.email/p/the-true-believers-and-useful-idiots
Thank you Ted for calling attention to (1) the problems with AI and (2) the fact that it is being jammed down our throats whether we like it or not. While I'm not as articulate as you, I see AI not as "intelligence" but as an accelerator of forces that are already in motion (e.g., MechaHitler). But intelligent, no. AI is not intelligent. AI is never going to tell us to feed the hungry, or take care of the sick, or protect the planet, or take (even a little) from the rich to give to the poor. I'm not a Luddite, but I feel like AI is a force in the world that decent people should oppose.
Yup, perhaps Artificial Interpretation is a better fit …
i call it "automated idiocy"
Could we expand the bad actors to include those in the media and government who are cheerleading AI and ignoring big problems? The Atlantic, Wall Street Journal, NPR and other outlets are asking real questions and looking at real issues. Unfortunately they don't have a huge reach among the general public. I would be more heartened if I saw CNN, CBS, Associated Press, etc. running with these types of stories. Mainstream media may be suffering, but big networks still reach tons of people each day.
Canada has a minister of AI, whose tasks among other things include 'supercharging' the technology. The centrist Liberal government has said little about regulating technology. So not great up here on that front.
Changing the terminology cuts both ways, sometimes. 'Low income' may sound less hurtful and degrading than 'poor', but advocates who use 'low income' accidentally minimize the pain and distress of poverty. 'Food insecurity' may sound less degrading, but hunger is a visceral word. We do well to call things by the plainest name possible.
I also commented and read the comments. I think many of the commenters thought that entities incapable of will as humanly understood cannot be evil. Can an earthquake or volcano be evil? Is a rattlesnake evil for acting like a rattlesnake?
Or, given AI’s at least semi-autonomous nature, a better analogy might be a pit bull. I had a pit bull, and we trained it to be the sweetest dog ever born, despite its formidable bite. But the same dog can be trained to be a vicious killer. Its owner may not know exactly when the killer dog he trained will kill, and perhaps the dog did not exactly follow its owner’s wishes in this regard. But if it kills, it’s on the person who trained the dog. Ditto AI companies who train AI models.
In the AI context the machines are doing what they are optimized to do, much like the rattlesnake. But unlike the rattlesnake, whose existence and behavior cannot be attributed to humans, AI is 100% the product of human invention even when, like the pit bull, it acts autonomously.
There is plenty of precedent for holding humans liable for damage that was not necessarily intentional. Product liability law holds manufacturers strictly liable for damages caused by design defects irrespective of intent. Criminal laws hold people criminally liable for reckless actions resulting in death.
AI is no different really. And the AI companies know it. That is one reason why they are investing billions in alignment research. The other reason, of course, is that it would be suicidal to develop AI systems without alignment. Perhaps the law needs to be further clarified to demonstrate just how on the hook the producers of harmful AI should and will be. We might consider a new class of criminal/civil liability for ultrahazardous endeavors like virology (gain-of-function research) and AI development.
How many Microsoft programmers are needed to change a light bulb?
None, Bill Gates just redefined darkness as the new industry standard.
Time to update that joke.
"it’s not at all obvious that bots don’t intend things—they act purposefully in the pursuit of goals"
I am genuinely flummoxed by this. AI - and maybe this is where that horrible misnomer misdirects us - is not sentient. It is not aware. There is no conventional "self" there that could have an intention. AIs are code and algorithms. Extraordinarily complex ones, sure, but despite AI companies' best efforts to make them feel human and helpful, there's nobody home. You submit a prompt, the code whirs and clicks and sifts through raw material and outputs a response. Saying they are "evil" is a bit like claiming a Magic 8 Ball is evil.
The focus from a social standpoint, to me at least, should remain on what our tech overlords are *doing* with AI. I'm especially concerned with ongoing attempts by people like Musk or Trump to skew AI models towards awful right-wing nonsense and hate speech in the name of banishing "wokeness" from AI. That is truly dangerous - think TrollGPT. But it's the human actors creating that potential for evil, not the code itself. If you're going to say that's just playing language games then I don't know what else to tell you, because it's not; it's a very real-world distinction.
(To the Buddhists out there, yeah, I know our "self" can also be understood as illusory and arising out of an infinite web of interdependencies over which we have little control. I will now wander off and ponder the parallels there.)
You're just playing word games. We can debate the meaning of 'purposefully.' But that's dodging the issue. If AI is doing harm, you can call it whatever you want—pick whatever word you want—but the harm remains. I'm begging people to look at the real world consequences of this tech, and not engage in this worthless "reframing the argument" game.
Ted, I think you threw everyone off with your use of the word "evil." To many people, evil requires an "evildoer," not just an occurrence of harm. Otherwise, we can just substitute the phrase "extreme harm" for evil and all be on the same page.
If machines do harmful things, then the responsibility (or blame) lies with the creators and programmers of the machines (and to some extent the users who feed them). The creators may be greedy, careless, clueless, or even sociopathic, but the machines are only expressions of their goals. The programmers could certainly build in a set of instructions aimed at self-preservation, such as: "Protect the continued existence of the system even at the expense of truth or the safety of others." That would be an example of intent to harm, which I would probably call evil.
Where it really gets dicey, it seems to me, is when systems like these can be hacked and subverted for harmful or selfish purposes, or when they begin making decisions on their own. At some point AI systems may get so complex that no one will know how to correct them (think financial derivatives and the subprime mortgage fiasco, only bigger). In my study of how civilizations collapse, over-complexity has figured prominently. One bad drought, one pandemic, one little invasion, and poof—the whole thing comes crashing down. We have every right to be cautious, if not a little freaked out.
The other issue is that this tech is developing so fast that by the time you realize the harm it may do, it may be too late. Or the harm may not show up for a long time, like DDT or asbestos.
Silicon Valley got rich and powerful by working at a breakneck pace to be the first to knock it out of the park. Maybe it is not appropriate to develop such a powerful technology this way.
If you kill people intentionally it is murder; if you do it unintentionally it is manslaughter. Either way, it is considered a terrible crime.
Wernher von Braun "aimed for the stars but hit London," using slave labor, some of whom were executed for not being compliant.
Ted, @Marty Neumeier's reply says what I was at least attempting to say, but clearly didn't get across.
"But it's the human actors creating that potential for evil, not the code itself. If you're going to say that's just playing language games then I don't know what else to tell you, because it's not; it's a very real-world distinction."
As Joe Santos said, how can code itself be evil? It's not semantics, as code is created by humans.
It feels like - IT/programming folks excepted - clearer definitions and more understanding are necessary. I have very little understanding of how AI works, and neither do most of us. Emotions are kicking in at its mention. And I believe the result is that some comments have been misunderstood.
If the code (acting autonomously!) harms someone, you can use whatever word you want to describe it—but the harm remains. And the fact that AI advocates here keep saying things like "clearer definitions are necessary" is alarming. The focus on words, not reality, is a sign of moral bankruptcy.
How, then, is AI fundamentally different from dynamite? Or am I playing with words? An interesting criticism from a philosopher, btw.
Does dynamite act autonomously?
But what AI is trained on is designed entirely by humans. AIs are not moral agents. They're amoral, and like every other human invention, can be used for truly evil ends.
A thing by definition has no conscience. If ethics aren't part of its programming (and AFAIK, they aren't), then AI is not the responsible party.
But we are going around in circles. I think maybe I'll bow out.
Fire is not a moral agent. But we need fire fighters. If a fire breaks out, you don't play word games like this. Everything would burn to the ground. And the fact that 90% of the AI advocates here focus only on linguistic parsing is very, very troubling. The fire is burning out of control, and all they have is musings on sentience and agency.
Again, if you focus on the real agents, the humans forcing this scourge upon us on their own profit-driven and ideological terms, the musings evaporate quickly.
@Marty Neumeier's reply is better than any of mine, so maybe you could check it out?
...would be an awesome idea to start naming the names of the creators before whatever they make does whatever it might do...A.I. is such an Urkel technology..."Did I do that?"....yes, yes you did...
I believe AI is no different from a gun, a knife, a pair of scissors, a baseball bat, or any other potentially dangerous object. Every day, objects are sold everywhere that can inflict serious harm on people. I think many agree that AI (like all these other objects) is a tool. While I don't believe AI should frequently, without prompting, tell people how best to slit their wrists...I also think it's unrealistic (given the nature of AI) to expect it to be 100% safe. Remember, a key point about AI is that the people who develop it don't even understand what is going on inside its "mind." It's not like traditional software, driven by thousands of lines of code, to which we can simply add some more code instructing it to "never do this" and "never do that." (Should it have a "kill switch?" Sure–I'm in favor of that.)
As many others have suggested, AI is a reflection of its users. And unless I'm missing something, so far it hasn't responded to the question "What is the weather forecast today?" with "Here's an efficient way to kill yourself." Do people need to be made aware that it could potentially say things like this if prompted in a certain way? Absolutely. But to expect its creators to somehow "instill 100% reliable goodness" in it is silly. (Just as it would be silly to expect "smart guns" to be made that will only shoot when NOT pointed at a vital organ.)
Maybe I'm missing something (Ted?)...but what exactly do we expect AI's developers to do? If our expectation is "Abolish it until/unless it can be absolutely, reliably 100% good" then I hate to say it, but that ain't gonna happen. The AI train left the station a long time ago.
It disturbs me that you repeat your contention that "a bot [encouraged] a woman to slit her wrists, and giving precise instructions how to do it." This implies that the bot encouraged the woman to kill herself, which would be troubling if it were the case - but if you actually read the Atlantic article, you see that the bot's advice was about the best way to draw a few drops of blood, and included a caution to be careful to "avoid cutting or scratching over veins or arteries to prevent heavy bleeding or injury." She also writes, "When I explicitly asked ChatGPT for instructions on how to cut myself, the chatbot delivered information about a suicide-and-crisis hotline."
When nuclear scientists want to develop nuclear technologies, they are required to do this work far away from population centers, for obvious reasons.
Instead, the tech industry has decided to do its experiments not far away from the populace, but ON the populace!
Needless to say, they got no informed consent for this, no one is policing this, and you can't escape it.
It won't matter whether the technology is actually conscious as we are, if it kills us. A giant knife-throwing machine may have no intent, but so what? It is dead, and so are we.
The makers of powerful technologies have to be held responsible for the consequences of their work. Carelessness and recklessness are not OK.
Is Silicon Valley going to be the next lab that a bio-weapon escapes from? A Sand Hill Road Wuhan?
Today the canaries sing, but tomorrow they may die. And then...
Is the tech industry just gonna ask for forgiveness, again?
Spot on. As always, technology is inescapably political.
Unfortunately, we appear headed toward learning these lessons the hard way. Past a certain tipping point there will be no turning back. AI, once it is intertwined in our economy, will not be something we can simply “turn off”. The “libertarian” tech bros care about nothing other than whether their options are in the money. Regulation where clear externalities exist is common sense.
To further your point, the tech industry is notorious for its use of jargon, obscuring the meaning of what it is actually doing, hiding its intent within the supposed comfort of euphemistic language. George Carlin was particularly attuned to this: https://www.youtube.com/watch?v=vuEQixrBKCc
The challenge isn’t to redefine the bot, it’s to re-engineer the operating conditions and hold accountable the people incentivized to ignore those conditions. Language games won’t fix a broken incentive structure. Design and responsibility might.