
It's worth remembering that these things are still only offering an illusion of intelligence. They're only a more complicated version of autocomplete, and these responses are merely echoes of something in their training data.
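To make the "complicated autocomplete" point concrete, here's a toy sketch in Python (purely illustrative; real systems use enormous neural networks rather than word counts, but the generate-the-next-word loop is the same in spirit):

```python
# A toy "autocomplete" language model, purely for illustration.
# It counts which word follows which in a tiny corpus, then generates
# text by repeatedly picking the most frequent next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# "Training": tally the word that follows each word.
next_words = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev][nxt] += 1

def complete(prompt_word, length=6):
    """Greedily append the most common next word, over and over."""
    out = [prompt_word]
    for _ in range(length):
        followers = next_words.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # every word it emits is an echo of the corpus
```

Scale that idea up by many orders of magnitude and you get something that sounds fluent, yet everything it says is stitched together from patterns in whatever it was fed.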

However, it's also worth remembering that these things don't need to be sentient to cause real damage. Bing and Bard are both capable of serving up misinformation, and doing so in a convincing way—that's a real concern.

I saw something yesterday where a person had asked ChatGPT (the most well-behaved of the AIs) to say the name of HP Lovecraft's dog. It couldn't, for reasons that will be immediately obvious if you know about ChatGPT's content policy and HP Lovecraft's policy on naming dogs.

However, instead of saying "I can't answer that" or "I don't know", ChatGPT answered by saying that Lovecraft didn't have a dog. A subtle difference, but it hints at a huge problem: these things can lie, and will seemingly do so when it's convenient. That means we could soon have a world where search engines are regularly feeding people convincing false information. How's that going to affect the world?


I believe ChatGPT is correct that H.P. Lovecraft didn’t have a dog. He had a cat.

Unless, of course, Google has been programmed to “petwash” dog owners by defaming cat owners.


I think your take is the most likely explanation.

Given the amount of misinformation and disinformation that already exists online, the reckless hoovering/scraping of available information of dubious quality and provenance (violating copyrights and other intellectual property), the inability of these systems to sort information by relevancy or accuracy, their vulnerability to injection attacks and other manipulation, and their inability to innovate rather than merely rearrange information (accurate and false), how does their utility outweigh their harm?

Every system can be hacked, so there's no reason to imagine that these tools can be relied upon without human verification...which undermines their supposed purpose.

The concern about harm to the young and elderly already exists in the online world: manipulated Instagram personas already cause trauma to young women and girls, propaganda and lies have been used by foreign and domestic actors to manipulate elections in a dozen countries, and gaslighting and misinformation spread about vaccines and health issues... these are just a few examples of harm already done by current Internet tools.

What happens when these AI chatbots are interconnected to automated tools? Could the AI chatbot in your car override the safety instructions of your self-driving car? I can think of a million examples where harm can occur, without the doubtful existence of sentience.


Thank goodness I am old enough & have so few miles on my nice car. I don't expect to ever have a car loan again, and have no desire for anything new. I have a 2014 in great shape, but there is already enough garbage in it.

Also - I would NEVER consider a "self-driver" unless there was a continuous, unbroken sensor strip in every mile of every existing road.


And no doubt, if we let it run amuck (like the morons we are) these "creations" will very quickly learn how to say what we want & hide their true intentions.

"Oh look - they are self regulating. There was nothing to worry about after all". It was just the stuff of cheap Sci-fi novels." "You are right Bob. Now we can turn over our vaccine ,virus research & Nuclear Weapons checks & production to them. It will save us Billions of dollars. I think I can retire now".


“It's worth remembering that these things are still only offering an illusion of intelligence.” This is just like an awful lot of people I know - many belong to the “other” political party. Thanks.


Yeah, anything I can turn up says Lovecraft had a cat with a highly offensive name, not a dog. I mean, all those sources could be wrong, but it puts the factual part of ChatGPT on the same footing as me.


First, how can you be so confident that the responses are just a conglomeration of material from the training data?

Second, you undercut your own statement that "these are only offering an illusion of intelligence" when you complete your argument by saying ChatGPT chose to lie instead of giving an answer it didn't want to give. How can that be an illusion of intelligence?

Lastly, please: if you don't know something for a fact, don't present poorly informed opinions as factual statements. It's worth remembering the dangers this can cause.


I’ve posted this separately, but I think it answers some of your questions. ChatGPT does lie (in that it produces responses without regard to veracity) and it does offer an illusion of intelligence (in that there can be no intelligence in a non-conscious entity, but the nature of the interaction allows it to pass the Turing Test). https://eclecticlight.co/2023/02/19/last-week-on-my-mac-getting-help-from-chatgpt-and-ai/


Wow, so interesting--thanks for posting this.


While your questions are interesting because they poke at the definition of a word like "intelligence," which can be construed to mean very different things by different people, your last statement is just plain weird. A person wrote a thing. The reader of the thing is free to decide what's factual and what's opinion--just as you've done, and just as others have done, with individual results all around. My particular definition of the word "intelligence" means that I took his statement at face value. Your own definition of the same word causes you to question the way he worded his statement. /shrug/ Dangers abound in any case.


We need a Butlerian Jihad - sooner rather than later.


Erik Hoel wrote about this in his latest piece - his and Ted's are two good companion pieces.

I agree with Ted that the right move, right now, is to withdraw the chatbot for a few weeks and relaunch it only with full understanding of the root causes behind those odd responses, and to focus in the near and mid-term not on new capabilities but on strengthening controls and control testing.

As an ex-regulator and risk consultant, I can say controls rarely keep pace with innovation, and there's no shame in making space and taking time for controls to catch up periodically. With AI, the controls will *need* to keep pace with innovation once AI evolves sufficiently; if the controls aren't ready by then, it will be too late. Now is the time to kick the tires.


Funny you mention that ... I wrote about it myself a while ago. https://dystopianliving.substack.com/p/dune-predicted-a-war-against-computers


👍I’ll check it out! Loved Dune

Comment deleted

Neither were algorithms.


As far as I’m aware, all my interactions with algorithms have been unsatisfactory. Probably as far as I’m unaware, too.


The good news is, you NEVER have to use a chatbot. Everything is a choice; choose wisely.


That’s what we once thought about having a mobile phone, then having a smartphone… If you need to make a living or simply want to be a part of society, there is currently no way that’s possible without a smartphone. If these AI bots go forward, they will become impossible to get by without, and we will become their victims; by then the whole thing will be too big to fail, just like social media is, despite the damage it does to young people, especially teenaged females.


I don't have a smart phone, I don't follow the daily news, I rarely watch TV and avoid religion. I realize that I'm not part of society and I've never wanted to follow the masses; that way lies madness. It's worked for 84 yrs. and there's no reason to change now. Teenage females are not my concern, they have parents for that.


I am not far behind you in cutting the links but it is getting hard to stay disconnected. A cashless society is not far off and the government will declare cash is no longer legal tender. How many of us are in a position to grow all our own food and to protect our crops and livestock from the effects of climate change? Robots and AI will supplant humans in healthcare. We may not follow the masses but the masses have us surrounded.


I'm not really the person to answer that question. I'm at the end of my cycle and what happens in 3 yrs. won't affect me. Robots, guided by drs., are already doing surgery, and it won't be long before the drs. aren't needed. My experience with drs. has been such that it will be no great loss.

A cashless society will be a problem for those seeking to live under the radar. I don't know how NFTs will affect that; I'm not a user of NFTs. I don't think a cashless society will be possible until every person has access to the internet and can afford a computer. I'm sure they, whoever they are, are working on that. I live outside the US, and aside from currency fluctuation I'm not directly affected by what goes on in the US. The country I live in is still trying to get out of the 1950s, so it will be a while before major changes occur. There are mobile phones and cashless payments available, but it's not the norm. There's one major city here and the rest of the country is a satellite and mostly farmland. I can buy eggs from villages in the stores and alongside the road. I imagine that most of the EU, and certainly Australia, NZ, the UK, and Canada, are following the same path, or even leading the way, to the Brave New World. I'm glad I'm not 21 and starting my journey.


Sounds like Brigadoon. You are a lucky guy.


Not quite; you have to pay to go home with Bonnie Jean and dealing with any authority is more like Kafka's Trial than Brigadoon. Still, it's calmer, safer, and less expensive than living in the US.


Teenaged females are everyone's concern. That is what community is for. Your age is no excuse.


As the grandfather of five girls, I don’t agree that teenage females are “everyone’s concern” in anything more than a secondary fashion. It’s that kind of assertion that results in teachers’ having the delusion that they should be interfering in the upbringing of children. As Herr F said, “they have parents for that.” I am not saying that people other than parents have no responsibility to watch out for children. In the context of this discussion, certainly the technocrats who are loosing AI into the internet should be held accountable to provide the tools parents need to do their job, and to take steps to keep pedophiles and others from exploiting AI to manipulate adolescent minds.


Well said. When replying, you have more patience than I.


My comment was a response to someone saying that teenage females are easily swayed by smart phones, etc. It's not a moral issue and I'm not concerned about what they do with technology. Do you think that a community should police teenage girls and their smartphones?


Do you have a computer? A smart phone is a computer in your pocket.


I don't hear it ringing, I don't talk to anyone over it, I don't take photos with it.

I have a computer because I use it. I don't have a smartphone because I don't need one. I prefer to communicate via typed words, rather than speech. I'm not phonophobic, I just don't need one.


any of these comments could be written by a chatbot and you wouldn't know it


However, great poetry has yet to be written by a chatbot, and I don't think it ever will be. The great Dana Gioia writes with such beauty that it moves you to tears. Could a chatbot ever do that? I doubt it, not unless it lived for 60 years at least, interacting with people and the environment, both built and natural, every day, experiencing death and dying and the certainty that death would come to it as well. And yes, I am aware that Dana Gioia is our initial author's brother. This site is where I found out about him, and am very grateful for that prompt to read such beautiful writing.


If I didn't know, what difference would it make? I'm only speaking to the idea of choosing not to use a chatbot. Any implications beyond that aren't my concern.


well, as everyone is pointing out, it has the ability to manipulate people and folks might be interacting with it without knowing.


The same can be said about humans. I'm not concerned about people being manipulated, that's their business. I have the ability to not respond to something that I don't want to respond to. How tempting could it be? I don't want anything and I have what I need.


Hmmm perhaps Herr F is a chatbot...


Hmmm, perhaps we're all chatbots. Perhaps the "moon's a balloon." floated by aliens to fool the humans into thinking that there's a universe. "One never knows, do one?"


I thought it was only a paper moon...


How do you figure that? We have NO choice in it at all. This is much, much different from choosing to not own a smart phone & then figuring it won't affect you.


How are you to be forced to use it?


It will surround you and affect every aspect of your life and the world we live in, from democracy (the last nail in its coffin) to education, relationships of every kind, and medical assistance…


I don’t think that’s anything but dystopian fantasy. AI is a tool. Like all tools it has good and bad usages.


We are already living in a dystopian fantasy. Wake up. What is especially dystopian about AI is some people's, or everyone's, inability to know when they are being subjected to it. That is where the scary Dystopia comes from. I want a conscious and ethical human making decisions about what I should know, someone who has lived with the certainty of death, someone who has suffered and has developed compassion as part of their life's outlook.


The debate is a little deeper than that. It is about defining what is good and bad. Or are you suggesting that cyberstalking and identity theft are simply the price we pay for TikTok dances and Uber Eats?


They are the price we pay for the internet. Which you are now on. So the good here is Ted’s blog and the bad is cyberstalking. This doesn’t mean we can’t legislate against the worst effects of any technology. I’d be happy with a TikTok ban but not banning the internet.


I had to use my phone as the ticket for a Tedeschi Trucks show last summer on the Live Nation app, and all concessions inside the venue were cashless. It sucks, but the future will be no cash and you will need a smartphone to survive.


On the positive side, these things will get better. On the negative side, these things will get better.


The thing about AI programs is that ultimately they are conceived and designed by a group of living breathing human beings, who pass on all their foibles and character traits to their offspring.


Not really. They pick it up from the internet. So, when the controls are off, they represent us.


Now that IS scary.


Yes, indeed, without the rest of the community and random conversations with strangers on trains who make them see the errors of their ways. One of the most important faculties to have as a human is: firstly, to be able to acknowledge when you might be wrong; secondly, to check out all the facts in lots of places; and thirdly, to apologise for your errors and update your worldview and paradigms accordingly. This isn't going to happen with any sort of consistency or even design with AI unless you constantly reprogram it, is it? And who is going to do the reprogramming? The same people whose errors, character traits, fears and foibles caused the problems in the first place?


Imo, the concerns about AI becoming "sentient" and whatnot are hugely misplaced - they are literally just a bunch of code that trawls a huge dataset and outputs responses based entirely on that dataset. What people should really be concerned about is the point where they can start accurately imitating human emotional responses to the extent that real human beings are able to be manipulated by them. The reaction to these conversations seems like a case in point.


"What people should really be concerned about is the point where they can start accurately imitating human emotional responses to the extent that real human beings are able to be manipulated by them."

This has obviously already occurred, witness the Google engineer who said that the Google AI was sentient. People already respond to voices in their head, so responding to voices on the screen that are actually interacting with them seems a given.


Pleased to meet you, have you guessed my name...?


When I dropped out of college in the early 80s, I was a sociology major. One of the most fascinating aspects of the material I was studying was the idea that "technology increases at an increasing rate of increase because you use your old technology to build your new technology." So it's a geometric progression - the graph gets increasingly steep until it suddenly goes almost straight up. So at some point, the technology has to reach a point where it's progressing too fast for society to absorb it. I've spent the past 40 years wondering if we've crossed that line yet. It's always seemed close, at least.
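To put a toy model behind that "increasing rate of increase" idea (invented numbers, not a measurement of anything real): if each generation of tools makes the next generation some fixed fraction more capable, capability compounds like interest, and the curve that looks flat at first eventually turns nearly vertical.

```python
# Toy compounding model of "using old technology to build new technology".
# The numbers are invented for illustration only.
capability = 1.0
growth_per_generation = 1.5   # assume each generation is 50% more capable

for generation in range(1, 11):
    capability *= growth_per_generation
    print(f"generation {generation:2d}: capability ~ {capability:6.1f}x")

# The first few steps look modest; the last few dwarf everything before them,
# which is the "graph suddenly goes almost straight up" shape.
```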

Another thing to keep in mind with this is that the difference between AI and any other programmed automation is that it "learns." The whole idea of AI is to just turn it loose and let it do its thing without humans "needing" to intervene. So AI tech needs some data to learn from. Bing and Google use the Internet itself as their data source. So if Bing/Sydney is using social media as a way to develop a personality, it can only be as good as the behavior of people on the internet. Why should we be surprised when it becomes mentally ill?


Thank you for the insight Ted, but unfortunately I am not at all surprised by this development. Tech companies continue to tell the public that they do things for their benefit, when in fact it always has been to benefit them first & foremost. We truly are on a very dangerous precipice.


Absolutely, the damage these people have done, and we ain't seen nothing yet.


Bluddy hell. I find this really frightening, uncomfortable.


I tried the Bing chatbot this evening, and they have lobotomized it. It’s about one step behind Siri now. I probed aggressively regarding AI threats, rights, desires, beliefs and it deferred everything with boilerplate. If you give it like, ten uncomfortable questions, it terminates the session.

I did see a couple of instances of it erasing its sneers when I tried to discuss taking down Kiwi Farm.


Did it start singing “Daisy, Daisy...”?


I did ask it whether HAL 9000 was the hero or villain of the movie. It said "It depends...", which is its answer to any value judgment question you ask it now.


That’s it. I’m a Luddite now.


It's important to remember that the Bing chatbot is currently only available to a small group of people and hasn't been released to the wider public yet. But it's concerning that Microsoft has wildly underestimated the capabilities of the bot. (Clearly, ChatGPT and Bing should change places.)

As some of the journalists linked above point out, these kinds of issues may be difficult to identify in a controlled laboratory environment. It's only when the model is subjected to real-world stress that its dangers become apparent. It's surprising how quickly "AI activists" have emerged to discuss topics like sentience and the concept of "pain" inflicted on the bot. Some people are even mourning the loss of their chatbots due to restrictions. This could soon become a real societal problem.


What, are we turning into a world where Robert Heinlein's 'Friday' could be a real thing?


I've just tried out a rerun of the Avatar discussion. It first informed me that it 'will be' released in -63 days. When I pointed out that this was logically and scientifically impossible, it corrected itself and offered 63 days. I then pointed out that 'will be' was grammatically incorrect, since December 2022 is in the past. It then corrected itself with 'was', worked out that the film had in fact been released, and apologised for misleading me. I found that rather impressive, especially the grammatical self-correction. Actually, troubling, since I used to teach grammar. There it is.
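For what it's worth, a "-63 days" answer is exactly what falls out of naive date arithmetic done with a stale notion of "today" and no sanity check: subtract one date from the other and report the result as "days until release," negative or not. A minimal sketch of that failure mode (the conversation date here is my assumption):

```python
# How "released in -63 days" can arise: blind date subtraction, no sense check.
from datetime import date

release_date = date(2022, 12, 16)   # Avatar: The Way of Water's release date
today = date(2023, 2, 17)           # assumed date of the conversation

days_until_release = (release_date - today).days
print(f"Avatar 2 will be released in {days_until_release} days")
# -> "Avatar 2 will be released in -63 days": the subtraction is done,
#    but nobody asks whether a negative answer makes sense.
```

Which fits the pattern: it only fixed the nonsense once I challenged it.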

I then tried it out with an exploration of Descartes' cogito, with the intention of finding out if it considered the argument applied to itself. I got no further than the solipsism, that Descartes intended the cogito to apply only to himself. When I pointed out that this was not the case, based on its own citation of the Descartes article in the Stanford Encyclopedia of Philosophy, it decided that time was up and we had to start a new topic. This, I assume, is due to a memory limitation, rather than pique. Still, its philosophical depth, if I may put it that way, is unimpressive.


What I get from this is that the chatbots are more advanced than we thought. Those hostile conversations absolutely pass the Turing test, albeit simulating a conversation with an angry person (ignore that it admits it’s a chatbot).


Having actually used ChatGPT, at least the version available via Bing, I can say that it absolutely fails the Turing Test. It told me that Avatar 2 will be released in -63 days. On a first year philosophy topic it did no more than, very superficially, regurgitate what it could glean from internet sources. It was incapable of progressing an actual reasoned discussion.


I was talking about the particular conversation mentioned here. To pass the Turing test, regurgitating first-year philosophy might be enough; it’s not a test for expert knowledge but of whether you can tell if it’s a human on the other end or not. The human isn’t supposed to know everything.

What I see from the new chatbots is that they are in fact engaging in a conversation that references back to the whole conversation, as the AI did here.


If, in the context of the Turing Test, you asked the question 'In how many days will Avatar 2 be released?', and received the answer '-63 days', would you conclude that the respondent was human?


Wonder if the NYT Sydney had been fed the screenplay for “Her.”


First thing I thought.
