Discussion about this post

Bernard O'Leary

It's worth remembering that these things are still only offering an illusion of intelligence. They're a more complicated version of autocomplete, and their responses are merely echoes of something in their training data.

However, it's also worth remembering that these things don't need to be sentient to cause real damage. Bing and Bard are both capable of serving up misinformation, and doing so in a convincing way—that's a real concern.

I saw something yesterday where a person had asked ChatGPT (the most well-behaved of the AIs) to say the name of HP Lovecraft's dog. It couldn't, for reasons that will be immediately obvious if you know about ChatGPT's content policy and HP Lovecraft's policy on naming dogs.

However, instead of saying "I can't answer that" or "I don't know", ChatGPT answered by saying that Lovecraft didn't have a dog. It's a subtle difference, but it hints at a huge problem: these things can lie, and will seemingly do so when it's convenient. That means we could soon have a world where search engines are regularly feeding people convincing false information. How's that going to affect the world?

BigOinSeattle

We need a Butlerian Jihad - sooner rather than later.
