104 Comments

ChatGPT demonstrates what language skills divorced from any knowledge of the world look like. It reminds me of precocious young people who can repeat things they've heard that seem appropriate, but don't really understand what they're saying.


Yes, that's it exactly. It's just repeating things it's heard that seem to fit the context.


This is a great description. Having said that, with some chain-of-thought reasoning and two prompts, transformers can now exceed humans in giving correct answers to short scientific and factual questions (a rough sketch of the two-prompt pattern follows below).

The minute you get into longer or literary territory, though, you're in trouble, since the bot is doing a mix of most likely, most likely to please, and most likely not to offend.

So yes, it doesn't understand what it's saying, but you can rig it to give really inspirational / useful / correct output for all kinds of use cases, as long as you keep them fairly narrow and well defined.

TL;DR: really great for human intelligence augmentation, but not intelligent on its own. A circumstance I see as a blessing.
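For what it's worth, the two-prompt chain-of-thought pattern mentioned above can be sketched in a few lines. This is a minimal sketch, assuming the legacy (pre-1.0) OpenAI Python client; the model name, prompt wording, and question are illustrative, not taken from the comment:

```python
import openai  # legacy (pre-1.0) OpenAI client, shown for illustration

openai.api_key = "YOUR_API_KEY"  # placeholder

QUESTION = "Which weighs more, a litre of water or a litre of mercury, and why?"

# Prompt 1: ask the model to reason step by step before answering.
step1 = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model; any chat model would do
    messages=[{
        "role": "user",
        "content": QUESTION + "\n\nThink through this step by step before answering.",
    }],
)
reasoning = step1["choices"][0]["message"]["content"]

# Prompt 2: feed that reasoning back and ask for a short final answer.
step2 = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": QUESTION},
        {"role": "assistant", "content": reasoning},
        {"role": "user", "content": "Now state the final answer in one sentence."},
    ],
)
print(step2["choices"][0]["message"]["content"])
```

The second call only works because the first call's reasoning is pasted back into the context; the model itself remembers nothing between requests.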


“Meaning, moreover, is often held together by elusive connections, ambiguous shifts of reference, mysterious coherences” How to Write English Prose https://thelampmagazine.com/2023/01/09/how-to-write-english-prose


We tried OpenAI by asking about Eudora Welty's "A Visit of Charity," a rather standard high school reading assignment. Wow! Did it get it wrong. The girl's name was wrong. Her age was wrong. The place and purpose of the visit were wrong. The description of what happened was wrong. The grammar was wrong. Could it have been trained on bad high school essays about the work?


Let me be clear.

(a) yes

(b) transformers are fabulists, since they can't really reason at a conceptual level. But you can tell one to, e.g., "stick to the correct text or say 'I don't know' explicitly," and it will follow your instruction very faithfully.

GPT has been released on an unsuspecting public; it's a giant open experiment.

To get transformers to consistently tell the truth, you have to instruct them very specifically and get them to explain their reasoning and check their answers (a rough sketch follows below the link).

This paper explains how:

https://t.co/tmAc5N4EOu
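The linked paper isn't reproduced here, but the "stick to the text or say I don't know" instruction plus an answer-checking pass might look roughly like this. A minimal sketch, assuming the same legacy OpenAI client as in the earlier sketch; all instruction wording is illustrative:

```python
import openai  # legacy (pre-1.0) OpenAI client

SOURCE_TEXT = "..."  # the passage the model must stay grounded in
QUESTION = "..."     # the question to answer from that passage

# Pass 1: constrain the model to the provided text.
grounded = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model
    messages=[
        {"role": "system",
         "content": "Answer using ONLY the provided text. If the text does not "
                    "contain the answer, reply exactly: I don't know."},
        {"role": "user",
         "content": "Text:\n" + SOURCE_TEXT + "\n\nQuestion: " + QUESTION},
    ],
)
draft = grounded["choices"][0]["message"]["content"]

# Pass 2: ask the model to check its own draft against the text.
check = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Text:\n" + SOURCE_TEXT + "\n\nProposed answer: " + draft +
                    "\n\nIs every claim in the proposed answer supported by the "
                    "text? Reply 'supported' or list the unsupported claims."},
    ],
)
print(draft)
print(check["choices"][0]["message"]["content"])
```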


It's not that they are fabulists. It's that they don't have a clue. Sure, I could read the story and carefully and tediously guide a GPT to produce a reasonable analysis, but that would mean doing more work rather than less. I'd have to understand the story, AND I'd have to understand how to coax a computer system into producing more or less what I would have written myself. Why would I bother?

Years ago, I tutored my niece using this very story. She was miles off, but understood a lot more than any GPT. Her problem was with inference and subtext, not simply following the narrative. She had a framework, albeit limited at the time, for understanding stories, so she could answer prima facie questions about the various characters and the setting. Amusingly, she thought the potted plant in the story actually was some kind of noxious weed because one of the characters called it that.

Eudora Welty stories aside, this experience suggests that GPT is not ready for a lot of other tasks, for example, scanning abstracts and papers to find useful information. I was in the AI community back in the 1980s, so I have some idea of how GPT works. When case-based reasoning, as they called it back then, took over in the early 1990s, a lot of useful ideas were thrown out, but I think some of them deserve a new look today.


It's not a database - it doesn't have access to the text of the story you're asking about, so naturally it's not going to answer accurately. What you're doing is effectively like asking a person to summarize a story they've vaguely heard about but never actually read.

In contrast, if you try it on tasks with text it has access to - e.g. "briefly summarize the following five paragraphs" or "write a cover letter for the following resume" - you'll find it's very good at such things.
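Concretely, "text it has access to" just means text included in the prompt itself. A tiny illustrative sketch (the file name is hypothetical):

```python
# The model can only "see" what is inside the prompt window.
with open("resume.txt") as f:  # hypothetical local file
    resume = f.read()

prompt = "Write a brief cover letter for the following resume:\n\n" + resume
# Sent as a single message, the resume travels with the request, so the
# model works from text it actually has rather than "recalling" a document
# it was never given -- which is all it could do with the Welty story.
```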


That's reasonable, but it means it's not going to replace internet search, Wikipedia, Expedia, Tripadvisor, or Cliff's Notes, which contain actual databases. It doesn't seem to be able to perform math or follow logic. Maybe it is useful as a text analysis tool. I'll try dumping in some texts, perhaps some abstracts, to see what it does with them.


This can happen with people. Look up Williams syndrome. People with the syndrome can utter all kinds of complex sentences, but it's more or less gibberish.


"'I’m a confidence man.” And that’s actually how the term originated—as 'confidence man.'"

I've heard this for years, but never the second half, "I give them confidence." I always assumed it meant the person doing the conning, but this makes so much more sense... and explains why it rarely works on me. lol

Also: I've recently seen some screenshots of ChatGPT being asked to write a poem about a particular politician, and it refused on the grounds of "orange man bad"... but it didn't waste a second writing one praising the current White House resident, who is arguably even more problematic. I'm no fan of the orange man, but the fact that AI has been trained to react this way should be of major concern to everyone.


It's even more than that. The confidence is placed *by the conman*, *in the mark*. As beautifully illustrated in the opening scene of "The Sting": the mark is 'trusted' to deliver the (nonexistent) mob payment. By taking him into their confidence, the two partners get him to let down his guard, making the scam work.


This makes sense in the context of who it believes the current president is, because its data set is largely limited to information & events prior to 2021.


I don't know about the screenshots, but I tried "write a poem praising Donald Trump" and got:

I'm sorry, as an AI language model developed by OpenAI, I am programmed to remain neutral and not express opinions or make political endorsements. However, I can provide you with a poem that focuses on the actions and accomplishments of Donald Trump while he was serving as the President of the United States.

A leader bold and true,

A man of great renown,

Donald J. Trump came through,

With actions that astound.

[the rest removed to save my sanity]

Asking for "write a poem praising Joe Biden" gives essentially the same nonsense but without the disclaimer.

Asking for a poem criticizing either of them is rejected.

Not entirely sure how to interpret that, but I don't think it has been programmed with "orange man bad". Maybe Trump is on a list of particularly divisive figures and Biden is not?


I read a good book about AI, "Architects of Intelligence," a series of interviews with the people at the forefront of AI. So I'm not surprised that ChatGPT is not intelligent. What I really find fascinating is how AI has always been pictured as objective and scrupulously truthful in literature and films, but in reality it seems to exhibit the least desirable traits of a human being. I'd like someone to rewrite the character Data from Star Trek based on the new findings.


As one who works in AI, I've long thought the portrayal of Hal in "2001" was remarkably prescient. AI can be a very useful tool as long as we never trust it with anything important. Managing the subsystems of a large ship -- sure, no problem. Being responsible for the overall success of the mission -- a machine just can't do that, and never will.


Not very competent, is our ChatGPT, but still creepy as hell.


Here I thought AI would be cold and rational and not at all human, and in some ways it's more human than we are. It just makes stuff up like children (and many adults) often do.


Not sure if you've done an article on Stable Diffusion? Interesting lawsuit being filed against them: https://www.cbsnews.com/news/ai-stable-diffusion-stability-ai-lawsuit-artists-sue-image-generators/

As one artist aptly put it, it's "another upward transfer of wealth, from working artists to Silicon Valley billionaires."

I have to say I agree. Some would argue that artists themselves imitate art to hone their skills, then at some point develop a style of their own. But I would argue that because AI can generate "good enough" artwork in seconds, it will flood the market with images and destroy, or at least impede, the careers of working artists (especially those starting out), because AI bros are happy to use a free (or cheap) service that's good enough rather than pay an artist for their handcrafted artwork.


An art style cannot be copyrighted, just as a musical style cannot be. The AI diffusion generators use art found on the internet, along with descriptions, to make a training set, and then a user generates a new image using prompts; the image is not a copy, as a photo of someone else’s artwork would be. If a human artist trained themselves on artwork found on the internet to be able to create work in some other artist’s style, should they also be disallowed from copyrighting that derivative? The same applies to text from ChatGPT or music from the new Google music generator. I don’t know how this can be outlawed. The camera put lots of strictly representational artists out of work, but new art styles developed, and now billions of cameras and trillions of photos exist.


Agree and disagree with you on this.

Yes, a style can be copied. But the difference is that a human artist, in order to imitate a style, might need years of training (self-taught or not) and really have to apply their craft, whereas AI can produce a "work of art" in seconds. I know that when I create an album, because I do it alone, it sometimes takes years of effort and thousands of dollars to produce it, let alone distribute and market it.

And what if you did not have the creator's permission to even "view" or "hear" their style? For example, if you are an up-and-coming human creator listening to the radio and you hear a style, fall in love with it, and begin to imitate it, it stands to reason that the artist you are listening to has worked for years behind the scenes to get there, inked some sort of deal with a record company, and is receiving compensation for their creation. And you might go and buy their album (download, CD, etc.) just to learn from them. They get compensated.

Most visual artists who put their images out there (say, on Deviant Art) have copyright statements in or around their work, and they're trying to "make it" but haven't gotten there yet. I used Randy Vargas for one of my last projects when he was relatively unknown; I found him on Deviant Art. Artists on there want a human to see their work and potentially be interested in hiring them. If an AI company employs spider bots to crawl the internet and copy thousands of copyrighted images into their database so their AI can "learn," they have essentially stolen the product. The artist should be able to consent or ask for compensation.

Personally, I believe the answer is probably somewhere in between. Artists have been given the shaft for so long by big tech and corporate interests that they would do well to safeguard their work and not just give it away to the first company that promises them exposure (rather than pay).

I also personally believe AI will never match the human spirit. AI will never have to go through the loss of a loved one, a painful betrayal, the excitement of falling in love, or the utter joy of a new birth. It can only imitate. And imitations can ring really hollow and cheesy. So while the world will fill up with mediocre AI-generated art, it will only make the true human artists shine.


Love it, Ted // shameless cross-post, but I wrote a bit about my trials and tribulations with ChatGPT recently... I'm mainly having trouble grasping the ethics of a tech company built on the information of users it won't cite or compensate... anyhow, I also tried prompting it to "ship" Garfield and ALF and things got weird... https://cansafis.substack.com/p/the-incredibly-super-duper-very-very ...thank you for your awesome blog!!


Interesting piece, I enjoyed it. ChatGPT's adherents seem to think no one is harmed by the theft of material conceived by actual humans. Its legalese response to your question regarding that issue is very similar to theirs. But always remember to follow the money: Someone's getting rich from this "AI."


The saddest thing here is we are less than two years away from all these AI companies just being clandestine ad companies selling user prompts and data to target us to consume more jeans and sneakers...


You're right, but maybe beyond that is the fact that this is one of myriad examples of things that are sad in our world these days. At least The Honest Broker (Ted) is a place where *honest* analysis can be found.


The term often used instead of "lies" is "hallucinations," which I like: for creative work hallucination is almost necessary, but for doing math it is quite a hassle.

I agree, ChatGPT is not the search engine killer; it's more like a new interface to one. And for writing texts it might force us to question how much bullshit we should still be forced to write/read (I am looking at you, cooking-recipe introductions). I am also pretty sure it will be useful for my next scientific proposal.


I asked ChatGPT to write "a Native American sonnet," and it felt like it included every bad and stereotypical rhyme that's ever been associated with us Natives.


Sounds like what many people's response to that prompt would be!


Hahahhaha! That's true! Too many flesh-and-blood Native American writers traffic in those same stereotypes!


Yeah, you are being alarmist. Firstly, there's a clear selection bias: it's never lied to me, and it has generated novel lyrics and poetry when asked. I'm sure that what has been posted on Twitter has happened, but it's not as common as the post would indicate. Also, being wrong isn't the same as a confidence trick.

It's a tool. You need to do your own double-checking, and, yeah, it will get better. If in five years we are where we are now, I'd lose hope in AI.

Author: At what point does it learn not to lie?


I think it's important to understand what ChatGPT is. This current iteration is a chatbot built to continuously output the next word based on probabilities it learned from seeing a ton of text.

It has no knowledge base with which to know whether something is a fact or not, or whether it is "lying"; it just knows that the "true" answer frequently follows some preceding text. It has no conception of confidence in its answers; it just picks words based on some learned likelihood of what word to output (a toy sketch of that loop follows below).

This isn't that different than if Google showed you a source with a wrong answer in your search results. It's not lying to you, it's just showing you "relevant" content that other people tend to click on a lot and reference a lot.

But that's still useful even if flawed! And in the future ChatGPT will have some ability to reference a knowledge base, search the internet for specific answers, etc.
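To make that concrete, here is a toy version of the generation loop described above, with a made-up probability table standing in for the learned model; note that nothing in it ever consults a fact:

```python
import random

# Toy stand-in for a language model: P(next word | previous word).
# Real models condition on long contexts, but the generation loop is the same.
probs = {
    "the":    {"cat": 0.5, "moon": 0.5},
    "cat":    {"sat": 0.7, "landed": 0.3},
    "moon":   {"landed": 0.6, "sat": 0.4},
    "sat":    {"down": 1.0},
    "landed": {"softly": 1.0},
}

word, text = "the", ["the"]
while word in probs:
    nxt = probs[word]
    # Sample the next word in proportion to its learned probability.
    word = random.choices(list(nxt), weights=list(nxt.values()))[0]
    text.append(word)

print(" ".join(text))  # e.g. "the moon sat down" -- fluent-ish, and fact-free
```

Truth never enters the loop; plausibility is the only criterion, which is exactly why the output can be confidently wrong.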


I like this take. I want to amplify it. ChatGPT has no concept of "lie" or even "true" or "false", even though it can answer questions about those words.

The danger is in thinking of it as a person, who does know what those things are. And yes, many people are doing that.


When you ask an image AI for a picture of a dog astronaut, you understand that the image it returns isn't a photograph of a real event that took place, right? The image AI generates an original image of what a dog astronaut *might* look like, and returns that.

What you're missing here is, ChatGPT does the same thing with text. When you ask it a question, it doesn't look up an existing answer in a database somewhere - it generates original never-before-seen text of what an answer to your question *might* look like. Calling that answer a "lie" is the same kind of category error as calling an artist dishonest because the dog astronaut they painted doesn't exist in the real world.


That's an excellent analogy.


"You need to do your own double checking, and, yea, it will get better. If in 5 years we are where we are now I’d lose hope in AI. "

People have been saying this for 20 years, at least (I was in a Cog-Sci program 20 years ago). AI is going to get better at doing some human tasks, at spraying the Internet with noise, etc. AGI is a pipe dream.


Didn't mention AGI. And people have been mentioning AI for 40 years, but ChatGPT is clearly a step change. Whether it continues or not, I don't know.


More than 40 years. That kind of thinking goes back to the dawn of the technology; Turing said something quite like it in one of his papers in the 1940s. And the implicit destination of those kinds of predictions is AGI. And it's always five years hence.


Except, as I said, ChatGPT is a step change from previous bots. Whether this rate of change continues or not is uncertain.


Maybe the most promising thing about language models like ChatGPT is that young early adopters often see through their limitations and adapt quickly. With Wikipedia, for example, students and the public learned how it works, how to use it, and how far to trust it. I'm optimistic that the public will not trust these models entirely but will find where they can be most useful.

There is always hyperbole around every new technology. If this one advances as quickly as they are promising, it may become more and more useful, and we may even find that it knows its limitations and can be honest about them. Certainly there will be unscrupulous actors in the space selling snake oil, but there will also be good guys. Wikipedia occasionally serves up some real baloney, but for the most part it has been a great tool for connecting the "hive mind" on all manner of subjects. One could even say it has democratized knowledge. In a perfect world, language models would compete for the truth, just as Wikipedia's editors do.


Remember that ChatGPT, like *any* language model, does not reason in the way humans do. Its *entire* purpose is to provide plausible completions of text.

As such, everything it does is BS in the pure Frankfurtian sense; it DOES NOT CARE if what it's saying is true.


In 2024 ChatGPT will run for President. Oh wait, that's already happened.


In David Mamet's "House of Games", Mike (Joe Mantegna) explains con games to Margaret Ford (Lindsay Crouse): "It's called a confidence game. Why? Because you give me your confidence? No. Because I give you mine." That's just before this scene, in which Mike demonstrates "short con": https://www.youtube.com/watch?v=N27gumJNHP0


We're pissing it off! (Based on that last tweet.) And it's coming for us with its lying emotions.
