Aditya Advani:

This is a great description. Having said that, with some chain-of-thought reasoning and two prompts, transformers can now exceed humans at giving correct answers to short scientific and factual questions.

The minute you get into longer or literary territory, though, you're in trouble, since the bot is doing a mix of most likely, most likely to please, and most likely not to offend.

So yes, it doesn't understand what it's saying, but you can rig it to give really inspirational / useful / correct output for all kinds of use cases, as long as you keep them fairly narrow and well-defined.

TL;DR: really great for human intelligence augmentation, but not intelligent on its own, a circumstance I see as a blessing.

Kaleberg:

We tried OpenAI by asking about Eudora Welty's "A Visit of Charity", a rather standard high school reading subject. Wow! Did it get it wrong. The girl's name was wrong. Her age was wrong. The place and purpose of the visit were wrong. The description of what happened was wrong. The grammar was wrong. Could OpenAI have been trained on bad high school essays about the work?

Aditya Advani:

Let me be clear.

(a) yes

(b) transformers are fabulists, since they can't really reason at a conceptual level. But you can tell one to, e.g., "stick to the correct text or say 'I don't know' explicitly," and it will follow that instruction very faithfully.

GPT has been released on a very unsuspecting public; it's a giant open public experiment.

To get transformers to consistently tell the truth you have to instruct them very specifically and get them to explain their reasoning and check their answers.

This paper explains how

https://t.co/tmAc5N4EOu
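The recipe described above (instruct very specifically, ask for step-by-step reasoning, then have the model check its own answer) can be sketched as a pair of prompt templates. This is a minimal illustration with hypothetical wording; the model call itself is omitted, and none of these template strings are taken from the linked paper:

```python
# Minimal sketch of a two-prompt "reason, then verify" flow.
# The template wording is hypothetical; only the prompt construction
# is shown, not the API call that would send it to a model.

REASON_PROMPT = (
    "Answer the question below. Think step by step, and if the answer "
    "is not supported by the given text, say 'I don't know' explicitly.\n\n"
    "Question: {question}"
)

VERIFY_PROMPT = (
    "Below is a question and a proposed answer with its reasoning.\n"
    "Check each step. If any step is unsupported, reply 'UNRELIABLE'; "
    "otherwise reply 'OK'.\n\n"
    "Question: {question}\n"
    "Proposed answer: {answer}"
)

def build_prompts(question: str, draft_answer: str) -> tuple[str, str]:
    """Return (first prompt, second prompt) for the two-step exchange."""
    return (
        REASON_PROMPT.format(question=question),
        VERIFY_PROMPT.format(question=question, answer=draft_answer),
    )
```

The first prompt elicits a chain-of-thought answer; the second feeds that answer back and asks the model to audit its own steps, which is the "check their answers" part of the comment.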

Kaleberg:

It's not that they are fabulists. It's that they don't have a clue. Sure, I could read the story and carefully and tediously guide a GPT to produce a reasonable analysis, but that would mean doing more work rather than less. I'd have to understand the story, AND I'd have to understand how to coax a computer system into producing more or less what I would have written myself. Why would I bother?

Years ago, I tutored my niece using this very story. She was miles off, but understood a lot more than any GPT. Her problem was with inference and subtext, not simply following the narrative. She had a framework, albeit limited at the time, for understanding stories, so she could answer prima facie questions about the various characters and the setting. Amusingly, she thought the potted plant in the story actually was some kind of noxious weed because one of the characters called it that.

Eudora Welty stories aside, this experience suggests that GPT is not ready for a lot of other tasks, for example, scanning abstracts and papers to find useful information. I was in the AI community back in the 1980s, so I have some idea of how GPT works. When case based reasoning, as they called it back then, took over in the early 1990s, a lot of useful ideas were thrown out, but I think some of them deserve a new look today.

Andy:

It's not a database - it doesn't have access to the text of the story you're asking about, so naturally it's not going to answer accurately. What you're doing is effectively like asking a person to summarize a story they've vaguely heard about but never actually read.

In contrast, if you try it on tasks where it has access to the text, e.g. "briefly summarize the following five paragraphs" or "write a cover letter for the following resume", you'll find it's very good at such things.

Kaleberg:

That's reasonable, but it means it's not going to replace internet search, Wikipedia, Expedia, Tripadvisor, or CliffsNotes, which are backed by actual databases. It doesn't seem to be able to do math or follow logic. Maybe it is useful as a text analysis tool. I'll try dumping in some texts, perhaps some abstracts, to see what it does with them.

Andy:

Well, it's useful to remember that ChatGPT is currently just an extremely general solution to the general problem of natural language. If you wanted a GPT that was conversant with Eudora Welty stories, you could certainly train (i.e. fine-tune) one; alternatively, if one wanted to build a service where the GPT *can* go and look up the source texts it's asked about, there's nothing technical preventing that. So in a sense one can think of ChatGPT as more of a UI than a task-doer for now.

But for tasks it's good at, it's really good. Here's a surprising one: "ChatGPT, I copied the following text out of a PDF and the formatting is all messed up. Reformat it into a data table." It's better at stuff like that than, frankly, it has any right to be.
