It's just glorified auto-complete

“What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.”

This morning I came across an article in my Google News feed claiming that ChatGPT is way stupider than people realize. I’ve made a similar case before, but I’m not an “AI expert,” so my opinion doesn’t count for as much.

Rodney Brooks is a roboticist and AI researcher, and while his main area of study isn’t large language models (such as ChatGPT), he’s still more qualified than I am to discuss them. In the interview summarized in the article, he explains that an LLM like ChatGPT is not able to reason like a human, “Because it doesn’t have any underlying model of the world.” Remember: It’s a language model, not a thinking model, wisdom model, reality model, or anything else.

He summarizes: “What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.”

If you think of ChatGPT and its ilk as a glorified auto-complete feature (which is literally what it is), you’ll be in a much better position to understand its capabilities than those who believe it actually represents any sort of general “intelligence.”
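To make the “auto-complete” framing concrete, here’s a toy sketch. It is emphatically not how ChatGPT is implemented; it just illustrates the core idea of next-token prediction: pick whatever word is statistically most likely to come next, given what came before. An LLM does the same thing, only with billions of parameters and a much longer context.

```python
from collections import Counter, defaultdict

# Toy bigram "auto-complete": count which word tends to follow each word
# in a tiny corpus, then always emit the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def autocomplete(word, steps=4):
    out = [word]
    for _ in range(steps):
        if word not in followers:
            break
        word = followers[word].most_common(1)[0][0]  # most likely next word
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # -> "the cat sat on the"
```

The output sounds plausible because it mimics the statistics of the text it was trained on. Nowhere does the program know what a cat or a mat is, which is exactly Brooks’s point about sounding like an answer versus being one.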


For a more pragmatic discussion of how LLMs like ChatGPT can be used in tech, you may be interested in listening to Jillian Rowe and me discuss the topic in this week’s episode of Adventures in DevOps.
