Do Chatbots Get Us Any Closer To Human-Level Artificial Intelligence?

For AI to mirror human intelligence, it must perceive the world like we do.

Alex Kantrowitz
2 min read · Feb 13, 2023

This is an excerpt from my weekly newsletter, Big Technology, published on Substack. Check it out here and sign up to receive it each Thursday.

AI chatbots are getting so good that people are starting to see them as human. Several users have recently called the bots their best friends, others have professed their love for them, and a Google engineer even helped one hire a lawyer. From a product standpoint, these bots are extraordinary. But from a research perspective, the people dreaming about AI reaching human-level intelligence are due for a reality check.

Chatbots today are trained only on text, a debilitating limitation. Ingesting mountains of the written word can produce jaw-dropping results, like rewriting Eminem in Shakespearean style, but it leaves the bots blind to the nonverbal world. Much of human intelligence isn't written down. We pick up our intuitive understanding of physics, craft, and emotion by living, not by reading. And without written material on these topics to train on, AI comes up short.

“The understanding these current systems have of the underlying reality that language expresses is extremely shallow,” said Yann LeCun, Meta’s chief AI scientist and a professor of computer science at New York University. “It’s not a particularly big step towards human-level intelligence.”

Holding up a sheet of paper, LeCun demonstrated ChatGPT's limited understanding of the world on a recent Big Technology Podcast episode. The bot, he promised, would not know what would happen if he let go of the paper with one hand. When asked, ChatGPT said the paper would "tilt or rotate in the direction of the hand that is no longer holding it." For a moment, delivered with such confidence, the answer seemed plausible. But the bot was dead wrong.

LeCun's paper moved toward the hand still holding it, something humans know instinctively. ChatGPT, however, blanked out because people rarely describe the physics of letting go of a sheet of paper in text (well, perhaps until now).

“I can come up with a huge stack of similar situations, each one of them will not have been described in any text,” LeCun said. “So then the question you want to ask is, ‘How much of human knowledge is present and described in text?’ And my answer to this is a tiny portion. Most of human knowledge is not actually language-related.”

Without an innate understanding of the world, AI can’t predict. And without prediction, it can’t plan. “Prediction is the essence of intelligence,” said LeCun. This explains, at least in part, why self-driving cars are still bumbling through a world they don’t completely understand. And why chatbot intelligence remains limited — if still powerful — despite the anthropomorphizing.

