What AI Can’t Do

Backtracking

ChatGPT and other LLMs have trouble with any task that requires revising text they have already generated: tokens are emitted left to right, with no mechanism for going back and fixing earlier output. They can’t write a palindrome, for example.

A related stumper: “Write a sentence that describes its own length in words.” Satisfying it requires knowing the final word count before the first word is generated.
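Both constraints are trivial to verify after the fact, which is what makes the failures easy to demonstrate. A minimal sketch of the two checks in Python (the function names and the number-word table are illustrative, not from any source):

```python
def is_palindrome(text: str) -> bool:
    """Check whether text reads the same forwards and backwards,
    ignoring case, spacing, and punctuation."""
    letters = [c.lower() for c in text if c.isalnum()]
    return letters == letters[::-1]

def describes_own_length(sentence: str) -> bool:
    """Check whether a sentence correctly states its own word count,
    e.g. 'This sentence contains exactly six words.'"""
    number_words = {
        "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
        "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
    }
    words = sentence.rstrip(".").split()
    claimed = [number_words[w.lower()] for w in words
               if w.lower() in number_words]
    # The sentence must name exactly one number, and that number
    # must equal its actual word count.
    return len(claimed) == 1 and claimed[0] == len(words)

assert is_palindrome("A man, a plan, a canal: Panama")
assert describes_own_length("This sentence contains exactly six words.")
```

The asymmetry is the point: verifying either constraint takes a few lines, but producing a conforming sentence left to right requires knowing the whole output in advance.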

Inference

LLMs are fundamentally incapable of basic logical inference, as demonstrated in “The Reversal Curse” (Berglund et al. 2023): models trained on statements of the form “A is B” fail to infer the reverse, “B is A.”
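A toy illustration of the asymmetry the paper measures (the fact is the paper’s own running example, but the code is an analogy, not the authors’ method): an association memorized in one direction is not automatically available in the other.

```python
# "Training": the fact is stored exactly as the model saw it,
# indexed by subject only.
memorized = {"Olaf Scholz": "the ninth Chancellor of Germany"}

def forward_query(subject: str):
    """'Who was Olaf Scholz?' (same direction as training)."""
    return memorized.get(subject)

def reverse_query(description: str):
    """'Who was the ninth Chancellor of Germany?' (requires
    inverting the stored association, which next-token training
    does not provide for free)."""
    # The mapping is keyed by subject, so this lookup fails,
    # analogous to the near-chance accuracy Berglund et al. (2023)
    # report on reversed questions.
    return memorized.get(description)

print(forward_query("Olaf Scholz"))                      # the ninth Chancellor of Germany
print(reverse_query("the ninth Chancellor of Germany"))  # None
```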

More Examples

Raji et al. (2022): “Despite the current public fervor over the great potential of AI, many deployed algorithmic products do not work.” Although written before ChatGPT’s release, this lengthy paper catalogues many deployed systems whose shortcomings belie the fanfare.

via Amy Castor and David Gerard: Pivot to AI: Pay no attention to the man behind the curtain

Former AAAI President Subbarao Kambhampati articulates why LLMs can’t really reason or plan

Much of the success is a “Clever Hans” phenomenon: like the horse that appeared to do arithmetic but was actually reading its handler’s unconscious cues, the model picks up on cues in the prompt rather than genuinely reasoning.

Local vs. Global

Gary Marcus:

> current systems are good at local coherence, between words, and between pixels, but not at lining up their outputs with a global comprehension of the world. I’ve been worrying about that emphasis on the local at the expense of the global for close to 40 years …

References

Berglund, Lukas, Meg Tong, Max Kaufmann, Mikita Balesni, Asa Cooper Stickland, Tomasz Korbak, and Owain Evans. 2023. “The Reversal Curse: LLMs Trained on ‘A Is B’ Fail to Learn ‘B Is A’.” arXiv. http://arxiv.org/abs/2309.12288.
Raji, Inioluwa Deborah, I. Elizabeth Kumar, Aaron Horowitz, and Andrew D. Selbst. 2022. “The Fallacy of AI Functionality.” In 2022 ACM Conference on Fairness, Accountability, and Transparency, 959–72. https://doi.org/10.1145/3531146.3533158.