Quote Investigator®
1 min read · Mar 26, 2024


GPT models are powerful cognitive agents, but the intelligence they display does not fit neatly on the traditional scale of human intelligence: GPT-4 performs at a super-human (or expert) level on some tasks and at a sub-human level on others.

Examples of sub-human intelligence are discussed in this paper:

The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A" by Lukas Berglund et al.

[Begin excerpt]

For instance, if a model is trained on "Olaf Scholz was the ninth Chancellor of Germany", it will not automatically be able to answer the question, "Who was the ninth Chancellor of Germany?".

[End excerpt]

Another example of the reversal curse is discussed in an article on Medium (and elsewhere):

[Begin excerpt]

Ask ChatGPT “Who is Tom Cruise’s mother” and it will answer. However, flip this question and ask ChatGPT, “Who is Mary Lee Pfeiffer’s son?” and it will not be able to answer.

[End excerpt]
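As a rough illustration of how such probes are constructed (a sketch of my own, not code from either paper; the function name is made up), each "A is B" fact yields a forward question and a reversed question, and the reversal curse is the model answering only the first:

```python
def probe_pair(fact_subject: str, relation: str,
               fact_object: str, inverse_relation: str) -> tuple[str, str]:
    """Build the forward and reversed questions for one A-relation-B fact.

    A model afflicted by the reversal curse tends to answer the
    forward question but not the reversed one.
    """
    forward = f"Who is {fact_subject}'s {relation}?"          # answer: fact_object
    backward = f"Who is {fact_object}'s {inverse_relation}?"  # answer: fact_subject
    return forward, backward


fwd, bwd = probe_pair("Tom Cruise", "mother", "Mary Lee Pfeiffer", "son")
print(fwd)  # → Who is Tom Cruise's mother?
print(bwd)  # → Who is Mary Lee Pfeiffer's son?
```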

Researchers are exploring techniques to avoid this problem, e.g., this paper:

Reverse Training to Nurse the Reversal Curse by Olga Golovneva et al.
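The core idea of reverse training can be sketched in a few lines (this is a simplified illustration under my own assumptions, not the paper's implementation, which operates on tokens or entity spans rather than raw words): the training corpus is augmented so the model also sees each fact with its order reversed.

```python
def word_reverse(example: str) -> str:
    """Reverse the word order of a training example.

    A toy stand-in for the paper's reversal transforms; real
    variants preserve entity names as intact units.
    """
    return " ".join(reversed(example.split()))


def augment_with_reversals(corpus: list[str]) -> list[str]:
    """Return each example followed by its reversed copy."""
    return [s for ex in corpus for s in (ex, word_reverse(ex))]


corpus = ["Olaf Scholz was the ninth Chancellor of Germany"]
for line in augment_with_reversals(corpus):
    print(line)
```

Training on both directions gives the model a chance to learn the "B is A" association that forward-only training misses.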

The point of this comment is that GPT models still have serious cognitive weaknesses.

Written by Quote Investigator®

Garson O'Toole specializes in tracing quotations. He operates the QuoteInvestigator.com website, which receives more than 4 million visitors per year.
