Great article.
Ray Kurzweil predicted that an AI system would pass the Turing Test by 2029. He was one of the few to make such an audacious prediction with such a short timeline back in the early 2000s.
Kurzweil joined Google to work on his approach, which is outlined in "How to Create a Mind". But the strategy he followed was not really based on LLMs, and I do not think he said much about next-word prediction in LLMs in the past.
In "The Singularity Is Near" Kurzweil emphasized reverse-engineering the brain to get new ideas. That may still be an essential approach if next-word-prediction with massive LLMs somehow stalls on the path upward to superhuman cognitive capabilities.
Ilya Sutskever said during an interview that he thought the chess-playing capabilities of an LLM might stall, because those capabilities come from reading and digesting transcripts of games played in the past, and currently most recorded games have human participants.
I guess you could feed the system AlphaZero game transcripts instead, but that would still cap its capabilities. It might stall at AlphaZero's level.
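To make the point concrete, here is a minimal sketch (my own illustration, not anything from the article or from Kurzweil/Sutskever) of what "training on game transcripts" amounts to: each game is just a move list, and the model only ever sees (prefix, next move) pairs drawn from games that were actually played. Names like `make_examples` and the sample games are hypothetical.

```python
# Sketch: turning game transcripts into next-move-prediction training examples.
# The model can only imitate the distribution of moves present in these
# transcripts, whether they come from humans or from AlphaZero self-play.

def make_examples(games):
    """Yield (prefix, next_move) pairs from move-list transcripts."""
    for moves in games:
        for i in range(1, len(moves)):
            prefix = " ".join(moves[:i])
            yield prefix, moves[i]

games = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5"],        # a human game fragment
    ["d4", "Nf6", "c4", "e6", "Nc3", "Bb4"],  # could equally be engine self-play
]

for prefix, target in make_examples(games):
    print(f"{prefix!r} -> {target!r}")
```

Whatever games you put in `games` sets the ceiling: swap in AlphaZero transcripts and the targets are stronger moves, but the model is still imitating a fixed corpus rather than improving through play.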