Cold Water AGI

posted June 8, 2025

Gary Marcus recently posted an insightful piece about the current state of "advanced" thinking in AI, entitled A knockout blow for LLMs?. He discusses a June 2025 research paper from Apple called The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity, an enormous title that basically boils down to this: "reasoning" in AI is not as great as these companies want you to believe.

Marcus's piece is not dense or unapproachable. If you have the slightest interest in AI and how the ongoing advancements are proceeding, it's a very thoughtful discussion. A friend of mine called it "Some good cold water on how close AGI." Meaning, we're not that close to Artificial General Intelligence, despite what some of the hype may lead you to believe. The pattern matching in current LLMs is incredibly impressive, but "thinking" is the wrong word for it - a 7-year-old has better reasoning skills.

I do not consider myself an AI evangelist or expert in any regard. I am a fan of it and use it regularly, but I also agree with a great many critiques of it. Mostly I try to remind myself that it is early days. Of course OpenAI and Claude and Google and all the rest want you to believe they have the best models - they want you to pay them to use the software! It's all marketing! But that doesn't mean it's not also impressive! It just has to be taken with a grain of salt, as all marketing does.

Marcus and Apple make a great point about the state of things - they are not wrong. AGI is not impending, and AI "thinking" has a long way to go (for now). That's good for us, because we still need to figure out how these tools fit into our landscape from an ethical standpoint and how we can regulate them.