
What Large Language Models Really Are: Not Minds, Just Math with Tools

LLMs like ChatGPT are often described as if they think. They don’t. At least not quite like humans. Which may not be a bad thing, given the kind of stuff we humans have conjured up over the years.

Back in 2011, Daniel Kahneman popularized System 1 and System 2 thinking in Thinking, Fast and Slow:

  • System 1: fast, intuitive, automatic
  • System 2: slow, deliberate, reasoned

LLMs are pure System 1 engines. They don’t reason. They don’t understand. They predict the next token … that’s it.
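To make "predict the next token" concrete, here is a minimal sketch of the core generation loop. It uses the Hugging Face transformers library and GPT-2 purely as an illustration (my choice, not anything named in this post), and greedily picks the single most probable token at each step:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

    for _ in range(10):
        with torch.no_grad():
            logits = model(ids).logits        # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()      # take the single most probable next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

    print(tokenizer.decode(ids[0]))           # the "answer" is just ten greedy guesses in a row

Chat models layer sampling, instruction tuning, and safety filters on top, but the core loop is exactly this: score every possible next token, pick one, append it, repeat.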

Every “intelligent” response is just a string of highly probable guesses, step by step, token by token. Recent work on agentic LLMs makes this a bit more interesting.

By plugging LLMs into tools for reasoning, retrieval, symbolic logic, and interaction, we build the appearance of System 2 thinking:

  • Step-by-step prompting
  • Calling calculators or search engines
  • Planning with external tools
  • Interacting with other agents

This isn’t true deliberation. It’s orchestration. We’re layering deliberate behavior on top of probabilistic word prediction.
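As a sketch of what that orchestration looks like, here is a toy ReAct-style loop. Everything in it, including the fake_llm stand-in for a real model call, the calculator tool, and the prompt format, is an illustrative assumption rather than anything from this post. The model only ever emits text; it is the surrounding loop that notices a tool request, runs the tool, and feeds the result back in.

    import re

    def fake_llm(prompt: str) -> str:
        """Stand-in for a real model call: first a tool request, then a final answer."""
        if "Observation:" not in prompt:
            return "Action: calculator(17 * 24)"
        return "Final Answer: 17 * 24 = 408"

    def calculator(expression: str) -> str:
        a, b = (int(x) for x in expression.split("*"))   # deliberately tiny: handles "a * b" only
        return str(a * b)

    def run_agent(question: str, max_steps: int = 5) -> str:
        prompt = f"Question: {question}\n"
        for _ in range(max_steps):
            reply = fake_llm(prompt)                          # System 1: one probabilistic completion
            match = re.match(r"Action: calculator\((.+)\)", reply)
            if match:                                         # the loop, not the model, executes the tool
                prompt += f"{reply}\nObservation: {calculator(match.group(1))}\n"
                continue
            return reply                                      # no tool requested: treat the text as the answer
        return "No answer within the step budget."

    print(run_agent("What is 17 * 24?"))   # -> Final Answer: 17 * 24 = 408

All the “deliberation” lives in the plain Python loop; the model itself still just completes text.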

The magic of modern LLMs isn’t intelligence. It’s composition, blending fast token prediction with structured workflows and external tools. They’re not minds. They’re language interfaces made powerful through math, scale, and tool use.

Understanding this doesn’t undercut them. It makes us better at using them.
