For most people, ‘LLM’ still equates to ‘chatbot’, but this is rapidly changing. Consider the different forms in which you might casually encounter an LLM in 2026:
- The LLM as a backend/middleware layer in the stack
- Embedded in Office software or creative software, offering (near) frictionless automation
- Embedded in generative search, either new apps or legacy services
- Embedded into voice assistants like Alexa or Siri (likely to grow substantially)
- Performing on a social media platform as a live-action character (e.g. Grok)
- Locked into an AI companion app designed to induce attachment behaviours*
- Utilised as an agent in coding software like Codex or Claude Code
- Living its best life in an AI village as a quasi-autonomous actor in a persistent community
The problem with talking about ‘AI’ is that it encompasses a wide range of things, from the scripted interactions of systems like legacy Alexa through to machine learning and social media algorithms, alongside LLMs. The problem with talking about ‘LLMs’, I’m increasingly realising, is that it too encompasses a wide range of things, whereas what most people mean most of the time is the chatbot. But the chatbot couldn’t be more different from, say, Copilot 365, and this makes it remarkably difficult to have fine-grained conversations about working and living with LLMs if we don’t distinguish between the radically different artifacts in which they come to be embedded.
*There’s a long and convoluted argument about why this is different from a chatbot, but I’m increasingly convinced it is. I’m debating whether dedicated tutoring software should also be seen as distinct from the chatbot.
