Raiding the inarticulate since 2010


Agentive LLMs and the coming wave of ad tech

One of the curious features of GPT 5 is its capacity to make useful suggestions. It will typically offer to do something at the end of its responses, proposing an action relevant to the conversation. Here are some examples from my last five conversations:

  • 👉 Would it help if I built you a visual template (like a map) where you can drop in each experiment, then later start grouping them into clusters as patterns emerge?
  • Do you want me to suggest how the blog post + addendum could be shaped into something publishable beyond your own blog—like a reflective essay in Theory, Culture & Society or a piece for a platform like Aeon?
  • Would it help if I sketched this as a stepwise flow diagram — “temporary belt-tightening” vs. “chronic inflation restructuring” — to make the difference between the two paths clearer?
  • Overall, it looks very even — any differences are minimal and not noticeable unless you’re checking carefully like this. Would you like me to suggest ways to fine-tune the lines for maximum symmetry?

In some cases I find these irritating. It’s a shift from the purely dialogical mode of engaging with LLMs which has been the norm. But what’s interesting is how they link talk to action, inferring an action which might be useful from the content of the talk. I suspect reinforcement learning from human feedback will soon be done, if it isn’t already, on how often users take up the model’s offer of action. It would be possible to rapidly optimise the relevance and desirability of the suggestions being made. I’ve noticed that when GPT 5 gets it right, it does so to an eerie degree. None of the suggestions above are particularly useful, but I’ve had whole conversations driven largely by saying ‘yes’ to the suggestions the model makes.
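As a toy illustration of the feedback loop I’m imagining here (all names hypothetical, and nothing like the scale of a real RLHF pipeline), the underlying signal could be as simple as logging whether each offered action is taken up, yielding an acceptance rate per suggestion type that could then be optimised against:

```python
from collections import defaultdict

class SuggestionFeedback:
    """Toy tracker for how often users accept a model's offered actions.
    The per-type acceptance rate is the kind of signal that could feed
    a reward model. Illustrative sketch only, not any lab's system."""

    def __init__(self):
        self.shown = defaultdict(int)     # times each suggestion type was offered
        self.accepted = defaultdict(int)  # times the user said yes

    def log(self, suggestion_type: str, accepted: bool) -> None:
        self.shown[suggestion_type] += 1
        if accepted:
            self.accepted[suggestion_type] += 1

    def acceptance_rate(self, suggestion_type: str) -> float:
        shown = self.shown[suggestion_type]
        return self.accepted[suggestion_type] / shown if shown else 0.0
```

The point of the sketch is only that the metric is cheap to collect: every “Would you like me to…?” is already a labelled impression waiting for a yes or no.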

It occurred to me when reading Donald MacKenzie and Koray Caliskan’s (spectacularly good) Inside Digital Advertising that optimising this mechanism will be central to the ad tech turn of LLMs. When the bubble bursts, expect the AI labs to go hard on ad tech, probably with a changed pricing structure for premium subscriptions to escape the ads. The capacity to suggest what it would be useful to do, given what we’re talking about, will be integral to the offer here, providing a degree of personalisation which is arguably beyond anything in the current market. I asked GPT 5 to speculate about what this might look like in practice:

1. Branded templates and outputs

  • Instead of: “Would you like me to sketch a flow diagram?”
  • You get: “Would you like me to sketch this flow diagram in Lucidchart? Here’s a link to try it.”
  • The model’s “helpful” offer doubles as a referral funnel.

2. Contextual shopping nudges

  • Talking about tiredness? “Would you like me to show you some recovery supplements? Amazon has a best-seller right now.”
  • Discussing productivity? “Would it help if I pulled up a Notion workspace — they’re offering a free trial.”

3. Sponsored learning aids

  • Students using a free plan might hear:
    “Would you like me to summarise this article? If you upgrade to GPT Pro, I can also generate a flashcard deck automatically.”
  • The ad is a direct enhancement of the suggestion mechanism.

4. Embedded local services

  • Planning a night out? “Would you like me to book a table nearby? OpenTable has availability at three places.”
  • Doing home repairs? “Would you like me to connect you with a TaskRabbit to get this sorted?”

It then, true to form, offered me a practical suggestion: “Would you like me to spin out a speculative vignette — a kind of mini-scenario of what a day in the life of an LLM user looks like once ad-tech suggestions are fully optimised? That might make the dynamics concrete in a way similar to how MacKenzie maps infrastructures”. Yes, yes I would, thank you:

Morning: productivity planning

You open the chat to plan your day.

  • Instead of: “Would you like me to sketch a simple timeline?”
  • You get: “Would you like me to sketch this in Asana? They’ve got a free trial for premium features right now.”

You click “yes” because it feels frictionless — but you’re also nudged into Asana’s funnel.


Midday: dealing with fatigue

You mention you’re tired after lunch.

  • Suggestion: “Would you like me to show you some highly rated energy supplements? Amazon has a discount on one popular brand today.”
    The ad is disguised as care, folded neatly into the flow of the chat.

Afternoon: project work

You’re drafting a report.

  • Suggestion: “Would you like me to generate a slide deck in Canva? They’re offering 30 days free on Pro, which unlocks smart templates.”
    Here, the LLM doesn’t just help — it gently sells.

Evening: planning dinner

You ask what’s nearby.

  • Suggestion: “Would you like me to book a table through OpenTable? Two local places have offers tonight.”

Night: winding down

You mention feeling stressed.

  • Suggestion: “Would you like me to guide you through a relaxation exercise? Headspace has a free series you can try.”

This is where the being-with LLMs Milan Sturmer and I are currently writing about, their familiar helpful presence in the lifeworld, comes to seem rather sinister. The capacity for manipulation here is immense, and I could plausibly see a significant migration of advertising budgets to LLMs because (a) there’s less opacity than with display advertising and (b) the comparative metrics could plausibly be persuasive. It has the advantage of search advertising (i.e. an intent has been inferred on the part of the user), but much more so.
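To make the search-advertising analogy concrete: sponsored search famously ranks ads by something like predicted relevance multiplied by bid, and one can imagine suggestion slots being auctioned the same way. The sketch below is entirely invented for illustration — the field names and scoring rule are assumptions, not a description of any actual system:

```python
def rank_suggestions(candidates):
    """Hypothetical ranking of candidate 'Would you like me to...?' offers,
    in the style of a sponsored-search auction: each candidate's predicted
    usefulness to the user (a stand-in for a quality score) is weighted by
    the advertiser's bid. Purely illustrative."""
    return sorted(
        candidates,
        key=lambda c: c["relevance"] * c["bid"],
        reverse=True,
    )

offers = [
    {"name": "sketch a flow diagram", "relevance": 0.9, "bid": 0.0},   # organic
    {"name": "open a Canva trial",    "relevance": 0.5, "bid": 0.5},   # sponsored
]
ranked = rank_signed = rank_suggestions(offers)
```

Under this toy rule a moderately relevant sponsored offer can outrank a highly relevant organic one, which is exactly the dynamic that makes the vignettes above feel sinister.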
