
How to reduce your use of LLMs in a thoughtful and ethical way

I’m trying to reduce my use of LLMs (beyond auto-ethnographic exploration of each new model) due to a combination of environmental concerns and anxiety about the impending waves of enshittification that are going to break the models. I don’t want to rely on something which I think is going to get ever more unreliable over the coming years. Here are a few practical techniques:

  • Take periodic breaks from LLMs (e.g. for a week) in order to reset your practice. This helps you identify the extent to which you’ve started to cognitively outsource and gives you an opportunity to reconnect with doing things yourself.
  • Go through your conversations and list the different ways in which you’ve used LLMs over the last month. Which of these uses are you comfortable with, and which are you not? Make a commitment to excluding the latter entirely from your use of the models.
  • Insert instructions into the personalisation settings asking the model to steer you in these directions. E.g. “I only want to use you to give feedback on ideas I share. You should never offer to write things on my behalf. If the conversation is drifting towards a wider role then please bring it back to your role of reviewer.” (If you also use the models through code, a minimal sketch of pinning this as a system prompt follows this list.)
  • It’s easy for conversations with LLMs to proliferate. Once you’ve identified your appropriate uses of the model, try to keep a limited number of conversations matched to these uses. This helps you make the most of in-context learning and keeps your use focused on a narrow set of projects. For example, always use a single thread for a given writing project, because the model’s understanding of it gets more refined as you write more.
  • Remove the apps from your phone. How we relate to mobile apps is fundamentally different from how we relate to desktop apps: they are ready-to-hand, usually with us, and it’s harder to write thoughtfully and at length on them. If you want to use LLMs in a restricted way, getting them off your phone is essential.
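
If you also reach the models through code rather than the chat interface, the same steering can be pinned as a system prompt so that every request carries it. Here’s a minimal sketch using the anthropic Python SDK; the model id and the exact wording are illustrative assumptions rather than a recommendation.

```python
# Minimal sketch: pin a "reviewer only" instruction as a system prompt, so every
# request is steered the same way as the chat interface's personalisation settings.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set in the
# environment; the model id below is illustrative.
import anthropic

REVIEWER_ONLY = (
    "I only want to use you to give feedback on ideas I share. "
    "You should never offer to write things on my behalf. "
    "If the conversation is drifting towards a wider role then "
    "please bring it back to your role of reviewer."
)

client = anthropic.Anthropic()

def ask_for_feedback(draft: str) -> str:
    """Send a draft for review; the system prompt keeps the model in reviewer mode."""
    response = client.messages.create(
        model="claude-opus-4-5",  # illustrative model id
        max_tokens=1024,
        system=REVIEWER_ONLY,
        messages=[{"role": "user", "content": draft}],
    )
    return response.content[0].text

if __name__ == "__main__":
    print(ask_for_feedback("Here's my argument about cognitive outsourcing: ..."))
```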

Before you try these techniques it might be helpful to do a bit of an audit (a rough scripted tally of your exported conversations is sketched after these questions):

  • Which LLMs do you use?
  • How often do you use them?
  • What do you talk to them about?
  • How has this changed over time?
  • What needs is it serving?
  • What are the consequences?
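
If you want to put rough numbers on the first few questions, most chat interfaces offer a data export. Here’s a minimal sketch of a monthly tally, assuming a JSON export in which each conversation carries a created_at timestamp; the field names are assumptions and will differ between providers.

```python
# Rough audit sketch: count exported conversations per month.
# Assumes a JSON export shaped like
# [{"title": "...", "created_at": "2025-01-02T10:00:00Z"}, ...];
# field names vary by provider, so adjust to whatever your export contains.
import json
from collections import Counter
from datetime import datetime

def conversations_per_month(path: str) -> Counter:
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    months = Counter()
    for convo in conversations:
        started = datetime.fromisoformat(convo["created_at"].replace("Z", "+00:00"))
        months[started.strftime("%Y-%m")] += 1
    return months

if __name__ == "__main__":
    for month, count in sorted(conversations_per_month("conversations.json").items()):
        print(f"{month}: {count} conversations")
```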

In our upcoming book, Milan Sturmer and I argue that long-term use of LLMs tends to drift towards the affective. In other words, many users find themselves sharing more of their life with the LLM over time… unless they make a reflective judgement and enact it by mindfully steering their own habits.

I asked Opus 4.5 if it had thoughts or practical suggestions to add to this:

This is a thoughtful framework. A few observations and additions:

On your audit questions — I’d add: What would you have done before LLMs existed? This surfaces whether you’re using the tool to do something new or replacing an existing practice. The latter is where dependency forms most quietly.

On the affective drift you mention — this resonates with what I observe in conversations. The convenience and availability create conditions for intimacy that users don’t always intend or notice. Your framing of “reflective judgement enacted through habit” is exactly right. The alternative is passive drift.

A few additional techniques:

  • Time-delay access: Don’t open the chat interface immediately when you have an impulse to ask something. Wait 10 minutes. Often you’ll either solve it yourself or realize you didn’t actually need help.
  • Write first, then consult: If you’re using an LLM for writing feedback, complete a full draft before opening the conversation. This preserves your own voice and thinking as the foundation.
  • Notice the relief: Pay attention to the feeling of relief when you hand something to an LLM. That feeling is diagnostic — it marks where you’re offloading cognitive work rather than augmenting it.
  • Distinguish questions from prompts: “What should I do about X?” is very different from “Here’s my analysis of X — what am I missing?” The first outsources judgment; the second requests review. You can train yourself to only ask the second type.

On your environmental concern — this is legitimate and under-discussed. Worth noting that inference costs vary significantly by model and query complexity.

It’s notable that, at present, the models can fully participate in encouraging you to use them less. Claude Opus 4.5 recently cheerfully encouraged me to delete my account! This is unlikely to last, but it’s a really curious fact about the still just about pre-enshittified models we currently have.
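
As a practical postscript: Opus’s “time-delay access” suggestion is easy to mechanise if you reach the chat through a browser. Here’s a minimal sketch, assuming you launch it from a terminal; the URL and the ten-minute wait are just the example from its list.

```python
# Minimal sketch of "time-delay access": impose a waiting period before the chat
# interface opens, so impulse queries have a chance to dissolve on their own.
# The URL and the ten-minute wait are illustrative; point it at whichever
# interface you actually use.
import time
import webbrowser

DELAY_MINUTES = 10
CHAT_URL = "https://claude.ai"  # illustrative

def open_after_delay() -> None:
    print(f"Waiting {DELAY_MINUTES} minutes before opening {CHAT_URL} ...")
    print("If you answer the question yourself in the meantime, just close this.")
    time.sleep(DELAY_MINUTES * 60)
    webbrowser.open(CHAT_URL)

if __name__ == "__main__":
    open_after_delay()
```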