Raiding the inarticulate since 2010


Regular Claude users are slightly less likely to have personal conversations with it

While this is a small effect reported in Anthropic’s recent paper, it poses a bit of a challenge to an argument I’ve made in an upcoming book: that transactional use (asking an LLM to do something) will tend to slide into affective use (talking to it about personal things) over time, because transactional use necessarily inculcates the habit of sharing what’s going on in your life with the chatbot:

In general, the most seasoned Claude users employ it more often for higher education tasks and less often for personal use cases. For example, people who have been using Claude for 6 months or more have 10% fewer personal conversations and a 6% higher education level reflected in their inputs. Most strikingly, people in this higher-tenure group have a 10% higher success rate in their conversations, an association that is not explained by their task selection, country of origin, or other factors. While this could reflect sophistication of early adopters, it could also be evidence of learning-by-doing, where people get better at using Claude through experience.

The more you talk about your work context, which is necessary for transactional tasks to be effective, the more you talk about your context full stop. I still think I’m right about this mechanism, but the finding offers a useful caution against overclaiming its empirical outcomes.

It’s interesting how Anthropic are carving up the interaction types, because these categories are liable to prove extremely sticky if they are used to structure the initial years of data collection:

Since our first report, we have classified conversations into one of five interaction types—directive, feedback loop, task iteration, validation, and learning—which we group into two broader categories: automation and augmentation.⁴ Figure 1.3 shows that augmentation in Claude.ai increased slightly. This was driven by small bumps in validation and learning patterns. In Appendix Figure A.3, we show that automation decreased sharply in the 1P API data.
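The taxonomy in that passage can be read as a simple two-level classification. A minimal sketch of the grouping, assuming (based on Anthropic’s earlier Economic Index reporting) that directive and feedback-loop interactions fall under automation while task iteration, validation, and learning fall under augmentation; this is an illustration, not Anthropic’s actual classification pipeline:

```python
# Illustrative mapping of the five interaction types onto the two broader
# categories described in the quoted passage. The assignment of types to
# categories follows Anthropic's published taxonomy; the code itself is
# purely a sketch, not their measurement apparatus.
INTERACTION_CATEGORIES = {
    "directive": "automation",       # user delegates the task outright
    "feedback loop": "automation",   # model works, user supplies corrections
    "task iteration": "augmentation",  # user and model refine output together
    "validation": "augmentation",      # user checks their own work with the model
    "learning": "augmentation",        # user asks the model to explain something
}

def categorise(interaction_type: str) -> str:
    """Map a classified interaction type to its broader category."""
    return INTERACTION_CATEGORIES[interaction_type]
```

The stickiness worry then becomes concrete: any trend line built on `INTERACTION_CATEGORIES` is only as revisable as the dictionary itself, and longitudinal comparison creates pressure never to revise it.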