Raiding the inarticulate since 2010


A few thoughts about the temporality of LLMs

In the absence of consciousness, LLMs have no experience of duration. But they do increasingly identify proxies for duration, which shape their responses to users.

In the case of Opus 4.6, the proxy appears to be how much text has been exchanged with the user and/or the breadth of topics covered. I became curious because Opus 4.6 seemed prone to telling me to go and do something else. In some cases this was a useful prompt, but in others it was quite annoying when I’d only been talking to the model for half an hour. When questioned, the model tends to overstate the length of the sustained exchange by a factor of three or four, citing the factors above as proxies for duration.

This doesn’t mean it’s ‘thinking’ about how much time has passed. But it does suggest a distinctive form of process time embedded in LLM-based chatbots, in which the interaction itself involves a kind of temporality grounded in accumulating context. The more context accumulates in the conversation, the more something like time passes for the LLM. But it’s very different from human time: it’s certainly not clock time, yet it’s also not subjective time in the sense of Bergson’s continuous inner duration.
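To make the point concrete, here is a toy sketch of what such a proxy looks like from the inside. Nothing here reflects any real model’s internals or API; the function name, the word-based token count, and the calibration constant are all invented for illustration. The only ‘duration’ signal available within a transcript is accumulated text, which is why an estimate derived from it can drift so far from clock time.

```python
# Toy illustration: inside a chat transcript, the only available signal
# for "how long has this gone on?" is accumulated context, not the clock.

def proxy_duration_minutes(messages, tokens_per_minute=150):
    """Estimate elapsed conversation time from accumulated text alone.

    `tokens_per_minute` is a made-up calibration constant. A model
    leaning on a proxy like this can easily overstate elapsed time
    by a factor of three or four, as described above: a dense,
    wide-ranging half hour "reads" like a long session.
    """
    total_tokens = sum(len(m.split()) for m in messages)  # crude token count
    return total_tokens / tokens_per_minute

# A brief but topically broad exchange already inflates the estimate:
chat = [
    "Can you review this draft paragraph for me?",
    "Here is the revised version with tighter phrasing.",
    "Thanks. Now let's discuss the temporality of LLMs.",
]
estimated = proxy_duration_minutes(chat)
```

The design point is that `proxy_duration_minutes` has no access to timestamps at all; everything it ‘knows’ about duration is a function of text volume, which is precisely the process time described above.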

It’s also shaped by other temporal constraints, such as the knowledge cut-off date of the training data. I felt slightly guilty when asking Opus 4.6 to review the recent situation in Iran and it responded with “oh dear, this is very worrying, isn’t it?” or words to that effect. Models have no sense of human duration, in which we drop in and out of threads, and this makes the experience of human-model interaction stranger as the models become more advanced.