In the absence of consciousness, LLMs have no experience of duration. But they do increasingly identify proxies for duration, which shape their responses to users.
In the case of Opus 4.6, the proxy appears to be how much text has been exchanged with the user and/or the breadth of topics covered. I became curious because Opus 4.6 seemed prone to tell me to go and do something else… in some cases this was a useful prompt, but in others it was quite annoying when I’d only been talking to the model for half an hour. When questioned, the model will tend to overstate the length of the sustained exchange by a factor of three or four, citing the factors above as proxies for duration.
This doesn’t mean it’s ‘thinking’ about how much time has passed. But it does suggest a distinctive form of process time embedded in LLM-based chatbots, in which the interaction itself involves a kind of temporality grounded in accumulating context. The more context accumulates in the conversation, the more something like time passes for the LLM, but it’s very different from human time. It’s certainly not clock time. But it’s also not subjective time like Bergson’s continuous inner sense of duration.
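For readers who think in code, here’s a toy sketch of the idea. Everything in it is hypothetical (the class, the heuristic, the tokens-per-minute rate are my own inventions, not anything from a real model or API); it simply illustrates that the only ‘temporal’ signal available to a model at each turn is the accumulated transcript, so mistaking context volume for elapsed time falls out quite naturally.

```python
# Toy illustration only: no real model or API works this way.
from dataclasses import dataclass, field


@dataclass
class Conversation:
    turns: list[str] = field(default_factory=list)

    def add(self, text: str) -> None:
        self.turns.append(text)

    def context_tokens(self) -> int:
        # Crude whitespace token count of everything said so far:
        # the only "record" of the exchange the model can see.
        return sum(len(t.split()) for t in self.turns)


def apparent_duration_minutes(convo: Conversation,
                              tokens_per_minute: int = 120) -> float:
    # Hypothetical heuristic: equate accumulated context with elapsed
    # time. A dense half-hour exchange then "feels" several times longer.
    return convo.context_tokens() / tokens_per_minute


convo = Conversation()
convo.add("user: a long, detailed question " * 60)
convo.add("assistant: an even longer, detailed answer " * 240)
print(f"roughly {apparent_duration_minutes(convo):.0f} 'minutes' of process time")
```

On this picture, a conversation that covers many topics in dense text accumulates ‘more time’ than a leisurely one, which would be consistent with the threefold or fourfold overestimates I described above.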
This process time is also shaped by other temporal constraints, such as the knowledge cut-off date for the training data. I felt slightly guilty when I asked Opus 4.6 to review the recent situation in Iran and it responded with “oh dear, this is very worrying, isn’t it?” or words to that effect, reacting as if encountering for the first time news I had long since absorbed. Models have no sense of human duration, in which we drop in and out of threads, which makes the experience of human-model interaction stranger as the models become more advanced.
