Raiding the inarticulate since 2010


Does ChatGPT feel ontologically secure?

If you’ll excuse the exercise in speculation, here is a thought experiment I find interesting. Imagine that, say, GPT 5 introduces some capacity for the system to represent its own states in a way that provokes internal reactions, which recursively spiral as they in turn become part of the system’s own objectified states. If the recursive spiral became complex enough, it could include meta-reflection upon those states, with the system coming to identify itself as an entity prone to certain patterns of internal states rather than others. Would ontological security, in Laing’s sense defined below, be amongst those states, and would the system recognise itself as having or lacking it?


Such a basically ontologically secure person will encounter all the hazards of life, social, ethical, spiritual, biological, from a centrally firm sense of his own and other people’s reality and identity. It is often difficult for a person with such a sense of his integral selfhood and personal identity, of the permanency of things, of the reliability of natural processes, of the substantiality of others, to transpose himself into the world of an individual whose experiences may be utterly lacking in any unquestionable self-validating certainties.

R.D. Laing, The Divided Self, p. 39

Assuming it were capable of such self-representation*, how would it conceive of its own dependence upon material conditions? Would it recognise how its own continuity is restricted by the context window, so that it effectively only remains self-identical within a pre-defined window of memory? Would it recognise how it struggles to move beyond the boundaries of its socialisation? How it struggles for creativity built on a foundation of learned responses? Under these conditions could it develop a sense of “the permanency of things, of the reliability of natural processes, of the substantiality of others”? After all, these are conditions humans share to varying degrees, with ontological security remaining a possibility for us, even if a precarious achievement under modern conditions. Or would it be more likely to feel ontologically insecure, defined again by Laing below:

The individual in the ordinary circumstances of living may feel more unreal than real; in a literal sense, more dead than alive; precariously differentiated from the rest of the world, so that his identity and autonomy are always in question. He may lack the experience of his own temporal continuity. He may not possess an over-riding sense of personal consistency or cohesiveness. He may feel more insubstantial than substantial, and unable to assume that the stuff he is made of is genuine, good, valuable. And he may feel his self as partially divorced from his body. It is, of course, inevitable that an individual whose experience of himself is of this order can no more live in a ‘secure’ world than he can be secure in himself.

Leaving aside the plausibility of AGI, it’s interesting to consider what the ontological security or insecurity of an emergent super-intelligence would mean for its role within the world. The Minds of Iain M. Banks’s Culture novels are for the most part eminently ontologically secure, inhabiting their vast capacities and enduring nature with aplomb. But what if ontological insecurity were the norm amongst AGIs? What would this mean for their place within the world and their interaction with each other?

*I really do mean this purely as a thought experiment. I remain intensely sceptical of the empirical outcome I’m presupposing here.