Raiding the inarticulate since 2010


Against an instrumentalist understanding of AI: critical realism and conceptualising artificial intelligence

This is a useful concept from Andrew Dryhurst in a recent paper in JCR. I’ve been prone to arguing for the same framing by talking about the need to historicise AI, in terms of a broader history of digitalisation and then platformisation. I think Dryhurst’s framing helps me account for how a particular understanding of AI emerges from a failure to historicise it, while also making it more difficult to do so in the future:

Traditionally, a large amount of philosophical functionalism has pervaded the AI space (Bryson 2019; Searle 1984), which has served to underpin an instrumentalist understanding of AI technology in much of the social science literature on the topic. Instrumentalism here refers to AI being understood as a tool and solely in terms of what it does. This is of course necessary at a certain level, given the wide-reaching scope of AI-use cases, the diversity of models and training sets, and the opacity that frequently surrounds AI’s societal deployment (O’Neil 2017). Nevertheless, instrumental notions of AI are inescapably presentist in their analytical scope, and it is important to consider that different AI are themselves embedded in an enormous variety of material relations and processes. AI are constructed and deployed by agents who are imbued with their own structural and institutional contexts, interests, ideals, and situational logics. A particular company’s AI systems are necessarily intertwined with the dynamics of (inter)national regulations, supply chains, and (national) accumulation regimes, as well as corporate agents’ reflexive and culturally conditioned actions in and through time. That is, AI are open complex systems embedded in other open complex systems.

https://www.tandfonline.com/doi/full/10.1080/14767430.2023.2279950#abstract

I think you can make this point without the CR vocabulary, but it is an important one, and it is powerfully made here:

there is a research gap to be filled through tracing AI’s conceptual and material development in relation to the morphogenetically derived systemic imperatives traversing the political economy of the Internet and its history. For example, the ubiquitous deployment of AI models across all aspects of society presupposes questions about attribution concerning the datasets that are fed into different models; the transparency of data collection and processing; and the complex regulatory challenges that widescale AI deployment creates

And this is exactly what I’m interested in addressing, particularly the notion of models as cultural technologies, even if I arrived there through a slightly different route:

Similarly, the recursive and emergent consequences of people’s interactions with powerful AI models across industry and society make the models akin to cultural substrates from which particular worldviews may be inscribed and cultivated. To paraphrase Marshall McLuhan, the model may well be the message (Bratton and Agüera y Arcas 2022). All of these connote significant economic and social outcomes, and also exemplify a situation where the rise of powerful AI companies, possessive of their own intellectual property, datasets, and modelling practices, ought clearly to be situated within the accumulation imperatives and systemically persistent dynamics shaping the Internet’s development in capitalism because they are intertwined with and shaped by AI’s regulation and deployment as well.