Raiding the inarticulate since 2010


Mobile generative AI as the next stage of surveillance capitalism

There was an extremely interesting passage in the Qualcomm* CEO’s interview with the FT. In essence, I think he’s saying that the consumer economics of generative AI won’t work until it runs on mobile devices, but that a mobile device will never have enough compute to match the performance of a cloud-based model. This means (a) the local LLM will be parasitic upon the cloud-based LLM in ever more clever ways and (b) it will rely on contextual data to support the inferences on which this relationship depends:

When you think about AI models, they’re coming to devices. A number of different models are being ported to Snapdragon [Qualcomm’s mobile chip platform]. I think [mobile] platforms are becoming more open, with more potential for innovation. We [think] a hybrid model is going to be important, especially if you look at the high cost and high energy consumption of the data centres with gen AI. You want to be able to make use of distributed computing. A lot of the models that you see today are running on the cloud. But if there’s an inference version of that model [installed] in the device, the device will give the cloud model a head start. When you make a query on a device, it is way more precise, because the device has real context: where is Tim? What was Tim doing before? So the result is that this hybrid model is cheaper. Because when you leverage the computation on the device, the query is more precise and it’s actually a more efficient way of doing computing [than running AI apps in the cloud].

https://www.ft.com/content/dbc0984b-4801-4aeb-bcab-480704c34161
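The hybrid pattern he describes, in which a small on-device model attaches local context to a query before handing it to a larger cloud model, can be sketched roughly as follows. This is purely illustrative: every class and function name here is my own invention, not Qualcomm’s or anyone else’s API, and the “models” are stubs standing in for real LLMs.

```python
# A minimal sketch of the hybrid local/cloud inference pattern described in
# the interview. All names are hypothetical; the "models" are placeholders.

from dataclasses import dataclass


@dataclass
class DeviceContext:
    """Contextual data only the handset has: 'where is Tim? what was Tim
    doing before?'"""
    location: str
    recent_activity: str


class OnDeviceModel:
    """Stands in for a small local LLM running on the device."""

    def enrich(self, query: str, ctx: DeviceContext) -> str:
        # The local model's job is inference over contextual data: it
        # rewrites the query so the cloud model gets a "head start" and
        # needs less back-and-forth (and less compute) to answer well.
        return (f"{query} (user is at {ctx.location}; "
                f"was previously {ctx.recent_activity})")


class CloudModel:
    """Stands in for the large cloud-hosted model."""

    def answer(self, enriched_query: str) -> str:
        return f"cloud answer to: {enriched_query}"


def hybrid_query(query: str, ctx: DeviceContext) -> str:
    local = OnDeviceModel()
    cloud = CloudModel()
    # Context is attached on-device; only the enriched, more precise query
    # crosses the network. This is the claimed efficiency of the hybrid
    # model, and also why it is so data-hungry: the enrichment step is only
    # as good as the contextual data the device collects.
    return cloud.answer(local.enrich(query, ctx))


ctx = DeviceContext(location="Manchester", recent_activity="reading the FT")
print(hybrid_query("find me a coffee shop", ctx))
```

Note that the surveillance dynamic lives in the `enrich` step: the better the device knows the user, the cheaper and more precise the cloud query becomes, which is exactly the incentive for another round of datafication.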

This suggests to me that an early form of the digital daemon is coming soon, driven by a desire to jump-start the decelerating upgrade cycle of smartphones. But access to these superpowers is going to be extremely data-hungry, leading to another round of competitive datafication. It’s all so depressingly familiar and yet weirdly novel at the same time.

*Does anyone else think this firm sounds like a qualitative methods in communications research conference?