Will generative AI kill discovery?

This is an interesting piece by Rob Horning, riffing on Noah Smith’s suggestion that:

because LLMs are great at summarizing the broad corpus of human written knowledge, they can also act as a how-to manual for any task that can be explained in writing — much like YouTube acts as a how-to manual for any task that can be demonstrated with a short video. LLMs won’t just explain how to do things, either; they’ll draw on the collective accumulation of human wisdom to offer ideas on how to solve problems.

https://noahpinion.substack.com/p/why-americans-fear-the-ai-future

Horning raises the same thought that has preoccupied me recently, far more articulately than I have managed:

But the gap between imagining and executing is not inert dead time, anymore than the process of reading is a wasteful inefficiency arresting the flow of information. That gap is where engagement occurs and is sustained. Chatbots and other content generators are sometimes promoted as offering shortcuts around the work of being invested in something, so you can just get to the thing itself. This makes sense only for things you aren’t interested in thinking about in the first place. What they produce won’t be a revelation; it instead marks the absence of a process of discovery, an unmarked grave for a moment of curiosity.

https://robhorning.substack.com/p/april-4

Consider the affordance of automatically generating code. I had recently been contemplating dusting off my meagre Python skills which, as someone committed to Phil Brooker’s approach, meant finding a real-world project I actually cared about. Cross-checking reference lists against Google Scholar was exactly such a project. I could vaguely see the underlying logic of how it would work, but it would have taken a lot of tinkering to put together. Yet when I asked ChatGPT to assist me, it immediately wrote code which, with one major exception probably related to the ambiguity of my prompt, could do the job for me. So what happens to the process of discovery which was precisely the pedagogical rationale for identifying a real-world project to tinker with in the first place?
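For what it’s worth, here is a minimal sketch of the sort of script involved, assuming the third-party scholarly package as an unofficial Google Scholar client; the file name refs.txt, the matching threshold, and the fuzzy-matching approach are my illustrative assumptions rather than a record of what ChatGPT actually produced:

```python
# Sketch: cross-check a reference list against Google Scholar.
# Assumes the third-party "scholarly" package (pip install scholarly)
# and a refs.txt file with one reference title per line. Illustrative only.
from difflib import SequenceMatcher

from scholarly import scholarly  # unofficial Google Scholar client


def best_scholar_match(title: str) -> str | None:
    """Return the title of the top Google Scholar hit, or None."""
    try:
        result = next(scholarly.search_pubs(title), None)
    except Exception:  # network errors, rate limiting, etc.
        return None
    return result["bib"].get("title") if result else None


def check_references(path: str = "refs.txt", threshold: float = 0.9) -> None:
    with open(path, encoding="utf-8") as f:
        titles = [line.strip() for line in f if line.strip()]
    for title in titles:
        match = best_scholar_match(title)
        ratio = SequenceMatcher(None, title.lower(), (match or "").lower()).ratio()
        status = "OK" if match and ratio >= threshold else "CHECK"
        print(f"[{status}] {title!r} -> {match!r}")


if __name__ == "__main__":
    check_references()
```

The fuzzy matching via difflib is there because Scholar titles rarely match a reference list verbatim; deciding where to set that threshold is exactly the sort of tinkering the shortcut skips over.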

He offers the intriguing idea of post-content to describe this dynamic. It chimes with the argument we made in The Public and Their Platforms that content is something around which groups can coalesce as groups: items of content become objects of attention and deliberation, however variable the quality of that deliberation might be on a platform like Facebook or Twitter. The immediacy of ChatGPT creates a rush towards the output which obviates the meaning or significance of the content on which we used to rely to get to that output:

Generative AI could save us from having to think about unfulfilling obligatory tasks so we have more cognitive power left for the “real” content we want to think about. To put that another way, AI’s usefulness lies in convincing us that it’s not a tool that we are obligated to use but an “intelligence” that saves our own. But if that’s the case, it will be oriented toward producing our ignorance as evidence of its efficacy. It may work to habituate us to an immediacy that makes all thinking appear arduous, eliminating the perceived difference between tasks worth automating and those worth engaging. If “content” denotes something that produces engagement, generative AI may be understood less as a content generator and more as a content destroyer. Instead of content, it just yields output. Instead of everyone becoming “content creators,” we’ll all become content to be inputters.

https://robhorning.substack.com/p/april-4