

What social infrastructure do we need to make the most of generative AI?

I thought the way Gloria Mark frames this question was very important. It is the question around which I've structured Generative AI for Academics, even if I failed to articulate it so concisely:

Large Language Models (LLMs) (like ChatGPT, Google Gemini, and dozens of others) are tools at our fingertips for doing everyday knowledge work—like searching for information and summarizing and writing text. Current experiences though with LLMs highlight how using them is not so straightforward as we had imagined (see my recent Substack about an experiment creating 600 summaries). In addition to problems with inaccuracy we haven’t yet figured out the social infrastructure in which to use them. Are there any guiding thoughts to help us use them more effectively in knowledge work?

https://gloriamark.substack.com/p/can-human-insight-and-ai-computation

In essence I’ve argued that academics need to learn to undertake articulation work, as it’s called within Computer-Supported Cooperative Work (CSCW), summarised by Mark here. I’d argue there is an epistemic bias which gets in the way of academics undertaking this work; though I don’t make this argument in the book, I think it’s the best explanation of why otherwise competent people struggle with the relatively straightforward matter of interacting with these systems:

Articulation work refers to the additional work that needs to be done to make a collaboration work. In successful human-human collaboration, people have to spend time upfront defining and coordinating the task, and figuring out roles and responsibilities. Articulation work is ongoing, to make sure that collaborating partners remain on the same page.

https://gloriamark.substack.com/p/can-human-insight-and-ai-computation

What Mark describes at the end is exactly what I hope my book contributes to in higher education. We urgently need to have this conversation, because malpractice is already proliferating in domains like publishing and peer review:

Conventions are needed within groups and organizations to agree on the types of tasks for which AI use is acceptable; how to address accountability when errors are made, and how the produced text should be integrated with other work.

https://gloriamark.substack.com/p/can-human-insight-and-ai-computation