Tfw the only sections left in your book are the difficult ones you have been putting off

In one sense Generative AI for Academics is almost finished. In another sense it is spiralling out of control, because the only bits left are the immensely tricky sections on ethics and politics which I have been determinedly avoiding. The rest of the manuscript is now just a matter of filling in a few blanks and tidying up. But I realise that my avoidance of the ethics chapter (largely, I suspect, because at least one of my internal voices insists the whole thing is a disaster environmentally and politically, raising the question of why I am writing a book encouraging people to use it) is causing the rest of the manuscript to balloon.

I’m adding stuffing elsewhere to avoid tackling the bits I’m ambivalent about, which creates more need for tidying up and takes me further away from finishing the book. This is starting to remind me of the mess I made of the second edition of Social Media for Academics, an experience I was determined to learn from. This blog post is an act of public accountability: this week I will work out how to turn the ethics chapter into something coherent and stop fiddling with the rest of the book!

Perhaps inevitably given the subject matter, I turned to Claude AI for assistance. I provided it with a detailed explanation of the chapter and the broader book, alongside (a) 6,000 words of disjointed notes on ethics and (b) the complete drafts of the previous two chapters. I asked it to provide section headings and bullet points which capture the topics in my notes, organised in a way which reflects their internal logic and builds upon the arguments of the previous two chapters. The result is pretty damn good and exactly what I needed to make this chapter tractable, after feeling like I was drowning in my own notes and unable to get traction on writing it (a rough sketch of how this step could be scripted follows the outline below):

Introduction

  • Bridge from prior discussion of principles and process view of scholarship to focus on ethics
  • Emphasize these issues arise from your own exploration of using generative AI reflectively

Section 1: Environmental Impacts

  • Energy usage contradicting climate goals of higher education
  • Individual use contributes to broader harms of large models like GPT-3
  • Explain environmental costs when introducing new tools to colleagues

Section 2: Data Ethics and Privacy

  • Lack of consent in web scraped training data
  • Potential to perpetuate inaccuracies and biases
  • Surveillance capitalism business models behind many tools
  • Transparency enables informed decisions about tradeoffs

Section 3: Algorithmic Bias and Representation

  • Risk of amplifying exclusion and marginalization
  • Hard to audit biases without transparency from creators
  • Issues extend beyond text to images, audio and video
  • Surface and mitigate biases through sustained dialogue

Section 4: Cultivating Responsible Use

  • Avoid overstating capabilities to students and public
  • Consider unintended consequences and second order effects
  • Foster collegial cultures supporting reflective use
  • Integrate ethical prompts into your own generative dialogues

Conclusion

  • Ethics of care and responsibility vital in era of generative AI
  • Process view supports using tools to amplify values, not substitute judgment
  • Intentional cultivation of reflective practices key to mitigating risks
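For anyone who wants to reproduce this notes-to-outline step outside the chat interface, here is a minimal sketch of how it could be scripted against the Anthropic Python SDK. To be clear, I did this through the Claude web interface: the file names, prompt wording and model alias below are illustrative assumptions rather than what I actually used.

    # A minimal sketch of the notes-to-outline request, assuming the official
    # `anthropic` Python SDK (pip install anthropic) and an ANTHROPIC_API_KEY
    # set in the environment. File names and the model alias are hypothetical.
    from pathlib import Path

    import anthropic

    # Hypothetical files: the disjointed ethics notes plus the drafts of the
    # two preceding chapters.
    notes = Path("ethics_notes.txt").read_text()
    chapters = "\n\n".join(
        Path(name).read_text()
        for name in ("chapter_4_draft.txt", "chapter_5_draft.txt")
    )

    prompt = (
        "I am writing a book called Generative AI for Academics. Below are my "
        "disjointed notes for the ethics chapter, followed by the complete "
        "drafts of the two previous chapters. Provide section headings and "
        "bullet points which capture the topics in my notes, organised in a "
        "way which reflects their internal logic and builds upon the "
        "arguments of the previous two chapters.\n\n"
        "NOTES:\n" + notes + "\n\nPREVIOUS CHAPTERS:\n" + chapters
    )

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model alias
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )

    # The reply arrives as a list of content blocks; the first holds the outline.
    print(message.content[0].text)

The design choice worth noting is sending everything in a single request: the model can only organise the headings to build on the earlier arguments if it sees the notes and the previous chapters together.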