

A depressing fable about how ChatGPT is corroding trust in scholarship

In preparation for next week’s keynote on generative AI and the crisis of trust, I picked up a book about trust by a philosopher, whom I’ve decided not to name, when I saw it in the Tate bookshop earlier today. It began with a quote attributed to bell hooks which caught my attention:

Trust is both a personal and a political endeavour, an affirmation of our shared humanity and our collective potential for growth and transformation. By embracing trust, by fostering connections, grounded in love and compassion, we have the power to not only change our own lives but also to reshape the world around us…

I wanted to post it on my blog, so I immediately looked for a citation. I could find no result for the exact quote, but Google returned this site at the top of the results, where I found a nearly identical passage:

In the end, trust is both a personal and a political endeavor, an affirmation of our shared humanity and our collective potential for growth and transformation. By embracing trust, by fostering connections grounded in love and compassion, we have the power to not only change our own lives but also to reshape the world around us, one relationship at a time.

The problem is that this site hosts imagined responses by philosophers to the question ‘what is trust?’, produced by ChatGPT. These (genuinely quite interesting) LLM outputs were posted in April 2023, only to feature in a book published in 2024. I can find no source for the quote the author includes other than this nearly identical passage generated by ChatGPT.

The most obvious explanation is that the author decided they wanted to start the book with a quote from bell hooks. They then typed ‘bell hooks and trust’ into Google, which returned the site above as its second result. They didn’t read the introduction explaining the exercise with the LLM and instead copied and pasted the ChatGPT output into their book, without checking the source of the quote.

The irony is that I now don’t trust the rest of the book. A philosopher writing a book about trust begins it with such lazy scholarship that I struggle to trust them. I hope I’m wrong. But without wishing to personalise things, I’m tempted to use this as an example in next week’s keynote. It illustrates how LLMs are contributing to an environment in which lazy scholarship, cherry-picking a quote from a Google search, becomes much riskier given the circulation of synthetic content.