Raiding the inarticulate since 2010


The risk to authorship when using GenAI for notetaking

I’ve found conversational agents like Claude and ChatGPT valuable for accelerating the notetaking process: they help turn shortform notes into longform notes, draw out connections within them, and suggest potential implications. However, when I tried this at a conference recently, I rapidly realised how dangerous it is when you’re writing about others’ work.

When I sit through a series of talks, my notes are intended to give me points to follow up on, prompted by the work of the speakers: what stood out to me, what I need to think about, what I want to explore further. This involves attributing ideas to each speaker in a clear and unambiguous way. In contrast, the acceleration that GenAI facilitates tends to lose these references, pulling out the key points for my own thinking at the cost of sustained and reliable attribution to the authors I am responding to.

There are risks to authorship inherent in LLMs which have been widely discussed. But there are additional risks to authorship arising from how we use LLMs, which I think we need to understand carefully if we want to use these systems in scholarly ways.

This was Claude 3.5’s summary of potential solutions to this problem:

Two-stage notetaking:

First, take detailed notes with clear attributions during the conference.
Only after this, use AI to help expand and analyze your notes, being careful to maintain original attributions.

Use structured formats:

Develop a note-taking template that clearly separates speaker content from your own thoughts and AI-assisted analysis.

Attribution tagging:

Use a consistent system of tags or markers to indicate the source of each idea (e.g., [Speaker: Dr. Smith], [My Thought], [AI Analysis]).

Regular review and verification:

Periodically review your AI-expanded notes against your original shortform notes to ensure accuracy of attribution.

Emphasize query formulation:

When using AI assistants, explicitly ask for help in ways that maintain attribution (e.g., "Expand on Dr. Smith's point about X, clearly marking which parts are direct expansion and which are potential implications").

Metadata inclusion:

Include conference details, speaker information, and session context in your notes to help maintain the provenance of ideas.

Ethical guidelines:

Develop personal or institutional guidelines for the ethical use of AI in academic notetaking and writing processes.