
What the gardener ruining my shrubs illustrates about prompting LLMs

I came home recently to find that my request to a gardener to “cut back the shrubs” had led him to absolutely decimate them.

What does ‘cut back’ mean? I meant a light trim of the overgrowth, leaving the shrubs otherwise intact. It was a text I sent while travelling and listening to a podcast, without putting much real thought into it. To him it meant substantially reducing the size of the shrubs. The problem was my instruction rather than the gardener’s execution: I gave a vague, misleading request that was easy to misinterpret.

I would suggest this is what many users do with LLMs, and it often leads them to blame the machine for what was actually their own failure to provide adequate instructions. Prompting requires clarity about your intentions, which is something most of us lack at least some of the time.
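The difference is easy to see in practice. Here is a minimal sketch, assuming the OpenAI Python client; the model name, the example text and the prompts are illustrative assumptions rather than recommendations. The point is only the contrast between a vague instruction and one that states the intent explicitly:

    # A minimal sketch, assuming the OpenAI Python client (openai >= 1.0).
    from openai import OpenAI

    client = OpenAI()

    paragraph = "The shrubs at the front of the house have become rather overgrown this summer."

    # Vague, like my text to the gardener: the model has to guess what "tidy up" means.
    vague_prompt = "Tidy up this paragraph."

    # Explicit about intent and constraints, leaving far less room for misinterpretation.
    explicit_prompt = (
        "Lightly edit this paragraph for grammar and clarity. "
        "Keep my wording and argument intact; do not shorten or restructure it."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name, for illustration only
        messages=[{"role": "user", "content": explicit_prompt + "\n\n" + paragraph}],
    )
    print(response.choices[0].message.content)

Swap in the vague prompt and you get whatever the model decides “tidy up” means, just as my gardener decided what “cut back” meant.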