
Prompt engineering is an expression of cultural capital. Some (critical) notes on Ethan Mollick’s Co-Intelligence book

I hesitate to use the term ‘prompt engineering’ because it carries a lot of baggage. It suggests this is a precise skill constituting a form of expertise, lending itself to being framed as the basis for a new occupation for the 21st century. There’s a lot of similarity between the ‘prompt engineering’ discourse and how ‘data scientists’ were talked about in the early 2010s. Conversational agents will now do this precise work for you in an unsettlingly effective way.

Instead I think we should frame prompting as an expression of cultural capital. Much as the capacity to manipulate symbols in socially valued ways can help you ‘get on’ in organisations, it can help you get on with conversational agents. These systems are fundamentally rewarding articulacy, in a way parallel to the other arenas where articulacy is rewarded. Being able to explain your intentions, provide a plan of action and share goals enables you to use conversational agents effectively. It’s what enables us to parameterise, as I call it in Platform & Agency, but which Ethan Mollick describes more straightforwardly as “giv[ing] the LLM some context and constraints by telling it who it is and what it should do”. It’s the difference between elaborate prompting and, as Mollick observes, pasting in the exact question you were asked and expecting ChatGPT to answer it for you (loc 1771).
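To make that difference concrete, here is a minimal sketch using the OpenAI Python client. The model name, the prompts and the sociological framing in them are my own illustrative assumptions, not anything drawn from Mollick's book:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# Pasting in the exact question and expecting an answer:
bare = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "user",
         "content": "Write me an abstract about digital inequality."},
    ],
)

# Parameterising: telling the system who it is, what the intention is
# and what the constraints are, before asking for anything.
parameterised = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": ("You are an experienced digital sociologist writing "
                     "in an accessible register for an interdisciplinary "
                     "audience.")},
        {"role": "user",
         "content": ("I'm drafting a 250-word conference abstract on "
                     "second-order digital inequality: how the gains from "
                     "conversational agents accrue unevenly with users' "
                     "cultural capital. Foreground the sociological framing, "
                     "avoid hype, and end with one open empirical question.")},
    ],
)

print(parameterised.choices[0].message.content)
```

The articulacy is doing the work in the second call: it succeeds or fails on the quality of the explication, not on any engineering trick.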

My experience is that the literacy I’ve developed as a long-term blogger is exceptionally powerful for using conversational agents effectively. I’ve had 20 years of practice at taking vague intuitions or ideas and quickly explicating them in long form, which enables me to deploy conversational agents in a range of ways that would be much more difficult and/or time-consuming without this literacy.

This helps us unpack what Ethan Mollick somewhat underwhelmingly describes as a ‘natural gift’ on loc 2474 of his book Co-Intelligence: Living and Working with AI:

While, as we discussed in the last chapter, prompt crafting is unlikely to be useful for most people, that doesn’t mean it is entirely useless. It may be that working with AI is itself a form of expertise. It is possible that some people are just really good at it. They can adopt Cyborg practices better than others and have a natural (or learned) gift for working with LLM systems. For them, AI is a huge blessing that changes their place in work and society. Other people may get a small gain from these systems, but these new kings and queens of AI get orders of magnitude improvements. If this scenario is true, they would be the new stars of our AI age and would be sought out by every company and institution, the way other top performers are recruited today.

Please, sociologists, don’t neglect this terrain because you find generative AI creepy and hype-ridden! Clearly it is both, but we have a lot to contribute here.

The other aspect of this ‘natural gift’ is domain expertise. Working with conversational agents rewards deep expertise, but it also rewards a working knowledge sufficient to move between domains. As Mollick points out here, on loc 1641:

But there is no index or map to what they know and where they might be most helpful. Thus, we need people who have deep or broad knowledge of unusual fields to use AI in ways that others cannot, developing unexpected and valuable prompts and testing the limits of how they work. AI could catalyze interest in the humanities as a sought-after field of study, since the knowledge of the humanities makes AI users uniquely qualified to work with the AI.

The “weird revival of interest in art history among people who use AI systems, with large spreadsheets of art styles being passed among prospective AI artists” (loc 1636) is an interesting example of this. These systems also make it easier to expand your intellectual hinterland, because the time and energy needed to explore a topic drop precipitously. People who already have the traits needed to prompt effectively are therefore likely to develop those traits further, unless their organisational setting strongly militates against this.

There’s a second-order digital inequality here (the third digital divide) which we’ve barely begun to analyse adequately. Combine cultural capital, the particular form of digital literacy I described earlier, domain expertise and generalism, and you get someone who can extract a huge amount of value from conversational agents. From loc 2374:

The issue is that in order to learn to think critically, problem-solve, understand abstract concepts, reason through novel problems, and evaluate the AI’s output, we need subject matter expertise. An expert educator, with knowledge of their students and classroom, and with pedagogical content knowledge, can evaluate an AI-written syllabus or an AI-generated quiz; a seasoned architect, with a comprehensive grasp of design principles and building codes, can evaluate the feasibility of an AI-proposed building plan; a skilled physician, with extensive knowledge of human anatomy and diseases, can scrutinize an AI-generated diagnosis or treatment plan.

The disposition towards staying with uncertainty, creatively thriving in relation to uncertain stimuli, further contributes to the capacity to use conversational agents in valuable ways. They can be used as speculative methods, as elicitation devices, because “it is costless to skim them to see if they inspire better ideas” (loc 1538). I’ve been trying for a while to articulate what Mollick says quite concretely here (loc 1502):

When you do include AI in idea generation, you should expect that most of its ideas will be mediocre. But that’s okay—that’s where you, as a human, come into the equation. You are looking for ideas that spark inspiration and recombination, and having a long list of generated possibilities can be an easier place to start for people who are not great at coming up with ideas on their own.
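Treated as a workflow, this is easy to sketch: make one cheap call for a long list of ideas, turn the sampling temperature up to favour variety over reliability, then skim. Again a hedged illustration with the OpenAI Python client, where the prompt and parameters are my assumptions rather than Mollick's:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# One cheap call for a long list of ideas; most will be mediocre,
# and that is fine: the human skims for inspiration and recombination.
response = client.chat.completions.create(
    model="gpt-4o",   # illustrative model choice
    temperature=1.0,  # favour variety over reliability for elicitation
    messages=[
        {"role": "user",
         "content": ("Give me 30 one-line angles for a blog post on "
                     "prompting as cultural capital. Vary the framing: "
                     "sociological, practical, contrarian.")},
    ],
)

# Print the list for quick skimming; discarding most of it is the point.
for line in response.choices[0].message.content.splitlines():
    if line.strip():
        print(line.strip())
```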

My ongoing debate with Helen Beetham has led me to realise how unusual I am in what I’m coming to conversational agents with, as well as how my general approach to digital scholarship further exceptionalises my experience. I felt some degree of resistance to recognising this, but I can’t unsee it now, so I have to work out what to do with it. I’ve advocated sociological realism about the application of generative AI, in the sense that we should compare it to actual labour in actual organisations rather than to an idealised human agent under unspecified conditions. But the flip side is realism about how people will actually approach prompting, as Mollick points out: “the vast majority of participants didn’t even bother editing the AI’s output” (loc 1685).

The capacity and willingness to spend time are part of this as well: first exploring, then applying what you’ve learned. These characteristics are unevenly distributed through organisations, leading temporal autonomy to take on a new positive valence alongside the breakdown of time as a proxy for value, as Mollick notes on loc 1692:

The fact that it is time-consuming is somewhat the point. That a professor takes the time to write a good letter is a sign that they support the student’s application. We are setting our time on fire to signal to others that this letter is worth reading.