
How to use blog-length prompts to ask ChatGPT 4 complex conceptual questions

Over the last month I’ve become far more enthusiastic about ChatGPT as I’ve learned to write blog-post-length prompts. I’ve often been feeding blog posts I’ve written into it so that I can have a conversation about the ideas (a sketch of how this workflow could be scripted is at the end of this post). Here’s an example of what a freshly written long-form prompt looks like:

I recently read an anthropologist and STS scholar make the following claim in a reflection on the development of algorithmic recommendation systems for music listeners: “a very classic STS lesson, which is that successful technologies develop their terms of success”. I understand this to be pointing to how the uptake of technologies can create new norms and standards through which those technologies are subsequently evaluated. I would like to know more about the intellectual origins of this argument, the theoretical positions associated with it, and differences of opinion between scholars exploring this proposition. My interest is motivated by the analysis of the wider implications of generative AI for the scholarly work undertaken by academics, as well as how this is evaluated within universities. What could this perspective bring to understanding this topic? What are the practical lessons which might be learned from such an analysis, with a view to crafting policies and reshaping practice within higher education?

This is ChatGPT 4’s answer, which is enormously helpful for the project I’m currently working on. It’s not a replacement for other methods of developing ideas, but it is an extremely powerful supplement to them once you develop an effective way of engaging with it. It’s notable that it struggles slightly with the first part of the question, seemingly reading too much into the phraseology used by Nick Seaver and suggesting a determinism which I don’t think he intended. This suggests to me that I should have given it more context from the original piece. The rest of the answer is superbly useful, however:

This is a complex and multi-faceted question, so let’s break it down and address each part.

The claim that “successful technologies develop their terms of success” seems to be rooted in a few different theories from the fields of Science and Technology Studies (STS), sociology, and anthropology. Here are a few:

  1. Social Construction of Technology (SCOT): This theory posits that technology does not exist in a vacuum, but rather its design, development, adoption, and uses are shaped by social, cultural, and political factors. In other words, the success or failure of a technology is not just determined by its intrinsic properties, but also by how it is received and used in society. This theory was first developed by Wiebe Bijker and Trevor Pinch in the 1980s.
  2. Actor-Network Theory (ANT): Developed by Bruno Latour, Michel Callon, and John Law, this theory views both human and non-human entities (like technologies) as actors that can influence each other in a network. From this perspective, a technology’s success can be understood as an outcome of its interactions with other actors in the network.
  3. Technological Determinism: This is the belief that technology is the primary driver of societal change, and its development follows a linear and inevitable path. This theory is often contrasted with SCOT and ANT, which emphasize the role of social factors in shaping technology.

When examining the implications of generative AI on scholarly work and its evaluation, these theories can provide valuable perspectives:

  • They highlight that the adoption and impact of AI in academia are not just determined by the capabilities of the AI itself, but also by how it is received and used by academics, institutions, and society at large.
  • They suggest that the standards by which AI’s contributions to academia are evaluated are not fixed, but can evolve over time as AI becomes more integrated into academic work.
  • They raise questions about the potential power dynamics and inequalities that might be introduced or exacerbated by the use of AI in academia. For example, who has access to AI tools and resources? Who gets to define what counts as valuable or successful academic work in an AI-driven context?

As for the practical implications, these theories could inform policies and practices that aim to foster a more equitable, inclusive, and socially responsible use of AI in academia. For instance:

  • Policies could be implemented to ensure that all academics have access to AI tools and resources, regardless of their institutional affiliation, discipline, or geographic location.
  • Evaluation criteria for academic work could be revised to recognize and value the diverse ways in which AI can contribute to scholarship, rather than privileging certain uses or outcomes of AI over others.
  • Universities could engage in ongoing dialogue with their communities (both internal and external) to understand their needs, concerns, and aspirations related to AI, and to co-create strategies for integrating AI into academic work in a way that respects and benefits all stakeholders.

From an STS perspective, the uptake of AI in academia is shaped by a wide range of social, cultural, and political factors, not just the capabilities of AI itself. For instance, the growing demand for AI expertise has led to competition between academic institutions and the private sector to nurture and invest in junior AI researchers. This has resulted in higher salaries, improved resource allocation, and enhanced datasets, transforming both academia and the industry [1]. AI is also being used to automate tasks such as data entry and grading, freeing up professors to focus on more important things and customize the learning experience for students [1].

However, the acceptance of AI in academia varies. While some researchers are willing to read AI-assisted articles, a significant percentage still would not be willing to read AI-generated articles. This suggests that the standards by which AI’s contributions to academia are evaluated are not fixed, but can evolve over time as AI becomes more integrated into academic work [1].

AI and data science are disrupting our assumptions about institutional structures, as well as the speed, scope, and scale of control. Expertise is both widespread across many layers of social power and more concentrated in artifacts of knowledge production that maintain existing social orders. This raises new questions of our existing theories and critical approaches to governance, including legitimacy, risk, institutional logics, datafication, trust, and scientific development [2].

In terms of practical implications, these insights could inform policies and practices aimed at fostering a more equitable, inclusive, and socially responsible use of AI in academia. For example, collaborative efforts could be fostered to develop an inclusive approach towards AI technologies and their application in academia and beyond, potentially making academia more accessible while reducing current limitations researchers often face [1].

Finally, it’s worth noting that AI is transforming the world of academia, and the scientific collaboration and impact assessments brought about by AI are changing how academic research is conducted [1]. As AI continues to evolve and become more sophisticated, it is likely that the ways in which it is integrated into academic work and evaluated will continue to change, reflecting the idea that “successful technologies develop their terms of success”.

The latter part of the response was produced through Bing search. Only two references are used, both of which are relevant. It leaves me wondering what a more comprehensive search would have produced, though, given ChatGPT’s evident capacity for synthesis. It was interesting that it also reproduced the three-part typology of intellectual traditions at this stage, suggesting this was crucial to how it was parsing the question. Again, this raises the prospect that I could have got a better response if I had given it more context for the original quote.
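
As an aside: everything above was done through the ChatGPT web interface, but if you wanted to script this workflow of feeding a post plus a question to the model, a minimal sketch using the OpenAI Python client might look like the one below. The file name, question text, and model identifier are placeholder assumptions for illustration, not a record of what I actually did.

```python
# Minimal sketch: send a blog post plus a conceptual question to GPT-4.
# Assumes the openai Python client (v1+) and OPENAI_API_KEY in the environment.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

# Load the blog post that provides the conceptual context.
post = Path("draft_post.md").read_text()  # hypothetical filename

# The question is appended after the post, mirroring the long-form prompt style above.
question = (
    "I would like to know more about the intellectual origins of the argument "
    "discussed above, the theoretical positions associated with it, and "
    "differences of opinion between scholars exploring this proposition."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model identifier; substitute whatever is current
    messages=[{"role": "user", "content": f"{post}\n\n{question}"}],
)

print(response.choices[0].message.content)
```

The point of the long-form structure survives the move to the API: the post supplies the context and the closing paragraphs pose the question, which is exactly the shape of the prompt quoted at the start of this post.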