Raiding the inarticulate since 2010


Claude: how can you help me analyse open response questions on surveys?

There are huge issues around (1) reproducibility and (2) data governance which might rule this out, but otherwise there are interesting possibilities here:

  • Sentiment analysis: I can detect the overall sentiment (positive, negative, neutral) in open-ended text responses to get a high-level sense of respondents’ attitudes.
  • Topic modeling: By finding common topics and themes in open responses, I can discover the key subjects that are top of mind for respondents. This provides insight into what’s most important.
  • Text classification: Open responses can be automatically categorized into predefined topics and themes to quickly structure and make sense of large volumes of text data.
  • Keyword extraction: Identifying common keywords and phrases can reveal shared language and concerns amongst respondents for qualitative analysis.
  • Named entity recognition: Recognizing named entities like people, organizations, locations etc. allows quick identification of key subjects respondents are mentioning.
  • Summarization: I can generate summaries of common opinions and perspectives from large open response data sets to efficiently synthesize key takeaways.
  • Comparison analysis: Responses can be grouped and compared across segments of respondents to uncover differences in perspectives between audiences.
  • Data validation: Open response analysis can help identify inconsistent, nonsensical or irrelevant responses to filter out bad data.
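On the reproducibility worry: some of the simpler steps in this list (keyword extraction, basic sentiment scoring) can be done deterministically without a model at all, which gives a baseline to check LLM output against. A minimal stdlib-only sketch, using invented example responses and a toy (not validated) sentiment lexicon:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "was", "were", "it", "and", "to", "of", "i", "very", "too"}

def keywords(responses, top_n=5):
    """Keyword extraction: count non-stopword tokens across all responses."""
    tokens = []
    for text in responses:
        tokens += [t for t in re.findall(r"[a-z']+", text.lower()) if t not in STOPWORDS]
    return Counter(tokens).most_common(top_n)

# Toy lexicons for illustration only; a real analysis would use a validated list.
POSITIVE = {"good", "great", "helpful", "clear"}
NEGATIVE = {"bad", "confusing", "slow", "unhelpful"}

def sentiment(text):
    """Crude lexicon-based sentiment: positive / negative / neutral."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

responses = [
    "The seminar was great and very helpful",
    "Too slow and the slides were confusing",
    "It was fine",
]
print([sentiment(r) for r in responses])  # positive, negative, neutral
print(keywords(responses))
```

Obviously this misses everything interesting about the responses, but because it is deterministic it produces the same answer on every run, which is exactly what the LLM route can't promise.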

I’m not sure when I’ll get round to it, but I’d like to explore generative AI for digital methods next year. There’s the practical use that can be made of it, as well as the broader epistemological and methodological questions that the breakdown of the human/content link poses for digital methods.