

Metrics and Measurement in #HigherEd

Paul Kirby and Meera Sabaratnam have written a thought-provoking response to the HEFCE consultation on using metrics for research assessment. Archived here because I plan on coming back to this properly at a later date. This is their account of the motivations driving this turn towards metrics, which they go on to critique:

  • The research assessment exercises conducted at a national level (RAE 2008; REF 2014) and at institutional levels are difficult, time-consuming, expensive and laborious because they consume large quantities of academic energy. Universities and academics themselves have complained about this.
  • Ministers, civil servants, research administrators and managers might prefer modes of assessment that do not require human academic input and judgement. This would be cheaper, not require academic expertise and would be easier to administer. This would facilitate the exercise of greater administrative control over the distribution of research resources and inputs.
  • Moreover, in an age of often-digitised scholarship, numerical values associated with citations are being produced – mostly by data from large corporate journal publishers – and amongst some scholarly communities at some times they are considered a mark of prestige.

http://thedisorderofthings.com/2014/06/16/why-metrics-cannot-measure-research-quality-a-response-to-the-hefce-consultation/

I agree with them that ‘quality’ and ‘impact’ should not be conflated. But I think it’s instructive to consider the (many) reasons why the former tends to get subsumed into the latter. Evaluative processes that are adequate to measuring ‘quality’ do exist, but they don’t scale easily – on my understanding, that’s a large part of the problem with the present system. However, I do agree with much of what they’re saying about the limitations of citation counting, given the diversity of reasons underlying an act of citation:

  • It exists in the field or sub-field we are writing about
  • It is already well-known/notorious in our field or sub-field so is a useful reader shorthand
  • It came up in the journal we are trying to publish in, so we can link our work to it
  • It says something we agree with/that was correct
  • It says something we disagree with/that was incorrect
  • It says something outrageous or provocative
  • It offered a specifically useful case or insight
  • It offered a really unhelpful/misleading case or insight

http://thedisorderofthings.com/2014/06/16/why-metrics-cannot-measure-research-quality-a-response-to-the-hefce-consultation/

I like the phrase they use here: citation counts track centrality to networks of scholarly argument. I agree this can’t be treated as a proxy for quality, but I think it’s an important measure nonetheless. By the same token, it also tracks marginality to networks of scholarly argument. My suspicion is that it’s IOTTMCO (Intuitively Obvious To The Most Casual Observer) that it would be undesirable for absolute marginality – books and papers that are never read or cited – to become pervasive. I also suspect many within the academy basically share that view, given how readily urban myths about low citation rates circulate. However, the risk is that the category of ‘never read or cited’ is immediately collapsed into ‘never cited’.

I’m running out of time but I’ll try and come back to this later in the week. It’s a really thought-provoking contribution and it’s proving very helpful in thinking through my own views on this topic.