Metrics and Measurement in #HigherEd

Paul Kirby and Meera Sabaratnam have written a thought-provoking response to the HEFCE consultation on using metrics for research assessment. Archived here because I plan on coming back to this properly at a later date. This is their account of the motivations driving this turn towards metrics, which they go on to critique:

  • The research assessment exercises conducted at a national level (RAE 2008; REF 2014) and at institutional levels are difficult, time-consuming, expensive and laborious because they consume large quantities of academic energy. Universities and academics themselves have complained about this.
  • Ministers, civil servants, research administrators and managers might prefer modes of assessment that do not require human academic input and judgement. This would be cheaper, not require academic expertise and would be easier to administer. This would facilitate the exercise of greater administrative control over the distribution of research resources and inputs.
  • Moreover, in an age of often-digitised scholarship, numerical values associated with citations are being produced – mostly by data from large corporate journal publishers – and amongst some scholarly communities at some times they are considered a mark of prestige.

I agree with them that ‘quality’ and ‘impact’ should not be conflated. But I think it’s instructive to consider the (many) reasons why the former tends to get subsumed into the latter. Evaluative processes adequate for measuring ‘quality’ do exist, but they don’t scale easily – on my understanding that’s a large part of the problem with the present system. I also agree with much of what they’re saying about the limitations of citation counting, given the diversity of reasons underlying an act of citation:

  • It exists in the field or sub-field we are writing about
  • It is already well-known/notorious in our field or sub-field so is a useful reader shorthand
  • It came up in the journal we are trying to publish in, so we can link our work to it
  • It says something we agree with/that was correct
  • It says something we disagree with/that was incorrect
  • It says something outrageous or provocative
  • It offered a specifically useful case or insight
  • It offered a really unhelpful/misleading case or insight

I like the phrase they use here: citation counts track centrality to networks of scholarly argument. I agree this can’t be treated as a proxy for quality, but I think it’s an important measure nonetheless. It also tracks marginality to those same networks. My suspicion is that the undesirability of absolute marginality becoming pervasive – books and papers that are never read or cited at all – is IOTTMCO (Intuitively Obvious to the Most Casual Observer). I also suspect many within the academy basically share that view, given how readily urban myths about low citation rates circulate. The risk, however, is that the category of ‘never read or cited’ is immediately collapsed into ‘never cited’.

I’m running out of time but I’ll try to come back to this later in the week. It’s a really thought-provoking contribution, and it’s proving very helpful in thinking through my own views on this topic.

