In John Thompson’s Merchants of Culture, there’s an interesting remark about the structural position of first-time authors which I think has wider purchase. From pg 200:

Ironically, in a world preoccupied by numbers, the author with no track is in some ways in a strong position, considerably stronger than the author who has published one or two books with modest success and muted acclaim, simply because there are no hard data to constrain the imagination, no disappointing sales figures to dampen hopes and temper expectations. The absence of sales figures sets the imagination free. The first-time author is the true tabula rasa of trade publishing, because his or her creation is the book for which it is still possible to imagine anything and everything.

A world where metrics are ubiquitous is a world where imagination has died. When everyone has a track record, the space to imagine someone’s future as radically different from their past collapses.

In John Thompson’s Merchants of Culture, he describes what might be termed the bounded autonomy enjoyed by some editorial teams within publishing houses. From pg 128:

the devolution of editorial decision-making to small editorial teams operating with a high degree of autonomy within certain financial parameters is the best way to maximize your chances of success. As one senior manager in a large corporation put it, ‘We’re giving somebody a playing field and we’re putting fences around the edge of it and saying, “If you want to cross one of those fences, you have to ask a question. But if you’re playing in the field you can do what you like.” You give people a lot of scope, but you provide a framework within which they operate.’

I was struck by how absent this seems in (British) academia, with the possible exception of some business schools. Rather than seeking a return to a full system of collegial self-regulation, does this provide a model for arguing for autonomy within managerial structures? E.g. scholarship needs autonomy to flourish, but that autonomy can be bounded in terms of outcomes and rules?

It’s worth noting that there’s a brutally instrumental attitude which underwrites this bounded autonomy. From pg 131:

‘There is an unspoken rule,’ explains one senior editor who has worked at Star for some 30 years, ‘put one toe out of the elevator to interfere with us and we will cut you off at the knees. And the only thing that enables us to take that attitude is profitability. As long as we make the money, we can tell them to go fuck themselves. It’s as simple and as old-fashioned as that. The second that goes wrong, we’ve had it. If we stop being profitable, the incursions will start.’

And underlying this dynamic is a certain ineffable quality: a resistance to quantification even amidst demonstrable profit and gain to the organisation. From pg 131-132:

This is part of the mystique of the imprint, ‘and the one thing corporate owners are scared shitless of is messing with mystique,’ said another senior editor. ‘Mystique is what they don’t understand. All they know is, if it works, don’t break it.’

Thompson later offers counter-examples to this. I’m intrigued by the analogy between high-prestige imprints and successful research groups. How does the negotiation of bounded autonomy empower group leaders? The figure Milena Kremakova calls ‘the troll on the bridge’ could become very powerful here: mediating pressures towards granular control within the group and negotiating bounded autonomy for the group as a whole.

In our discussion of metrics systems, it’s easy to treat subjectivity as a cipher, regarding people as passively moulded by algorithms or blindly governed by the incentives that operate through the institutionalisation of the metrics. My objection to the former is not the claim that people are shaped by metrics, but rather the assumption that this process is basically passive. My interest is in how metrics come to matter to us. How are people shaped over time? How do their biographically accumulating dispositions and concerns influence the actions they take over time? How do these feed back into the metrics system and the organisations within which they are institutionalised?

The fictional portrayals that are starting to emerge of this – novels like Super Sad True Love Story, The Circle and Whiskey Tango Foxtrot, films like Nerve – often struggle to represent this engaged subjectivity because the imperatives of effective storytelling militate against it. What we really need is a novel or film that explores metricisation through the internal monologue of what I imagine would turn out to be an unreliable narrator.

Good news! This week it was learnt that CWTS will play host to the second annual conference ‘The Accelerated Academy: Evaluation, Acceleration and Metrics in Academic Life’. Generously sponsored by the Swedish Foundation for Humanities and Social Sciences, the event will take place from 30th November to 2nd December 2016 in the beautiful city centre of Leiden, the Netherlands.

Theme of the Conference

From the 1980s onward, there has been an unprecedented growth of institutions and procedures for auditing and evaluating university research. Quantitative indicators are now widely used at various levels, from the level of individual researchers to that of entire universities, serving to make academic activities more visible, accountable and amenable to university management and marketing. The Accelerated Academy aims to draw together a number of cross-disciplinary conversations about the effects that acceleration towards metric forms of evaluation is having upon research, and the implications this holds for academic life more widely. Building on the successful maiden edition of the Accelerated Academy series in Prague in 2015, this year’s Leiden conference will be especially focussed on the following questions:

  • What does acceleration mean in different research contexts?
  • What are the implications of digitally mediated measurement and tools for quantifying scholarly performance?
  • What are the knowledge gaps regarding the effects of metrics on scientific quality and societal relevance of research?
  • How can we harness the positive and minimize the adverse effects of performance measurement in universities?

Confirmed keynote speakers include Professor Michael Power (LSE), Professor Ulrike Felt (University of Vienna) and Professor Peter Dahler-Larsen (University of Copenhagen).

Conference organisers

Dr. Sarah de Rijcke
Dr. Björn Hammarfelt
Dr. Alex Rushforth

Scientific committee

Dr. Mark Carrigan, University of Warwick
Dr. Tereza Stöckelová, Czech Academy of Sciences
Dr. Filip Vostal, Czech Academy of Sciences
Prof.dr. Paul Wouters, Leiden University
Dr. Milena Kremakova, University of Warwick

A call for papers will be announced shortly. Event registration will be free of charge. In addition, a limited number of travel and accommodation support bursaries will be made available for researchers especially inhibited by the costs of travel.

See more at: https://www.cwts.nl/news?article=n-q2v2c4&title=2nd-accelerated-academy-conference

An interesting snippet in this Fast Company profile of BuzzFeed about their viral lift metric and how this trumps page views as a measure of success:

Stopera, an Internet savant so steeped in pop culture that he appeared on an episode of MTV’s Fanography as a teenager for his “psychotic” love of Britney Spears, is explaining how he and his 500-plus peers in the editorial department define success. They rely on an internal proprietary metric, known as “viral lift,” that quantifies how much and how quickly a piece of content is shared. “If something has a 1.5 viral lift and 100,000 views and above, that was worth doing,” he tells me. “It’s a failure if you have 400,000 views and a 1.1 or 1.2 lift. That’s a flop.”

Most publishers would perceive the post with 400,000 views to be the success, but at BuzzFeed sharing is paramount. As Stopera explains, “It wasn’t shared. It was all seed. The fun in the game is getting people to share something. I click on shit all the time. ‘Oh, let’s look at what this person posted on Instagram,’ and you saw their butt cheek. It’s like, click, but I’m not going to share it.”

http://www.fastcompany.com/3056057/most-innovative-companies/buzzfeed-for-shaking-up-media-across-the-globe

Call for Papers
A workshop on Competition(s)

As part of a broader project – Performances of Value: Competition and Competitions
Inside and Outside Markets – we call for papers for a workshop on Competition(s) that will
take place on June 10-11, 2016 at the Copenhagen Business School. Costs for travel,
lodging, and meals for workshop participants will be covered by a grant from The
Leverhulme Trust.

Organizers: David Stark (PI), Elena Esposito, Kristian Kreiner, Celia Lury,
Fabian Muniesa, and Christine Musselin.

For more information about the project,
see: http://blogs.cim.warwick.ac.uk/valueperformances/

What’s valuable? This question – whether at the personal, organizational, or societal level – is increasingly being answered through various forms of competition. These can be
through the prices of market competition but they can also be through the prizes of
contests, ratings, rankings, and other forms of organized competitions.

The Competitions workshop will explore the relationship between market competition
and organized competitions. The phrase, “they are competing,” might refer, for example,
to banks competing on the credit card market. But, in addition to such market
competition, it could also refer to organized competitions and games such as the World
Cup, architectural competitions, book prizes, Twitter scores, university rankings and other types of contests. Thus, alongside market competition as a coordinating mechanism of valuation in the economy we also find organized competitions. In the first type we find actors competing on markets. In the second type, we find contests with entry rules,
judges, and prizes granted to the announced winners. On one side, competition is an
ongoing, seamless, and seemingly endless process; on the other, competitions are
discrete, bounded in time and location.

While market competition has been the subject of sustained attention, studies of
organized competitions are more scarce and are rarely brought together. For the
Copenhagen workshop, we are particularly interested in studies of organized
competitions, addressing questions such as (but not limited to) the following:

– How are competitions (whether in sports, arts, business, politics, or science) staged and
structured?
– How do scoring systems evolve? How do new performance metrics emerge?
– How do judges and juries go about reaching judgements?
– What are the roles of audiences and experts?
– What happens when forms of competition move from one domain to another?
– How are social agents equipped with competitive dispositions? What devices, tools and
settings enable forms of competitive agency?
– Should we assume that everyone wants to win, or that everyone is willing to play the games
of competition? What are the consequences of not joining in?

Abstracts of no more than 500 words should be submitted by February 15, 2016. If the abstract is accepted, a full paper will be required by May 15, 2016. All submissions should be made to Ana.Gross@warwick.ac.uk.

Performances of Value is an International Research Network based at the Centre for Interdisciplinary Methodologies, University of Warwick, supported by a grant from The Leverhulme Trust.

Gaming Metrics, UC Davis, February 4-5, 2016

The event is open to the public. Please register here.

Follow on Twitter at #GamingMetrics.

This conference explores a recent evolution of scholarly misconduct connected to the increasing reliance on metrics in the evaluation of individual faculty, departments, and universities.  Misconduct has traditionally been tied to the pressures of “publish or perish” and, more recently, to the broadened opportunities enabled by electronic publications. The conference takes the next step and asks whether the modalities of misconduct have changed in time to adapt themselves not just to the general demands of “publish or perish” but to the specific features and techniques of the modern processes of academic evaluation variously connected to the notion of “metrics.” Have we moved from “publish or perish” to “impact or perish”? If so, are metrics of evaluation now creating new incentives for misconduct? Are metrics also helping the evolution of forms of misconduct in specific and innovative directions? And, crucially, can we reliably draw a clear separation between gaming the metrics game and engaging in misconduct?  Traditional discourses and policies of misconduct were rooted in oppositions between truth and falsehood, right and wrong, honest mistake and fabrication, but new metrics-based misconduct seems to be defined less by opposition than by degree — the amount of gaming involved.  In sum, are new metrics-based forms of misconduct asking us to rethink what misconduct means?

DAY 1 (Vanderhoef Studio Theatre, Mondavi Center)

9:00-9:15 Welcoming remarks (Ralph Hexter, Provost, UC Davis)

9:15-9:30 “FROM PUBLISH OR PERISH TO IMPACT OR PERISH” (Mario Biagioli, STS & Law, UC Davis)
A brief discussion of the conference themes and working hypothesis concerning the relation between academic metrics and misconduct. Current scenarios exemplify a vast increase of kinds of misconduct compared to traditional definitions (fabrication, falsification, plagiarism), but also point to a shift in the very goals of misconduct.  Initially driven by “publish or perish,” misconduct has become geared toward maximizing more complex metrics of academic credit encapsulated in a new imperative: “have impact or perish.”

9:30-10:30 “GAMING THE GAME ACROSS THE BOARD”
This session is meant to provide a baseline for the conference’s subsequent discussions by casting a wider net on metrics-gaming well beyond the specific field of academic publishing, looking at how different communities and professions construe the line between acceptable and unacceptable gaming.  Mapping a wide range of gaming scenarios will then allow us to contextualize the specific forms of academic misconduct that involve gaming metrics of academic credit.

  • Timothy Lenoir (UC Davis, Cinema and Digital Media & Science and Technology Studies) (Chair)
  • Sally Engle Merry (NYU, Anthropology) “The Seductions of Quantification”
  • Alex Csiszar (Harvard University, History of Science) “(Gaming) Metrics Before the Game”
  • Paul Wouters (Leiden University, Science and Technology Studies) “The Mismeasurement of Quality and Impact”
  • Karen Levy (NYU, Media, Culture, and Communication) “Networks of Resistance in Trucking”

10:45-11:00 BREAK

11:00-12:00 “UNIVERSITY RANKINGS: GAMING OR COOKING?”
As university rankings are gaining increasing importance across the globe, they have been praised as agents of democratization against traditional academic “brands” living off reputational rent, but also criticized for the substantial ranking distortions that their easy gaming allows for.  When can these practices be treated as ranking gaming, and when do they cross over into institutional misconduct?

  • Martin Kenney (UC Davis, Human Ecology) (Chair)
  • Barbara Kehm (University of Glasgow, School of Education Robert Owen Centre for Education Change) “Global University Rankings: Impacts and Applications”
  • Lior Pachter (UC Berkeley, Mathematics) “How King Abdulaziz University Became a ‘Better’ University than MIT in Mathematics”
  • Daniele Fanelli (Stanford University, METRICS) “Institutional Pressures to Publish: What effects do we see?”

12:00-1:30 LUNCH

1:30-3:00  “PERSONAL V. INDUSTRIAL CHEATING”
One conspicuous difference from the days of “traditional” misconduct is the shift from misconduct as the work of individual scientists and scholars to scenarios in which misconduct is a more “collaborative” endeavor, as in the case of citation rings among journals to maximize their impact factors. (The production of fake alternative impact factors may be another example.)  In addition to these novel conspiracies (which typically involve editors and publishers rather than traditional individual cheats like scientists and scholars), modern misconduct also involves businesses and organizations providing tools, platforms, and opportunities to academics interested in misconducting themselves.  These include so-called “predatory” journals, fake conferences, fake prizes, etc., that is, tools that enable and entice academics to meet the demands of their institutions’ evaluation metrics by gaming/cheating them.  Also, while these activities concern publications, they are not limited to the production of a fraudulent text (as “traditional” misconduct typically was), but aim at facilitating its publication.  They may perhaps be termed “postproduction” misconduct.

  • MacKenzie Smith (UC Davis, University Librarian) (Chair)
  • Finn Brunton (NYU, Media, Culture, and Communication) “Making People and Influencing Friends: Citation Networks and the Appearance of Significance”
  • Sarah de Rijcke (Leiden University, Science and Technology Studies) “System Identity: Predatory publishing as socio-technical disruption”
  • Jeffrey Beall (University of Colorado, Denver, Information Science) “Fake Impact Factors and the Abuse of Bibliometrics”
  • Dan Morgan (University of California Press, Collabra Project) “Cui Bono? Judging Intentions (and Outcomes) of Personal and Industrial Cheating”

3:00-3:15 BREAK

3:15-5:00 “META GAMING, META CHEATING”
This session has a double goal.  First, to analyze the kinds of gaming that involve not the manipulation of a metric but the construction or adoption of a metric – not gaming an established game, but the gaming that goes into defining the game itself. Is the competitive market of academic metrics (from faculty performance to university rankings) a form of gaming the game itself?  And where/when/how can it become misconduct?  Second, this session aims at engaging with Goodhart’s law, which is taken to show not only that the introduction of any kind of metric creates a market for gaming it, but that by so doing it invalidates the significance of that metric.  If so, one could argue that any metric will create the possibility of misconduct, but that the articulation of forms of misconduct specific to that metric will eventually “crowd” that market, thus creating an incentive to change the metric, which in turn will usher in the next generation of innovative misconduct.  Or can we argue, against Goodhart, that it is possible to find a metric of academic evaluation that can break the nexus with gaming/misconduct?

  • Anupam Chander (UC Davis, Law) (Chair)
  • Johan Bollen (Indiana University, School of Informatics and Computing) “From Bibliometric Metrics to Crowd-Sourced Science Funding Systems”
  • Carl T. Bergstrom (University of Washington, Biology) “It’s All a Game: The twin fallacies of epistemic purity and a scholarly invisible hand”
  • Jennifer Lin (Crossref) “Trust through Transparency: O brave new world/ That has such data in’t!”
  • Michael Power (London School of Economics, Accounting) “Research Impact and the Logic of Auditability: Solicited testimony as a case of meta-gaming”
  • James Griesemer (UC Davis, Philosophy) “Taking Goodhart’s Law Meta: Gaming, Meta-Gaming, and Hacking Academic Performance Metrics”

————————————————————————————————————
DAY 2 (Kalmanovitz Appellate Courtroom, King Hall)

9:15-9:30 Welcoming Remarks (Kevin Johnson, Dean, UC Davis School of Law)

9:30-10:30 “MISCONDUCT WATCHDOGS (I)”
The emergence and pervasiveness of new forms of misconduct exceed the
reach, resources, and conceptual framework of traditional governmental watchdog organizations typically connected to funding agencies like, in the US, the ORI.  This has spawned a new generation and new figures of misconduct surveillance, detection, and prosecution. Among these is a new breed of “watchdogs” — new actors who are often institutionally unaffiliated. These “watchdogs” have assumed an important role and a credible voice, often by creating new “ecologies of support” for themselves — websites, blogs, wikis, social media, etc.  Does their somewhat unique role indicate something about the specific nature of modern academic misconduct?  Does it suggest that the “battlefield” of misconduct is moving away from governmental agencies (acting according to traditional and possibly outdated definitions of misconduct) and toward journals and the watchdogs who monitor their publications?

  • Jonathan Eisen (UC Davis, Genome Center) (Chair)
  • Ivan Oransky (Retraction Watch & NYU) “Retraction Watch: What We’ve Learned Since 2010”
  • John Bohannon (Science Magazine) “Grey Hat Hacking for Science”
  • Elizabeth Wager (Sideview) “Why Do We Need a Committee on Publication Ethics and What Should It Do?”

10:30-10:45 BREAK

10:45-12:00 “MISCONDUCT WATCHDOGS (II)”

  • Jonathan Eisen (UC Davis, Genome Center) (Chair)
  • Darren Taichman (Executive Deputy Editor, Annals of Internal Medicine Vice President, American College of Physicians) “A False Sense of Security?”
  • Debora Weber-Wulff (University of Applied Sciences Berlin, HTW, Media and Computing & VroniPlag Wiki) “Documenting Plagiarism in Doctoral Theses: The Work of the VroniPlag Wiki Academic Community in Germany”
  • Brandon Stell (The PubPeer Foundation & CNRS) “Introducing PubPeer”
  • Emmanuel Didier and Catherine Guaspare (EPiDaPo, UCLA) “The Voinnet Affair: New Norms in High-Pressured Science”

12:00-1:30 LUNCH

1:30-2:30  “COUNTERFEITING BRANDS V. FAKING PRODUCTS”
This session looks at a specific form of fakery rooted in “brand appropriation.”  While the preceding sessions consider fake journals, conferences, etc. in general, here we want to look more specifically at imaginary journals whose titles (as well as the look and feel of their websites) are made to resemble those of well-known and respectable journals.  One could perhaps add to this list certain “academic” conferences that take place in prestigious locations (say, Oxford) but are not actually affiliated with the university, or the appropriation of the names of respected academics who are then listed (without authorization) on editorial boards of fake journals or organizing committees of fake conferences.  Similarly, fake universities that sell degrees without any attempt at educating their students (not even online) tend to assume names with an Ivy League ring to them.  The common denominator here is an attempt at the mimicry of a “brand” rather than just the copying/pirating of a product.

  • Madhavi Sunder (UC Davis, Law) (Chair)
  • Marie-Andree Jacob (Keele University, Law) “Template, Creativity and Publication Ethics”
  • Alessandro Delfanti (University of Toronto, Institute of Communication, Culture, Information and Technology) “ArXiv or viXra? Physics and the quest for the true archive”
  • Sergio Sismondo (Queen’s University, Philosophy) “Leveraging Academic Value in the Pharmaceutical Industry”

2:30-3:30  CARNIVALESQUE RESPONSES
While misconduct “watchdogs” (discussed in a previous session) expose through public communication and denunciation, this session focuses on other actors who reveal misconduct and poor oversight through a carnivalesque approach.  Humor and absurdity—submitting profane papers and computer-generated gibberish articles that “sound” academic, or whistleblowers using clever anagrams as aliases—become a mode of critique and unmasking. Neither clearly “predatory” journals, “fake” conferences nor “legitimate” journals are immune to being the subject of a joke—a joke that, in some cases, may be more powerful than punishment. In a way, carnivalesque responses to misconduct continue the logic of an older history of art forgery-as-prank in which the forgery reveals through a kind of satire.  Are these cases telling us, perhaps, that satire is the best approach to both metrics and the gaming they elicit?

  • Alexandra Lippman (UC Davis, Innovating Communication in Scholarship Project) (Chair)
  • Cyril Labbé (Joseph Fourier University – Grenoble I) “Ike Antkare, His Publications and Those of His Disciples”
  • Burkhard Morgenstern (Universität Göttingen, Bioinformatics) “Virtual Editors Can Significantly Improve the Visibility of Junk Journals – A case study”
  • Paul Brookes (University of Rochester, Medicine) “Crossing the Line – Pseudonyms & Snark in Post-Pub Peer Review”

 

LOCATION: Please note that the conference will be held at two different locations on the UC Davis campus.  On Thursday, February 4 we will convene at the Vanderhoef Studio Theatre, Mondavi Center.  On Friday, February 5 our proceedings will take place in the Kalmanovitz Appellate Courtroom at King Hall (UC Davis Law School).

CAMPUS MAP: Can be found at http://campusmap.ucdavis.edu/

ADDITIONAL INFORMATION: Please email Alexandra Lippman (alippman at ucdavis dot edu)

INNOVATING COMMUNICATION IN SCHOLARSHIP — a UC Davis IFHA Project
CENTER FOR SCIENCE AND INNOVATION STUDIES

A fascinating snippet from The Boy Kings, by Katherine Losse, describing the approach of a new operations director joining Facebook in 2007. From pg 144:

The next week, Chamath asked me and my management colleagues in customer support to do an evaluation exercise in which we ranked everyone on the Customer Support Team from highest to lowest. Sitting up late that night in the office, I assigned a score to each person on the team. Some were easy to score: They were either spectacularly hard workers or rather lazy, preferring to play company-sponsored Beirut games to the alternately hard and tedious work of solving user problems, but for most it was a queasy and difficult process of comparing apples to oranges, which, in this case, might be one person’s quickness at answering emails versus another’s thoroughness and accuracy.

When the results were in, Chamath came back to deliver a speech. “Look around you,” he told us. “In a few weeks, some of the people in this room won’t be here. They will be moved to other departments, because they’ve worked hard and have made themselves valuable to the company. Other people in this room won’t be here, because they haven’t worked hard enough. I’m telling you this because you need to understand that this is how it works: You are always being ranked, and it’s your job to perform. How you do here is up to you, but no one’s going to let you get away with not pulling your weight.”

From page 75 of Brad Stone’s excellent book The Everything Store:

In early 1998, Bezos was closely involved with a department called Personalization and Community, which was geared toward helping customers discover books, music, and movies they might find interesting. That May, he surveyed what was then Amazon’s Hot 100 bestseller list and had an epiphany—why not rank everything on the site, not just the top sellers? “I thought, ‘Hey, why do we stop at a hundred? This is the Internet! Not some newspaper bestseller list. We can have a list that goes on and on,’ ” he told the Washington Post. The notion was not only to create a new kind of taxonomy of popularity but also to give authors, artists, and publishers a better idea of how they were doing—and to cater to some of their more neurotic impulses. “Bezos knew sales rank would be like a drug to authors,” says Greg Linden, an early Amazon engineer. “He insisted that it change whenever a new order came in.” That was not a trivial challenge. Amazon’s overloaded servers were already stretched to the limit, and its Oracle database software was not designed to handle the increasing loads generated by the swelling audience of the Web. Engineers ended up fudging it, taking snapshots of sales data and pushing new rankings to the website every few minutes. The service, called Amazon Sales Rank, was introduced in June to the consternation of not only authors, who began compulsively checking their rankings at all hours of the day and night, but also their spouses and more than a few wary editors and publishers. “I understand how addictive it can be, but maybe they could spend their time more productively, like, maybe, writing a new book,” veteran editor John Sterling said.

When questioned by a friend in 1980 as to whether he was happy at Princeton, the philosopher Richard Rorty replied that he was “delighted that I lucked into a university which pays me to make up stories and tell them”. He went on to suggest that “Universities permit one to read books and report what one thinks about them, and get paid for it” and that this is why he saw himself first and foremost as a writer, in spite of his already entrenched antipathy towards the philosophical profession which would grow with time. It’s a lovely idea, isn’t it? This is the thought that keeps coming back to me as I’m preparing to participate in the Time Without Time symposium in Edinburgh later this week.

The invited speakers have been asked to reflect on “their practice, roles and research interests” in terms of the themes of the symposium. Perhaps slightly depressingly, it’s occurred to me that so much of what I do has in a sense been motivated by frustration that the university is not what I once (naively) believed it to be. My interest in social acceleration (how I approach the themes of the symposium) is in large part an attempt to understand how and why this is so: this is where my thoughts currently stand and I’m running this international conference with Filip Vostal in December.

The problem is that employment in a university no longer requires that one simply reads books and reports what one thinks about them. Was this ever really the case? Either way, it’s a seductive vision. Unfortunately, it is belied by the over one hundred metrics to which each academic working within UK higher education is potentially subject. Contrary to Rorty’s ideal of scholars reading books, writing about them and occasionally deigning to share their reflections with students, we’re instead measured constantly in matters such as workload, teaching and research within institutions that are themselves ranked in a way constituted through the measurement of the individuals within them.

Professional lives are judged according to opaque criteria, ratcheted up between assessment exercises such that anything less than ‘international excellence’ is coming to be seen as worthless. At some institutions, including my own, we see the introduction of the demand that staff meet a certain baseline of ‘income generation’ in order to keep their jobs, despite the fact that the money apportioned by way of research assessment exercises is intended to fund research. For instance, a Bristol University lecturer was sacked, allegedly for not securing enough grant income. The phrase ‘publish or perish’ acquired new resonance when Stefan Grimm, a respected figure in toxicology, committed suicide after being threatened with redundancy for failing to win enough research funding.

The culture this breeds is corrosive and unhappy. All the descriptions pertaining to artists in the e-flux article assigned as reading for the symposium apply with unnerving accuracy to academics: “barely capable of distinguishing themselves from the consuming desire to work at all times”, “neurotic people who deploy a series of practices that coincide quite neatly with the requirements of the neoliberal, predatory, continually mutating capitalism of the every moment”, “people who behave, communicate, and innovate in the same manner as those who spend their days trying to capitalize every moment and exchange of daily life”. In a much circulated paper, the feminist scholar Ros Gill suggests that a ‘sacrificial ethos’ silences stories of stress and insecurity. At all career stages, though perhaps most harmfully amongst PhD students and early career researchers, a sense of commitment to a calling helps license acquiescence to precarious and exploitative labour relations which give the lie to the ideal of collegiality still alluded to within the academy.

However this is more than just overwork and over-identification with a job. The Tumblr blog academia is killing my friends contains 40 personal narratives of “abuse, exploitation and suffering in academia”. We shouldn’t conclude that postings stopped in July 2014 because the editor exhausted the available stories. This doesn’t end with graduate school and, if anything, it looks likely to get worse: a recent survey by the Guardian Higher Education Network of 1366 academics who had experienced bullying at work, half of whom were based in the UK, pointed to management structures orientated towards ‘research excellence’ which had created a pervasive culture of fear amongst staff. Higher education has become a deeply toxic place and, through a sociological lens, it’s easy to see how this has its roots in structural features of the sector rather than simply being the aggregate tendency of a collection of unpleasant people.

The image Rorty presents us with of scholarship is idealistic. It reflects his own privilege. It’s an artefact of a higher education system, that of the 1980s Ivy League, substantially different from what we see in the UK in 2015. Most strikingly of all: the image is of a slow life. It suggests Rorty dreamily ambling through his days, going for long morning walks through the Gothic splendour of Princeton’s campus and spending long afternoons reading books in front of a fireplace, occasionally putting pen to paper to record the thoughts they have provoked in him.

In an important way, what’s changed can be characterised in terms of speed… the imagined slowness of Rorty’s Princeton life has given way to a frenetic pace, defined by a perpetual ratcheting up of demands and an entrepreneurial ethos seeking new and quantifiable opportunities. As the ‘self-employed mindset’ begins to take hold, it’s difficult to know how much to give: am I doing enough? The demand for ‘excellence’ is open-ended because it’s never clear what this will constitute in the future. Nonetheless, it’s the only thing that will be accepted. As David Cameron put it recently, “if you’re not good or outstanding, you have to change … if you can’t do it yourself, you have to let experts come in and help you”. He was talking about secondary education rather than higher education, but I’ve yet to encounter a more succinct statement of what the political theorist Will Davies memorably describes as ‘heating up the floor to see who can keep hopping the longest’. Anxiety thrives, demands intensify and metrics are the informational thread which holds this tangled web together. These numbers can be transparent and they can also be opaque. They can be sources of pleasure and sites of anxiety. When everything moves so fast, we rely on these metrics as cyphers for quality: ways of assessing others and ourselves in lieu of genuine evaluation.

In my work at the moment I’m developing the notion of ‘cognitive triage’ to make sense of how agents come to operate in such an environment. The term was initially used by the journalist Kevin Roose to describe the frantic state of day-to-day coping into which trainee financiers fall in order to survive their deliberately brutal socialisation period. When we’re triaging, we attend to the most immediate requirements and our temporal horizons begin to shrink. Under these conditions, imagination becomes more difficult and so too does extended deliberation about our circumstances and what matters to us. This isn’t inexorable and I think we can see many contemporary trends as attempts to escape triaging and to get beyond ‘the day-to-day’, e.g. digital detoxes, information diets, life hacking, productivity culture, mindfulness. With the possible exception of mindfulness, however, I’m sceptical that these help, because they tend to intensify our focus on our immediate behaviour: even if they help us cope with the pernicious effects of cognitive triage, they further narrow our horizons rather than broadening them.

Cognitive triage breeds a mentality within which tasks become obstacles to negotiate rather than activities through which we can become who we are. Consider the to-do list: each item is given an equal weight, regardless of the meaning it holds for us. When we’re triaging, we rush. We don’t attend to the task at hand, following its internal logic as we make our way through it. There’s a relational richness to practical activity which can so easily be obliterated by the mentality produced by triaging. Ironically, I’m triaging right now. I want to get this post finished so I can answer a couple of e-mails and go to bed. But this post is an attempt to lay out as a whole strands of thought that have been obsessing me for a number of years. My disparate interests actually do fit together and the urge to articulate how this is so feels of profound concern to me. But I also have to get up early tomorrow morning, clean my house, do a mass of event organisation, edit some posts for the Sociological Review blog, get my special issue of Discover Society off the ground, pack for Edinburgh and practise this at least once as a talk so that I don’t just start rambling when I get up to speak on Thursday. The urgent is crowding out the important. It happens a lot. For now, I’ll give in to it, in order that I can write a ‘part 2’ tomorrow which is slightly less rushed.

Power, Acceleration and Metrics in Academic Life, 2nd-4th December 2015, Prague

Call for papers: Power, Acceleration and Metrics in Academic Life

There is little doubt that science and knowledge production are presently undergoing dramatic and multi-layered transformations accompanied by new imperatives reflecting broader socio-economic and technological developments. The unprecedented proliferation of audit cultures preoccupied with digitally mediated measurement and quantification of scholarship and the consolidation of business-driven managerialism and governance modes are commonplace in the contemporary academy. Concurrently, the ever-increasing rate of institutional change, (the need for) intensification of scientific and scholarly production/communication and diverse academic processes seem to characterize the overall acceleration of academic life (i.e., in many disciplines the new maxim ‘patent and prosper’ (Schachman) supplements the traditional ‘publish or perish’). Quantification and metrics have emerged not only as navigating instruments paradoxically exacerbating the general dynamization of academic life but also as barely questioned proxies for scientific quality, career progression and job prospects, and as parameters redrawing what it means to be/work as a scholar nowadays (i.e., the shifting parameters and patterns of academic subjectivity). Metrification now seems to be an important interface between labour and surveillance within academic life, with manifold affective implications.

This conference will inquire into the techniques of auditing and their attendant practices and effects and will also probe into scholars’ complicity in reproduction of such practices. It will consider processes of social acceleration within the academy and their implications for the management of everyday activity by those working within it. This will include:

• empirical and theoretical engagements with the acceleration of higher education
• the origins of metrification of higher education
• metrification as a form of social control
• the challenges of self-management posed by metrification and/or acceleration
• common strategic responses to these challenges
• the relationship between metrification and acceleration
• how metrification and acceleration relate to a broader social crisis

The workshop will take place from December 2nd to 4th 2015 in Prague.

Deadline for abstracts will be May 1st 2015. Please send 250 words and short biographical note to Mark Carrigan (mark@markcarrigan.net) and Filip Vostal (filip.vostal@gmail.com) by the deadline.

Keynote Speakers:

Roger Burrows – Ancient Cultures of Conceit Reloaded

Philip Moriarty – The Perils, Pitfalls, and Power of Peer Review in Public

Susan Robertson – Vertigo: Time and Space in the Contemporary University

James Wilsdon – In numbers we trust? Reflections on the UK’s independent review of the role of metrics in research assessment

Fee

50 Euros (standard) / 25 Euros (PhD/ECR)

Registration to open in summer 2015

Venue

Hosted by the Institute of Philosophy of the Czech Academy of Sciences, the event will take place in Vila Lanna, V Sadech 1, 160 00 Prague 6, Czech Republic (http://www.vila-lanna.cz/index.html)

Travel

Air: From Vaclav Havel Airport Prague, take bus no. 119 to Dejvicka (the terminal stop). Vila Lanna is a 5-6 minute walk from there.

Train: From the Main Railway Station (Praha hlavni nadrazi, often abbreviated Praha hl. n.), take metro line C (red), change at Muzeum for line A (green) and get off at the terminal stop Dejvicka. Vila Lanna is a 5-6 minute walk from there.

This article by John Holmwood is worth reading:

There are few national systems of higher education that are immune from their effects, though their use is more extensive and systematic in some places, rather than others. They seem to have gone furthest in national systems with a high proportion of public universities, especially in countries with strong neo-liberal public policy regimes – for example, the UK, Australia and New Zealand. They are less extensive where national systems of governance are weaker – for example, the United States and Germany.

My purpose is not to describe the myriad forms of audit-by-metrics, or the national differences in ‘metric regimes’. Rather, I shall draw out two aspects in the development of ‘metric regimes’, which have particular significance for the discipline of sociology. The first is how the form of metric measurement favours particular disciplines over others. The second is the move from ‘co-production’ to ‘commercialisation’ in the construction of metric regimes.

http://isa-global-dialogue.net/death-by-metrics/

Paul Kirby and Meera Sabaratnam have written a thought-provoking response to the HEFCE consultation on using metrics for research assessment. Archived here because I plan on coming back to this properly at a later date. This is their account of the motivations driving this turn towards metrics, which they go on to critique:

  • The research assessment exercises conducted at a national level (RAE 2008; REF 2014) and at institutional levels are difficult, time-consuming, expensive and laborious because they consume large quantities of academic energy. Universities and academics themselves have complained about this.
  • Ministers, civil servants, research administrators and managers might prefer modes of assessment that do not require human academic input and judgement. This would be cheaper, not require academic expertise and would be easier to administer. This would facilitate the exercise of greater administrative control over the distribution of research resources and inputs.
  • Moreover, in an age of often-digitised scholarship, numerical values associated with citations are being produced – mostly by data from large corporate journal publishers – and amongst some scholarly communities at some times they are considered a mark of prestige.

http://thedisorderofthings.com/2014/06/16/why-metrics-cannot-measure-research-quality-a-response-to-the-hefce-consultation/

I agree with them that ‘quality’ and ‘impact’ should not be conflated. But I think it’s instructive to consider the (many) reasons why the former tends to get subsumed into the latter. Evaluative processes that are adequate to measuring ‘quality’ exist, but they don’t scale easily; on my understanding that’s a large part of the problem with the present system. However I do agree with much of what they’re saying about the limitations of citation counting, given the diversity of reasons underlying an act of citation:

  • It exists in the field or sub-field we are writing about
  • It is already well-known/notorious in our field or sub-field so is a useful reader shorthand
  • It came up in the journal we are trying to publish in, so we can link our work to it
  • It says something we agree with/that was correct
  • It says something we disagree with/that was incorrect
  • It says something outrageous or provocative
  • It offered a specifically useful case or insight
  • It offered a really unhelpful/misleading case or insight

http://thedisorderofthings.com/2014/06/16/why-metrics-cannot-measure-research-quality-a-response-to-the-hefce-consultation/

I like the phrase they use here: citation counts track centrality to networks of scholarly argument. I agree this can’t be treated as a proxy for quality, but I think it’s an important measure nonetheless. It also tracks marginality to networks of scholarly argument. My suspicion is that the undesirability of absolute marginality (books and papers that are never read or cited) becoming pervasive is IOTTMCO (Intuitively Obvious to the Most Casual Observer). I also suspect many within the academy basically share that view, given how readily urban myths about low citation rates circulate. However the risk is that the category of ‘never read or cited’ is immediately collapsed into ‘never cited’.
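As a purely illustrative aside (nothing HEFCE or the authors propose), the sense in which citation counts track centrality and marginality can be sketched on a toy citation graph. The papers and links here are invented; the point is only that ‘never cited’ is trivially computable while ‘never read’ is invisible to the data:

```python
# Toy citation graph: each paper maps to the papers it cites.
# Entirely hypothetical data, for illustration only.
citations = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],
    "D": [],  # cites nothing and is never cited: absolutely marginal
}

# Citation count = in-degree: how many papers cite each one.
in_degree = {paper: 0 for paper in citations}
for cited_papers in citations.values():
    for cited in cited_papers:
        in_degree[cited] += 1

# 'Never cited' falls straight out of the data; whether A or D was
# ever *read* is exactly what this graph cannot tell us.
never_cited = sorted(p for p, n in in_degree.items() if n == 0)
print(in_degree)    # {'A': 0, 'B': 1, 'C': 2, 'D': 0}
print(never_cited)  # ['A', 'D']
```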

I’m running out of time but I’ll try and come back to this later in the week. It’s a really thought-provoking contribution and it’s proving very helpful in thinking through my own views on this topic.

[Screenshot: eBuzzing UK blog rankings, 26 May 2014]

It feels a little wrong that it’s ranking above Simon Wren-Lewis on eBuzzing. The methodology is a little opaque and I’m wondering if the reason for this high ranking is my ‘proactive’ scheduling of the @soc_imagination twitter feed:

Blog ranking based on the score calculated by Ebuzzing, which considers numerous parameters including the number of backlinks and the number of shares of its articles on Facebook and Twitter

178th in the general list for the UK makes sense to me. 4th in Economics doesn’t. I wonder if this says more about opaque metrics and content marketing than it does about the popularity of the websites being ranked.
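To make the worry concrete: a score of this kind is presumably some weighted mix of backlinks and social shares. The weights, figures and function below are entirely invented (the real methodology is, as noted, opaque); they simply show how a feed that ‘proactively’ schedules many tweets could out-rank a site with far more backlinks:

```python
# Hypothetical composite blog score in the spirit of eBuzzing's blurb.
# Weights and input numbers are made up for illustration.
def blog_score(backlinks, fb_shares, tweets,
               w_links=0.5, w_fb=0.3, w_tw=0.2):
    return w_links * backlinks + w_fb * fb_shares + w_tw * tweets

# Site A: many backlinks, modest social activity.
site_a = blog_score(backlinks=200, fb_shares=50, tweets=40)
# Site B: fewer backlinks, but a heavily scheduled Twitter feed.
site_b = blog_score(backlinks=80, fb_shares=30, tweets=600)

print(site_a, site_b)  # 123.0 169.0 -- B ranks above A
```

Under these (assumed) weights, share volume dominates, which would be consistent with content marketing rather than readership driving the ranking.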