This week: the second Accelerated Academy

30 November – 2 December 2016, Leiden (Scheltema, Marktsteeg 1)

Conference organisers

Sarah de Rijcke, Centre for Science and Technology Studies, Leiden University

Björn Hammarfelt, University of Borås, Sweden | Leiden University

Alex Rushforth, Centre for Science and Technology Studies, Leiden University

Scientific committee

Mark Carrigan, University of Warwick

Tereza Stöckelová, Czech Academy of Sciences

Filip Vostal, Czech Academy of Sciences

Paul Wouters, Leiden University

Milena Kremakova, University of Warwick

From the 1980s onward, there has been an unprecedented growth of institutions and procedures for auditing and evaluating university research. Quantitative indicators are now widely used from the level of individual researchers to that of entire universities, serving to make academic activities more visible, accountable and amenable to university management and marketing. Further demands for accountability in academia can be related to broader societal trends described under the headings of the audit society (Power 1997) and the evaluation society (Dahler-Larsen 2011). As part of wider transformations in research governance, indicators on publications and citations now permeate academia: from global university rankings to journal-level bibliometrics such as the journal impact factor and individual measures like the h-index. Yet it is only recently that considerable interest has been directed towards the effects that these measures might have on work practices and knowledge production (cf. de Rijcke et al. 2015), and the role they might be playing in accelerating academic life more generally (cf. Vostal 2016).

The Accelerated Academy draws together a number of cross-disciplinary conversations about the effects that the acceleration towards metric forms of evaluation is having upon research, and the implications this holds for living and working in contemporary academia (Felt et al. 2009). Building on the successful maiden edition of the Accelerated Academy series in Prague in 2015, this year’s Leiden conference will focus especially on the following questions:

What does acceleration mean in different research contexts?

What are the implications of digitally mediated measurement and of tools for quantifying scholarly performance?

What are the knowledge gaps regarding the effects of metrics on scientific quality and societal relevance of research?

How can we harness the positive effects and minimize the adverse effects of performance measurement in universities?

Keynote Speakers

Ulrike Felt (University of Vienna) – Valuing time: Temporalities, regimes of worth and changing research cultures

How are the temporal reorderings of contemporary academic research cultures related to (e)valuative practices? This is the core question addressed in my keynote. It will start from the diagnosis that many of the critiques and doubts raised about the quality and efficiency of our research systems have frequently been addressed by profoundly restructuring the temporal dimensions of academic lives, work, knowledge production and management. From there, the presentation will invite a reflection on the effects of this re-timing of academic research environments and on how it in turn supports and stabilizes specific valuation practices. It will also invite the audience to look beyond phenomena of acceleration and to engage with the wider politics of time at work in academia. Looking into one exemplary field where we can observe chronopolitics at work, the talk will focus on the complex relations between temporalities and indicators (as one expression of worth).

Peter Dahler-Larsen (University of Copenhagen) – The Evaluation Society and Academia

In the evaluation society, evaluation machinery and infrastructure connect measurements and objects across time and space in unprecedented ways. Evaluation contributes to contingency and acceleration. The effects upon research and research quality are complex, including, potentially, the reconfiguration of the very meaning of research, relations among colleagues, the definitions of research fields, and relations between research and society. The impact of evaluation upon research is difficult to interpret, and our interpretations rest on assumptions. This keynote offers three perspectives, roughly described as “the metrological perspective”, “the political perspective” and “the constructivist perspective”. Each perspective is characterized by distinct assumptions, issues of interest, and orientations regarding what needs to be done.
