My notes on Liboiron, M., Tironi, M., & Calvillo, N. (2018). Toxic politics: Acting in a permanently polluted world. Social Studies of Science, 48(3), 331-349.

The authors of this paper take “a permanently polluted world” as their starting point. It is one where toxicity is ubiquitous, even if unevenly distributed. Unfortunately, “[t]he tonnage, ubiquity and longevity of industrial chemicals and their inextricable presence in living systems means that traditional models of action against toxicants such as clean up, avoidance, or antidote are anachronistic approaches to change” (pg 332). The pervasiveness is such that we need to move beyond the traditional repertoire of management (separation, containment, clean up, immunisation), which is premised on a return to purity while depoliticising the production of that toxicity by treating it as a technical problem to be managed. In doing so, we can begin to see how toxic harm can work to maintain systems rather than being a pathology which ensues from systemic failure.

There is conceptual work required if we are to grasp the politics of toxicity, encompassing how we conceptualise toxic harm, provide evidence for it, formulate responses to it and grasp the interests reflected in its production and management. This involves rejecting a view of toxicity as “wayward particles behaving badly” (pg 333). As they explain on pg 334, toxicity is relational:

Toxicity is a way to describe a disruption of particular existing orders, collectives, materials and relations. Toxicity and harm, in other words, are not settled categories (Ah-King and Hayward, 2013; Chen, 2012) because what counts as a good and right order is not settled.

They suggest a distinction between toxins and toxicants. The former occur naturally in cells, whereas the latter are “characterized by human creation via industrial processes, compositional heterogeneity, mass tonnage, wide economic production and distribution processes, temporal longevity, both acute and latent effects, and increasing ubiquity in homes, bodies and environments” (pg 334). This includes naturally occurring minerals which are rendered problematic through industrial processes that lead them to occur in specific forms, locations and scales productive of harm.

Laws surrounding toxicants are based upon threshold limits, usually in relation to effects on human bodies. These are supplemented by cost-benefit principles based around the avoidance of ‘excessive costs’ given available technologies. In this sense, the breakdown of order on one level (enabling toxicants to spread because it wouldn’t be ‘feasible’ to prevent it) facilitates the reproduction of order on another level (ensuring viable conditions for the continued reproduction of the commercial sector involved). I really like this insight and it’s one which can be incorporated into the morphogenetic approach in an extremely productive way.

This focus on toxicity enables us to link together these levels, providing a multi-scalar politics of life. There is a temporality to toxicity in which a slow disaster is not easily apprehended. For this reason agents seek to make it legible as an event through actions like photography or protest. But this easily gives rise to a politics of representation, seeing the claims of environmentalists as (at best) on a par with the claims of commercial firms. Rendering these processes legible through mechanisms like sensational images can reproduce existing differences between centre and periphery, the heard and the unheard.

Their interest is in modes of action “beyond governance-via-policy, in-the-streets-activism and science-as-usual” (pg 337). I’m not sure what their motivation is for this beyond the drive to “no longer privilege the modern humanist political subject and epistemologies based in claims and counter claims”: are they saying that a narrow politics of evidence and judgement has its corollary in public activism around public issues which have been established evidentially? I can see the analytical case for trying to get beyond this dichotomy but I’m not sure I see what is at stake politically in doing so. Their interest in actions such as  “the everyday, obligatory practices of tending to plants and others as toxic politics that do not necessarily result in scaled-up material change” doesn’t seem politically fruitful to me precisely because of the multi-scalar mode of analysis they offer (pg 341). Why should we challenge “activism as heroic, event-based and coherent” (pg 341)? Again I can see an analytical case for this, even if I disagree with it, but I don’t see what is at stake in this politically. It might be there are unintended consequences to thinking in terms of ‘effective outcomes’ but the force of this argument rests on an implicit claim about outcomes. Why is it important to “make room in dominant political imaginations for multiple forms of local, low resolution, uneventful, uneven, frustrated, desireful, ethical, appropriated and incommensurate forms of justice” (pg 343)?

 

This is such an important point in Tim Carmody’s (highly recommended) Amazon newsletter. Not only is Amazon enormously popular, but critics of the firm fail to understand the basis of this popularity, in contrast to the insight they have into the popularity of a firm like Apple:

One study last year showed that Amazon was the second most trusted institution in American life, behind only the military. If you only poll Democrats? Amazon is number one. People love Amazon. Most of them don’t know about and have never thought they needed a mesh router for their house. But they will now.

It suggests criticism of big tech could remain a marginal pursuit, embedded to the point of doxa within certain segments of the population while others remain oblivious to it. It also suggests the need for caution in how we treat ‘big tech’. It’s not a monolithic bloc and people have different relationships to these firms. I’ve assumed there’s a political value in using the designation, as it defines an issue in a way orientated towards action, but perhaps it’s a mistake when there’s such a divergence in public affection between the firms in question.

I’m increasingly hopeful that I’ll submit the second edition of Social Media for Academics to Sage next week, meeting a deadline which I suspect my editor had expected I would break. The book is six months overdue, I’ve broken countless deadlines, and the impending date was only agreed after a period in which we withdrew any deadline altogether in order to counter the anxiety which was making it hard for me to write. Sitting in my office on a Saturday afternoon, I’m finishing the final chapter which needs copy editing before I turn to the two chapters and introduction that require substantive work. It seems like a good time to reflect on what went wrong with a second edition that has been objectively far behind schedule and subjectively a nightmare to produce.

It was a project I thought would be easy. I feel ridiculous admitting that I’d effectively set aside two weeks of work to prepare the second edition. The first edition had been so well reviewed that I felt all I needed was to insert some new material to take account of what had changed on social media in the preceding years. I had been blogging regularly using a category on my blog, I’d given over 60 talks in which I’d developed new ideas and I’d had countless suggestions from people who had read the first edition. My plan was to spend time producing copy on each new topic before going through the text line by line to find where I could fit these in.

This was a big mistake because I rapidly generated vast amounts of text which didn’t fit substantively or stylistically into the existing book. In turn the process of going through it line by line eroded its structure, leaving me feeling as if I was sitting weeping in the ruins of a neatly turned out home that I’d caused to collapse in a reckless act of home repair. The project quickly became unmanageable because I had little traction upon (a) the scope and remit of the new material I’d produced to go into the book (b) the structure into which this material had to be fitted. By traction I mean a sense of how what was in front of me as I wrote or edited connected to a broader context.

I’ve often been fascinated by the experience of producing a text as a totality. That moment when you hold a thesis in your hands for the first time and this project which had dominated your life suddenly becomes an object you can easily manipulate. The second edition of Social Media for Academics has involved that process in reverse, as the objectivity of the text evaporated into a dispiriting horizon of unmet deadlines and postponed commitments. By simply piling up the new material in blog posts, Scrivener notes and artefact cards without any attempt to link these together, it was inevitable I was going to find myself drowning in ideas. I gripped onto the existing structure of the book in the hope it could keep me afloat but I simply pulled it into the ocean as well.

It occurs to me now that my mistake was a simple one. I should have read through the whole text, making free form notes, before trying to do anything else. Furthermore, in the last few years of being increasingly busy, I’d become adept at producing ephemera: short talks, blog posts, fragments of writing. But I’d gradually lost the habit of connecting these things together and making sense of what I’d been producing. My thoughts didn’t condense in the way they used to, both as a consequence of less mental bandwidth and less inclination to do the connective work upon which creativity depends. The combination of jumping straight into editing without having imposed any order on what I was trying to incorporate goes much of the way to explaining the disaster which has been my experience of producing this second edition.

The only thing that made it tractable in the end was printing out each chapter, going through it with a pen to rewrite, restructure and extend. Not all of my new material has survived and I’m still nervous that there are topics I’ve missed out. But it’s a much tighter book as a result of this process, in spite of being substantially longer. It’s also left me reflecting on the approach I take to my work and made me realise the importance of ordering what I do. This used to happen automatically, as I thought and reflected in the course of days which had a quantity of unstructured time that now seems a distant memory to me. It’s also something I did through blogging, using this as a mechanism to elaborate upon my ideas in order to connect things together. I never stopped blogging but something about the process has changed. It became more efficient, producing units of thought across a diverse range of topics, while leaving these fragmented from each other. I’d rarely stop to write a blog post when I was seized by what C Wright Mills called the feel of an idea, something which I used to do regularly and inevitably left me with the experience of a connection between things that had previously seemed disconnected.

This blog post is an example of this process and I feel much clearer about what went wrong as a result. The second edition has taken more time and energy than writing a new book would have. I now know how not to produce the second edition of a book. But I’m quite proud of the result and I hope people like it.

But you can recognize me because I’m you, mate
It’s never too late to see deeper than the surface.
Trust me, there’s so much more to it.
There’s a world beyond this one
That creeps in when your wits have gone soft
And all your edges start shifting
I mean it
A world that is breathing
Heaving its shoulders and weeping
Bleeding through open wounds
That’s why I’m grieving.
Down on my knees and I am feeling everything that I’m feeling.
So come here
Give me your hand
Because I know how to hold it.
I will write every single one of you a poem
And then I’ll set them all on fire
Because I am stunned by how the light in your eyes resembles
Brightening skies.
Mate, I would fight for your life like it was mine

 

This ECPR panel looks superb. Saving here to follow up later:

Please find attached the call for papers for a panel at the ECPR General
Conference in Wrocław (4 – 7 September).

Title of the panel: The Relationship Between Digital Platforms and
Government Agencies in Surveillance: Oversight of or by Platforms?

If you are interested in participating, please submit an abstract (500
words maximum) no later than 15 February via the ECPR website (ecpr.eu).

Abstract
Revelations of surveillance practices like those of the National Security Agency or Cambridge Analytica have shown that the digital age is developing into an age of surveillance. These revelations have also shown that digital platforms are contributing significantly to this development. As intermediaries between communication and business partners, platforms enjoy a privileged position (Trottier 2011). Platforms increasingly use this position to surveil and manipulate end users for the sake of profit maximization (Fuchs 2011, Zuboff 2015). Platforms with a business model of surveillance and manipulation seem to have become the most successful type of corporation today: two of the three most valuable corporations already operate as such platforms. As platforms emerge and expand in ever more established as well as new markets, and thus gain influence over large parts of society, the question arises of how states are dealing with these new actors and their capabilities. The panel is intended to provide answers to this question by studying the spectrum of state-platform relations.
As empirical examples show, the relationship between digital platforms and states is multi-faceted. On the one hand, public institutions are partnering with private platforms. Data from platforms is used, for example, by intelligence agencies to combat terrorist groups, by police departments to search for criminal suspects and by regulatory agencies to counter hate speech or copyright violations.
On the other hand, the capabilities of platforms can also be turned against the state. As the last US presidential election showed, platforms can be utilized to influence the electorate or to compromise political actors.
From the point of view of the platforms, the state represents, on the one hand, an authority that may restrict their actions by declaring specific types of business activity illegal; the EU’s new General Data Protection Regulation is one example.
At the same time, states provide the legal basis for the platforms’ activities. In order to promote e-commerce, for example, many European states liberalized their privacy regulation at the beginning of the new millennium.
These examples illustrate the diversity of platform-state relations. The panel will acknowledge this diversity and bring together work considering various empirical cases as well as theoretical frameworks. We welcome contributions focusing on different political systems as well as different platforms, for example social media, retail, transport or cloud computing platforms.
Questions that may be addressed include:

• Which major privacy, anti-trust or media regulations of platforms were enacted at the national level recently? Which types of platforms were addressed and which were not? To what extent do these regulations reflect a general trend? To what degree do they affect surveillance practices?
• In which areas and by which means of surveillance are platforms already enforcing public policies? What kinds of data are provided by platforms for predictive policing? How are platforms identifying and depublishing illegal content? When are platforms collaborating with intelligence agencies?
• How can platforms be regulated efficiently? Which forms of regulation between hierarchical regulation and self-regulation exist, and how did these forms emerge? To what extent is oversight of platforms comparable to oversight by platforms?
• Are policies of platform regulation diffusing? If so, which states are setting the standards?
• Which international institutions in the field of platform regulation have been created so far? Is an international regime of platform regulation evolving?

This looks like an interesting job at a new institute I’d like to keep track of:

The Department of Science & Technology Studies at Cornell University seeks
a Postdoctoral Researcher to play a major role in a two-year project on
Data Science & Society. We invite applications from scholars with a recent
Ph.D. in science & technology studies (STS) or related fields (e.g.,
sociology, anthropology, law, media studies, information science) and an
active research agenda on the social aspects of data science.

The Postdoctoral Researcher will be expected to devote 50% time to his or
her own research agenda and 50% time to working with S&TS Department
faculty on developing the Data Science & Society Lab, a new and innovative
course that is part of the Cornell Data Science Curriculum Initiative. The
lab will engage undergraduate students in two components: instruction in
theoretical tools and practical skills for analyzing social and ethical
problems in contemporary data science (e.g., data science as ethical
practice; fairness, justice, discrimination; privacy; openness, ownership,
and control; or credibility of data science); and participation in
interdisciplinary project teams that work with external partners to address
a real-world data science & society problem.

The Postdoctoral Researcher will have the opportunity to help launch and
shape the initiative, to develop curriculum and engagement projects, build
relationships with external partners and participate in teaching the
course. S/he will work with two S&TS Department faculty members, Malte
Ziewitz and Stephen Hilgartner, who will have primary responsibility for
teaching the course.

Applicants should send:

– Cover letter summarizing the candidate’s relevant background,
accomplishments, and fit with the position
– CV
– Up to two publications (or writing samples)
– Three letters of recommendation
– A transcript of graduate work (unofficial is acceptable)

Required Qualifications:

PhD in science & technology studies (STS) or related fields (e.g.,
sociology, anthropology, law, media studies, information science) and an
active research agenda on the social aspects of data science. ABD students
are eligible to apply, but proof of completion of the Ph.D. degree must be
obtained prior to beginning the position. Recent graduates who received
their Ph.D. during the last five years are especially encouraged to apply.

The position is available for a Summer 2019 start (as early as July 1). We
will begin to review applications on February 28. Apply at
https://academicjobsonline.org/ajo/jobs/13236. For further information,
please contact Sarah Albrecht, saa9@cornell.edu.

Diversity and inclusion are a part of Cornell University’s heritage. We are
a recognized employer and educator valuing AA/EEO, Protected Veterans, and
Individuals with Disabilities.

Using the communal kitchen at the Faculty of Education last Friday, I noticed that the lid had fallen off the bin and was sitting on the floor. In the middle of something and keen to get home, I didn’t stop to pick it up. I just came back from the same kitchen on Monday afternoon and noticed it was still on the floor. “Ah the tragedy of the commons” I said internally while stroking my chin and nodding sagely, before beginning to walk out of the room. At which point I realised how absurd I was being and stopped to pick the lid up from the floor, immediately wishing I’d done it on Friday.

It left me wondering how certain forms of abstraction, stepping back from a concrete phenomenon and subsuming it into a general category, make action less likely. There’s something about that moment of understanding, recognising a fragment of the general in the mundanity of the particular, which is liable to induce passivity. It’s hard to argue a counterfactual but I suspect I would have immediately picked up the lid if I hadn’t experienced that moment of abstract recognition. However I’m aware I’m doing exactly the same thing in writing this blog post, recognising a general propensity in a particular instance, and encouraging others to do the same by raising it discursively in a public forum.

“In sum, the obsession with the web, its monopolisation of any idea of the new, has served capitalist realism rather than undermined it. Which does not mean, naturally, that we should abandon the web, only that we should find out how to develop a more instrumental relationship with it. Put simply, we should use it – as a means of dissemination, communication and distribution – but not live inside it. The problem is that this goes against the tendencies of handhelds. We all recognise the by now cliched image of a train carriage full of people pecking at their tiny screens, but have we really registered how miserable this really is, and how much it suits capital for these pockets of socialisation to be closed down?” – Mark Fisher, Abandon hope (summer is coming)

This event looks fantastic. More details and registration here.

Chair: Dr Neil Harrison, University of Oxford

In their seminal works of the early 1990s, both Ulrich Beck and Anthony Giddens predicted that one manifestation of late modernity would be a popular suspicion of experts and scepticism about expertise.  Since then, the rise of the individual’s ability to have their voice heard through mass social media has eroded traditional patterns of cognitive authority – including in academia.

On the one hand, this democratisation of knowledge is to be welcomed, as it has enabled new critical voices to emerge and new discourses to develop, especially among groups that have historically been voiceless. However, it has also created an environment of confusion – a crowded forum of competing voices where volume, integrity and quality are often out of balance.  This confusion has allowed those with power to obfuscate, especially when the weight of evidence is against them.  In recent times, we have seen former UK Education Secretary Michael Gove claim that the public are ‘tired of experts’, while US President Donald Trump’s infamous refrain of ‘fake news’ is used to sideline inconvenient facts and opinions.

Universities have traditionally been seen as authoritative sites for both the creation and transmission of knowledge.  Academics are positioned as experts whose work enriches public life through scientific, social and cultural advances, with expertise that is passed to students through a variety of teaching practices as part of a consensual corpus of knowledge. More recently, universities have increasingly promoted the idea of their graduates as globally-aware and values-led problem-solvers, with the knowledge to tackle ‘wicked issues’ like climate change, public health crises and economic instability.

This event will showcase a diverse collection of papers from a special issue of Teaching in Higher Education journal. They are bound together by a focus on how universities can and should respond to the ‘post-truth’ world where experts and expertise are under attack, but where knowledge and theory-based practice continue to offer the hope of a fairer, safer and more rewarding world.  Specifically, the papers touch on the contributions that can be made by information literacies, public intellectualism, curriculum reform, interdisciplinarity and alternative pedagogies.

 

Presenters:

Elizabeth Hauke (Imperial College, London): “Understanding the world today: the roles of knowledge and knowing in higher education”

Gwyneth Hughes (University College London): “Developing student research capability for a ‘post-truth’ world: three challenges for integrating research across taught programmes”

Rita Hordósy (University of Manchester) and Tom Clark (University of Sheffield): “Undergraduate experiences of the research/teaching nexus across the whole student lifecycle”

Mark Brooke (National University of Singapore): “The analytical lens: developing undergraduate students’ critical dispositions in English for Academic Purposes writing courses”

Alison MacKenzie (Queen’s University, Belfast): “Just Google it: digital literacy and the epistemology of ignorance”

My notes on Lawson, T. (2009). Cambridge social ontology: an interview with Tony Lawson. Erasmus Journal for Philosophy and Economics, 2(1), 100-122.

Tony Lawson is a key figure in critical realism, having led the Cambridge Social Ontology Group for over twenty-five years and played a primary role in establishing the International Association for Critical Realism, as well as producing decades of work on social ontology and its relationship to economic thought. Lawson was originally a mathematician, and I was intrigued by this interview’s insight that it was student activism which left him interested in economics, specifically the capacity of economic jargon to get in the way of political discussion. His bewilderment at ubiquitous economic modelling began as soon as he moved into an economics department, leaving him scathing in his critique of those who “are rather pedestrian in their approach to, and often very poor at, mathematics, though seemingly in awe of it, or perhaps in awe of mathematicians” (101). As he puts it, “there are limits to the uses of any specific form of mathematics”, limits which economists seem largely unaware of. The uses and abuses of mathematics have, in other words, been central to his work on social ontology, particularly the character of social reality obscured by techniques which sought no connection with it. This line of argument led him to connect with others in the nascent intellectual movement of critical realism:

I produced stuff criticising economics from an explicitly realist perspective for ten years or so before coming across Roy. At some point, I discovered that a number of us were making similar or anyway related critiques of current social scientific practice, but situated in different disciplines. Margaret Archer was doing it in sociology; Andrew Sayer in human geography, and so on. Roy was doing a similar thing in philosophy and had the philosophical language. Eventually, we all sort of came together picking up especially on Bhaskar’s philosophical language—and the rest of his contribution, of course. (102)

However his interest in social ontology predates philosophical ontology. As he puts it on pg 102, “when I first came into economics at the LSE, my basic concern was that the methods we were taught presupposed a world of a sort very different to the one in which we actually seem to live”. These methods presuppose event regularities (if A then B), atomism (factors which operate uniformly in any context) and a non-processual social reality. The focus of this argument is upon the kind of reality presupposed, features which can be concretely manifested in different ways, as opposed to there being specific claims entailed by specific methods. It is paralleled by the question of what the world must be like for everyday social practices to work in the way that they do.

It follows from this that one can’t build ‘up’ from ontological reasoning into empirical claims and substantive theorising. Its value is rather that it “helps avoid inappropriate reductionist stances and aids explanatory and ethical work” (104). This is why he stresses his primary interest is in ontology rather than critical realism, with the former leading him to the latter rather than being reducible to it. This encompasses philosophical ontology (“the practice of seeking to uncover shared properties of phenomena of a given domain”) and scientific ontology (“to explore the specifics of a phenomenon in a domain”). His work is tied up with the rejection of monism in economic method, described on pg 112:

What I take to be essential to mainstream economics is the insistence that methods of mathematical modelling be everywhere and always employed in economic analysis. I emphasise the word ‘insistence’. It is this insistence that I reject wholesale. I do not, of course, oppose economists using or experimenting with mathematical methods, though I am pessimistic about the likelihood of much insight being so gained. But I am opposed to the insistence that we must all use these, and only these, methods, that the use of these methods constitutes proper economics, that employment and promotion be restricted to those who use only mathematical models, that only modelling methods be taught to students, and so on

The thing I found most interesting about this interview was his account of the Cambridge Social Ontology Group as a form of collective method, responding to the growing impersonality of the Cambridge Realist Workshop on Monday nights. The same people attend each time, with discussion focused around particular topics and continuity between meetings. The focus of both is on questions rather than answers, though obviously the two cannot be separated. To what extent can this be seen as a method for doing ontology? The prevailing culture of the academy relegates organisation to a peripheral status but actually there are some fields of inquiry where it can function as a primary method in its own right. Getting this right is getting scholarship right, as opposed to initiating something which simply allows scholarship to be refined or transmitted.

There’s a little aside on pg 107 which doesn’t really fit into the rest of these notes but which I don’t want to forget:

I believe the emphasis on prediction in a world that is clearly open, is ultimately an aberrant form of behaviour that itself requires an explanation, probably a psychological one. In fact I am quite susceptible to the suggestion that, in many cases, the over-concern with prediction is something of a coping mechanism resulting from earlier traumas in life

What a fascinating resource this is: Sociologists’ Knowledge of Anarchism Project. Thanks to Martyn Everett for passing it on.

To explore sociologists’ knowledge about an alternate theoretical paradigm also concerned with society: anarchism. Sociologists tend to have an extremely variable familiarity with anarchist ideas—some who know a lot and others who know very little beyond crude, popular caricatures. This project engages with those sociologists who have substantial familiarity with, knowledge of, or experience with anarchism. The interviews will hopefully constitute discussion fodder for communities interested in sociology, anarchist studies, and anarchist movements.

My notes on Nash, K. (2018). Neo-liberalisation, universities and the values of bureaucracy. The Sociological Review, 0038026118754780.

It is too easy to frame neoliberalism in institutions as an outcome rather than a project. In this thoughtful paper, Kate Nash explores the space which this recognition opens up, the “competing and contradictory values in the everyday life of public sector organisations” which become apparent when we reject the supposition of “a fit between ideology, policy, political outcomes and practices” (178). Extending market competition into the university doesn’t automatically replace public goods, something which is important to grasp if we want to construct an adequate meso-social account of neoliberalisation. New Public Management, as a theory of administration, might be explicitly opposed to bureaucracy but it is through a bureaucratic transformation that its tenets are woven into the fabric of an institution like the university. Nash begins her argument by revisiting Weber’s conception of the impartial promise of bureaucracy:

I adopt Weber’s definition of bureaucracy as enacting an ‘ethos of impartiality’, treating individuals as cases according to strict rules of professional and technical expertise. Each person in an organisation should follow correct procedures to guard against making personal judgements; to avoid using the authority of their office to exercise power according to their own personal decisions, whims or alternative values (Du Gay, 2000; Weber, 1948). For Weber, famously, instrumental values, the means rather than the ends, come to predominate in a modern capitalist economy and we are all caught in an ‘iron cage’ of technical evaluations (Beetham, 1987, pp. 60–61; Mommsen, 1989, pp. 109–120). (179)

However it is a mistake to regard bureaucracy as a totality, argues Nash, framing it as leading to the displacement of all values other than administrative efficiency. Rejecting this view allows us to distinguish between “different kinds of bureaucracy, that which undermines and that which supports education in universities” (179). It allows us to identify the values which marketisation entrenches (entrepreneurship and consumer choice) and find others to protect. The allocation of research funding (through the RAE/REF and individualised competitions) and teaching funding (through the student fees and student loans system) in UK universities reflects the entrenchment of these values. It is against this backdrop that collegiality, drawing on the analysis of Malcolm Waters, becomes interesting:

Collegiality, he argues, is relevant to university life in that, firstly, as academics we understand ourselves to be experts in our different fields, and therefore as possessing insights into knowledge – scientific, of the humanities, of the arts – on which there are no higher authorities. As such, academics have a degree of expert authority; we expect, and to a large degree we maintain, our ability to ‘have the last word’ on what counts as a university education in our specialised disciplines through procedures of peer and student evaluation. Secondly, academics tend to think of the university as a ‘company of equals’. Where knowledge is ultimately what matters, other markers of status, wealth and power must be irrelevant. As Waters puts it, ‘if expertise is paramount, then each member’s area of competence may not be subordinated to other forms of authority’ (Waters, 1989, p. 955). Finally, Waters suggests that the value of ‘consensus’ is a norm of universities: only decisions that have the full support of the collectivity ‘carry the weight of moral authority’ (Waters, 1989, p. 955). (181)

For Waters this is not necessarily a good thing, as collegiality brings closure, i.e. the protection of insiders over outsiders, the defence of existing status against threats to it. This can make it appear to be a form of resistance to marketisation, but the intersection of the two can exacerbate their existing problems, e.g. superstar academics being able to exercise academic autonomy in a collegial mode, while others are left behind to aspire to collegial status (if I understand Nash’s point correctly). The fact that corporatism has displaced collegiality, to use McGettigan’s phrase, doesn’t mean collegiality is a solution to the problem of corporatism.

Even if the rise of audit culture and end of contractual tenure have dented academic autonomy, there is still an entrenched expectation that we “should be free to research, to publish and to teach ‘the truth’, however inconvenient or troublesome for university administrators, governments and civil servants, without fear of losing our jobs”. It has the associated expectation that we will develop this by “reading widely, with curiosity, developing capacities to think through different meanings of concepts, challenge fundamental assumptions, and design and use systematic methodologies, as well as to uncover facts through scholarship and empirical research” (182). Meeting this expectation requires temporal autonomy in relation to free time in which nothing is being produced that can easily be registered.

Audit culture, on Power’s account, threatens this through twin processes: colonisation (transforming an organisation’s values through measuring its activity) and decoupling (the circularity of auditing which has paperwork produced for auditing as its sole object). The assumption underlying this is that “professionals cannot be trusted to do their jobs well; in particular, we cannot be trusted to deliver value for money” (183). However, not all bureaucratic work is of this kind, and Nash draws attention to the bureaucratic work we engage in outside of audit, including those activities which support education and resist abuses of collegiality and marketisation. Nash reminds us that “we should not see bureaucracy solely as marketising, nor only as imposed from above” (184). These are described by Nash as socialising bureaucracy:

Socialising bureaucracy regularises collegiality in that it helps academics communicate what counts as good teaching and learning, what counts as research and learning that is of academic merit, and what assumptions and biases should not be allowed to make a difference in these judgements. It regulates collegiality in that documents and procedures help set limits on academics’ discretionary judgements. (185).

Against an exclusive focus on marketisation as a threat to education, Nash reminds us of those cases where professional power threatens it, e.g. academics acting in ways that serve their own private interests rather than those of education. The first example she gives is the formalisation of equal treatment, where mechanisms ensure staff and students are assessed on the relevant grounds of academic performance and other criteria are excluded. The contractualisation of learning formalises the reciprocal expectations placed upon teachers and learners, with mechanisms ensuring both parties have a working understanding of how the interaction will proceed.

Socialising bureaucracy in this sense mitigates the pathologies of both collegiality and marketisation. Recognising the critiques which see these mechanisms as killing spontaneity and charisma, Nash asks how we could otherwise secure the value of teaching and learning for everyone in a mass higher education system which has expanded dramatically over recent decades. Nonetheless, distinguishing marketising bureaucracy from socialising bureaucracy is difficult in practice. Both can contribute to the intensification of work and be experienced as destructive of autonomy. Furthermore, one kind of bureaucracy can stimulate the other.

What’s particularly interesting for my purposes is Nash’s analysis of the grey area opened up between the two by intensified competition within and between universities:

It includes dealing with the paperwork associated with the explosion of publishing, showcasing and promotion of academic work – from reviewing articles for journals and book manuscripts and editing journals to organising and publicising conferences and seminars; the bureaucracy of applying for and dealing with funded research, which can mean managing a team; designing, developing and publicising popular programmes and courses; reviewing new programmes for other Departments and universities; acting as external examiner for other universities; and writing references for colleagues and students. In virtually every case, these activities require hours of meetings and emails, as well as filling in forms, and they often require producing online as well as offline materials. In addition, there are also meetings, emails and paperwork associated with running a Department and a university as if it were a business: writing and re-writing ‘business plans’, ‘job descriptions’, ‘programme specifications’, ‘strategies’ to promote research, enhance student experience and so on (188)

It strikes me that social media is part of this grey area, but it is also something through which much of the grey area is inflected, i.e. it is an expectation in itself but also a way of undertaking these other activities. To use an example I talk about a lot: if social media makes it quicker to publicise seminars and conferences then why do we constantly assume it will be a net drain on our time? This seems like the theoretical framework I’ve been looking for to help make sense of the institutionalisation of social media within the university.

I’m giving serious thought to this, as much as I’m trying to save money and travel less:

Call for Papers for the Conference “Scraping the Demos”: Political
epistemologies of Big Data

Organizers: Research Group Quantification and Social Regulation
(Weizenbaum Institute for the Networked Society) and DVPW Thematic Group
“Internet and Politics. Electronic Governance”

Date: 8-9 July 2019 (lunch-to-lunch)

Conference location: WZB Berlin Social Science Center, Reichpietschufer
50, D-10785 Berlin, Germany

Responsible: Dr. Lena Ulbricht lena.ulbricht@wzb.eu

The conference explores political epistemologies of big data. Political
epistemologies are practices by which societies construct politically
relevant knowledge and the criteria by which they evaluate it. These
practices and criteria may originate in scientific, political, cultural,
religious, and economic contexts. They evolve over time, vary between
sectors, are inherently political and therefore subject to conflict. Big
data is the practice of deriving socially relevant knowledge from
massive and diverse digital trace data. The rise of digital technologies
in all social spheres has enabled new epistemic practices which have
important political implications: Political elites see digital
technologies as sources of new and better tools for learning about the
citizenry, for increasing political responsiveness and for improving the
effectiveness of policies.

Practices such as “big data analysis”, “web scraping”, “opinion mining”,
“sentiment analysis”, “predictive analytics”, and “nowcasting” seem to
be common currency in the public and academic debate about the present
and future of evidence-based policy making and representative democracy.
Data mining and web scraping, techniques to access information “hidden” behind the user interface of a website or device, seem to be establishing themselves as epistemic practices with political implications. They generate knowledge about populations and the citizenry which diverges in many ways from previous ways of “seeing” and constructing the demos. Data based on digital collection tools is often much more personal, can relate different kinds of information, and in many cases offers improved predictive capability. Therefore, survey methods and traditional administrative data may lose influence on political epistemologies. To rely on big data means to rely on data sources that accumulate information without the awareness of the individuals concerned. This epistemic shift can be observed in policy advice, government and administration, and political campaigning. Emerging research strands such as “computational social sciences”, “social physics”, “policy analytics”, “policy informatics”, and “policy simulations” strive for better evidence and for more transparency and responsiveness in policy making, and governments such as those of the UK and Australia have set up strategies of “open policy making”, “agile policy making” and “public service big data”.

Political parties and advocacy groups use digital data to address
citizens and muster support in a targeted manner; public authorities try
to tailor public policy to public sentiment measured online, forecast
and prevent events (as in predictive policing, preemptive security and
predictive healthcare), and continuously adapt policies based on
real-time monitoring. An entire industry of policy consultants and
technology companies thrives on the promise related to the political
power of digital data and analytics. And finally, academic research
engages in digitally enhanced computational social sciences, digital
methods and social physics on the basis of digital trace data, machine
learning and computer simulations. The political implications of these
epistemic practices have yet to be examined in detail. Indeed, the rise
of digital technologies in all social spheres may alter the relations
between citizens and political elites in various ways: it could improve,
impoverish (or simply change) political participation, policy
transparency, accountability of political elites, and decision-making.

The aim of the conference is to bring together scholars from various
related disciplines working on the topic, including, but not limited to:
political communication, elections and party politics, science and
technology studies, political theory, history, sociology and philosophy
of science, critical data studies and computational social sciences.
These fields of research have addressed various aspects related to
political epistemologies in the digital age – but there have been only a
few opportunities to relate them, to compare similar practices in
different fields (for example in public policy and in political
campaigning) and to examine the broader picture in order to generate
theories about the political epistemologies of big data, algorithms and
artificial intelligence. Contributions can be either conceptual or empirical.

The conference is interested in research concerning the following
questions and similar topics:
•       What are the political epistemologies underlying the use of big data
and related phenomena such as algorithms, machine learning and
artificial intelligence in political contexts?
•       Which scientific, political, social and economic practices make use of
digital data and methods? How do these practices construct knowledge
which is deemed politically relevant? By which
(rhetorical/procedural/technical) means do these practices and the actors
involved substantiate their claims to political relevance?
•       What insights can we gain from the computational social sciences in
relation to traditional social science methods when it comes to
political behavior, public opinion, policy making etc.?
•       How are digitally mediated political epistemologies related to other
political epistemologies? How are they embedded in institutional
practices and values?
•       Which interpretive conflicts do we witness with regard to the
knowledge produced and legitimized by digital technologies; which are
its major challengers? In which ways do epistemic practices based on big
data, compared to other epistemic practices, influence the chances for
challenging political knowledge claims?
•       How can we place political epistemologies in a historical or cultural
perspective?
•       What are the implications of digitally mediated political
epistemologies for evidence-based policy making and for representative
democracy? Which conceptions of participation, representation and good
governance are embedded in the related practices? How do big
data-related epistemic practices reconfigure democratic concepts? Do we
witness a new form of technocracy?
•       How should democratic societies shape and regulate big-data-based
epistemic practices? Which contributions can we expect from algorithmic
accountability, data protection and research ethics?
The conference will provide academic reflections on current public
debates about the state of democracy in the digital age, considering
that in 2019 various elections take place in German-speaking countries,
at the level of the European Parliament and within the German federal
states of Bremen, Hamburg, Saxony, Brandenburg and Thuringia, as well as
in Austria and Switzerland (regional and federal level). The keynote
will be given by Professor Daniel Kreiss, the author of a seminal book
about the use of data-related practices in political campaigning
(“Prototype Politics” 2016). The conference will also include artistic
interventions and a lab.

The conference will offer childcare, will be video-recorded, and held in
English. If the funding application is successful, the travel costs of
paper presenters will be covered. The organizers plan on following up
the conference with a publication project.

Abstracts should make explicit on which theories, methods and, if
applicable, empirical material the paper is based. Please send your
abstract of 300-500 words by February 24 to the following address:
demosscraping-weizenbaum@wzb.eu

Preliminary program structure
8 July 2019
14.00   Welcome address
14.15   Keynote by professor Daniel Kreiss + discussion
15.30   Coffee
16.00   Paper presentations
17.30   Lab and art exhibition
18.30   Reception
9 July 2019
9.00    Paper presentations
10.30   Coffee
11.00   Paper presentations
12.30   Paper presentations or panel discussion
14.00   Ending

An absolutely fascinating 4S panel from Ana Vara and David Tyfield:

4S CONFERENCE OPEN PANEL
2019 New Orleans Sept 4-7

Open Panel 69: How Should STS Address Inequality? As a Subject, a (Dis)Value? Theoretical and Empirical Perspectives

In technoscientific times of huge and increasing inequalities that involve almost all aspects of social life, both within and between countries, questions regarding inequality seem unavoidable to STS scholars, both from an analytical and an ethical standpoint. Specifically, the roles of technoscience in conditioning how inequality is created and augmented, and the (possibly novel) nature of its impacts on trajectories of innovation and vice versa emerge as central concerns.

STS has a long history of engagement with such issues. Since the early days of the field, the study of controversies (e.g. Nelkin) has highlighted the unequal distribution of risks and benefits in the development and implementation of many technologies, contributing to entire new fields of research such as environmental justice. Other topics related to inequality addressed by STS include working conditions, race, access to health, and gender. The study of the production of knowledge has also taken into account the differential status of knowledge according to its origin. The study of ignorance is a relatively newer focus, with categories such as “undone science” from David Hess et al. targeting inequality quite specifically.

However, in spite of its sustained concern, STS has not developed specific theoretical frameworks on inequality. This panel invites discussion of the possibility and desirability of the development of specific theoretical frameworks on inequality in STS, as well as how contributions from other disciplines can be accommodated. From an empirical perspective, this Panel encourages contributions on cases where this problematic issue is central in different ways.

Organizers
Ana Vara, National University of San Martín, Argentina
David Tyfield, Lancaster University, UK

Submissions

The deadline for submissions at the conference website (https://convention2.allacademic.com/one/ssss/4s19/ or via https://www.4s2019.org/call-for-submissions/) is February 1st, 2019

My notes on this report by the Google Transparency Project.

There are many reasons to be cautious about the educational ambitions of tech firms. If these firms seem likely to be the dominant actors of the global economy over the coming decades, how will this shape the influence they exercise over education? To offer the most concrete example I can think of: if tech firms shape the curriculum for digital citizenship and digital safety, will they present themselves as sources of digital risk? I doubt it, and it’s one of many reasons why their projects and initiatives need to be carefully scrutinised. Capturing the Classroom by the Google Transparency Project is an important contribution to precisely this agenda.

It investigates how technology procurement has been upended in American schools, with “a rigorous and competitive process that carefully weighed factors including cost, usefulness and safeguards on children’s privacy” being radically transformed by Google “directly enlisting teachers to push their products into the classroom”. This has been undertaken through the recruitment of teacher evangelists and the organisation of teaching summits (pg 2), with existing professional development budgets bearing the cost of helping teachers adapt to this new technological infrastructure. It is a process which “focused on teachers and their power to spread the word about Google’s classroom potential—all while bypassing the administrators that typically make decisions about technology and other educational tools” (pg 7). In some cases, the teacher trainers win consultancy contracts with no disclosure terms attached, echoing the established practice of Big Pharma offering paid speaking gigs to doctors in the expectation they act as advocates for their products.

It has also sparked the proliferation of an ecosystem of blogs, resources and consultancies “among educators and administrators looking to cash in on school districts’ technology craze” (pg 12). In some cases, these businesses then work with other tech firms, creating a sustained mobilisation of big tech advocacy within education. Third party firms can place a distance between a teacher and Google, blunting the appearance of a conflict of interest.

The authors draw a comparison with Coke and Pepsi’s ambition to produce customers for life by placing vending machines in every school. They suggest Google have already seen considerable success:

Today, 25 million students worldwide use Google’s Chromebooks at school, 30 million teachers and students use Google Classroom, and more than 80 million people use G Suite for Education. (Pg 2)

The success of their initiatives has inspired other firms to follow their lead, described on pg 5:

Google isn’t the only technology company trying to push its products into the classroom. Microsoft, Amazon and Apple, as well as other device manufacturers and software developers, all have aggressive programs targeted at classrooms. Many, such as Amazon Inspire, Microsoft’s Certified Educator program and Apple’s Distinguished Educator program, take a page directly from Google’s playbook, also courting teachers and administrators with free trips, software and, increasingly, lucrative consulting gigs moonlighting for EdTech companies. (Pg 5)

However they note that Google has a crucial advantage, in that it can offer hardware as loss leaders in a way that its competitors cannot. Many questions remain unanswered about the commercial significance of this, including whether student profiles built up in school are ‘switched on’ when students enter adult life (pg 7).

My notes on Davies, W. (2017). Elites without hierarchies: Intermediaries, ‘agency’ and the super-rich. In Cities and the super-rich (pp. 19-38). Palgrave Macmillan, New York.

Who are the super-rich, and what do they want? This is the question with which a thought-provoking paper by Will Davies begins, and it’s one which has preoccupied me in recent years. Our statistical understanding of the super-rich has increased in recent years but this increased knowledge leaves a range of sociological questions which need to be addressed:

What do they want to do with all that money, other than protect it, grow it and pass it on to their children? Do they want political power, and if so, of what kind and to what end? Or do they employ it culturally, to achieve their own modes of Bourdieusian distinction from the other 99.9%? (pg 2)

For a Millsian approach to elites, the question is which political, cultural or military institutions they are gravitating towards in pursuit of power. For the Marxist approach, it’s a question of shared interests, their collective consciousness of them and self-organisation in pursuit of them in relation to other classes, as well as the tools of exploitation leveraged in this process. Davies agrees with Mike Savage that these aren’t necessarily the right questions, summarising his argument that we need to take money seriously as money (rather than assume it is waiting to be converted into power, with the assumption elites are intrinsically political) and must adequately describe capital before we can theorise it (rather than apply pre-existing categories to incomplete or outdated descriptions of our object).

What is this object? Is it a class? Is it a group? To what extent is it open or closed? To these challenges Davies adds another one: “the need to avoid wholesale methodological individualism, while recognising the deeply personal and individualised nature of the relationships and strategies that appear to structure the lives of the super-rich” (pg 3). Piketty’s contribution is to reorientate analysis away from the labour market and towards the family. But this is difficult because knowledge is partial and the super-rich are secretive. In order to address these challenges, Davies suggests we study intermediaries: agents working on behalf of the super-rich who represent their interests. By focusing on agency, in the sense of one party being contracted to represent the interests of another, it is possible to respond to Savage’s challenges and move the study of the super-rich forward.

He draws on Simmel’s account of money as a teleological vacuum, a pure means which extends beyond every possible use to which it can be put, connecting this to the ambitions of the super-rich. Piketty’s insight about the increasing importance of unearned wealth in the economy, as well as Dorling’s recognition that the professional classes are now being subsumed into the 99%, yields a sense of the super-rich as breaking away. As he puts it on pg 6, “To break free of the bounds of culture, politics or technological limits becomes a teleology in itself, the same anti-teleology that Simmel identified as the metaphysical nature of money”. This is tied to a phenomenology of valuing money as “a state of arbitrariness, where money can be experienced as perfect liquidity, without friction” and an “extreme form of negative liberty that lacks all normative restraint and relationship only to the future” (pg 16).

The problem of agency is key if we wish to avoid taking this analysis too far, with their insulation depending on the capacity of agents to represent the interests of the super-rich to the wider world. He summarises this as a theoretical approach on pg 8:

In this spirit, I want to propose a theoretical device which may help to shape a sociological approach to the super-rich – principle-agent problems. In particular, I suggest that we can think of the relationship of the super-rich to domains of power, culture and production as a series of principle-agent problems, in which they seek a form of representation which absolves them of the need to become involved in matters of public concern or controversy.

Principal-agent problems rest on the “paranoid methodological individualism” associated with game theory, with the primary challenge being to ensure the agent does not use their position to pursue their own private interests rather than those of the principal they are representing. Interestingly, this is the rationale for stock options for executives, theoretically encouraging them to act in pursuit of shareholder interests by making them shareholders. But as Davies notes, the fact that executive remuneration has risen more quickly than the stock market suggests it actually makes the agency problem worse.

This ties to a broader ambiguity about their position, as “symptoms of the deep-lying ambiguity surrounding the corporate form generally, which is neither a piece of private property nor a political association, but flips from one to the other as it suits” (pg 9). Training as professionals has been one solution but managers lack the monopoly over a specific domain of knowledge typical of professionals and their connection to the public interest is tentative and contestable. Techniques such as edit and credit rating were introduced to address this ambiguity but this introduce their own problem of agency, at least if the rating agency is paid by the company it rates.

This sociological reframing of the principal-agent problem “is a particular way of representing the interface of politics and economics” (pg 11). If I understand him correctly, economics is insulated from politics by outsourcing normative evaluation to agents; capital can float free of controversy because evaluation, justification and debate take place at a distance through the mediation of ratings agencies, auditors, central bankers and policy makers. It is a form of “moral under-writing – declaring that activities are transparent and trustworthy, sometimes when they are not” (pg 15). The same analysis can be applied to the growth of family offices, whose purpose is to “save super-rich families from having to engage in public situations (getting a child into a school, handling tax, booking a restaurant table, managing property) which may involve any form of antagonism” (pg 11). Whereas professionals once anchored capital in the public sphere, now they facilitate its escape.

He uses this to make the fascinating argument that the super-rich may benefit from further neoliberalisation, but it’s unclear how actively they are supporting it. Agency in this sense allows them to avoid becoming a class-for-itself, highlighting a micro-social disjuncture between the economic and the political which prevailing concepts of ‘neoliberalism’ are unable to capture. As a project it “required considerable solidarity and reflexive self-understanding on the part of capitalists and ideologues themselves, through think tanks, lobbying bodies, political parties, philanthropic networks” (pg 14). But if I understand correctly, its success has eroded the conditions which made this possible while also making it less necessary than was once the case. In its place, we have increasingly complex webs of “non-hierarchical, non-exploitative dyadic contractual relations” (pg 15) which often overlap within super-rich networks in which intermediaries have become full members over the preceding decades. It follows from this that the problem is not wealth corrupting politics, as much as “how wealth is kept entirely separate from politics and public life, through strategic acts of delegation, where the delegate is also a delegator” (pg 15).