I’m giving serious thought to this, as much as I’m trying to save money and travel less:

Call for Papers for the Conference “Scraping the Demos”: Political
epistemologies of Big Data

Organizers: Research Group Quantification and Social Regulation
(Weizenbaum Institute for the Networked Society) and DVPW Thematic Group
“Internet and Politics. Electronic Governance”

Date: 8-9 July 2019 (lunch-to-lunch)

Conference location: WZB Berlin Social Science Center, Reichpietschufer
50, D-10785 Berlin, Germany

Responsible: Dr. Lena Ulbricht lena.ulbricht@wzb.eu

The conference explores political epistemologies of big data. Political
epistemologies are practices by which societies construct politically
relevant knowledge and the criteria by which they evaluate it. These
practices and criteria may originate in scientific, political, cultural,
religious, and economic contexts. They evolve over time, vary between
sectors, are inherently political and therefore subject to conflict. Big
data is the practice of deriving socially relevant knowledge from
massive and diverse digital trace data. The rise of digital technologies
in all social spheres has enabled new epistemic practices which have
important political implications: Political elites see digital
technologies as sources of new and better tools for learning about the
citizenry, for increasing political responsiveness and for improving the
effectiveness of policies.

Practices such as “big data analysis”, “web scraping”, “opinion mining”,
“sentiment analysis”, “predictive analytics”, and “nowcasting” seem to
be common currency in the public and academic debate about the present
and future of evidence-based policy making and representative democracy.
Data mining and web scraping, techniques for accessing information
“hidden” behind the user interface of a website or device, seem to be
establishing themselves as epistemic practices with political
implications. They generate knowledge about populations and the
citizenry which diverges in many ways from previous ways of “seeing”
and constructing the demos. Data gathered through digital collection
tools is often much more personal; it can link different kinds of
information and in many cases offers improved predictive capability.
Survey methods and traditional administrative data may therefore lose
influence on political epistemologies. To rely on big data means to
rely on data sources that accumulate information without the awareness
of the individuals concerned. This epistemic shift can be observed in
policy advice, government and administration, and political
campaigning. Emerging research strands such as “computational social
sciences”, “social physics”, “policy analytics”, “policy informatics”,
and “policy simulations” strive for better evidence, more transparency
and greater responsiveness in policy making, and governments such as
those of the UK and Australia have set up strategies of “open policy
making”, “agile policy making” and “public service big data”.

Political parties and advocacy groups use digital data to address
citizens and muster support in a targeted manner; public authorities
try to tailor public policy to public sentiment measured online, to
forecast and prevent events (as in predictive policing, preemptive
security and predictive healthcare), and to continuously adapt policies
based on real-time monitoring. An entire industry of policy consultants
and technology companies thrives on promises about the political power
of digital data and analytics. And finally, academic research engages
in digitally enhanced computational social sciences, digital methods
and social physics on the basis of digital trace data, machine learning
and computer simulations. The political implications of these epistemic
practices have yet to be examined in detail. Indeed, the rise of
digital technologies in all social spheres may alter the relations
between citizens and political elites in various ways: it could
improve, impoverish (or simply change) political participation, policy
transparency, the accountability of political elites, and decision-making.

The aim of the conference is to bring together scholars from various
related disciplines working on the topic, including, but not limited to:
political communication, elections and party politics, science and
technology studies, political theory, history, sociology and philosophy
of science, critical data studies and computational social sciences.
These fields of research have addressed various aspects related to
political epistemologies in the digital age – but there have been only
a few opportunities to relate them, to compare similar practices in
different fields (for example in public policy and in political
campaigning) and to examine the broader picture in order to generate
theories about the political epistemologies of big data, algorithms and
artificial intelligence. Contributions can be conceptual or empirical.

The conference is interested in research concerning the following
questions and similar topics:
•       What are the political epistemologies underlying the use of big data
and related phenomena such as algorithms, machine learning and
artificial intelligence in political contexts?
•       Which scientific, political, social and economic practices make use of
digital data and methods? How do these practices construct knowledge
which is deemed politically relevant? By which
(rhetorical/procedural/technical) means do these practices and the actors
involved substantiate their claims to political relevance?
•       What insights can we gain from the computational social sciences in
relation to traditional social science methods when it comes to
political behavior, public opinion, policy making etc.?
•       How are digitally mediated political epistemologies related to other
political epistemologies? How are they embedded in institutional
practices and values?
•       Which interpretive conflicts do we witness with regard to the
knowledge produced and legitimized by digital technologies, and who are
its major challengers? In which ways do epistemic practices based on big
data, compared to other epistemic practices, influence the chances of
challenging political knowledge claims?
•       How can we place political epistemologies in a historical or cultural
perspective?
•       What are the implications of digitally mediated political
epistemologies for evidence-based policy making and for representative
democracy? Which conceptions of participation, representation and good
governance are embedded in the related practices? How do big
data-related epistemic practices reconfigure democratic concepts? Do we
witness a new form of technocracy?
•       How should democratic societies shape and regulate big-data-based
epistemic practices? Which contributions can we expect from algorithmic
accountability, data protection and research ethics?

The conference will contribute academic reflection to current public
debates about the state of democracy in the digital age, considering
that in 2019 various elections take place in German-speaking countries:
at the level of the European Parliament, within the German federal
states of Bremen, Hamburg, Saxony, Brandenburg and Thuringia, and in
Austria and Switzerland (at regional and federal level). The keynote
will be delivered by Professor Daniel Kreiss, the author of a seminal
book about the use of data-related practices in political campaigning
(“Prototype Politics”, 2016). The conference will also include artistic
interventions and a lab.

The conference will offer childcare, will be video-recorded, and will
be held in English. If the funding application is successful, the
travel costs of paper presenters will be covered. The organizers plan
to follow up the conference with a publication project.

Abstracts should make explicit which theories, methods and, if
applicable, empirical material the paper is based on. Please send your
abstract of 300-500 words by 24 February to the following address:
demosscraping-weizenbaum@wzb.eu

Preliminary program structure
8 July 2019
14.00   Welcome address
14.15   Keynote by Professor Daniel Kreiss + discussion
15.30   Coffee
16.00   Paper presentations
17.30   Lab and art exhibition
18.30   Reception
9 July 2019
9.00    Paper presentations
10.30   Coffee
11.00   Paper presentations
12.30   Paper presentations or panel discussion
14.00   Ending
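
As an aside on the practices the CFP names: here is a minimal sketch of
what “web scraping” followed by “sentiment analysis” can look like in
practice. Everything in it is illustrative rather than drawn from the
CFP – the URL, the p.comment selector and the word lists are
hypothetical stand-ins.

# A minimal, illustrative sketch: scrape a hypothetical page of public
# comments, then score each comment with a crude lexicon-based
# "sentiment analysis". URL, selector and word lists are all made up.
import requests
from bs4 import BeautifulSoup

POSITIVE = {"good", "great", "support", "hope", "agree"}
NEGATIVE = {"bad", "angry", "oppose", "fear", "disagree"}

def scrape_comments(url):
    # Fetch the page and parse out the text of the assumed comment tags.
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Assumes comments sit in <p class="comment"> tags; real sites differ.
    return [p.get_text(strip=True) for p in soup.select("p.comment")]

def sentiment_score(text):
    # +1 for each positive word, -1 for each negative word.
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

if __name__ == "__main__":
    for comment in scrape_comments("https://example.org/public-comments"):
        print(sentiment_score(comment), comment[:60])

Even a toy version like this makes the epistemic point visible: what
the analyst “knows” about public sentiment is entirely a function of
which page elements get scraped and which words sit in the lexicons.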

My notes on Andrejevic, M., Hearn, A., & Kennedy, H. (2015). Cultural studies of data mining: Introduction. European Journal of Cultural Studies, 18(4-5), 379-394.

In this introduction to an important special issue, Mark Andrejevic, Alison Hearn and Helen Kennedy argue that the ubiquity of data infrastructure in everyday life means that “we cannot afford to limit our thinking about data analysis technologies by approaching them solely as communication media” and offer a list of questions which we need to address:

what kinds of data are gathered, constructed and sold; how these processes are designed and implemented; to what ends data are deployed; who gets access to them; how their analysis is regulated (boyd and Crawford, 2012) and what, if any, possibilities for agency and better accountability data mining and analytics open up. (pg 380)

This creates a problem for cultural studies because data mining challenges established forms of representation, “promising to discern patterns that are so complex that they are beyond the reach of human perception, and in some cases of any meaningful explanation or interpretation”. It is “not only a highly technical practice, it also tends to be non-transparent in its applications, which are generally privately owned and controlled”. It poses an ontological challenge to cultural studies, as well as epistemological and methodological ones. In the absence of access to the products of data mining, the authors suggest cultural studies is left theorising their effects.

If we approach data analysis technologies as communicative media, we miss a “shift away from interpretive approaches and meaning-making practices towards the project of arranging and sorting people (and things) in time and space” (pg 381). Data mining isn’t undertaken to understand the communication taking place, as much as to “arrange and sort people and their interactions”. They suggest that recent developments in social theory mirror this changing reality (pg 381-382):

Perhaps not coincidentally, recent forms of social and cultural theory mirror developments in big data analytics; new materialism, object-oriented ontology, post-humanism and new medium theory – all of which are coming to play an important role in digital media studies – de-centre the human and her attendant political and cultural concerns in favour of a ‘flat’ ontology wherein humans are but one node, and perhaps not the most important, in complex networks of interactions and assemblages. Thus, analysis of the circulation of affects and effects rather than of meanings, content or representations, connected as they are to human-centred forms of meaning-making, has become a dominant trope in some influential current approaches to media. Such analyses tend to fashion themselves as anti-discursive in their rejection of a focus on representation and cognition and their turn towards bodies and things in their materiality (rather than their signification).

They make the compelling argument that to “remain within the horizon of interpretation, explanation and narrative” can be a “strategic critical resource in the face of theoretical tendencies that reproduce the correlational logic of the database by focusing on patterns and effects rather than on interpretations or explanations” (pg 382). The promise of these new approaches to correct an excessively discursive focus risks an “over-correction” and a “view from nowhere” in which “the goal of comprehensiveness (the inclusion of all components of an endless network of inter-relations) tends towards a politically inert process of specification in which structures of power and influence dissipate into networks and assemblages” (pg 383). Pushing beyond human concerns too easily leads to ever more specific analyses which collapse the substance of interactions into their effects, leaving us with “no way of generating a dynamics of contestation and argument in a flat ontology of ever proliferating relations or objects” (pg 384).

This is not a claim that there is nothing beyond culture, but rather a reminder that invoking this beyond is intrinsically cultural, and a call for “an interrogation of the embrace of a post-cultural imaginary within contemporary media theory” (pg 384). This imaginary often obscures the political economy of data infrastructure, compounding the existing tendency for the ‘virtual’ character of digital phenomena to distract from their socio-economic materiality; for all their opacity, complexity and power they are just another phase in the technological development of human civilisation (pg 385). When we recognise this it becomes easier to reject the “celebratory presentism” and remember that “technological forms, and the rhetorics and analytic practices that accompany them, do not come from nowhere – they have histories, which shape and condition them, and inevitably bear the marks of the cultural, social and political conditions surrounding their production and implementation” (pg 385). They end this wonderful paper with a call to action which I’d like to explore in the digital public sociology book I’m writing with Lambros Fatsis (pg 393):

We need to develop new methodologies and new intellectual and critical competencies to tackle the embedded assumptions buried in the code and their political and cultural implications. Our ability to accomplish these things will require more than isolated scholarly effort; collaborative, politically engaged activist sensibilities will no doubt be required in order to push past the privatized digital enclosures and open up access to the algorithms, analytics, distributive regimes and infrastructural monopolies that are increasingly coming to condition the contours and substance of our daily lives.

In the last few days, I’ve spent a lot of time reflecting on a remark Susan Halford made at this event about the difference between expertise and discipline. If I understand her correctly, her point was that capacities for knowing and acting in the world (expertise) can have their reproduction organised socially in different ways (discipline) and this is crucial for understanding how knowledge production responds to novel developments. In some cases, discipline might support expertise but in other cases it might hinder it. In either case, expertise is dependent upon it because it requires a social organisation through which existing knowledge is codified, new knowledge incorporated and knowledge practitioners trained. This means that we can’t ever have ‘pure expertise’ as a response to novelty because experts are embedded, even if loosely or unorthodoxly, within disciplines. This is the problem Susan identifies with the politics of discipline generated by big data:

How we define Big Data matters because it shapes our understanding of the expertise that is required to engage with it – to extract the value and deliver the promise. Is this the job for mathematicians and statisticians? Computer scientists? Or ‘domain experts’ – economists, sociologists or geographers – as appropriate to the real-world problems at hand? As the Big Data field forms we see the processes of occupational closure at play: who does this field belong to, who has the expertise, the right to practice? This is of observational interest for those of us who research professions, knowledge and the labour market, as we see how claims to expert knowledge are made by competing disciplines. But it is also of broader interest for those of us concerned with the future of Big Data: the outcome will shape the epistemological foundations of the field. Whether or not it is acknowledged, the disciplinary carve-up of big data will have profound consequences for the questions that are asked, the claims that are made and – ultimately – the value that is derived from this ‘new oil’ in the global economy.

One response to this upheaval is to retreat into disciplinary silos, and there’s inevitably a comfort to this. But not only does this cede terrain in a way which might allow narrow forms of expertise to become hegemonic; doubling down on a form of discipline unlikely to survive this transformation of expertise in its current form is also inevitably short-sighted. This is how Felicity Callard and Des Fitzgerald describe the shifting plate tectonics of the human sciences in their book on interdisciplinarity:

The more we wander down strange interdisciplinary tracks, the more apparent it becomes to us that being disciplined isn’t playing it safe: the truth is that staying within the narrow epistemological confines of – for example – mid-twentieth-century sociology, while it may produce short-term gains, is not, in fact, the best way to guarantee a career in the twenty-first century (and we mean ‘career’ in its most capacious sense here: we are not using it with the assumption that everyone wants a permanent post at a university, but to express an idea that many would like to find some way to advance their projects, ideas, and so on). The plate tectonics of the human sciences are shifting: we here describe our own forays into one small, circumscribed niche between the social and natural sciences, but expand this horizon to epigenetics, to the emergence of the human microbiome, to all kinds of translational research in mental health, to ‘big data’ and the devices that append it, to the breakdown of the barrier between creative practices and research, and to a whole host of other collapsing dichotomies, and it becomes apparent that ‘neuro-social science’ is only one local effect of a much broader reverberation.

But there’s also a great deal of creativity in this space. It just means we have to consider projects of expertise alongside projects of discipline, mapping out these issues as neither purely matters of expertise nor purely matters of discipline. This is what I hope we’ll manage to explore in my session at the TSR conference on defending the social. It’s a Fireside Chat with Val Gillies and Ros Edwards, as well as their co-author who couldn’t make it when we recorded the podcast below.

In a recent paper, I’ve argued we find a cultural project underpinning ‘big data’: a commitment to reducing human being, in all its embodied affective complexity, stripping it of any reality beyond the behavioural traces which register through digital infrastructure. Underlying method, methodology and theory there is a vision of how human beings are constituted, as well as how they can be influenced. In some cases, this is explicitly argued but it is often simply implicit, lurking beneath the surface of careful choices which nonetheless exceed their own stated criteria.

It’s an argument I’m keen to take further than I have at present, and reading Who Cooked Adam Smith’s Dinner? by Katrine Marçal has left me interested in exploring the parallels between homo economicus (and why we are invested in him) and the emerging homo digitalis. Marçal writes on pg 162 of the allure of the former, which is misunderstood if we see it as nothing more than an implausible theoretical construct or a mechanism to exercise influence over political decision-making:

Many have criticized economic man’s one-dimensional perspective. He lacks depth, emotions, psychology and complexity, we think. He’s a simple, selfish calculator. A caricature. Why do we keep dragging this paper doll around? It’s ridiculous. What does he have to do with us? But his critics are missing something essential. He isn’t like us, but he clearly has emotions, depth, fears and dreams that we can completely identify with. Economic man can’t just be a simple paper doll, a run-of-the-mill psychopath or a random hallucination. Why, if he were, would we be so enchanted? Why would we so desperately try to align every part of existence with his view of the world, even though collected research shows that this model of human behaviour doesn’t cohere with reality? The desperation with which we want to align all parts of our lives with the fantasy says something about who we are. And what we are afraid of. This is what we have a hard time admitting to ourselves. Economic man’s parodically simple behaviour doesn’t mean that he isn’t conjured from deep inner conflicts

What makes homo economicus so compelling? This allure has its roots in a denial of human dependence, describing on pg 155 how our fascination with “his self-sufficiency, his reason and the predictable universe that he inhabits” reflect discomfort with our once having been utterly dependent on others, “at the mercy of their hopes, demands, love, neuroses, traumas, disappointments and unrealized lives”, as well as the inevitability that we will be so again at the other end of the life-course. But he also embodies a vision of what life should be like between the two poles of dependency, as she writes on pg 163:

His identity is said to be completely independent of other people. No man is an island, we say, and think that economic man’s total self-sufficiency is laughable. But then we haven’t understood his nature. You can’t construct a human identity except in relation to others. And whether economic man likes it or not –this applies to him as well. Because competition is central to his nature, his is an identity that is totally dependent on other people. Economic man is very much bound to others. But bound to them in a new way. Bound to them. Downright chained to them. In competition. If economic man doesn’t compete, he is nothing, and to compete he needs other people. He doesn’t live in a world without relationships. He lives in a world where all relationships are reduced to competition. He is aggressive and narcissistic. And he lives in conflict with himself. With nature and with other people. He thinks that conflict is the only thing that creates movement. Movement without risk. This is his life: filled with trials, tribulations and intense longing. He is a man on the run.

If I’m right about the existence of homo digitalis, a clear vision of human constitution underpinning ‘big data’*, we can ask similar questions about this truncated, eviscerated, predictable monad. So complex when we look up close, so simple when we gaze down from on high. Our individuality melts away in the aggregate, leaving us no longer overwhelming but simply overwhelmed. Manageable, knowable, stripped back. Why might this be an appealing vision of humankind? Who might it be appealing to? I’m sure many can guess where I’m going with this, but it’s a topic for another post.

*A term I use to encompass digital social science, commercial and academic, as well as the organisations and infrastructures which it facilitates.

Some tweets about this blog post worry me because it appears as if people think this is my analysis. It’s not. These are my notes on the excellent paper below which I’d strongly recommend reading in full. 

This thought-provoking article by Malcolm Williams, Luke Sloan and Charlotte Brookfield offers a new spin on the familiar problem of the quantitative deficit within U.K. sociology. Many accounts of this sort are concerned with the explanatory implications of the deficit (the phenomena that defy explanation in anything other than quantitative terms), while digital sociology is concerned with its implications for computational skills. However, the authors look to a deeper level: the tradition within British sociology which defines itself against quantitative methods. They explore this by drawing a contrast between analytical sociology and critical sociology:

Analytic sociology is the term often used to describe a quite specific version of scientific sociology that combines theories and empirical data to produce sociological explanations (Bunge, 1997; Coleman, 1986; Hedström, 2005; Hedström and Swedberg, 1998). It mostly employs mechanistic explanation and variants on middle range theory. Our use of the term ‘analytic’ encompasses this specific use, but is also broader and meant solely to indicate a sociology that aims to produce descriptions and explanations of social phenomena. It does not exclude ‘understanding’ as methodological virtue, nor does it deny the role of ‘critique’ as an element in the methodological toolkit. It certainly does not exclude qualitative methods and indeed the research described here has qualitative elements

http://journals.sagepub.com/doi/full/10.1177/1360780417734146

Their distinction tracks familiar oppositions between explanation/understanding and positivism/hermeneutics. Their interest is in how the latter term in each pair was advantaged by the dynamics of expansion in U.K. universities, where (non-quantitative) sociology was a cheap route to expanded student numbers with little to no necessary capital investment. It was during this period of expansion in the 1960s that ‘scientific method’ began to be tied to militarism by the burgeoning anti-war movement. They argue that successive intellectual movements (postmodernism, the linguistic turn, the cultural turn) accentuated this antipathy, such that progressive thought came to be instinctively cautious about quantitative methods. This trend played out within the discipline, its students and teachers, rather than simply being located ‘out there’.

They see this hostility as being dampened by the methodological pluralism encouraged by critical realism and mixed-methods pragmatism. But for reasons I don’t understand, and which seem to misread the motivations and methods of the critical realist project, they incorporate these into analytical sociology:

While there are important differences in the analytic approach (say between realism, post-positivism, and positivism), there is a common core as treating social phenomena as real (or a proxy for real) (Kincaid, 1996) that can be caused, or can cause other social phenomena. The analytic approach shares the common foundations of science: description, explanation, and theory testing and, more specifically, that through the use of appropriate sampling we can generalise from sample to population or from one time or place to another.

http://journals.sagepub.com/doi/full/10.1177/1360780417734146

These are precisely the features which what they call critical sociology rejects as “either methodologically impossible to achieve, in the social world, or ethically undesirable”. More positively, it is concerned with situated meaning and the possibility of emancipation. Their characterisation here is much vaguer but they admit there is an element of strawman to each. Their concern is with how these sociological stereotypes enter into the understanding of students, as extreme versions of actually existing tendencies take hold in the imagination of those who are the next generation of sociologists and the cohorts which the discipline sets loose upon the world.

This is an important possibility because evidence suggests that sociology students are not driven by a fear of numbers in choosing their degree – or, at least, that other mechanisms are at work in bringing about the quantitative deficit within U.K. sociology. The evidence they present suggests a humanistic understanding of sociology is dominant within the student body:

Table 2 clearly shows that the majority of students scored the discipline as closer to the arts/humanities than science/maths. It has been speculated that students taking a prior A-levels in art might be inclined to see sociology as closer to the arts and those taking a mathematics A-Level as closer to science. In fact, though there was some variation at the different measurement points, more students in both groups still thought sociology nearer to the arts/humanities than the sciences.

All but one of the subsequent focus groups revealed a “proclivity towards the qualitative involving the theoretical and critique with scepticism about statistics and a clear preference from the students for doing discursive work”. The BSA survey, which asked more nuanced questions than the aforementioned survey, produced a more cautious endorsement of sociology’s status:

Table 4 shows that the majority of participants viewed the subject content (64.3%) and status (66.9%) of sociological research as closer to the arts and humanities. In terms of methodology, analytical tools, and public utility, sociology was seen as mid-way between the arts and humanities and the natural sciences

Their overarching argument, supported by intriguing comparative data concerning sociology in the Netherlands and New Zealand, concerns how a cultural antipathy to quantitative methods gets reproduced across successive professional cohorts (compounded by the marginalisation of quantitative methods teaching within the broader curriculum):

Many, if not most, sociologists in UK universities have themselves come from a culture of sociology that emphasises critique over analysis, theoretical positions, and qualitative over quantitative methods of enquiry that reflect the historical influences on the discipline, as described above. This culture exists at all levels of teaching, from pre-university A-level teaching through to postgraduate training. Their attitudes and practices incline them ideologically and practically to favour a humanistic and critical attitude towards the discipline, the selection of research questions that require interpretive methods, and often either an expertise in these methods or a preference for theoretical reasoning alone

The result is an absence of methodological pluralism within U.K. sociology, held, it seems, as a point of principle. They suggest this might also be coupled with a vague sense of persecution, as critical sociology perceives itself as being under threat in a discipline it in fact dominates.

The ensuing ‘split personality’ might be a source of strength for the discipline in troubled times:

In the UK, quite apart from sociology ceding many of its former areas of interest to other disciplines, what sociology is depends on who you ask. The appearance is one of fragmentation. Nevertheless, a counterfactual argument may go something like this: a fragmented discipline might also be described as a diverse one, whose survivability does not depend on the adherence to any particular paradigm. Psychology, for example, which has long been largely associated with experimental method, faces something of a crisis as the statistical reasoning that underpin the experiment have been increasingly challenged in the last two decades (see, for example, Krueger, 2001). Sociology, in the UK, may actually be more agile as a result of its analytic/critique split personality

But crucially there is a risk that quantitative practitioners will export themselves from the discipline, even as its capacity to generate them increases:

One might further speculate that those graduate sociologists, from universities with Q-Step centres or other more quantitatively inclined courses, will not necessarily work in sociology or identify as sociologists because they too see it as a primarily humanistic discipline based upon critique, but rather go to other disciplines or become generic ‘social researchers’ with a consequent continuation of the present situation where analytic sociology continues to be a minority pursuit within the UK discipline.

It seems passé to talk about the ‘big data revolution’ in 2017. Much of the initial hype has subsided, leaving us in a different situation to the one in which big data was expected to sweep away all that had come before. Instead, we have the emergence of data science as well as the institutionalisation of computational methods, albeit unevenly, across the full range of the natural and social sciences. Furthermore, addressing the challenge posed by early waves of big data evangelism to established methodologies, particularly those with a critical and/or hermeneutic focus, has generated a vast outpouring of creativity with the potential to generate significant reorientations within these disciplines. The ‘big data revolution’ has proceeded in a much more constructive way than those early prophets of epochal change were able to predict.

However, we are still far from harmony within the academy. While the intellectual changes driven by big data are well underway, institutional changes of potentially greater importance are still in their infancy. This is how Susan Halford describes the politics of discipline surrounding big data:

How we define Big Data matters because it shapes our understanding of the expertise that is required to engage with it – to extract the value and deliver the promise. Is this the job for mathematicians and statisticians? Computer scientists? Or ‘domain experts’ – economists, sociologists or geographers – as appropriate to the real-world problems at hand? As the Big Data field forms we see the processes of occupational closure at play: who does this field belong to, who has the expertise, the right to practice? This is of observational interest for those of us who research professions, knowledge and the labour market, as we see how claims to expert knowledge are made by competing disciplines. But it is also of broader interest for those of us concerned with the future of Big Data: the outcome will shape the epistemological foundations of the field. Whether or not it is acknowledged, the disciplinary carve-up of big data will have profound consequences for the questions that are asked, the claims that are made and – ultimately – the value that is derived from this ‘new oil’ in the global economy.

https://discoversociety.org/2015/07/30/big-data-and-the-politics-of-discipline/

We can see rapid transformation at this level, with expertise in the social and natural sciences responding to the opportunities and incentives which big data has brought with it. The institutional landscape has begun to change, most notably around funding, with important consequences for how individual and collective agents plan their career-path through this environment. However, this is still unfolding within organisations that have not themselves undergone change as a result of big data. It is this which is likely to change in the coming years. As WonkHe reported earlier this week of the consultation on how the Office for Students will regulate providers of higher education in England:

The consultation will also be looking at the nuts and bolts of the OfS – how will it balance the demands of competition and autonomy while maintaining “proportionate” regulatory approaches? How will the remarkable new powers of entry (extreme audit?) be used? What sanctions will be available to the new regulator, and how will they be applied? Following strong ministerial direction, we can also expect measures on senior staff pay to feature prominently, but what form will they take, and will they have any real teeth? And how will approaches compare to other sectors?

Widely expected is an end to regular institutional visits – the “periodic review” is likely to be replaced by a new method for the OfS to use live data to monitor institutions. It may well be easier than the annual submission, but now is a good time to be a big data wonk, as new systems and process will need to be established in institutions to respond to a new approach.

This concern for real-time metrics, institutionalising transactional data into the fabric of higher education itself, only seems likely to grow. What does this mean for the politics of discipline? My hunch is that the big data revolution within higher education has only just begun and that its eventual form will be different to that which most predicted.

I just came across this remarkable estimate in an Economist feature on surveillance. I knew digitalisation made surveillance cheaper but I didn’t realise quite how much cheaper. How much of the creeping authoritarianism which characterises the contemporary national security apparatus in the UK and US is driven by a familiar impulse towards efficiency?

The agencies not only do more, they also spend less. According to Mr Schneier, to deploy agents on a tail costs $175,000 a month because it takes a lot of manpower. To put a GPS receiver in someone’s car takes $150 a month. But to tag a target’s mobile phone, with the help of a phone company, costs only $30 a month. And whereas paper records soon become unmanageable, electronic storage is so cheap that the agencies can afford to hang on to a lot of data that may one day come in useful.

http://www.economist.com/news/special-report/21709773-who-benefiting-more-cyberisation-intelligence-spooks-or-their
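
The scale of that differential is easy to miss in prose. A quick
back-of-the-envelope comparison, taking the quoted monthly figures at
face value:

# Monthly surveillance costs quoted from Mr Schneier, in US dollars.
TAIL = 175_000   # team of agents physically tailing a target
GPS = 150        # GPS receiver placed in a target's car
PHONE = 30       # tagging a target's mobile phone via the phone company

print(f"GPS vs tail:   {TAIL / GPS:,.0f}x cheaper")     # ~1,167x
print(f"phone vs tail: {TAIL / PHONE:,.0f}x cheaper")   # ~5,833x
print(f"phones tagged for the cost of one tail: {TAIL // PHONE:,}")  # 5,833

For the price of physically tailing a single person, then, an agency
can tag well over five thousand phones. Seen that way, the efficiency
framing practically writes itself.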

In reality, it is of course anything but, instead heralding a potentially open-ended project to capture the world and achieve the utopia of total social legibility. An ambition which always makes me think of this short story:

The story deals with the development of universe-scale computers called Multivacs and their relationships with humanity through the courses of seven historic settings, beginning in 2061. In each of the first six scenes a different character presents the computer with the same question; namely, how the threat to human existence posed by the heat death of the universe can be averted. The question was: “How can the net amount of entropy of the universe be massively decreased?” This is equivalent to asking: “Can the workings of the second law of thermodynamics (used in the story as the increase of the entropy of the universe) be reversed?” Multivac’s only response after much “thinking” is: “INSUFFICIENT DATA FOR MEANINGFUL ANSWER.”

The story jumps forward in time into later eras of human and scientific development. In each of these eras someone decides to ask the ultimate “last question” regarding the reversal and decrease of entropy. Each time, in each new era, Multivac’s descendant is asked this question, and finds itself unable to solve the problem. Each time all it can answer is an (increasingly sophisticated, linguistically): “THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

In the last scene, the god-like descendant of humanity (the unified mental process of over a trillion, trillion, trillion humans that have spread throughout the universe) watches the stars flicker out, one by one, as matter and energy ends, and with it, space and time. Humanity asks AC, Multivac’s ultimate descendant, which exists in hyperspace beyond the bounds of gravity or time, the entropy question one last time, before the last of humanity merges with AC and disappears. AC is still unable to answer, but continues to ponder the question even after space and time cease to exist. Eventually AC discovers the answer, but has nobody to report it to; the universe is already dead. It therefore decides to answer by demonstration. The story ends with AC’s pronouncement,

And AC said: “LET THERE BE LIGHT!” And there was light

https://en.wikipedia.org/wiki/The_Last_Question

From Douglas Rushkoff’s Throwing Rocks at the Google Bus, loc 2256:

Besides, consumer research is all about winning some portion of a fixed number of purchases. It doesn’t create more consumption. If anything, technological solutions tend to make markets smaller and less likely to spawn associated industries in shipping, resource management, and labor services.

Digital advertising might ultimately capture the entirety of advertising budgets, but it does nothing to expand these budgets. There are upper limits on the revenue growth of the corporations that define the ‘attention economy’: how are they going to respond to these?

I’m very interested in this concept, which I was introduced to through the work of Pierpaolo Donati and Andrea Maccarini earlier this year. It emerged from the work of Arnold Gehlen and refers to the role of human institutions in unburdening us from existential demands. This is quoted from his Human Beings and Institutions on pg 257 of Social Theory: Twenty Introductory Lectures by Hans Joas and Wolfgang Knobl. He writes that institutions

are those entities which enable a being, a being at risk, unstable and affectively overburdened by nature, to put up with his fellows and with himself, something on the basis of which one can count on and rely on oneself and others. On the one hand, human objectives are jointly tackled and pursued within these institutions; on the other, people gear themselves toward definitive certainties of doing and to doing within them, with the extraordinary benefit that their inner life is stabilized, so that they do not have to deal with profound emotional issues or make fundamental decisions at every turn.

In an interesting essay last year, Will Davies reflected on the ‘pleasure of dependence’ in a way which captures my understanding of entlastung. It can be a relief to trust in something outside of ourselves, settling into dependence on the understanding that our context is defined by a degree of reliability due to an agency other than our own:

I have a memory from childhood, a happy memory — one of complete trust and comfort. It’s dark, and I’m kneeling in the tiny floor area of the back seat of a car, resting my head on the seat. I’m perhaps six years old. I look upward to the window, through which I can see streetlights and buildings rushing by in a foreign town whose name and location I’m completely unaware of. In the front seats sit my parents, and in front of them, the warm yellow and red glow of the dashboard, with my dad at the steering wheel.

Contrary to the sentiment of so many ads and products, this memory reminds me that dependence can be a source of deep, almost visceral pleasure: to know nothing of where one is going, to have no responsibility for how one gets there or the risks involved. I must have knelt on the floor of the car backward to further increase that feeling of powerlessness as I stared up at the passing lights.

http://thenewinquiry.com/essays/the-data-sublime/

At a time when entlastung is failing, when institutions are coming to lose this capacity to unburden us, could faith in self-tracking, big data and digital technology fill the gap? The technological system as a whole comes to constitute the remaining possibility of entlastung and we enthusiastically throw ourselves into its embrace, as the only way left to feel some relief from the creeping anxiety that characterises daily life.

The essay by Will Davies is really worth reading: http://thenewinquiry.com/essays/the-data-sublime/

From Infoglut, by Mark Andrejevic, loc 607. The context for digital innovation in public services:

What emerges is a kind of actuarial model of crime: one that lends itself to aggregate considerations regarding how best to allocate resources under conditions of scarcity – a set of concerns that fits neatly with the conjunction of generalized threat and the constriction of public-sector funding. The algorithm promises not simply to capitalize on new information technology and the data it generates, but simultaneously to address reductions in public resources. The challenges posed by reduced manpower can be countered (allegedly) by more information. As in other realms, enhanced information processing promises to make the business of policing and security more efficient and effective. However, it does so according to new surveillance imperatives, including the guidance of targeted surveillance by comprehensive monitoring, the privileging of prediction over explanation (or causality), and new forms of informational asymmetry. The data-driven promise of prediction, in other words, relies upon significant shifts in cultures and practices of information collection.