My notes on Andrejevic, M., Hearn, A., & Kennedy, H. (2015). Cultural studies of data mining: Introduction. European Journal of Cultural Studies, 18(4-5), 379-394.

In this introduction to an important special issue, Mark Andrejevic, Alison Hearn and Helen Kennedy argue that the ubiquity of data infrastructure in everyday life means that “we cannot afford to limit our thinking about data analysis technologies by approaching them solely as communication media”, and they offer a list of questions which we need to address:

what kinds of data are gathered, constructed and sold; how these processes are designed and implemented; to what ends data are deployed; who gets access to them; how their analysis is regulated (boyd and Crawford, 2012) and what, if any, possibilities for agency and better accountability data mining and analytics open up. (pg 380)

This creates a problem for cultural studies because data mining challenges established forms of representation, “promising to discern patterns that are so complex that they are beyond the reach of human perception, and in some cases of any meaningful explanation or interpretation”. It is “not only a highly technical practice, it also tends to be non-transparent in its applications, which are generally privately owned and controlled”. It poses an ontological challenge to cultural studies, as well as epistemological and methodological ones. In the absence of access to the products of data mining, the authors suggest cultural studies is left theorising their effects.

If we approach data analysis technologies as communicative media, we miss a “shift away from interpretive approaches and meaning-making practices towards the project of arranging and sorting people (and things) in time and space” (pg 381). Data mining isn’t undertaken to understand the communication taking place so much as to “arrange and sort people and their interactions”. They suggest that recent developments in social theory mirror this changing reality (pg 381-382):

Perhaps not coincidentally, recent forms of social and cultural theory mirror developments in big data analytics; new materialism, object-oriented ontology, post-humanism and new medium theory – all of which are coming to play an important role in digital media studies – de-centre the human and her attendant political and cultural concerns in favour of a ‘flat’ ontology wherein humans are but one node, and perhaps not the most important, in complex networks of interactions and assemblages. Thus, analysis of the circulation of affects and effects rather than of meanings, content or representations, connected as they are to human-centred forms of meaning-making, has become a dominant trope in some influential current approaches to media. Such analyses tend to fashion themselves as anti-discursive in their rejection of a focus on representation and cognition and their turn towards bodies and things in their materiality (rather than their signification).

They make the compelling argument that to “remain within the horizon of interpretation, explanation and narrative” can be a “strategic critical resource in the face of theoretical tendencies that reproduce the correlational logic of the database by focusing on patterns and effects rather than on interpretations or explanations” (pg 382). The promise of these new approaches to correct an excessively discursive focus risks an “over-correction” and a “view from nowhere” in which “the goal of comprehensiveness (the inclusion of all components of an endless network of inter-relations) tends towards a politically inert process of specification in which structures of power and influence dissipate into networks and assemblages” (pg 383). Pushing beyond human concerns too easily leads to ever more specific analyses which collapse the substance of interactions into their effects, leaving us with “no way of generating a dynamics of contestation and argument in a flat ontology of ever proliferating relations or objects” (pg 384).

This is not a claim that there is nothing beyond culture, but rather a reminder that invoking this beyond is intrinsically cultural, and a call for “an interrogation of the embrace of a post-cultural imaginary within contemporary media theory” (pg 384). This imaginary often obscures the political economy of data infrastructure, compounding the existing tendency for the ‘virtual’ character of digital phenomena to distract from their socio-economic materiality; for all their opacity, complexity and power, they are just another phase in the technological development of human civilisation (pg 385). When we recognise this, it becomes easier to reject the “celebratory presentism” and remember that “technological forms, and the rhetorics and analytic practices that accompany them, do not come from nowhere – they have histories, which shape and condition them, and inevitably bear the marks of the cultural, social and political conditions surrounding their production and implementation” (pg 385). They end this wonderful paper with a call to action which I’d like to explore in the digital public sociology book I’m writing with Lambros Fatsis (pg 393):

We need to develop new methodologies and new intellectual and critical competencies to tackle the embedded assumptions buried in the code and their political and cultural implications. Our ability to accomplish these things will require more than isolated scholarly effort; collaborative, politically engaged activist sensibilities will no doubt be required in order to push past the privatized digital enclosures and open up access to the algorithms, analytics, distributive regimes and infrastructural monopolies that are increasingly coming to condition the contours and substance of our daily lives.

Tuesday December 4th 12pm
Faculty of Education, University of Cambridge
Everyone welcome! It’s a short journey from Cambridge train station.

We hear a lot about the coming ‘automation revolution’, but what might developments in machine learning and AI mean for researchers in the social sciences and humanities? In our next masterclass, Associate Professor Inger Mewburn (from the Australian National University and the Thesis Whisperer Blog) will talk about her forthcoming book on machine learning in the social sciences. What kinds of projects does machine learning make possible? What kinds of collaborations can social scientists form to take advantage of these new tools and techniques? Do some of our PhD graduates have a future in bespoke algorithm design? Come along to this discussion on future social science practice.

Register here: https://www.eventbrite.com/e/masterclass-an-introduction-to-machine-learning-with-dr-inger-mewburn-tickets-52665601231

The robots are coming! The robots are coming! After watching More Human Than Human, I’ve woken up preoccupied by the rise of the robots narrative and how inadequate it is for making sense of the cultural politics and political economy of automation. The film is an engaging exploration of artificial intelligence and its social significance. While its analysis is often superficial, it foregrounds the agency of the roboticists and thinkers who are shaping emerging technologies, and this feels important to me. Nonetheless, this focus sits uneasily with the film’s tendency to frame technological change as inexorable, able to be steered for good or evil but impossible to constrain. This is a tension at the heart of disruption rhetoric, which celebrates innovation as a form of creativity while holding it to be unavoidable. But this is just one way in which the film starkly embodies a broader trend.

One reason it is important to see the figures shaping these developments is that it makes clear how white, male and Anglo-American they are. As Jana Bacevic observed, the film manifestly fails the Bechdel test. There are three women with speaking roles in the film, only one of whom talks about her own work, and even she does so through the lens of the man whose memory powers it. As far as I can recall, every single person in the film is white, mostly American with a few northern Europeans thrown in for good measure. The only exception is a Russian-born woman who now works as an entrepreneur in Silicon Valley. This is problematic for many reasons, not least because much cutting-edge work in artificial intelligence is taking place in China. By ignoring these developments, the film not only undermines its own investigative mission but further evacuates the political questions it raises by robbing them of their geopolitical dimension. Disruptive innovation is bound up with techno-nationalism, as machine learning becomes an arms race of epochal significance at a time when American power seemingly enters a state of terminal decline after years of domination without hegemony.

The film ends in a contemplative mode, reiterating familiar ruminations about our future. Every sentence in the closing scene invokes ‘we’ and ‘our’. Who are we? How does the white American author in his early 30s who provides the intellectual narration for the film come to articulate the agenda of this ‘we’? How does the older white American director who provides its substantive narration, with the film framed around his own personal project in disruptive innovation, come to do the same? The ‘we’ here is devoid of politics. It is a ‘we’ without a ‘they’, as Chantal Mouffe would put it. At a time when the liberal order is in chaos, we ought to be suspicious to the point of paranoia about the emergence of a powerful narrative of civilisational renewal in which we can save ourselves or we can doom ourselves. It is Anglo-American capitalism mystifying its own bleeding edge, making a religion out of its own products and celebrating them as world-making or fearing them as world-breaking. None of this is to deny that hugely significant technological advances are occurring. But the rise of the robots narrative actively frustrates our understanding of them, systematically shutting down the intellectual space in which it becomes possible to think through the cultural politics and political economy of automation. Provincialising disruption is unavoidable if we want to understand the reality of putatively disruptive technologies.

How good does this look? So much of it chimes with the paper I’m currently struggling to finish.

The Cultural Life of Machine Learning: An Incursion into Critical AI Studies
Preconference Workshop, #AoIR2018 Montréal, Canada
Urbanisation Culture Société Research Centre, INRS (Institut national de la recherche scientifique)
Wednesday October 10th 2018

Machine learning (ML), deep neural networks, differentiable programming and related contemporary novelties in artificial intelligence (AI) are all leading to the development of an ambiguous yet efficient narrative promoting the dominance of a scientific field—as well as a ubiquitous business model. Indeed, AI is very much in full hype mode. For its advocates, it represents a ‘tsunami’ (Manning, 2015) or ‘revolution’ (Sejnowski, 2018)—terms indicative of a very performative and promotional, if not self-fulfilling, discourse. The question, then, is: how are the social sciences and humanities to dissect such a discourse and make sense of all its practical implications? So far, the literature on algorithms and algorithmic cultures has been keen to explore both their broad socio-economic, political and cultural repercussions, and the ways they relate to different disciplines, from sociology to communication and Internet studies. The crucial task ahead is understanding the specific ways by which the new challenges raised by ML and AI technologies affect this wider framework. This would imply not only closer collaboration among disciplines—including those of STS for instance—but also the development of new critical insights and perspectives. Thus, a helpful and precise preconference workshop question could be: what is the best way to develop a fine-grained yet encompassing field under the name of Critical AI Studies? We propose to explore three regimes in which ML and 21st-century AI crystallize and come to justify their existence: (1) epistemology, (2) agency, and (3) governmentality—each of which generates new challenges as well as new directions for inquiries.

In terms of epistemology, it is important to recognize that ML and AI are situated forms of knowledge production, and thus worthy of empirical examination (Pinch and Bijker, 1987). At present, we only have internal accounts of the historical development of the machine learning field, which increasingly reproduce a teleological story of its rise (Rosenblatt, 1958) and fall (Minsky and Papert, 1968; Vapnik, 1998) and rise (Hinton, 2006), concluding with the diverse if as-yet unproven applications of deep learning. Especially problematic in this regard is our understanding of how these techniques are increasingly hybridized with large-scale training datasets, specialized graphics-processing hardware, and algorithmic calculus. The rationale behind contemporary ML finds its expression in a very specific laboratory culture (Forsythe, 1993), with a specific ethos or model of “open science”. Models trained on the largest datasets of private corporations are thus made freely available, and subsequently détourned for the new AI’s semiotic environs of image, speech, and text—promising to make the epistemically recalcitrant landscapes of unruly and ‘unstructured’ data newly “manageable”.
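As a concrete illustration of that last point, the reuse of openly released, corporately trained models is what practitioners call transfer learning. What follows is a minimal sketch, assuming PyTorch and torchvision, an ImageNet-pretrained network, and a hypothetical five-class downstream task; none of these specifics come from the call itself.

    # Minimal transfer-learning sketch: a network pretrained at corporate
    # scale (ImageNet weights shipped with torchvision) is repurposed,
    # or "detourned", for a new task. All task specifics are hypothetical.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Load openly released weights produced by large-scale training.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pretrained feature extractor; only the new head learns.
    for param in model.parameters():
        param.requires_grad = False

    # Swap the final classifier for a hypothetical 5-class problem.
    model.fc = nn.Linear(model.fc.in_features, 5)

    # Train only the replacement layer on the new, smaller dataset.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

The design point, relevant to the workshop's epistemological concern, is that the expensive epistemic labour sits in the frozen pretrained weights, which the downstream user inherits wholesale.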

As the knowledge-production techniques of ML and AI move further into the fabric of everyday life, they create a genuinely new form of agency. Unlike the static, rule-based systems critiqued in a previous generation by Dreyfus (1972), modern AI models pragmatically unfold as a temporal flow of decontextualized classifications. What then does agency mean for machine learners (Mackenzie, 2017)? Performance in this particular case relates to the power of inferring and predicting outcomes (Burrell, 2016); new kinds of algorithmic control thus emerge at the junction of meaning-making and decision-making. The implications of this question are tangible, particularly as ML becomes more unsupervised and begins to impact on numerous aspects of daily life. Social media, for instance, are undergoing radical change, as insightful new actants come to populate the world: Echo translates your desires into Amazon purchases, and Facebook is now able to detect suicidal behaviours. In the general domain of work, too, these actants leave permanent traces—not only on repetitive tasks, but also on broader forms of intellectual responsibility.

Last but not least, the final regime to explore in this preconference workshop is governmentality. The politics of ML and AI are still largely to be outlined, and the question of what power means for these techniques remains mostly unexplored. Governmentality refers specifically to how a field is organised—by whom, for what purposes, and through which means and discourses (Foucault, 1991). As stated above, ML and AI are based on a model of open science and innovation, in which public actors—such as governments and universities—are deeply implicated (Etzkowitz and Leydesdorff, 2000). One problem, however, is that while the algorithms themselves may be openly available, the datasets on which they rely for implementation are not—hence the massive advantages for private actors such as Google or Facebook, who control the data as well as the economic resources to attract the brightest students in the field. But there is more: this same open innovation model makes possible the manufacture of military AI with little regulatory oversight, as in the case of China, whose government is currently helping to fuel an AI arms race (Simonite, 2017). What alternatives or counter-powers could be imagined in these circumstances? Could ethical considerations stand alone without a proper and fully developed critical approach to ML and AI? This workshop will try to address these pressing and interconnected issues.

We welcome all submissions which might profitably connect with one or more of these three categories of epistemology, agency, and governmentality, but we also welcome other theoretically and/or empirically rich contributions.

Interested scholars should submit proposal abstracts, of approximately 250 words, by 11:59pm EDT on June 30th, 2018 to CriticalAI2018 [at] gmail [dot] com. Proposals may represent works in progress, short position papers, or more developed research. The format of the workshop will focus on paper presentations and keynotes, with additional opportunities for group discussion and reflection.

This preconference workshop will be held at the Urbanisation Culture Société Research Centre of INRS (Institut national de la recherche scientifique). The Centre is located at 385 Sherbrooke St E, Montreal, QC, and is about a 20-minute metro ride from the Centre Sheraton on the STM Orange Line (enter at the Bonaventure stop, exit at Sherbrooke), or about a 30-minute walk along Rue Sherbrooke.

For information on the AoIR (Association of Internet Researchers) conference, see https://aoir.org/aoir2018/ ; for other preconference workshops at AoIR 2018, see https://aoir.org/aoir2018/preconfwrkshop/.

Organizers: Jonathan Roberge (INRS), Michael Castelle (University of Warwick), and Thomas Crosbie (Royal Danish Defence College).