I was struck by this phrase by Ivan Illich in Deschooling Society, conveying his scepticism of the promise of educational technology in the 1970s. On pg 67 he writes of an “attempt to escalate an old dream into fact, and to finally make all valuable learning the result of professional teaching”. It left me wondering whether the contemporary ed tech bubble can be understood in terms of a dream to reduce learning to the professional: stripping teachers out of the process and replacing them with platforms which facilitate ‘personalised’ learning, analysed and overseen by a cohort of professionals. It would represent what Emmanuel Lazega describes as the final victory of bureaucracy over collegiality, as the agency of teachers which school has long depended on is finally dispensed with so that the logic of schooling can be reduced into the platform and the class of engineers who maintain it.

lights are out, phones are dead
and I’m the only thing that’s runnin in this city
except for the clouds
and man they’re comin down
if i knew my way around wouldn’t feel so dizzy
where’s tele? nobody can tell me
i don’t speak a lick of that language and got a slippery memory
if i spelled it all out on my arm, only if
but i didn’t so i think get a grip kid, deal with it
baby’s waiting for a ring
wont settle for the substitute excuse that’s forming
i got a complicated case of escapism
for her i try to rewire my nature
too tired to wake her up
odder that artificial calm she was on
drug-induced future that slipped out of her palms
seductive rain dancer, she thinks i’m waterproof
like superman doesn’t need a roof over his head
when i come home to roost i need truth to hold in bed
but i’m seeking salvation in a booth
and the phones are dead
and the lights are out
and i’m the only thing livin in this ghost town
except for the clouds, and man they’re comin down
if i knew my way around by now i’d be bound for home
blackout on white night in rome
blackout on white night in rome

My notes on Lupton, D. (2018). How do data come to matter? Living and becoming with personal data. Big Data & Society, 5(2), 2053951718786314.

In this paper, Deborah Lupton extends her work on the quantified self into a broader theorisation of how people come to live with data. It foregrounds the voluntary dimension of this process, in which “many novel digital technologies offer any interested person the opportunity to document, monitor and measure details of their bodies”, equipping them with capacities which were previously confined to specialised instruments and trained experts (1). These techniques render visible what was previously unseen. From 2:

Elements of their bodies that people may not otherwise have considered to any great extent – the number of steps they take per day, their sleep patterns, the kilometres and geographical locations they move through, their brain waves, moods and so on – are brought into sharp relief.

This renders the body as a series of interlocking digitised elements which “demand new ways of interpreting these signs and signals of bodily function and movement” (2). Her focus is on the existential and epistemic predicament this confronts people with: how do we make sense of this information, determine its value and put it into practice? This takes place in a context where there is pervasive cultural pressure to know our bodies better and live in a way deemed more efficient. Her concern is with how these data are experienced and understood in everyday life.

This has been studied through the frame of data sense-making i.e. how people engage with and learn from information. It has been tied to data literacy, a concern for capacities to select, analyse, visualise and learn from data. Lupton highlights how these approaches tend to focus on cognitive and technical forms of interpretation, ignoring the role of the situated body as a means through which we learn. She uses the concept of data sense to this end, which incorporates the “entanglements of the digital sensors with the human senses in the process of sense-making” (3) with the body as the site of sensation and response.

This project draws on a range of approaches from agential realism, new materialism and the anthropology of material culture. They share a more-than-human approach which “demands that the human subject is always considered permeable and open to the material world rather than closed-off and contained” (4). These approaches have the following characteristics, described on 5:

  • an approach that recognises that humans and non-humans are entangled in hybrid, unstable and generative ways;
  • the importance of considering the distributed agency and vital capacities (‘thing-power’) of human-nonhuman assemblages;
  • an emphasis on the embodied, sensory and otherwise material nature of meaning, knowing, growing, perceiving and making as part of human embodiment;
  • the changing meanings of artefacts as they move into different assemblages and the work required to articulate these assemblages; and
  • the importance of identifying and tracing the ways in which humans and nonhumans are intermeshed, the enactments and practices that are involved, and the effects of these on human lives.

These inform Lupton’s conception of a human-data assemblage within which data learns about humans but humans in turn “may find themselves asking to what extent their data speak for them, and to what extent their data are different from other elements of embodiment and selfhood” (5). Digital devices and software for personal data necessarily seek to make data intelligible to users. Through such intelligibility personal data has agency in relation to users, exercising an influence over their behaviour and leading them to do things on the basis of these new understandings. But data can also ossify if it’s not found useful or actionable, freezing into a latent state which could be rendered lively again at a later date. Her work on self-tracking provides an illustrative example of why these distinctions matter. Described on 6-7:

My research participants often described collecting and reviewing data about their bodies as generating agential capacities that are suffused with affect. These data can motivate them, encourage them to move their bodies more, persist with weight-loss efforts or self-management of chronic conditions. The ‘numbers’ can make them feel good if they demonstrate that people are achieving goals set for themselves, or if the data demonstrate good health or higher levels of fitness. Positive feelings can be generated by the buzzes, flashing lights, badges and other notifications that communicate a goal has been achieved. Alternatively, however, biometric data can have demoralising effects, generating disappointment, frustration, guilt and anger. Notifications can be experienced as annoying or pestering, making unreasonable demands.

To make sense of data involves “connecting the metrics with the lived sensory experiences of one’s body and the other elements that are important in data sense-making” (7). This is a contextualised process of building a human-data assemblage, shaped by the environment but also contributing to it. This stress on the meaning of the context is crucial if we want to understand, for instance, how particular professional groups might engage in specific ways with personal data. As Deborah puts it on 7:

When people review their data, they actively relate them to the contexts in which they were generated. People consider such aspects as the time of day, the weather, how their bodies felt, whether they were lacking sleep, were hungry, feeling stressed, drank too much the night before, what place and space they were inhabiting or moving through when the information were generated.

Oddly though I think an individual is reproduced here, in spite of the theoretical sources. My point of departure is the claim that personal “data are meaningful because they are about and for them” (8), which I think is insufficient to account for mattering in Andrew Sayer’s sense. Mattering always points beyond the relationship between the agent and what matters, something which I’m not sure the concept of the assemblage can account for because it squeezes human and data together in a dance of co-constitution. This is something I’ve tried to analyse in Katherine Hayles’s work and I’m thinking I need to seriously explore the issue I persistently see in co-constitution theorising, which I take to be a novel form of central conflationism in Margaret Archer’s sense.

This is a fascinating analysis of demographic trends in the UK, considering the implications of a coming expansion of 18-year-olds for UK higher education in the 2020s. Extrapolating forward from current application rates, 50% of this cohort will be applying to go to university and the system is currently ill-equipped to absorb this expansion, particularly given that central planning has been precluded by the ‘reforms’ of recent years:

We will have two more sharp falls in the 18 year-old population of around 2 per cent – this cycle and 2020. Then the cohort grows again. This growth is strong, often 3 per cent a year. And it is consistent, up year after year. This matters, as it makes the cumulative rises large and unrelenting. The five-year rate of population growth reaches 17 per cent in the mid-2020s. Between 2020 and 2030 the population increases by 27 per cent. This trajectory equates to almost a million extra 18 year-olds over the decade.
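
The arithmetic here is worth making explicit, since modest annual rises compound into the large cumulative increases the passage describes. A minimal sketch, assuming a hypothetical 700,000-strong 2020 cohort and a flat 2.4% annual growth rate (the actual projections vary year by year):

```python
# Compound an assumed flat growth rate over the decade and track the
# cumulative excess of 18-year-olds relative to the 2020 baseline.
# Both figures are illustrative assumptions, not the analysis's own data.

base_cohort = 700_000   # hypothetical size of the 2020 cohort
annual_growth = 0.024   # assumed average yearly growth rate

cohort = float(base_cohort)
extra = 0.0             # cumulative extra 18-year-olds versus 2020
for year in range(2021, 2031):
    cohort *= 1 + annual_growth
    extra += cohort - base_cohort
    print(f"{year}: {cohort:,.0f} ({cohort / base_cohort - 1:.0%} above 2020)")

print(f"Cumulative extra 18-year-olds over the decade: {extra:,.0f}")
```

On these assumptions the cohort ends the decade roughly 27% larger than in 2020 and the cumulative excess approaches a million, which is how a seemingly modest 2-3% annual rise becomes “large and unrelenting”.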

My notes on Latz, A. B. (2015). Gillian Rose and Social Theory. Telos, 173, 37-54, and Fuller, B. W. (2018). Back to Hegel? On Gillian Rose’s critique of sociological reason. The British Journal of Sociology, 69(2), 265-285.

The figure of Gillian Rose was a continual presence in the Sociology department at Warwick in the time I was there, from 2007 to 2014, with the main seminar room named after her and many staff members who had been close to her. However I’d never really engaged with her work until now, so I’m pleased I’ve finally got round to reading these two papers. The first, by Andrew Brower Latz, explores her relationship to social theory, situating her in relation to the Frankfurt School, of which she identified herself as a student. Her work emerged in a context where “a focused search to develop a better grasp and articulation of sociology and its logic” (38) was under way in response to the failure of classical sociology to identify its own methodological specificity. Latz takes critical realism, structuration theory and Bourdieu’s sociology as constructive responses to this underlying problem.

It is in this context that Rose’s Hegelian work was so significant, providing a means to address “perennial issues in social theory, namely: the relationship of logic to the sociology of knowledge; contradictions and antinomies; emergence; and the possibility of a totality” (38). She engaged with Hegel’s speculative philosophy as a radicalisation of the Kantian critique of reason (theory) by reason (meta-theory). In doing so, it provides a way to approach the social character of knowledge which takes us beyond the post-Kantians, for whom objectivity (“the applicability of our concepts to the external world, which creates the possibility of true or false judgments about the world”, 39) is grounded in sense data received through intuition. If I remember this correctly, our sensory apparatus is receptive to the world and we know that world through the representations that receptivity provokes in our mind. We do not encounter the object but only this (involuntary) representation of it. The transcendental form of inquiry was retained by neo-Kantians but the transcendental idealism was rejected, leaving the subject locked within thought without the (indirect) escape which the latter provided. As Latz puts it, “The touchstone for a system of thought was thought itself, in a way that tended to insulate thought from receptivity to thought’s other” (40).

For Rose the recurrence of positivism within sociology has one of its sources in the lack of appreciation by sociologists of the transcendental form of their reasoning. If I understand correctly, this manifests itself as an evasion of the relationships between theory and meta-theory, mind and world, theory and evidence (to use Latz’s terms): the world is either collapsed into our experience of it or our experience is imputed to a world deemed to be devoid of intrinsic meaning. As Latz says later, drawing on a reading of Rose by Simon Jarvis, “Sociology’s danger is twofold: on the one hand, imposing a grid or pre-theorized schema on society instead of allowing experience to speak; on the other, imagining that simply pointing to experience will do” (53). For Rose speculative philosophy provides a way out of this impasse as “transcendental philosophy performed with maximum awareness of its own workings, which is gained through a historical perspective” (42). It entails a grappling with the absolute – “the unity of finite and infinite, of law and ethics, the full achievement of ethical life” (42) – but as a regulative ideal which establishes the unattainable horizon of our thought. This goes hand-in-hand with her “emphasizing the moments of incompleteness, provisionality, tension, and even fragmentation within thought and society” (43).

Latz considers the significance of her work in relation to logic & the sociology of knowledge, contradictions & antinomies, emergence and totality. He does this through considering sociological studies which either repudiate or embody key features of her approach, namely thinking “in terms of contradictions and determinate negation, must use increasingly comprehensive levels of explanation and historically informed analyses, and be aware of its own role within its object of study” (47). Perhaps the key point in this discussion is the relationship between theory and meta-theory: how theorising in pursuit of social explanation is itself theorised and the practical implications of this. How do we tie substantive and meta-theoretical considerations together in the same undertaking? If I understand correctly, the point is we are always doing this regardless of whether we are aware of it. If we’re not aware of it, we’re going to do it in a way that impedes our mode of explanation. So Rose’s approach is about how to do this in a way which is adequate to the character of social reality. It is a form of theoretical reflexivity, for lack of a better term. That at least is how I understand the gist of Latz’s discussion.

I was particularly interested in her conception of totality: “Since for Rose no single view of the totality is adequate to it, various perspectives on it are required” (52). It always evades us while remaining the horizon of what we are doing and why. A further point this left me reflecting on is how invoking experience can itself render that experience abstract by cutting it from the world in relation to which that experience emerges and which accounts for the meaning and content of it. It emerges from the failure to link theory and meta-theory, facing the challenge of for instance linking mind and world rather than simply allowing that to fade into the background in a way that dichotomises lived experience and brute facts. As Latz puts it, “Rather than an abstruse methodological pedantry, issues of meta-theory can often have theoretical effects, including philosophical conclusions drawn from sociological studies, the nature of the explanations given, or the status afforded to those explanations and the theories whence they derive” (54).

The second paper by Brian W. Fuller is more explicitly concerned with the subdued reception of Rose’s work and its relationship to the wider neglect of Hegel within the social sciences. He highlights how “Rose argues that we sociologists have been systematically misunderstanding ourselves, and allowing this to happen – taking refuge, either implicitly or explicitly, in the notion of Kantian limits to our understanding” (266). If we remain locked within the Kantian problematic then we are confined to the “social investigation of unknowable objects” (266). This entails recognising the brokenness of modern thought, a thematic I’m fascinated by without really understanding and want to return to. I assume it relates to the post-Kantian estrangement of thought from world, cutting it off from what it is, but I’m far from certain based on what I’ve read so far. It is what Fuller later calls “the strict dichotomy between cognition and its objects” which begins in Kant and is preserved in post-Kantian philosophy (268). Rose’s concern is to acknowledge this and to begin with it in the sociological enterprise. Its failure can be seen in a dichotomisation which pervades sociological thought. From 269:

Her Hegelian move is to grasp the two paradigms as aspects of one whole. Sociology has repeated the mistakes of neo-Kantian philosophy by bifurcating into two contradictory theoretical paths, each of which represents one-half of a linked pair, and which consequently cannot be comprehended in isolation. Durkheim’s structural approach and Weber’s interpretivism each postulates a precondition and a conditioned; though their perspectives are opposite, neither can grasp the transition between spheres. Employing Hegel directly, Rose declares the former approach ‘empty’, while the latter is ‘blind’ (1981: 214). In sum, Rose’s complaint is that sociology is trapped within dichotomies which it can never overcome, because they are products of its own mode of thinking.

The confrontation with the relationship between theory and meta-theory opens up the possibility of transcending dichotomies in their application. I’m trying to understand how this relates to the approach of Margaret Archer, whose precondition of analytical dualism entails thinking with dichotomies as ways in which we can unpick the relationship between heterogeneous elements in the explanation of social outcomes. But what Archer calls explanatory methodology as a site for leaving these dichotomies behind would presumably be to effectively reproduce them from Rose’s point of view. As Fuller puts it, “the ‘speculative’ direction she suggests is designed to help uncover the meaning and significance of such recurring problems and limitations, opening up potential for transformative practice” (270). This I would argue is what Archer actually does; I’d be interested to encounter any sustained engagement between these two sets of ideas. It certainly falls short of Rose’s lofty, somewhat existential, approach to theoretical inquiry. From 270:

A second difficulty of explication is that – according to Rose’s interpretation – there is no way to adequately present the Hegelian speculative position in an abstract and concise fashion, without misunderstanding it. Hegel’s approach to philosophy intends to teach a new (speculative) way of thinking and experiencing, which requires continual ‘re-cognizing’ of one’s current position. Rose takes Hegel’s philosophy seriously as a ‘way of despair’, and the process of educating consciousness necessarily appears difficult, aporetic, or else impossible. This is not an illusion to be overcome, nor a dead end, but a process that will require failure. Accordingly, for Rose, abstract explication is out of the question, and the reader must learn through failure.

Incidentally, could this not be a meta-theory of what Daniel Little calls being an open-source philosopher? This could provide a profound intellectual-existential rationale for the virtues of thinking out loud, rather than thinking being an internal process contained until moments of careful and purified expression. More generally, Rose’s project involved a move beyond reflective thought, in the process recognising the dichotomies it creates as both its own creation and “part of a larger whole, a conceptual and social-historical whole” (271). This move has its origins in the Hegelian turn beyond Kant’s restriction to the boundaries of the finite, “insisting that the whole point of philosophical thought is to be speculative, to attempt to think the infinite, to embrace the contradictions produced by reflection” (271). Philosophical reflection merely analyses, categorises and schematises its contents, ordering the finite rather than understanding its coming to existence and totality within which it happens.

The possibility for overcoming it rests on a self-awareness of the reflective position, understanding its own activity as the origin of reflective understanding. In this sense, it involves taking Kant further than recognising the dependence of objectivity upon the subject. We typically see appearances as grounded in essences but the Hegelian project was about recognising this as a posited dichotomy, in order to grasp the unity of the appearance and the essence. My crude understanding of this, which I’m not very confident about, sees it as a vast multilayered mess of becoming which is parcelled out into discrete terms by the activity of cognition. Speculative philosophy involves tracing out how these discrete terms and their static relations have been created through reflective activity, moving upwards towards a totality we can never reach because our striving is part of the whole we are trying to apprehend from inside the mess. This has implications for ethical life. From 274:

In his practical philosophy, Kant derived a set of universal principles to govern the moral subject, arguing that moral action must be guided by the universalizability of a potential action. Hegel’s critique of Kantian moral philosophy is similar to his account of speculation above. He claims that Kant’s practical philosophy cannot adequately comprehend humanity nor society in its historical concreteness, since it deals only in abstractions and universals. The critique emphasizes two points. First, he objects to the abstract separation of theoretical and practical reason, and consequently, of the realms of necessity and freedom. For Kant, for example, the human will exists in the sphere of noumena, independent of the natural, empirical world in which human subjects reason and act. Second, Hegel criticizes Kantian practical philosophy for being too ‘formal’. The formality of the moral law means that it cannot be derived from the concrete, historical world, but only transcendentally from reason. From a Kantian view, we can only understand ethical life abstractly.

I think this is akin to the understanding Alasdair MacIntyre expresses in his focus on moral particularism. We are always already inside ethical community and abstracting the individual from that community in the Kantian manner will capture nothing of the concrete reality of moral existence. It ties to Rose’s reading of Hegel’s phenomenological method, which unites thinkers I’ve been drawn to who in different sorts of ways seek to concretise subjectivity without reducing it to the dominion of first person experience. From 277:

The phenomenological method is then a way of presenting speculative experience. It treats experience concretely, in its social-historical particularity, and hence allows ‘us’ to recognize our own ‘determination’. In other words, it affords a conception of consciousness not simply in its mode of being or existence, but in ‘actuality’ – consciousness at work in the social world.

Fuller’s discussion of the relationship between philosophy and sociology is extremely interesting. From 278:

Sociology has long had a difficult relationship with philosophy, beginning with the attempts of the most prominent classical theorists (Marx, Weber, Durkheim) to articulate a vision of sociology which goes beyond philosophy in some important aspects. Although perhaps the more common claim remains that sociology has in some sense transcended philosophy or made it obsolete, there are periodic calls to reintroduce philosophical perspectives into social science.

This ends with a really interesting critique of Daniel Chernilo’s project of philosophical sociology, arguing it merely reiterates the dichotomies it takes as its starting point while remaining within the horizons of existent sociological reason. Counterposing the philosophical and the sociological doesn’t help us better understand the relationship between them, as much as inviting this dialogue might in itself help enrich the practice of each. His final discussion of the horizons of Rose’s thought is fascinating. From 280:

Likewise we cannot use Rose to overcome the contradictions of structure and agency that so many have struggled with for so long. Rose did not succeed where contemporary theorists failed, any more than Hegel ‘solved’ the problems of Kantian philosophy. We need to tell the story of the conceptual oppositions and antinomies as they came to be, to structure our current social theoretical world. These antinomies are not just distractions, but have their own particular social history, which needs to be comprehended and presented phenomenologically, in order to not result in reification.

In this he departs from Latz, who he frames, unfairly I think, as claiming that Rose’s speculative approach can be used to avoid social theory’s missteps and contradictions. I read this instead as Latz being concerned with the practical application of Rose’s approach by working sociologists and Fuller being concerned with the integrity of Rose’s approach as speculative philosophy.

In the last few months, I’ve been thinking a lot about the popularity of social media firms amidst mounting scandal. It has often seemed that there’s a new common sense opening up in which these firms are seen as fundamentally untrustworthy, built around a business model which means the scandals they generate are a feature rather than a bug. But how widespread is this point of view? There is low trust in social media, with the Edelman Trust Barometer finding 34% trust in Europe and the US/Canada, markedly lower than trust in traditional media. In Zucked, Roger McNamee reports on a corpus study looking at associations with tech firms. From loc 3298-3313:

To get a sense of the impact, I asked Erin McKean, founder of Wordnik and former editor of the Oxford Dictionary of American English, to study changes in the nouns and adjectives most frequently associated with each of the largest tech companies: Apple, Google, Amazon, Facebook, and Microsoft, plus Twitter. Prior to the 2016 election, the tech leaders enjoyed pristine reputations, with no pejorative word associations. For Google, Amazon, Apple, and Microsoft, that is still true. For Facebook, things have changed dramatically. The word “scandal” now ranks in the top 50 nouns associated with Facebook. “Breach” and “investigation” are in the top 250 nouns. With adjectives the situation is even worse. Alone among the five tech leaders, Facebook had one pejorative adjective in its top 100 in 2015–2016: “controversial.” In 2017 and 2018, the adjective “fake” ranked in the top 10 for Facebook, followed by “Russian,” “alleged,” “critical,” “Russian-linked,” “false,” “leaked,” and “racist,” all of which ranked in the top 100 adjectives. Apple, Google, Amazon, and Microsoft do not have a single pejorative noun or adjective on their lists. Twitter has two nouns on its list that may or may not imply brand issues: “Trump” and “bots.” The study was conducted using the News on the Web (NOW) corpus at Brigham Young University. The top 10 US sources in the corpus, ranked by number of words, are Huffington Post, NPR, CNN, The Atlantic, TIME, Los Angeles Times, Wall Street Journal, Slate, USA Today, and ABC News.
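
McNamee doesn’t detail McKean’s method beyond ranking the nouns and adjectives most frequently associated with each company. As a hedged illustration of how such a ranking might be computed, here is a minimal sketch using NLTK; the function name, window size and `corpus_sentences` variable are my own assumptions, not the study’s:

```python
# A rough collocation count: rank the nouns and adjectives that appear
# within a small window of a brand name across a corpus of sentences.
# Assumes NLTK with the 'punkt' and 'averaged_perceptron_tagger' data.

from collections import Counter
import nltk

def top_associations(sentences, brand, window=5, n=10):
    """Rank nouns/adjectives occurring within `window` tokens of `brand`."""
    counts = Counter()
    for sentence in sentences:
        tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
        hits = [i for i, (tok, _) in enumerate(tagged)
                if tok.lower() == brand.lower()]
        for i in hits:
            for tok, tag in tagged[max(0, i - window):i + window + 1]:
                # Penn Treebank tags: NN* = nouns, JJ* = adjectives
                if tag.startswith(("NN", "JJ")) and tok.lower() != brand.lower():
                    counts[tok.lower()] += 1
    return counts.most_common(n)

# e.g. top_associations(corpus_sentences, "Facebook") might surface
# words like "scandal" or "fake" near the top of the ranking.
```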

But are they still popular? In an important sense, the evidence would suggest yes, in so far as neither user growth nor engagement rates seem to be in decline. Does this behavioural popularity co-exist with an affection for the brands themselves? I’d love to know of any research on this if readers have encountered it. But what seems clear is that continued use can co-exist with a pervasive sense that what is being used is not trustworthy. This seems like a potent psychic mix to throw into the already strange relationship which many of us have with these platforms.

This is superb from Richard Seymour on the ceaseless demands which contemporary capitalism places on us and the psychic costs they create. I’ve found his idea of the Twittering Machine incredibly thought-provoking for making sense of the sprawling entity which social media platforms are tangling us up within:

One of the most exhausting features of late capitalism is just how constantly it interpellates us. There may have been a time when ‘demand’ was something communicated unconsciously by a handful of adults in one’s emotional life. Today’s homes are penetrated by more and more screens and electronic devices, filaments of capitalist ideology. When I was a child, being ‘hailed’ meant one of my parents calling my name, usually in a way that suggested I was in trouble. Children growing up today are constantly being hailed by notifications, updates, alerts. Maybe the alerts don’t suggest that one is in trouble, but they demand immediate action. Even the little red number on top corner of social media or email app is a constant jangle of the nerves, which it is almost impossible to ignore. The imperative nature of these notifications cumulatively amounts to a demand on the subject: stay connected, be reactive, be productive, be ‘sociable’. And as kids grow up today, they’re growing into a 24/7 work culture. The drive is arguably being digitised, subsumed more fully into the logic of computational capital than that of Oedipal patriarchy. If the drive is a sort of mental writing, a montage of symbolic elements, it is surely now composed in part of elements drawn from a written network of signifiers.

So what do you do when you just can’t? When demand is too much? When the interpellations are so persistent that they won’t even let you sleep? When the drive is what is keeping you constantly in a state of distracted, exhausted, pseudo-alertness? You find another way of working the drive. If your relationship to demand is problematic, you try to silence it, evade it, short-circuit it. Knock it out with alcohol or heroin. Short-circuit it by staging a reckoning right here, right now, on this gambling machine. As if you’re asking, with each bet, “what is it you want from me?” As long as you’re in that sequence of anticipatory moments, demand is muted. Even at the cost of ruin.

My notes on Mantello, P. (2016). The machine that ate bad people: The ontopolitics of the precrime assemblage. Big Data & Society. https://doi.org/10.1177/2053951716682538

Since 9/11 the politics of prediction and risk have created an alliance between security agencies, technology firms and other commercial actors which seeks to create a precrime assemblage: the first generation sought to identify threats through data mining (“search habits, financial transactions, credit card purchases, travel history, and email communications”) but the next generation are “becoming intelligent assemblages capable of integrating data from a multitude of nodes in order to foresee and preempt harmful futures” (pg 2). These advances are being facilitated through cloud computing, machine learning and limitless storage.

The beta versions of these assemblages are being tested in real world situations, rendering it urgent for us to understand their implications. The first is what it means for criminal justice as a whole when the focus is on the anticipation of crime rather than dealing with its occurrence after the fact. The second is the expansion of surveillance into everyday life, driven by the public-private alliances which are setting the agenda. The scope of surveillance is increasing but so too is civic participation in it, driven by gamified mechanisms which “encourages citizens to do the securitization footwork of the state by offering them the opportunity to participate in do-it-yourself, reward-centered, pro-active, networked and, at times, gamified versions of automated governance” (pg 2).

Peter Mantello argues that the allure of technological innovation is legitimating these developments, promising greater impartiality and efficiency, while the reality of their operation is extending juridical reach in order to identify non-immediate threats to the established order. The pre-crime assemblage will function “to preserve the domains of its masters, who will control immense existential and predictive data that will allow them to shape public perceptions, mold social behavior, and quell possible opposition, thereby ensuring the exception incontrovertible and infinite life” (pg 2).

He uses Massumi’s conception of ontopower to theorise this process, “a mode of power driven by an operative logic of preemption is spreading throughout the various structures, systems, and processes of modern life” (pg 3). Pre-emption itself is long standing but the preoccupation with speculative feelings of non-imminent threats was, he argues, born out of the reaction to 9/11. If I understand correctly, the point is that risks are increasingly pre-empted rather than managed, with risk management becoming an anticipatory lens through which actors and organisations proactively prepare for imagined futures.

Exceptionalism becomes legitimate under these circumstances, as anticipated threats are used to justify actions which would otherwise have been regarded as illegitimate. A mechanism like the “public safety orders” enacted by the New South Wales police expands the principle of anti-terror policing to civic law enforcement: “they shift the balance further away from the principles of due process where people are innocent until proven guilty and more toward a new era where crimes are committed before they happen, citizens are disappeared without recourse to defense, and where guilt and imprisonment are based on suspicion, rumor, association, or simply left to the intuitive ‘gut feeling’ of police officers” (pg 4). This goes hand-in-hand with an affirmation of the unpredictability of the future. Randomness and uncertainty mean that crimes cannot be avoided, but this is why anticipatory work is seen as so important to minimise the threats on the horizon.

This anticipatory work tends to diffuse responsibility into an apparatus of knowledge production, identifying networks of connections or regional hot spots which become the locus of an intervention. A whole range of assets are deployed in the preparation of these interventions, as described on pg 5 in the case of Hitachi’s Public Safety Visualization Suite 4.5:

This includes mining data from an array of various nodes such as remote video systems (hotels/city streets/commercial and private properties/transportation lines), gunshot sensors that alert CCTV cameras, vehicle license plate recognition systems, wireless communications, Twitter and other social media, mobile surveillance systems as well as useful data from smart parking meters, public transit systems, and online newspapers and weather forecasts.

Data visualisation plays a crucial role in this by “compressing vast amounts of invisible data into visible signifiers” (pg 5). However the uncertainty, ambiguity and construction which characterises the data itself is lost in the apparent self-evidence of the ensuing representations. The navigability, scalability, and tactility of the interface then mediates interaction with this experienced reality. The performative power of the apparatus falls from view, as diverting police resources to ‘hotspots’ only to discover ‘more crime’ there (either comparable to what could be found elsewhere or encouraged by the aggravating factor of heavy-handed policing) comes to function as a legitimation of the apparatus itself. The approach also compounds existing inequalities through its reliance on historical data about patterns of arrest in order to predict future offending.
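
This feedback loop is easy to demonstrate. Here is a toy simulation under invented assumptions (two districts with identical true crime rates, patrols allocated in proportion to historically recorded crime, and crime only recorded where a patrol is present to observe it); it is a sketch of the general mechanism, not Mantello’s own model:

```python
# Two districts share the same underlying crime rate, but district A starts
# with slightly more recorded crime. Because patrols follow past records,
# the records come to reflect where patrols went rather than where crime
# happened, 'legitimating' the apparatus with data it produced itself.

import random

random.seed(0)
TRUE_RATE = 0.3                  # identical true crime rate in both districts
recorded = {"A": 12, "B": 10}    # A begins with a small accidental lead

for day in range(1000):
    total = sum(recorded.values())
    for district in recorded:
        patrol_prob = recorded[district] / total   # patrols follow past data
        crime_occurs = random.random() < TRUE_RATE
        observed = random.random() < patrol_prob   # only observed crime counts
        if crime_occurs and observed:
            recorded[district] += 1

print(recorded)  # recorded crime diverges between the districts even though
                 # their true rates are identical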

What I found fascinating was the slippage in the software. An example on pg 6 concerns ‘at risk’ lists, intended to be the basis for social service interventions prior to any policing action, instead being used as target lists for people who were assumed to be likely offenders. This on-the-ground slippage highlights the importance of understanding the organisational context within which new tools are deployed, as a means to understand how their original intentions may mutate in the context of application.

The terrifying turn underway is from the deployment of past data to the harvesting of present data in real time. As Mantello puts it, this involves “the real-time extraction of personal data from an individual’s daily life—monitoring their patterns, routines, habits, emotional tendencies, preferences, idiosyncrasies, and geospatial coordinates” (pg 7). Enthusiasts claim that the broader the data that is harvested, the easier it will be to identify ‘criminal signatures’ at ever earlier points in time. This converges with what Zuboff has called surveillance capitalism, in which behavioural data is leveraged to persuade rather than simply to predict. How might this modus operandi be enacted as part of the pre-crime assemblage? There is a truly dystopian horizon to such a project, described on pg 7:

Yet there is also the distinct dystopian possibility, in its never-ending ontopolitical pursuit to colonize and regulate all aspects of social life, that it may suppress dissent and discourage nonconformist thought or behavior. Already we are seeing such practices occur today with the increasing trends of self-censorship in social media due to fear of state surveillance and authoritarian reprisal.

The gamified form this takes can be seen in Sesame Credit, produced in collaboration with Alibaba, as part of the early stages of China’s opt-in social credit system, with rewards on offer for those who perform in ways that meet expectations. But as this becomes mandatory in 2020, we can expect it to go hand-in-hand with the proactive avoidance of people deemed to have poor social credit and of potential sites where negative social credit behaviours may thrive. The author also considers the example of opt-in black boxes in cars, where rewards are on offer for those who agree to such monitoring but which may eventually be rolled out for everyone as part of a transformation of insurance. The City of Boston security app, Citizen Connect, offers ‘street cred’ recognition points for repeated contributions: “users who actively report on suspicious persons, ongoing crime, random acts of violence, or municipal infrastructure hazards get promoted to special ‘‘patrols’’ where they earn special badges of civic distinction” (pg 9).

A recurrent theme in stories about Facebook is the privilege which Mark Zuckerberg accords himself which his radical transparency denies to others. My favourite example had been the opaque meeting room hidden away at the back of his glass-fronted office, allowing him to retreat into privacy while everyone around him stands exposed. But this example from Roger McNamee’s Zucked loc 2955 is even better:

One particularly awkward story that week revealed that Facebook had been deleting Zuck’s Messenger messages from the inboxes of recipients, a feature not available to users. Facebook initially claimed it made the change for security purposes, but that was patently unbelievable, so the next day it announced plans to extend the “unsend” feature to users.

Before I got a smart phone, I used to wander around talking to people all the time. I began to fall out of this habit during my mid 20s and getting my first iPhone was the nail in the coffin. Now I’m more likely to go out of my way to avoid talking to people than I am to start a conversation with a stranger.

This evening I was sitting on a train from Manchester to Cambridge, working on my forthcoming book about platform capitalism, when someone tried to start talking to me. It turns out he was a co-founder of a small fitness platform startup and we had the most thought-provoking conversation about exactly what I’d been writing about in the abstract only minutes earlier.

It was a reminder of why conversations undertaken at random can be so welcome and it left me resolving to make more of an effort to get into conversations, in the way that used to come relatively naturally to me yet now leaves me feeling at least initially stilted.

My notes on Caplan, R., & Boyd, D. (2018). Isomorphism through algorithms: Institutional dependencies in the case of Facebook. Big Data & Society, 5(1), 2053951718757253.

Are data-driven technologies leading organisations to take on shared characteristics? This is the fascinating question addressed in this paper by Robyn Caplan and danah boyd, which they begin with the example of news media. The popularity of social media platforms as intermediaries has forced many news media producers to change their operations, increasingly producing with a view to popularity on these platforms. As they put it, “these platforms have upended the organizational practices of news-producing platforms, altering how both the newsroom and individual journalists operate” (2). They use the concept of isomorphism to understand how “algorithms structure disparate businesses and aims into an organizational field, leading them to change their goals and adopt new practices” (2). This is a process of homogenisation, as organisations reconstruct themselves into a field orientated around the assumptions embedded into the mediating platform. The ensuing ambiguity has regulatory consequences, as social media platforms are not straightforward media actors but nor are they mere intermediaries. By theorising algorithmic mediation as akin to bureaucratisation, it becomes easier to identify the precise character of the role of platforms within it. It also makes clear the continuities with earlier isomorphic processes, for instance as corporate software platforms introduced common features to organisations.

The roots of this connection are deep. They argue that “algorithms that serve to pre-process, categorize, and classify individuals and organizations should be viewed as extensions of bureaucratic tools such as forms, that have been associated with the state in the past” (3). Software like Lotus 1-2-3 and Microsoft Office restructured business activity through the affordances it offered to digitalise bureaucratic processes, and algorithmic technologies should be seen as a further extension of this process. The neutrality which animated the promise of bureaucracy is also often expressed in the belief that algorithmic judgement will negate the role of subjectivity and bias in decision-making processes. This is obscured by the familiar black box of the algorithm but also the mythology of its uniqueness, seeing it as something distinct from previous organisational processes. However if we see algorithms as organisational phenomena then the problem comes to look quite different, simultaneously more straightforward but also more challenging because the problems will likely spiral outwards across dependent organisations.

They use DiMaggio and Powell’s concept of isomorphism, which considers how a common environment can lead otherwise different units of a population facing that environment to come to resemble one another. For organisations this occurs through one organisation becoming dependent on another, with the expected degree of resemblance tracking the degree of that dependence. For instance in the case of Facebook’s newsfeed, the concept of what is ‘relevant’ has been redefined by the vast size of the audience whose access is mediated through this mechanism. The dependence of the news media on that mechanism means they come to reproduce its characteristics, increasingly operating with a view towards metrics like clicks, likes and shares. The early winners in the Facebook ecosystem were publishers like Buzzfeed and Upworthy who “subsumed their own organizational practices to the logic of Facebook’s algorithms” (5). But Facebook’s attempts to modulate this mechanism in order to produce what they deemed better quality results inevitably lead the actors dependent upon it to make adaptive changes in response to these modulations. Mimesis thrives in this environment, as they explain on pg 6-7:

Changes stemming from coercive forces, especially when frequent, lead to an environment of uncertainty that prompts dependent organizations to learn from other dependent organizations that have successfully conformed to the structuring mechanisms. This process of ‘‘mimesis,’’ or imitating models for success, is another process DiMaggio and Powell (1983: 151) argue will induce similarity across an organizational field. In this sense, the dominant organization’s incentives or goals become embedded across an industry through the borrowing of practices that lead to success over the network. In the case of Facebook, this was seen in the adoption of data-driven metrics and analytics into newsrooms, as well as the growth of a new set of intermediaries that were fed directly by the Facebook API, whose role it was to analyze and communicate Facebook metrics back to publishers.

A further ecosystem of intermediaries thrives under these circumstances, as new players emerge who help the firms concerned address their common problems. These responses to uncertainty are driven by a concern to “demonstrate to others that they are working to change their practices to be in-line with those of the dominant organization” (7) as well as increasing possibilities for success. The discussion of professionalisation is really important for my interests. The roles themselves changed as a result of isomorphism, with normative pressure to enact new functions and perform new skills which contribute to the success of the organisation. This is my concern about the institutionalisation of social media within higher education. There’s a lot here which I’m going to need to go back to and I think it’s crucial for my developing project on the digital university.

The notion of ataraxia comes from Pyrrhonism, a form of Ancient Greek scepticism which advocated a suspension of judgement in the face of invocations to believe. It sought to cultivate a calmness of spirit through an affirmation that things could not be known in themselves. The point is not to actively doubt but rather to withdraw from judgement given the epistemic predicament we face and what this means for our judgements. As Josh Cohen describes it on loc 726 of his Not Working, though I think he reads the notion a bit too literally:

Pig, cow, rabbit – these are the blank animal screens onto which we project desire for the non-desire we ourselves can never attain. This wisdom, Pyrrho suggests, lies in his intuiting the truth of ou mallon, that any given entity is ‘no more one thing than another’. With this phrase, all judgements and all determinations are suspended at a stroke, and all clear distinctions collapse. As the later Pyrrhonists saw it, ‘a thing can never be apprehended in and by itself, but only in connection with something else. Hence all things are unknowable.’

Could this not be a virtue for social media? If platforms rely upon encouraging reaction, the cultivation of refraining from judgement becomes hugely significant. It is not an active scepticism but rather a learned withdrawal from feeding what Richard Seymour calls the Twittering Machine, co-produced through our affective reactions to its constant machinations. What we find on social media becomes at most a pointer to something we must follow up beyond social media. Perhaps ataraxia is necessary for ensuring we use social media but don’t live in it, as Mark Fisher once put it. It is our judgement on demand which keeps us tied, immersing us even if we intend not to be immersed.

My notes on Robinson, W. I. (2018). The next economic crisis: digital capitalism and global police state. Race & Class, 60(1), 77-92.

This paper places digitalisation in historical context, framing the current boom in terms of the fallout from the 2008 crisis. We are seeing a restructuring grounded in digitalisation and militarisation which will aggravate the conditions of the last crisis, which still remain in place: growing consumer debt, vast speculative investment and an ever increasing degree of financialisation. Gross world product was $75 trillion in 2015, while the global derivatives market was estimated at $1.2 quadrillion and currency speculation at $5.3 trillion per day. Robinson argues that “the rapid digitalisation of global capitalism” needs to be understood alongside “debt-driven consumption and financial speculation” as an outlet for surplus capital (78). This is now coalescing in what he describes as the global police state, encompassing:

  1. Ever more ubiquitous systems of mass control, repression and warfare that contain real and potential rebellion
  2. The increasing dependence of accumulation on the deployment of these systems in the face of chronic stagnation
  3. The move towards political systems which can be characterised as twenty-first century fascism

The potential causes of another crash are manifold: a burst stock market bubble, defaults in household or public debt or new international conflicts. Digitalisation is not a saviour of the system but rather an extension of how past crises have been negotiated. As he writes, “the rise of Computer and Information Technology (CIT) in the 1980s was itself a response on the part of capitalists to the crisis of overaccumulation, declining rates of profit, and well-organised working classes and social movements in the 1960s and the 1970s” (79). These facilitated a global restructuring which freed capital from redistribution at the level of the nation state (e.g. precaritisation of labour, trade liberalisation, growth of outsourcing, distributed global supply chains, increasing capacity to extract corporate subsidy, bond markets disciplining states) while leading to an escalation of inequalities which now constitutes a systemic risk. This has produced a new crisis of overaccumulation described on pg 80:

Given such extreme polarisation of income and wealth, the global market cannot absorb the output of the global economy. The Great Recession marked the onset of a new structural crisis of overaccumulation. Corporations are now awash in cash but they do not have opportunities to profitably invest this cash. Corporate profits surged after the 2008 crisis and have reached near record highs at the same time that corporate investment has declined. In 2017 the largest US-based companies were sitting on an outstanding $1.01 trillion in uninvested cash.

Where can this surplus be unloaded? Robinson reads Trumpism as a far-right response to this crisis which in fact aggravates it, shoring up the system through a nativist mobilisation of the disaffected, but “this repressive neoliberalism ends up further restricting the market and therefore aggravating the underlying crisis of overaccumulation” (80). Accumulation by repression (the war on drugs and the war on terrorism, securitisation, a militarisation which saw the Pentagon budget increase by 91% in real terms between 1998 and 2011 while defence industry profits quadrupled) is one response to this crisis, which we can expect to be ratcheted up even further under Trumpism. Accumulation by digitalisation is the other outlet, with a transnationalisation of services driven by the platform economy coming to replace the globalisation of production and the financial system of an earlier phase. The growth of the tech sector in this context is described on pg 82:

The tech sector has become a major new outlet for uninvested capital in the face of stagnation. Investment in the IT sector jumped from $17 billion in the 1970s to $175 billion in 1990, then to $496 billion in 2000. It then dipped following the turn-of-century dot-com bust, only to climb up to new heights after 2008, surpassing $700 billion as 2017 drew to a close.

In the process a new class of intermediaries has been empowered, accumulating vast reserves through their data-driven insertion into existing circuits of production and value. The tech giants have world-leading capitalisations and the broader tech sector encompasses the digital economy, in spite of constituting a relatively small part of it once you exclude the giants. Its implications for employment have been bleak, creating unstable and low paid work while increasingly threatening a decimation of established occupations through the roll out of automation technologies. Furthermore, tech companies themselves are strikingly small employers, embodied by a billion-dollar data centre built by Apple in North Carolina that employs only 50 full-time staff. Digitalisation intensifies the contradictions of capitalism and ultimately pushes costs down towards zero. If I understand correctly, Robinson argues this leaves it unable to continually absorb surplus capital because its very success erodes that capacity.

His notion of the global police state theorises what happens when “dominant groups turn to applying the new technologies to mass social control and repression in the face of real and potential resistance” as “digitalisation concentrates capital, heightens polarisation, and swells the ranks of surplus labour” (84). A terrifying new range of repressive technologies has been rendered feasible by digitalisation:

The new systems of warfare and repression made possible by more advanced digitalisation include AI powered autonomous weaponry such as unmanned attack and transportation vehicles, robot soldiers, a new generation of ‘superdrones’ and ‘flybots’, hypersonic weapons, microwave guns that immobilise, cyber attack and info-warfare, biometric identification, state data mining, and global electronic surveillance that allows for the tracking and control of every movement. State data mining and global electronic surveillance are now expanding the theatre of conflict from active war zones to militarised cities and rural localities around the world. These combine with a restructuring of space that allow for new forms of spatial containment and control of the marginalised. The dual functions of accumulation and social control are played out in the militarisation of civil society and the crossover between the military and the civilian application of these advanced weapons, tracking, security, and surveillance systems. (84-85)

Investment in and deployment of these emerging repressive technologies provides a new vector through which accumulation can take place. A whole range of operations can be encompassed by this, from anti-crime sweeps and humanitarian missions through to drug enforcement operations and low or high intensity wars. It left me thinking of Nervous States by Will Davies and the significance of the eroding distinction between war and peace. It is inarguable that the global security sector is flourishing, ranging from arms manufacturers through to private military and security firms which now employ over 15 million people.

His terrifying suggestion is that the “Global police state and the rise of the digital economy appear to fuse three fractions of capital around a combined process of financial speculation and militarised accumulation into which the TCC is unloading billions of dollars in surplus accumulated capital” (86): financial capital supplies the direct and indirect investment, big tech develops and implements the technologies, the military-industrial-security complex applies these technologies through militarised accumulation. This extends from military conflict through to the spiralling armies of guard labour and ubiquitous private security systems. There is a propaganda component to this, with over 800 major films and 1000 television shows from 2005 to 2016 being influenced by US military and intelligence agencies in order to legitimate these operations and their targets. This is his account of the core contradiction at work, from pg 87:

There is a dangerous spiral here in the contradiction between a digitalisation that throws ever-more workers into the ranks of surplus humanity and the need for the system to unload ever-greater amounts of accumulated surplus. Once masses of people are no longer needed on a long-term and even permanent basis there arises the political problem of how to control this expanded mass of surplus humanity. Greater discipline is required, both for those who manage to secure work under new regimes of precarious employment and super-exploitation, and for those expelled and made surplus. The entire social order becomes surveilled.

Digitalisation renders workers redundant, and controlling them in their redundancy offers a solution to the problem of overaccumulation that digitalisation has compounded. He suggests that nascent fascisms need to be understood as a preemptive strike at the working class against a backdrop of ever escalating tensions. There is a growing concern for the coercive exclusion of surplus humanity in lieu of a capacity or willingness to secure legitimacy (pg 88). Fascist movements displace the anxiety of downwardly mobile but historically privileged sectors of the global working class onto scapegoated communities presented as outside and threatening. The reality of Trumpism has been neoliberalism on steroids, only likely to accelerate the underlying downward mobility and anxiety.

There’s an interesting extract in Roger McNamee’s Zucked about the position Sandy Parakilas found himself in as an operations manager for the Facebook platform, with responsibility for user privacy in relation to third party apps. From loc 2684:

In classic Facebook style, the company installed an inexperienced and untested recent graduate in a position of great responsibility, a position to which other companies would have traditionally assigned someone with meaningful relevant experience. The job did not contribute to growth, which meant it would not be a high priority inside Facebook. In fact, the task of protecting user privacy would internally be viewed as a form of friction—which put Sandy in a very difficult position.

What’s it like to be a source of friction in a company that wants to move fast and break things? Who will be placed in these roles and how much support will they receive? It is a company which prioritises creative responses to problems over experience of them, leaving those problems which are intractable features of organisational life with a difficult status. From loc 2713:

At Facebook, figuring it out is a way of life. The company got its start with a bunch of Harvard undergraduates who knew how to code but had almost no experience with anything else. They figured it out. Each new wave of employees followed the same path. Some took too long and were pushed out. The rest got comfortable with the notion that experience was not helpful. At Facebook, the winners were people who could solve any problem they encountered. The downside of this model is that it encourages employees to circumvent anything inconvenient or hard to fix.

Antipathy towards constraint is more pronounced in the culture of a firm like Facebook, explicit within its trajectory of growth rather than implicit in its material interests. Its focus upon growth at all costs sits uneasily with regulatory requirements and safeguarding obligations, even if the impact which failures in these areas have on growth means it is now being forced to take them more seriously.

My notes on Lichterman, P (2017) On Social Theory Now: Communicating Theory Now. Perspectives 39(2)

In this response to Social Theory Now, Paul Lichterman offers a compelling vision of social theory as sociology’s meta-conversation, with communicating theory being “to keep track of and facilitate that conversation, treating it as always in movement”. It is a sprawling conversation about the conceptual terms we use to articulate empirical research, linking together the particular subfields within which theories are generated in a topology of the discipline as a whole. Facilitating that conversation involves a kind of “temperature-taking”, “assessing where we are in the various sub-conversations, rather than a statement about which theories best reflect our historical era, or which theories are currently the best contenders for sociological immortality”. He contrasts this dialogical approach with theorising as transmission:

Transmissive theorizing starts with a large conceptual framework, and promotes it, applies it, passes it down with improvements or at least updates.  I’m contrasting that with this book’s version of communicating theory — which I will call “dialogue.” Dialogical theorizing propounds questions, and a few central concepts such as “culture” or “gender.” It sustains questions and central concepts, more than sustaining master theorists or distinct schools as ends in themselves. In transmissive theorizing, the theorist or school is exalted. In dialogical theorizing, the theorist or school is. . .consulted.

It is an overdrawn distinction but it’s an important one which captures the essence of my discomfort with critical realism, which I think suffers from being institutionally locked into a transmissive mode. Transmission gets in the way of “minding the conversation, recognizing its limits, checking out the rest of the party”. It is ill suited to the reality of contemporary social theory, consisting of “relatively porous conversations, where participants invite new participants now and then, rather than a world of masters, and apprentices working their way in”. Critical realism is far from alone in being transmissive but it is a powerful exemplar of this mode of theorising.

He ends with an interesting discussion of vision questions: “the big normative questions that help us envision a society that is—more democratic (Habermas, or Dewey), more self-understanding (Shils), more radically democratic (Mouffe, Seidman), not to mention more solidary, more rational, or less alienating, to invoke the big three”. If I understand correctly, he’s claiming that these vision questions tend to be baked into theorising in the transmissive mode, locked within schools to be accepted or resisted as part of a whole. But could they not be better integrated into dialogue between subfields in a way which renders them autonomous from schools? Can social theorising involve “semi-autonomous, conversational room for explicit communication about vision questions and how they relate to concepts in subfields”? He suggests public sociology and civic sociology as contributing to this process. Could a broader dialogical approach to social theorising better integrate them?

My notes on Pasquale, F. A. (2018). Tech Platforms and the Knowledge Problem. American Affairs, 2(2)

The most philosophically important aspect of Hayek’s work was his epistemological objection to central planning. He argued that the market was indispensable because it permitted distributed knowledge of a sort which a centralised decision maker couldn’t possibly hope to reconstruct. In this short paper, Frank Pasquale considers the renewed possibility of centralised planning which emerges when private tech giants have a god’s eye view of consumer and business activity within their domains. As the examples he cites illustrate, this involves business-to-business as well as business-to-consumer insight:

Having logged and analyzed billions of transactions, Amazon knows intimate details about all its customers and suppliers. It can carefully calibrate screen displays to herd buyers toward certain products or shopping practices, or to copy sellers with its own, cheaper, in-house offerings. Mark Zuckerberg aspires to omniscience of consumer desires, by profiling nearly everyone on Facebook, Instagram, and WhatsApp, and then leveraging that data trove to track users across the web and into the real world (via mobile usage and device fingerprinting). You don’t even have to use any of those apps to end up in Facebook/Instagram/WhatsApp files—profiles can be assigned to you. Google’s “database of intentions” is legendary, and antitrust authorities around the world have looked with increasing alarm at its ability to squeeze out rivals from search results once it gains an interest in their lines of business. Google knows not merely what consumers are searching for, but also what other businesses are searching, buying, emailing, planning—a truly unparalleled matching of data-processing capacity to raw communication flows.

There is a parallel integration taking place in finance and insurance, which increasingly operate in the manner of a platform. As he writes, “finance and insurance firms not only act as middlemen, taking a cut of transactions, but also aspire to capitalize on the knowledge they have gained from monitoring customers and providers in order to supplant them and directly provide services and investment”.

We now confront “a hierarchical, centralized regime, in which corporate power is immense, and in which large national apparatuses of regulation seem to be the only entities capable of reining it in”. What he calls populist localizers, the Jeffersonians, want a new era of antitrust to break up the tech giants, whose concentration they see as impeding new entrants and further technological innovation. In contrast, the Hamiltonians argue that scale is essential to technological development (e.g. machine learning) and that we simply need to update our regulatory frameworks to take account of these new developments. Pasquale’s suggestion that interoperability standards could address the Jeffersonians’ concerns is an interesting one, e.g. by allowing diverse social networks to interconnect in the manner of mobile telephone networks.

This doesn’t follow a left/right divide. As he notes, a critic like Evgeny Morozov frames tech giants as natural monopolies which “get better and better at each task they take on when they have access to more and more pooled data from all the tasks they perform”. This is a reason to socialise them rather than to break them up into smaller fragments which couldn’t come close to replicating their functionality at scale. In contrast, the Jeffersonians exist within the boundaries of contemporary statism, calling on the Federal Trade Commission to break up a firm like Facebook. For Jeffersonians, the concentration of data looks like a private data monopoly. For Hamiltonians, it looks like a necessary measure to secure the data and mitigate the risks it generates. Understanding the basis of this disagreement rests on what a platform is:

The largest, most successful firms of digital capitalism tend to serve as platforms, ranking and rating other entities rather than directly providing goods and services. This strategy enables the platform to outsource risk to vendors and consumers, while it reliably collects a cut from each transaction. Just as a financial intermediary profits from transaction fees, regardless of whether particular investments soar or sour, the platform pockets revenues on the front end, regardless of the quality of the relationships it brokers.

This casts them in the role of policing the platform and adjudicating disputes, described by Pasquale as functional sovereignty: a private firm taking on functions previously confined to the nation state. They tend to assume these issues can be resolved through automation and generally take an absentee approach to what they manage. But this doesn’t matter under current antitrust law, which seeks to protect competition, not competitors. The only concern is whether prices are going up or down. As Pasquale explains, this short-termism fails to take account of how superior offerings might have existed were it not for the competitive advantage of big tech:

To see the practical effects of this obsession with the short-term, imagine searching for “weather” in Google, and instantly seeing its own weather forecast filling your mobile screen. Had it linked to three forecasting sites in that precious screen space, it might have directed more exposure and advertising revenue to sites with diverse interfaces, more or less information, or other variations. For example, the site WeatherSpark used to give a beautifully precise image of storms’ movement over time—the perfect visual analogue to AccuWeather’s minute-by-minute forecasts of rain or clear skies. But WeatherSpark no longer offers that service, and who knows how many other start-ups gave up on occupying this space. To establishment antitrust authorities, there is no ground to intervene—consumers get the basics of weather from Google’s interface, and it is free. It’s a short-termist outlook that omits long-run considerations in the name of a presentist scientism. In their worldview, there is no room for argument about whether better or worse alternatives do or should exist. Antitrust is supposed to protect “competition, not competitors”—and a singular lack of concern for quality translates into profound lack of interest in whether current or future competitors could do a better job than a digital behemoth. But how can we know if there is competition, if there are no competitors to provide it?

For Hamiltonians the solution is not breaking up these firms but treating them as public utilities which can facilitate others. Regulators accepted the massive scale of power generation and phone networks, but insisted on a countervailing power which could control their operations. In fact, the centralisation of big tech could even be beneficial in the move towards taking public control:

In a recent podcast, the socialists of Chapo Trap House joked that they were happy to see Amazon consolidate power. Once it takes over every business in the country, it will be easy to “cut off the head” and simply impose government control over the economy. “Free Whole Foods hot bar for everyone!” was the imagined denouement. Similarly, if all the private health insurers in the United States merged, the stage would finally be set for “single payer”: the government need only take over the one insurer left standing.

The Hamiltonian vision “can be the economic equivalent of geoengineering—an embrace of the radically new and large-scale, arising out of the sense that inequalities and climate change are such massive problems that only rapid technological advances can solve them”. This contrasts with the precautionary principle of the Jeffersonians, who question “whether any entity should accumulate the power necessary to, say, compare everyone’s genomes, convert millions of workers’ movements to patterns of behavior programmable in robotics, or maintain social credit scores on all citizens”. Interestingly, Pasquale places some of the blame on the expectations of investors:

Investors demand a fantasy of monopolization: that their firm not merely occupy a field, but develop “moats” against entrants in order to guarantee both present returns and future growth. The day-to-day reality of operational budget constraints, however, pushes the same firms toward the pathologies of absentee ownership.

I’ve been reading the psychoanalyst Josh Cohen’s Not Working: Why We Have To Stop for the last few days, during a week in which I have been forced to stop by a chest infection which prevented me from making a trip to Sweden I’d been looking forward to for months. It’s a useful time to read the book because my mood this week embodies its core concern, as I realise how ill equipped I am to stop. The most success I’ve had has been through the narcotising effect of Netflix, surrendering myself to auto-play in order to watch the entirety of You and The People Vs OJ Simpson.

In the process I’ve been struck by how little space I experienced between action out there and compulsion in here, with the former providing a rhythm to my days which the latter obliterates. If I understand Cohen correctly, the problem is the limited character of that rhythm, as we come to find security through constant motion orientated towards external factors, forever escalating the demands we place on ourselves and encourage in others. He has a vivid description on loc 119 of the fidgety activity which comes to substitute for rest in this state:

The emblematic image of our culture is the panicky phonechecker, filling in any interval of rest or silence, on the train or at the family dinner table or in bed, with emails and work documents, or else with social media updates, addictive games and viral videos. Nervous distraction provides the only relief from a perpetually incomplete to-do list. Not working becomes at least as tiring and incessant as working. We know we have no choice but to stop, yet doing so makes us so fearful, scornful and guilty.

It occurs to me that there’s something of the death drive in binge watching: an embrace of what he describes in his patients as “a wish for the world, or themselves, to dissolve” (loc 162). It’s well known that Netflix see ‘sleep as the enemy’. But I wonder if they also see rest as the enemy, capitalising on the anxieties of inertia in order to ensure a surrender to the algorithmic void in lieu of a winding down and recovery of rhythm.

If the breakdown of an established rhythm is something we increasingly struggle to cope with, how are persuasive technologies capitalising on this and entrenching it in the process? I’m still not well but I’ve deleted Netflix from my iPad, as much as I want to watch the OJ sequel. There can be something profoundly deadening about extended binge watching, even though it entails a degree of sustained immersion which would seem cultivated if performed in relation to other media. 

Roger McNamee makes a suggestion on loc 2041 of Zucked about how Amazon’s platform position gives it insight into potential competitors. Note that he’s not suggesting corporate espionage but rather the inference of trends from superficially innocuous data which Amazon has privileged access to as platform provider:

Amazon can use its cloud services business to monitor the growth of potential competitors, though there is little evidence that Amazon has acted on this intelligence the way it has leveraged data about bestselling products in its marketplace.

This is something which Amazon do with increasing frequency on their main shopping site, producing generic versions of popular products that can be retailed more cheaply. In some cases, it is hard to discern that you are buying an Amazon product. It’s not corporate espionage, but Amazon’s god’s eye view of the marketplace, combined with their capacity to privilege their own products in search, gives them an unassailable advantage over competitors.

How might AWS figure into this strategy? Are there any safeguards against it? Where might this go in the longer term? Amazon are depending on research and development taking place elsewhere, tracking its effectiveness through their platform. But what happens if their anti-competitive practice eviscerates this capacity? Would they be able to originate development rather than simply piggybacking on it? I think this will be the key challenge Amazon faces in its increasingly likely transition to unprecedented monopoly.

My notes on Strathern, M., & Latimer, J. (2019). A conversation. The Sociological Review, 67(2), 481–496. https://doi.org/10.1177/0038026119832424

In this interesting conversation with Marilyn Strathern, who I had the pleasure of meeting when Jana Bacevic organised a masterclass with her at our department, Joanna Latimer explores the act of writing and the influence Strathern’s writing has had on her own. Joanna explains her experience of how Strathern’s writing “has this kind of extraordinary way of entering into one” such that “your parts become my own, and then I discover I can’t think without your parts”. As Strathern explains, her writing is intensely conversational, even if the reader might not be aware of exactly who she is having the conversation with:

And it may be that this sense of always being in conversation contributes to that. There’s an ethical side to it, and of course when I was doing my work on intellectual property I sort of touched on it, which is that, you know, nothing actually ever sprang from Zeus’s head fully formed. I mean one is in debt, one is incredibly in debt, one is always taking what other people have done, whether one knows it or not. It’s not always that I have a particular person in mind, or I’m writing for people who’ve provided me with the means to do so. Rather, you stand on, stand on the shoulders of giants and all the rest of it. I’m very conscious, that one is just simply turning the soil until the next person comes along. So there’s that aspect. There’s also the intellectual chase that one gets into, getting into somebody’s argument. It does its work, it sparks you off, and you really want to pull it apart or you want to put it back together again or you want to take bits out. There are things that you think you could do otherwise. And so forth. And that’s very often in relation to specific arguments.

It is writing which seeks to “turn your reader over”, as Joanna puts it, by upending the conventional and the assumed. Marilyn describes her object as “recurrent habits of thought people just get into, time and again”, some of which provoke “real anger, I mean I’m cross”. It left me with a strong sense of the intimacy of writing, almost as a vector of entanglement through which the concerns of the writer spill over their boundaries and into the reader. There’s a really interesting section connected to this about Marilyn’s preference for the word person over terms like identity or individual. The latter are bound into an imaginary which needs to be critiqued, and other choices create the opportunity to get out from under it:

Person is a term that I get from orthodox classical British social anthropology. A person is a social configuration. It’s always a relational construct. It doesn’t have the [vernacular] implications of individuality that identity has. I think that’s where the preference is. […] But because person is slightly unusual in English, after all we do use it, everyone knows what we mean, and there are contexts where we use it on an everyday basis – like ‘a person in their own right’ – but actually we don’t use it as much as we would use the word individual for example, or human being, or whatever. Slightly unusual. And it tends to be in legal language, doesn’t it? Person of no fixed abode. Whereas we’d [ordinarily] say man or woman, or whatever.

There’s a micro-podcast here in which I respond to Joanna Latimer’s presentation of an early version of this paper at a workshop last year; my talk is at 40 mins.