My notes on Latz, A. B. (2015). Gillian Rose and Social Theory. Telos, 173, 37-54, and Fuller, B. W. (2018). Back to Hegel? On Gillian Rose's critique of sociological reason. The British Journal of Sociology, 69(2), 265-285.

The figure of Gillian Rose was a continual presence in the Sociology department at Warwick in the time I was there, from 2007 to 2014, with the main seminar room named after her and many staff members who had been close to her. However I'd never really engaged with her work until now, so I'm pleased I've finally got round to reading these two papers. The first, by Andrew Brower Latz, explores her relationship to social theory, situating her in terms of the Frankfurt School, of which she identified herself as a student. Her work emerged in a context where "a focused search to develop a better grasp and articulation of sociology and its logic" (38) sought to remedy the failure of classical sociology to identify its own methodological specificity. Latz takes critical realism, structuration theory and Bourdieu's sociology as constructive responses to this underlying problem.

It is in this context that Rose's Hegelian work was so significant, providing a means to address "perennial issues in social theory, namely: the relationship of logic to the sociology of knowledge; contradictions and antinomies; emergence; and the possibility of a totality" (38). She engaged with Hegel's speculative philosophy as a radicalisation of the Kantian critique of reason (theory) by reason (meta-theory). In doing so, it provides a way to approach the social character of knowledge which takes us beyond the post-Kantians, for whom objectivity ("the applicability of our concepts to the external world, which creates the possibility of true or false judgments about the world", 39) is grounded in sense data received through intuition. If I remember this correctly, our sensory apparatus is receptive to the world and we know that world through the representations that receptivity provokes in our mind. We do not encounter the object but only this (involuntary) representation of it. The transcendental form of inquiry was retained by the neo-Kantians but the transcendental idealism was rejected, leaving the subject locked within thought without the (indirect) escape which the latter provided. As Latz puts it, "The touchstone for a system of thought was thought itself, in a way that tended to insulate thought from receptivity to thought's other" (40).

For Rose the recurrence of positivism within sociology has one of its sources in sociologists' lack of appreciation of the transcendental form of their reasoning. If I understand correctly, this manifests itself as an evasion of the relationships between theory and meta-theory, mind and world, theory and evidence (to use Latz's terms): the world is either collapsed into our experience of it or our experience is imputed to a world deemed to be devoid of intrinsic meaning. As Latz says later, drawing on a reading of Rose by Simon Jarvis, "Sociology's danger is twofold: on the one hand, imposing a grid or pre-theorized schema on society instead of allowing experience to speak; on the other, imagining that simply pointing to experience will do" (53). For Rose speculative philosophy provides a way out of this impasse, as "transcendental philosophy performed with maximum awareness of its own workings, which is gained through a historical perspective" (42). It entails a grappling with the absolute – "the unity of finite and infinite, of law and ethics, the full achievement of ethical life" (42) – but as a regulative ideal which establishes the unattainable horizon of our thought. This goes hand-in-hand with her "emphasizing the moments of incompleteness, provisionality, tension, and even fragmentation within thought and society" (43).

Latz considers the significance of her work in relation to logic & the sociology of knowledge, contradictions & antinomies, emergence and totality. He does this by considering sociological studies which either repudiate or embody key features of her approach, namely thinking "in terms of contradictions and determinate negation, must use increasingly comprehensive levels of explanation and historically informed analyses, and be aware of its own role within its object of study" (47). Perhaps the key point in this discussion is the relationship between theory and meta-theory: how theorising in pursuit of social explanation is itself theorised, and the practical implications of this. How do we tie substantive and meta-theoretical considerations together in the same undertaking? If I understand correctly, the point is that we are always doing this regardless of whether we are aware of it. If we're not aware of it, we're going to do it in a way that impedes our mode of explanation. So Rose's approach is about how to do this in a way which is adequate to the character of social reality. It is a form of theoretical reflexivity, for lack of a better term. That at least is how I understand the gist of Latz's discussion.

I was particularly interested in her conception of totality: "Since for Rose no single view of the totality is adequate to it, various perspectives on it are required" (52). It always evades us while remaining the horizon of what we are doing and why. A further point this left me reflecting on is how invoking experience can itself render that experience abstract, by cutting it off from the world in relation to which it emerges and which accounts for its meaning and content. This abstraction emerges from the failure to link theory and meta-theory, facing the challenge of, for instance, linking mind and world rather than simply allowing that to fade into the background in a way that dichotomises lived experience and brute facts. As Latz puts it, "Rather than an abstruse methodological pedantry, issues of meta-theory can often have theoretical effects, including philosophical conclusions drawn from sociological studies, the nature of the explanations given, or the status afforded to those explanations and the theories whence they derive" (54).

The second paper, by Brian W. Fuller, is more explicitly concerned with the subdued reception of Rose's work and its relationship to the wider neglect of Hegel within the social sciences. He highlights how "Rose argues that we sociologists have been systematically misunderstanding ourselves, and allowing this to happen – taking refuge, either implicitly or explicitly, in the notion of Kantian limits to our understanding" (266). If we remain locked within the Kantian problematic then we are confined to the "social investigation of unknowable objects" (266). This entails recognising the brokenness of modern thought, a thematic I'm fascinated by without really understanding and want to return to. I assume it relates to the post-Kantian estrangement of thought from world, cutting it off from what it is, but I'm far from certain based on what I've read so far. It is what Fuller later calls "the strict dichotomy between cognition and its objects" which begins in Kant and is preserved in post-Kantian philosophy (268). Rose's concern is to acknowledge this and to begin with it in the sociological enterprise. The failure to do so can be seen in a dichotomisation which pervades sociological thought. From 269:

Her Hegelian move is to grasp the two paradigms as aspects of one whole. Sociology has repeated the mistakes of neo-Kantian philosophy by bifurcating into two contradictory theoretical paths, each of which represents one-half of a linked pair, and which consequently cannot be comprehended in isolation. Durkheim’s structural approach and Weber’s interpretivism each postulates a precondition and a conditioned; though their perspectives are opposite, neither can grasp the transition between spheres. Employing Hegel directly, Rose declares the former approach ‘empty’, while the latter is ‘blind’ (1981: 214). In sum, Rose’s complaint is that sociology is trapped within dichotomies which it can never overcome, because they are products of its own mode of thinking.

The confrontation with the relationship between theory and meta-theory opens up the possibility of transcending dichotomies in their application. I'm trying to understand how this relates to the approach of Margaret Archer, whose precondition of analytical dualism entails thinking with dichotomies as ways in which we can unpick the relationship between heterogeneous elements in the explanation of social outcomes. But what Archer calls explanatory methodology, as a site for leaving these dichotomies behind, would presumably amount to reproducing them from Rose's point of view. As Fuller puts it, "the 'speculative' direction she suggests is designed to help uncover the meaning and significance of such recurring problems and limitations, opening up potential for transformative practice" (270). This, I would argue, is what Archer actually does, and I'd be interested to encounter any sustained engagement between these two sets of ideas. Her approach certainly falls short of Rose's lofty, somewhat existential, approach to theoretical inquiry. From 270:

A second difficulty of explication is that – according to Rose’s interpretation – there is no way to adequately present the Hegelian speculative position in an abstract and concise fashion, without misunderstanding it. Hegel’s approach to philosophy intends to teach a new (speculative) way of thinking and experiencing, which requires continual ‘re-cognizing’ of one’s current position. Rose takes Hegel’s philosophy seriously as a ‘way of despair’, and the process of educating consciousness necessarily appears difficult, aporetic, or else impossible. This is not an illusion to be overcome, nor a dead end, but a process that will require failure. Accordingly, for Rose, abstract explication is out of the question, and the reader must learn through failure.

Incidentally, could this not be a meta-theory of what Daniel Little calls being an open-source philosopher? It could provide a profound intellectual-existential rationale for the virtues of thinking out loud, rather than thinking being an internal process contained until moments of careful and purified expression. More generally, Rose's project involved a move beyond reflective thought, in the process recognising the dichotomies it creates as both its own creation and "part of a larger whole, a conceptual and social-historical whole" (271). This move has its origins in the Hegelian turn beyond Kant's restriction to the boundaries of the finite, "insisting that the whole point of philosophical thought is to be speculative, to attempt to think the infinite, to embrace the contradictions produced by reflection" (271). Philosophical reflection merely analyses, categorises and schematises its contents, ordering the finite rather than understanding its coming into existence and the totality within which it happens.

The possibility of overcoming this rests on a self-awareness of the reflective position, understanding its own activity as the origin of reflective understanding. In this sense, it involves taking Kant further than recognising the dependence of objectivity upon the subject. We typically see appearances as grounded in essences, but the Hegelian project was about recognising this as a posited dichotomy, in order to grasp the unity of the appearance and the essence. My crude understanding of this, which I'm not very confident about, sees it as a vast multilayered mess of becoming which is parcelled out into discrete terms by the activity of cognition. Speculative philosophy involves tracing out how these discrete terms and their static relations have been created through reflective activity, moving upwards towards a totality we can never reach because our striving is part of the whole we are trying to apprehend from inside the mess. This has implications for ethical life. From 274:

In his practical philosophy, Kant derived a set of universal principles to govern the moral subject, arguing that moral action must be guided by the universalizability of a potential action. Hegel's critique of Kantian moral philosophy is similar to his account of speculation above. He claims that Kant's practical philosophy cannot adequately comprehend humanity nor society in its historical concreteness, since it deals only in abstractions and universals. The critique emphasizes two points. First, he objects to the abstract separation of theoretical and practical reason, and consequently, of the realms of necessity and freedom. For Kant, for example, the human will exists in the sphere of noumena, independent of the natural, empirical world in which human subjects reason and act. Second, Hegel criticizes Kantian practical philosophy for being too 'formal'. The formality of the moral law means that it cannot be derived from the concrete, historical world, but only transcendentally from reason. From a Kantian view, we can only understand ethical life abstractly.

I think this is akin to the understanding Alasdair MacIntyre expresses in his focus on moral particularism. We are always already inside ethical community, and abstracting the individual from that community in the Kantian manner will capture nothing of the concrete reality of moral existence. It ties to Rose's reading of Hegel's phenomenological method, which unites thinkers I've been drawn to who, in different sorts of ways, seek to concretise subjectivity without reducing it to the dominion of first-person experience. From 277:

The phenomenological method is then a way of presenting speculative experience. It treats experience concretely, in its social-historical particularity, and hence allows 'us' to recognize our own 'determination'. In other words, it affords a conception of consciousness not simply in its mode of being or existence, but in 'actuality' – consciousness at work in the social world.

Fuller’s discussion of the relationship between philosophy and sociology is extremely interesting. From 278:

Sociology has long had a difficult relationship with philosophy, beginning with the attempts of the most prominent classical theorists (Marx, Weber, Durkheim) to articulate a vision of sociology which goes beyond philosophy in some important aspects. Although perhaps the more common claim remains that sociology has in some sense transcended philosophy or made it obsolete, there are periodic calls to reintroduce philosophical perspectives into social science.

This ends with a really interesting critique of Daniel Chernilo's project of philosophical sociology, arguing it merely reiterates the dichotomies it takes as its starting point while remaining within the horizons of existent sociological reason. Counterposing the philosophical and the sociological doesn't help us better understand the relationship between them, as much as inviting this dialogue might in itself help enrich the practice of each. His final discussion of the horizons of Rose's thought is fascinating. From 280:

Likewise we cannot use Rose to overcome the contradictions of structure and agency that so many have struggled with for so long. Rose did not succeed where contemporary theorists failed, any more than Hegel 'solved' the problems of Kantian philosophy. We need to tell the story of the conceptual oppositions and antinomies as they came to be, to structure our current social theoretical world. These antinomies are not just distractions, but have their own particular social history, which needs to be comprehended and presented phenomenologically, in order to not result in reification.

In this he departs from Latz, whom he frames, unfairly I think, as claiming that Rose's speculative approach can be used to avoid social theory's missteps and contradictions. I read this instead as Latz being concerned with the practical application of Rose's approach by working sociologists and Fuller being concerned with the integrity of Rose's approach as speculative philosophy.

My notes on Mantello, P. (2016). The machine that ate bad people: The ontopolitics of the precrime assemblage. Big Data & Society. https://doi.org/10.1177/2053951716682538

Since 9/11 the politics of prediction and risk have created an alliance between security agencies, technology firms and other commercial actors which seeks to create a precrime assemblage: the first generation sought to identify threats through data mining (“search habits, financial transactions, credit card purchases, travel history, and email communications”) but the next generation are “becoming intelligent assemblages capable of integrating data from a multitude of nodes in order to foresee and preempt harmful futures” (pg 2). These advances are being facilitated through cloud computing, machine learning and limitless storage.

The beta versions of these assemblages are being tested in real world situations, rendering it urgent for us to understand their implications. The first is what it means for criminal justice as a whole when the focus is on the anticipation of crime rather than dealing with its occurrence after the fact. The second is the expansion of surveillance into everyday life, propelled by the public-private alliances driving the agenda. The scope of surveillance is increasing, but so too is civic participation in it, driven by gamified mechanisms which "encourages citizens to do the securitization footwork of the state by offering them the opportunity to participate in do-it-yourself, reward-centered, pro-active, networked and, at times, gamified versions of automated governance" (pg 2).

Peter Mantello argues that the allure of technological innovation is legitimating these developments, promising greater impartiality and efficiency, while the reality of their operation is extending juridical reach in order to identify non-immediate threats to the established order. The pre-crime assemblage will function "to preserve the domains of its masters, who will control immense existential and predictive data that will allow them to shape public perceptions, mold social behavior, and quell possible opposition, thereby ensuring the exception incontrovertible and infinite life" (pg 2).

He uses Massumi's conception of ontopower to theorise this process, whereby "a mode of power driven by an operative logic of preemption is spreading throughout the various structures, systems, and processes of modern life" (pg 3). Pre-emption itself is long-standing, but the preoccupation with speculative feelings of non-imminent threats was, he argues, born out of the reaction to 9/11. If I understand correctly, the point is that risks are increasingly pre-empted rather than managed, with risk management becoming an anticipatory lens through which actors and organisations proactively prepare for imagined futures.

Exceptionalism becomes legitimate under these circumstances, as anticipated threats are used to justify actions which would otherwise have been regarded as illegitimate. A mechanism like the "public safety orders" enacted by the New South Wales Police expands the principle of anti-terror policing to civic law enforcement: "they shift the balance further away from the principles of due process where people are innocent until proven guilty and more toward a new era where crimes are committed before they happen, citizens are disappeared without recourse to defense, and where guilt and imprisonment are based on suspicion, rumor, association, or simply left to the intuitive 'gut feeling' of police officers" (pg 4). This goes hand-in-hand with an affirmation of the unpredictability of the future. Randomness and uncertainty mean that crimes cannot be avoided, but this is why anticipatory work is seen as so important to minimise the threats on the horizon.

This anticipatory work tends to diffuse responsibility into an apparatus of knowledge production, identifying networks of connections or regional hot spots which become the locus of an intervention. A whole range of assets are deployed in the preparation of these interventions, as described on pg 5 in the case of Hitachi’s Public Safety Visualization Suite 4.5:

This includes mining data from an array of various nodes such as remote video systems (hotels/city streets/commercial and private properties/transportation lines), gunshot sensors that alert CCTV cameras, vehicle license plate recognition systems, wireless communications, Twitter and other social media, mobile surveillance systems as well as useful data from smart parking meters, public transit systems, and online newspapers and weather forecasts.

Data visualisation plays a crucial role in this by "compressing vast amounts of invisible data into visible signifiers" (pg 5). However the uncertainty, ambiguity and construction which characterise the data itself are lost in the apparent self-evidence of the ensuing representations. The navigability, scalability, and tactility of the interface then mediates interaction with this experienced reality. The performativity of the apparatus falls from view, as diverting police resources to 'hotspots' only to discover 'more crime' there (either comparable to what could be found elsewhere or encouraged by the aggravating factor of heavy-handed policing) comes to function as a legitimation of the apparatus itself. The approach also compounds existing inequalities through its reliance on historical data about patterns of arrest in order to predict future offending.
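The feedback dynamic here is worth making concrete. Below is a toy simulation, entirely my own illustration with made-up numbers rather than anything from the paper, of how patrols allocated on the basis of recorded crime can keep 'confirming' a hotspot even when underlying offending is identical across districts:

```python
# Hypothetical feedback loop: patrols follow recorded crime, and patrol
# presence inflates what gets recorded in the patrolled district.
history = [12, 10, 8]                      # invented historical arrest counts per district
TRUE_INCIDENTS = [10, 10, 10]              # assume offending is actually uniform
PATROL_EFFECT = 5                          # extra incidents recorded where patrols concentrate

for year in range(1, 6):
    hotspot = history.index(max(history))  # the model flags the most-recorded district
    recorded = list(TRUE_INCIDENTS)
    recorded[hotspot] += PATROL_EFFECT     # heavier policing -> more recorded crime there
    history = [h + r for h, r in zip(history, recorded)]
    print(f"year {year}: flagged district {hotspot}, cumulative records {history}")
```

District 0 is flagged every single year and its lead in the records grows, purely because the initial skew routes patrols (and hence recording) back to it.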

What I found fascinating was the slippage in the software. An example on pg 6 concerns 'at risk' lists, intended to be the basis for social service interventions prior to any policing action, instead being used as target lists for people assumed to be likely offenders. This on-the-ground slippage highlights the importance of understanding the organisational context within which new tools are deployed, as a means to understand how their original intentions may mutate in the context of application.

The terrifying turn underway is from the deployment of past data to the harvesting of present data in real time. As Mantello puts it, this involves "the real-time extraction of personal data from an individual's daily life—monitoring their patterns, routines, habits, emotional tendencies, preferences, idiosyncrasies, and geospatial coordinates" (pg 7). Enthusiasts claim that the broader the data that is harvested, the easier it will be to identify 'criminal signatures' at ever earlier points in time. This converges with what Zuboff has called surveillance capitalism, in which behavioural data is leveraged to persuade rather than simply to predict. How might this modus operandi be enacted as part of the pre-crime assemblage? There is a truly dystopian horizon to such a project, described on pg 7:

Yet there is also the distinct dystopian possibility, in its never-ending ontopolitical pursuit to colonize and regulate all aspects of social life, that it may suppress dissent and discourage nonconformist thought or behavior. Already we are seeing such practices occur today with the increasing trends of self-censorship in social media due to fear of state surveillance and authoritarian reprisal.

The gamified form this takes can be seen in Sesame Credit, produced in collaboration with Alibaba as part of the early stages of China's opt-in social credit system, with rewards on offer for those who perform in ways that meet expectations. But as the system becomes mandatory in 2020, we can expect it to go hand-in-hand with the proactive avoidance of people deemed to have poor social credit and of potential sites where negative social credit behaviours may thrive. The author also considers the example of opt-in black boxes in cars, where rewards are on offer for those who agree to such monitoring but which may eventually be rolled out for everyone as part of a transformation of insurance. The City of Boston security app, Citizen Connect, offers 'street cred' recognition points for repeated contributions: "users who actively report on suspicious persons, ongoing crime, random acts of violence, or municipal infrastructure hazards get promoted to special 'patrols' where they earn special badges of civic distinction" (pg 9).

My notes on Caplan, R., & Boyd, D. (2018). Isomorphism through algorithms: Institutional dependencies in the case of Facebook. Big Data & Society, 5(1), 2053951718757253.

Are data-driven technologies leading organisations to take on shared characteristics? This is the fascinating question addressed in this paper by Robyn Caplan and danah boyd, which they begin with the example of news media. The popularity of social media platforms as intermediaries has forced many news media producers to change their operations, increasingly producing with a view to popularity on these platforms. As they put it, "these platforms have upended the organizational practices of news-producing platforms, altering how both the newsroom and individual journalists operate" (2). They use the concept of isomorphism to understand how "algorithms structure disparate businesses and aims into an organizational field, leading them to change their goals and adopt new practices" (2). This is a process of homogenisation, as organisations reconstruct themselves into a field orientated around the assumptions embedded into the mediating platform. The ensuing ambiguity has regulatory consequences, as social media platforms are not straightforward media actors, but nor are they mere intermediaries. By theorising algorithmic mediation as akin to bureaucratisation, it becomes easier to identify the precise character of the role of platforms within it. It also makes clear the continuities with earlier isomorphic processes, for instance as corporate software platforms introduced common features to organisations.

The roots of this connection are deep. They argue that "algorithms that serve to pre-process, categorize, and classify individuals and organizations should be viewed as extensions of bureaucratic tools such as forms, that have been associated with the state in the past" (3). Software like Lotus 1-2-3 and Microsoft Office restructured business activity through the affordances they offered to digitalise bureaucratic processes, and algorithmic technologies should be seen as a further extension of this process. The neutrality which animated the promise of bureaucracy is also often expressed in the belief that algorithmic judgement will negate the role of subjectivity and bias in decision-making processes. This is obscured by the familiar black box of the algorithm, but also by the mythology of its uniqueness, which sees it as something distinct from previous organisational processes. However, if we see algorithms as organisational phenomena then the problem comes to look quite different: simultaneously more straightforward but also more challenging, because the problems will likely spiral outwards across dependent organisations.

They use DiMaggio and Powell's concept of isomorphism, which considers how a common environment can lead otherwise different units of a population facing that environment to come to resemble one another. For organisations this occurs through one organisation becoming dependent on another, with the expected degree of resemblance tracking the degree of that dependence. For instance, in the case of Facebook's newsfeed, the concept of what is 'relevant' has been redefined by the vast size of the audience whose access is mediated through this mechanism. The dependence of the news media on that mechanism means they come to reproduce its characteristics, increasingly operating with a view towards metrics like clicks, likes and shares. The early winners in the Facebook ecosystem were publishers like Buzzfeed and Upworthy who "subsumed their own organizational practices to the logic of Facebook's algorithms" (5). But Facebook's attempts to modulate this mechanism, in order to produce what they deemed better quality results, inevitably lead the actors dependent upon it to make adaptive changes in response to these modulations. Mimesis thrives in this environment, as they explain on pg 6-7:

Changes stemming from coercive forces, especially when frequent, lead to an environment of uncertainty that prompts dependent organizations to learn from other dependent organizations that have successfully conformed to the structuring mechanisms. This process of 'mimesis,' or imitating models for success, is another process DiMaggio and Powell (1983: 151) argue will induce similarity across an organizational field. In this sense, the dominant organization's incentives or goals become embedded across an industry through the borrowing of practices that lead to success over the network. In the case of Facebook, this was seen in the adoption of data-driven metrics and analytics into newsrooms, as well as the growth of a new set of intermediaries that were fed directly by the Facebook API, whose role it was to analyze and communicate Facebook metrics back to publishers.
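As a crude way of fixing the idea, here is a toy model of mimetic convergence; it is my own illustration, not anything Caplan and boyd propose, and the numbers are arbitrary:

```python
import random

random.seed(1)
PLATFORM_PREFERENCE = 0.8                     # hidden weighting in a hypothetical algorithm
styles = [random.random() for _ in range(6)]  # each publisher's initial content style

def reach(style):
    # visibility falls with distance from the platform's hidden preference
    return 1 - abs(style - PLATFORM_PREFERENCE)

for _ in range(10):
    best = max(styles, key=reach)             # the visibly successful model to imitate
    styles = [s + 0.5 * (best - s) for s in styles]

print([round(s, 2) for s in styles])          # styles have converged: an isomorphic field
```

No publisher ever observes PLATFORM_PREFERENCE directly; homogenisation emerges solely from imitating whoever the opaque mechanism currently rewards.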

A further ecosystem of intermediaries thrives under these circumstances, as new players emerge who help the firms concerned address their common problems. These responses to uncertainty are driven by a concern to "demonstrate to others that they are working to change their practices to be in-line with those of the dominant organization" (7), as well as by increasing possibilities for success. The discussion of professionalisation is really important for my interests. The roles themselves changed as a result of isomorphism, with normative pressure to enact new functions and perform new skills which contribute to the success of the organisation. This speaks to my concern about the institutionalisation of social media within higher education. There's a lot here which I'm going to need to go back to and I think it's crucial for my developing project on the digital university.

My notes on Robinson, W. I. (2018). The next economic crisis: digital capitalism and global police state. Race & Class, 60(1), 77-92.

This paper places digitalisation in historical context, framing the current boom in terms of the fallout from the 2008 crisis. We are seeing a restructuring grounded in digitalisation and militarisation which will aggravate the conditions of the last crisis that still remain in place: growing consumer debt, vast speculative investment and an ever increasing degree of financialisation. Gross world product was $75 trillion in 2015, while the global derivatives market was estimated at $1.2 quadrillion (roughly sixteen times gross world product) and currency speculation at $5.3 trillion per day. Robinson argues that "the rapid digitalisation of global capitalism" needs to be understood alongside "debt-driven consumption and financial speculation" as an outlet for surplus capital (78). This is now coalescing in what he describes as the global police state, encompassing:

  1. Ever more ubiquitous systems of mass control, repression and warfare that contain real and potential rebellion
  2. The increasing dependence of accumulation on the deployment of these systems in the face of chronic stagnation
  3. The move towards political systems which can be characterised as twenty-first century fascism

The potential causes of another crash are manifold: a burst stock market bubble, defaults in household or public debt, or new international conflicts. Digitalisation is not a saviour of the system but rather an extension of how past crises have been negotiated. As he writes, "the rise of Computer and Information Technology (CIT) in the 1980s was itself a response on the part of capitalists to the crisis of overaccumulation, declining rates of profit, and well-organised working classes and social movements in the 1960s and the 1970s" (79). These technologies facilitated a global restructuring which freed capital from redistribution at the level of the nation state (e.g. precaritisation of labour, trade liberalisation, growth of outsourcing, distributed global supply chains, increasing capacity to extract corporate subsidy, bond markets disciplining states) while leading to an escalation of inequalities which now constitutes a systemic risk. This has produced a new crisis of overaccumulation, described on pg 80:

Given such extreme polarisation of income and wealth, the global market cannot absorb the output of the global economy. The Great Recession marked the onset of a new structural crisis of overaccumulation. Corporations are now awash in cash but they do not have opportunities to profitably invest this cash. Corporate profits surged after the 2008 crisis and have reached near record highs at the same time that corporate investment has declined. In 2017 the largest US-based companies were sitting on an outstanding $1.01 trillion in uninvested cash.

Where can this surplus be unloaded? Robinson reads Trumpism as a far-right response to this crisis which in fact aggravates it, shoring up the system through a nativist mobilisation of the disaffected, but "this repressive neoliberalism ends up further restricting the market and therefore aggravating the underlying crisis of overaccumulation" (80). Accumulation by repression (the war on drugs and the war on terrorism, securitisation, and a militarisation which saw the Pentagon budget increase by 91% in real terms between 1998 and 2011, while defence industry profits quadrupled) is one response to this crisis, which we can expect will be ratcheted up even further by Trumpism. Accumulation by digitalisation is the other outlet, with a transnationalisation of services driven by the platform economy coming to replace the globalisation of production and the financial system of an earlier phase. The growth of the tech sector in this context is described on pg 82:

The tech sector has become a major new outlet for uninvested capital in the face of stagnation. Investment in the IT sector jumped from $17 billion in the 1970s to $175 billion in 1990, then to $496 billion in 2000. It then dipped following the turn-of-century dot-com bust, only to climb up to new heights after 2008, surpassing $700 billion as 2017 drew to a close.

In the process a new class of intermediaries has been empowered, accumulating vast reserves through their data-driven insertion into existing circuits of production and value. The tech giants have world-leading capitalisations, even as the broader tech sector constitutes a relatively small part of the digital economy once you exclude the giants. Its implications for employment have been bleak, creating unstable and low paid work while increasingly threatening a decimation of established occupations through the roll out of automation technologies. Furthermore, tech companies themselves are strikingly small employers, embodied by a billion-dollar data centre built by Apple in North Carolina that employs only 50 full-time staff. Digitalisation intensifies the contradictions of capitalism, ultimately pushing costs down towards zero. If I understand correctly, Robinson argues this leaves it unable to continually absorb surplus capital because its very success erodes that capacity.

His notion of the global police state theorises what happens when “dominant groups turn to applying the new technologies to mass social control and repression in the face of real and potential resistance” as “digitalisation concentrates capital, heightens polarisation, and swells the ranks of surplus labour” (84). A terrifying new range of repressive technologies has been rendered feasible by digitalisation:

The new systems of warfare and repression made possible by more advanced digitalisation include AI powered autonomous weaponry such as unmanned attack and transportation vehicles, robot soldiers, a new generation of 'superdrones' and 'flybots', hypersonic weapons, microwave guns that immobilise, cyber attack and info-warfare, biometric identification, state data mining, and global electronic surveillance that allows for the tracking and control of every movement. State data mining and global electronic surveillance are now expanding the theatre of conflict from active war zones to militarised cities and rural localities around the world. These combine with a restructuring of space that allow for new forms of spatial containment and control of the marginalised. The dual functions of accumulation and social control are played out in the militarisation of civil society and the crossover between the military and the civilian application of these advanced weapons, tracking, security, and surveillance systems. (84-85)

Investment in and deployment of these emerging repressive technologies provides a new vector through which accumulation can take place. A whole range of operations can be encompassed by this, from anti-crime sweeps and humanitarian missions through to drug enforcement operations and low or high intensity wars. It left me thinking of Nervous States by Will Davies and the significance of the eroding distinction between war and peace. It is inarguable that the global security sector is flourishing, ranging from arms manufacturers through to private military and security firms which now employ over 15 million people.

His terrifying suggestion is that the “Global police state and the rise of the digital economy appear to fuse three fractions of capital around a combined process of financial speculation and militarised accumulation into which the TCC is unloading billions of dollars in surplus accumulated capital” (86): financial capital supplies the direct and indirect investment, big tech develops and implements the technologies, the military-industrial-security complex applies these technologies through militarised accumulation. This extends from military conflict through to the spiralling armies of guard labour and ubiquitous private security systems. There is a propaganda component to this, with over 800 major films and 1000 television shows from 2005 to 2016 being influenced by US military and intelligence agencies in order to legitimate these operations and their targets. This is his account of the core contradiction at work, from pg 87:

There is a dangerous spiral here in the contradiction between a digitalisation that throws ever-more workers into the ranks of surplus humanity and the need for the system to unload ever-greater amounts of accumulated surplus. Once masses of people are no longer needed on a long-term and even permanent basis there arises the political problem of how to control this expanded mass of surplus humanity. Greater discipline is required, both for those who manage to secure work under new regimes of precarious employment and super-exploitation, and for those expelled and made surplus. The entire social order becomes surveilled.

Digitalisation renders workers redundant and controlling them in their redundancy offers a solution to the problem of overaccumulation that digitalisation has compounded. He suggests that nascent fascisms need to be understood as a preemptive strike at the working class against a backdrop of ever escalating tensions. There is a growing concern for the coercive exclusion of surplus humanity in lieu of a capacity or willingness to secure legitimacy (pg 88). Fascist movements are displacing the anxiety of downwardly mobile but historically privileged sectors of the global working class towards scapegoated communities presented as outside and threatening. The reality of Trumpism has been a neoliberalism on steroids only likely to accelerate the underlying downward mobility and anxiety.

My notes on Lichterman, P. (2017). On Social Theory Now: Communicating Theory Now. Perspectives, 39(2).

In this response to Social Theory Now, Paul Lichterman offers a compelling vision of social theory as sociology's meta-conversation, with communicating theory being "to keep track of and facilitate that conversation, treating it as always in movement". It is a sprawling conversation about the conceptual terms we use to articulate empirical research, linking together the particular subfields within which theories are generated in a topology of the discipline as a whole. Facilitating that conversation involves a kind of "temperature-taking", "assessing where we are in the various sub-conversations, rather than a statement about which theories best reflect our historical era, or which theories are currently the best contenders for sociological immortality". He contrasts this dialogical approach with theorising as transmission:

Transmissive theorizing starts with a large conceptual framework, and promotes it, applies it, passes it down with improvements or at least updates. I'm contrasting that with this book's version of communicating theory — which I will call "dialogue." Dialogical theorizing propounds questions, and a few central concepts such as "culture" or "gender." It sustains questions and central concepts, more than sustaining master theorists or distinct schools as ends in themselves. In transmissive theorizing, the theorist or school is exalted. In dialogical theorizing, the theorist or school is... consulted.

It is an overdrawn distinction but it’s an important one which captures the essence of my discomfort with critical realism, which I think suffers from being institutionally locked into a transmissive mode. Transmission gets in the way of “minding the conversation, recognizing its limits, checking out the rest of the party”. It is ill suited to the reality of contemporary social theory, consisting of “relatively porous conversations, where participants invite new participants now and then, rather than a world of masters, and apprentices working their way in”. Critical realism is far from alone in being transmissive but it is a powerful exemplar of this mode of theorising.

He ends with an interesting discussion of vision questions: "the big normative questions that help us envision a society that is—more democratic (Habermas, or Dewey), more self-understanding (Shils), more radically democratic (Mouffe, Seidman), not to mention more solidary, more rational, or less alienating, to invoke the big three". If I understand correctly, he's claiming that these vision questions tend to be baked into theorising in the transmissive mode, locked within schools to be accepted or resisted as part of a whole. But could they not be better integrated into dialogue between subfields in a way which renders them autonomous from schools? Can social theorising involve "semi-autonomous, conversational room for explicit communication about vision questions and how they relate to concepts in subfields"? He suggests public sociology and civic sociology as contributing to this process. Could a broader dialogical approach to social theorising better integrate them?

My notes on Pasquale, F. A. (2018). Tech Platforms and the Knowledge Problem. American Affairs, 2(2)

The most philosophically important aspect of Hayek's work was his epistemological objection to central planning. He argued that the market was indispensable because it permitted distributed knowledge of a sort which a centralised decision maker couldn't possibly hope to reconstruct. In this short paper, Frank Pasquale considers the renewed possibility of centralised planning which emerges when private tech giants have a god's-eye view of consumer and business activity within their domains. As the examples he cites illustrate, this involves business-to-business as well as business-to-consumer insight:

Having logged and analyzed billions of transactions, Amazon knows intimate details about all its customers and suppliers. It can carefully calibrate screen displays to herd buyers toward certain products or shopping practices, or to copy sellers with its own, cheaper, in-house offerings. Mark Zuckerberg aspires to omniscience of consumer desires, by profiling nearly everyone on Facebook, Instagram, and WhatsApp, and then leveraging that data trove to track users across the web and into the real world (via mobile usage and device fingerprinting). You don’t even have to use any of those apps to end up in Facebook/Instagram/WhatsApp files—profiles can be assigned to you. Google’s “database of intentions” is legendary, and antitrust authorities around the world have looked with increasing alarm at its ability to squeeze out rivals from search results once it gains an interest in their lines of business. Google knows not merely what consumers are searching for, but also what other businesses are searching, buying, emailing, planning—a truly unparalleled matching of data-processing capacity to raw communication flows.

There is a parallel integration taking place in finance and insurance, which increasingly operate in the manner of platforms. As he writes, "finance and insurance firms not only act as middlemen, taking a cut of transactions, but also aspire to capitalize on the knowledge they have gained from monitoring customers and providers in order to supplant them and directly provide services and investment".

We now confront "a hierarchical, centralized regime, in which corporate power is immense, and in which large national apparatuses of regulation seem to be the only entities capable of reining it in". What he calls populist localizers, the Jeffersonians, want a new era of antitrust to break up the tech giants, whose concentration they see as impeding new entrants and further technological innovation. In contrast, the Hamiltonians argue that scale is essential to technological development (e.g. machine learning) and that we simply need to update our regulatory frameworks to take account of these new developments. Pasquale's suggestion that interoperability standards could address the Jeffersonians' concern is an interesting one, e.g. by allowing diverse social networks to interconnect in the manner of mobile telephone networks.
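To make the interoperability point concrete, here is a minimal sketch; the Message format and Network class are invented for illustration, not drawn from Pasquale:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str      # e.g. "alice@networkA"
    recipient: str   # e.g. "bob@networkB"
    body: str

class Network:
    """An independently operated platform honouring the shared standard."""
    def __init__(self, name: str):
        self.name = name
        self.inboxes = {}

    def deliver(self, msg: Message) -> None:
        self.inboxes.setdefault(msg.recipient, []).append(msg)

# Because both platforms accept the same Message format, a user on one can
# reach a user on the other, as with calls across rival phone carriers.
network_b = Network("B")
network_b.deliver(Message("alice@networkA", "bob@networkB", "hello across platforms"))
print(network_b.inboxes["bob@networkB"][0].body)
```

The substantive point is that rivals need only agree on the interface, not merge their operations.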

This doesn't follow a left/right divide. As he notes, a critic like Evgeny Morozov frames tech giants as natural monopolies which "get better and better at each task they take on when they have access to more and more pooled data from all the tasks they perform". This is a reason to socialise them rather than break them up into smaller fragments which couldn't come close to replicating their functionality at scale. In contrast, the Jeffersonians exist within the boundaries of contemporary statism, calling on the Federal Trade Commission to break up a firm like Facebook. For Jeffersonians the concentration of data looks like a private data monopoly. For Hamiltonians it looks like a necessary measure to secure the data and mitigate the risks it generates. Understanding the basis of this disagreement rests on what a platform is:

The largest, most successful firms of digital capitalism tend to serve as platforms, ranking and rating other entities rather than directly providing goods and services. This strategy enables the platform to outsource risk to vendors and consumers, while it reliably collects a cut from each transaction. Just as a financial intermediary profits from transaction fees, regardless of whether particular investments soar or sour, the platform pockets revenues on the front end, regardless of the quality of the relationships it brokers.

This casts them in the role of policing the platform and adjudicating disputes, described by Pasquale as functional sovereignty: a private firm taking on functions previously confined to the nation state. They tend to assume these issues can be resolved through automation and generally take an absentee approach to what they manage. But this doesn't matter under current antitrust law, which seeks to protect competition, not competitors. The only concern is whether prices are going up or down. As Pasquale explains, this short-termism fails to take account of how superior offerings might have been in existence were it not for the competitive advantage of big tech:

To see the practical effects of this obsession with the short-term, imagine searching for “weather” in Google, and instantly seeing its own weather forecast filling your mobile screen. Had it linked to three forecasting sites in that precious screen space, it might have directed more exposure and advertising revenue to sites with diverse interfaces, more or less information, or other variations. For example, the site WeatherSpark used to give a beautifully precise image of storms’ movement over time—the perfect visual analogue to AccuWeather’s minute-by-minute forecasts of rain or clear skies. But WeatherSpark no longer offers that service, and who knows how many other start-ups gave up on occupying this space. To establishment antitrust authorities, there is no ground to intervene—consumers get the basics of weather from Google’s interface, and it is free. It’s a short-termist outlook that omits long-run considerations in the name of a presentist scientism. In their worldview, there is no room for argument about whether better or worse alternatives do or should exist. Antitrust is supposed to protect “competition, not competitors”—and a singular lack of concern for quality translates into profound lack of interest in whether current or future competitors could do a better job than a digital behemoth. But how can we know if there is competition, if there are no competitors to provide it?

For Hamiltonians the solution is not breaking up these firms but treating them as public utilities which can facilitate others. Regulators accepted the massiveness of power generation and phone networks, but insisted on a countervailing power which could control their operations. In fact the centralisation of big tech could even be a beneficial thing in the move towards taking public control:

In a recent podcast, the socialists of Chapo Trap House joked that they were happy to see Amazon consolidate power. Once it takes over every business in the country, it will be easy to “cut off the head” and simply impose government control over the economy. “Free Whole Foods hot bar for everyone!” was the imagined denouement. Similarly, if all the private health insurers in the United States merged, the stage would finally be set for “single payer”: the government need only take over the one insurer left standing.

The Hamiltonian vision "can be the economic equivalent of geoengineering—an embrace of the radically new and large-scale, arising out of the sense that inequalities and climate change are such massive problems that only rapid technological advances can solve them". This contrasts with the precautionary principle of the Jeffersonians, who question "whether any entity should accumulate the power necessary to, say, compare everyone's genomes, convert millions of workers' movements to patterns of behavior programmable in robotics, or maintain social credit scores on all citizens". Interestingly, Pasquale places some of the blame on the expectations of investors:

Investors demand a fantasy of monopolization: that their firm not merely occupy a field, but develop “moats” against entrants in order to guarantee both present returns and future growth. The day-to-day reality of operational budget constraints, however, pushes the same firms toward the pathologies of absentee ownership.

My notes on Strathern, M., & Latimer, J. (2019). A conversation. The Sociological Review, 67(2), 481–496. https://doi.org/10.1177/0038026119832424

In this interesting conversation with Marilyn Strathern, whom I had the pleasure to meet when Jana Bacevic organised a masterclass with her at our department, Joanna Latimer explores the act of writing and the influence Strathern's writing has had on her own. Joanna explains her experience of how Strathern's writing "has this kind of extraordinary way of entering into one" such that "your parts become my own, and then I discover I can't think without your parts". As Strathern explains, her writing is intensely conversational, even if the reader might not be aware of exactly who she is having the conversation with:

And it may be that this sense of always being in conversation contributes to that. There’s an ethical side to it, and of course when I was doing my work on intellectual property I sort of touched on it, which is that, you know, nothing actually ever sprang from Zeus’s head fully formed. I mean one is in debt, one is incredibly in debt, one is always taking what other people have done, whether one knows it or not. It’s not always that I have a particular person in mind, or I’m writing for people who’ve provided me with the means to do so. Rather, you stand on, stand on the shoulders of giants and all the rest of it. I’m very conscious, that one is just simply turning the soil until the next person comes along. So there’s that aspect. There’s also the intellectual chase that one gets into, getting into somebody’s argument. It does its work, it sparks you off, and you really want to pull it apart or you want to put it back together again or you want to take bits out. There are things that you think you could do otherwise. And so forth. And that’s very often in relation to specific arguments.

It is writing which seeks to "turn your reader over", as Joanna puts it, by upending the conventional and the assumed. Marilyn describes her object as "recurrent habits of thought people just get into, time and again", some of which provoke "real anger, I mean I'm cross". It left me with a strong sense of the intimacy of writing, almost as a vector of entanglement through which the concerns of the writer spill over their boundaries and into the reader. There's a really interesting section connected to this about Marilyn's preference for the word person over terms like identity or individual. The latter are bound into an imaginary which needs to be critiqued, and other choices create the opportunity to get out from under them:

Person is a term that I get from orthodox classical British social anthropology. A person is a social configuration. It’s always a relational construct. It doesn’t have the [vernacular] implications of individuality that identity has. I think that’s where the preference is. […] But because person is slightly unusual in English, after all we do use it, everyone knows what we mean, and there are contexts where we use it on an everyday basis – like ‘a person in their own right’ – but actually we don’t use it as much as we would use the word individual for example, or human being, or whatever. Slightly unusual. And it tends to be in legal language, doesn’t it? Person of no fixed abode. Whereas we’d [ordinarily] say man or woman, or whatever.

There’s a micro-podcast here in which I respond to Joanna Latimer’s presentation of an early version of this paper at a workshop last year. My talk is at 40 mins:

My notes on What image types do universities post online?

Twitter has become a mainstream activity for universities in the UK and the US, with most institutions now having a presence. The platform has taken an image-based turn over the last few years, in common with social media more broadly, since native photo sharing was introduced in 2011 and Twitpic et al vanished. This presents us with a question: what types of images do universities tweet? Emma Stuart, Mike Thelwall and David Stuart analyse the use of images by university Twitter feeds in the UK and consider what this can tell us about how universities see the platform and how they seek to relate to the audiences found through it.

This Twitter activity is connected to rising competition, as universities compete against each other to increase enrolment following the reduction of government support. Social media offers a means for universities to differentiate themselves, including through the use of images which express a visual identity. Platforms differ in what they offer for this. As Stuart et al observe, Instagram images tend to "focus more on the aesthetics of individual images, whereas images on Twitter tend to supplement or complement the text of a tweet".

Their study is a companion to a 2016 investigation in which 51 UK universities (out of 128 with multiple units of assessment in REF 2014) were found to have an Instagram account. It focuses on the Twitter presence of the same 51 in order to facilitate comparison. A random sample of 20 images was taken from a date range overlapping with Instagram activity (I presume for each university) to produce a final sample of 1,020 images. They undertook a content analysis using a coding scheme developed in McNely's (2012) study of Instagram use within organisations, given below; a toy sketch of the tallying step follows the list. Images were classified based on their content, accompanying text and the interaction they generated.

  1. Orientating: “The primary focus of the image is of specific and unique university (and university associated) locations, landmarks, or artefacts (e.g., buildings/public areas/statues/university affiliated objects)” (4.8% of Twitter images, 14.3% of Instagram images)
  2. Humanising: “The primary focus of the image is of things that add more of a human character or element of warmth/humour/or amusement to the university’s identity” (20.9% of Twitter images, 31% of Instagram images)
  3. Interacting: “The primary focus of the image is centered around people interacting at university (and university associated) events rather than people merely posing for a staged photograph” (2.1% of Twitter images, 5.7% of Instagram images) 
  4. Placemaking: “The primary focus of the image is concerned with the university ‘placing’ their identity within locations or events” (2.7% of Twitter images, 12.8% of Instagram images)
  5. Showcasing: highlighting some event, success, course, service or product of the university (61% of Twitter images, 28.8% of Instagram images)
  6. Crowdsourcing: “The primary purpose of the image is that it has been posted with the intention of generating feedback, interaction, engagement, and online interaction with viewers/followers” (7.7% of Twitter images, 7.5% of Instagram images) 
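
Purely to make the method concrete, here is a minimal sketch of how such per-university sampling and category tallying might be implemented. It is my own illustration rather than the authors’ pipeline, and the image lists, seed and example codes are all hypothetical.

```python
import random
from collections import Counter

# The six McNely (2012) categories used by Stuart et al.
CATEGORIES = ["orientating", "humanising", "interacting",
              "placemaking", "showcasing", "crowdsourcing"]

def sample_images(images_by_university, n=20, seed=0):
    """Draw a random sample of up to n tweeted images per university
    (51 universities x 20 images = the 1,020-image sample)."""
    rng = random.Random(seed)
    return {uni: rng.sample(imgs, min(n, len(imgs)))
            for uni, imgs in images_by_university.items()}

def tally(codes):
    """Turn manually assigned category codes into percentages."""
    counts = Counter(c for c in codes if c in CATEGORIES)
    total = sum(counts.values())
    if total == 0:
        return {}
    return {cat: round(100 * counts[cat] / total, 1) for cat in CATEGORIES}

# Hypothetical usage: in the study the codes come from human coders.
codes = ["showcasing", "humanising", "showcasing", "orientating"]
print(tally(codes))  # showcasing: 50.0, humanising: 25.0, orientating: 25.0, rest 0.0
```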

They found that 41.8% of images had no retweets, with an average of 2.7 retweets per image. It was interesting that showcasing images (the most popular type) were significantly more likely to be retweeted than humanising ones (the second most popular type), but I wonder how much of each can be explained in terms of staff and students at the university retweeting an expression of support or loyalty, rather than an endorsement from those outside the institution. They found far more Twitter images than Instagram images overall from the time period under investigation (7,583 to 3,615), yet a few universities shared more images on Instagram. Does this suggest the influence of an Instagram enthusiast on a university’s comms team? They suggest the discrepancy has its roots in the norm of posting less on Instagram, the service being newer and the restrictions on how one can post to it.

They suggest the popularity of showcase images on Twitter accords with it being an information source rather than a networking tool. The two most popular categories, humanising and showcasing, seem to be externally orientated towards potential students. Interestingly, they suggest that not only might universities benefit from posting more of the other categories, doing so “could be aligned with the practice of content curation, whereby the staff member(s) in charge of the Twitter account would specifically attempt to highlight a range of interesting and meaningful content that they think would appeal to their followers”.

My notes on Mirowski, P. (2018). The future(s) of open science. Social Studies of Science, 48(2), 171-203.

In this provocative paper, Philip Mirowski takes issue with the “taken-for-granted premise that modern science is in crying need of top-to-bottom restructuring and reform” which underpins much of the open science movement, as well as its tendency to obscure the key questions of the sense in which science was ever closed and who is now intent on opening it up (pg 172). Doing so runs contrary to a popular teleology in which a fixed scientific method is now being forced open by the inherent promise of digital technology. If we instead treat science historically, with distinct periods defined by specific orientations, it becomes possible to see that “the open science movement is an artefact of the current neoliberal regime of science, one that reconfigures both the institutions and the nature of knowledge so as to better conform to market imperatives” (pg 172).

Doing so cuts through the semantic ambiguity of openness, which allows distinct phenomena (open access, open data, citizen science, different formats for publication, etc.) to coalesce in a quasi-unified way and makes it possible for advocates to slide between these various expressions of an open science which is rarely, if ever, precisely defined as an integrated project. He argues that this new regime combines an ethos of radical collaboration with the infrastructure of platform capitalism. Its moral force rests upon a whole range of indictments of modern science:

  1. Distrust of science is rampant in the general population: he takes issue in an interesting way with the assumption that more contact with scientists and more exposure to the practice of science will reverse this trend. Could it not do the opposite by personalising science through the mechanisms of blogging and social media, making it even harder to convince the sceptical that it’s a disinterested pursuit? The precise form this scepticism takes varies (Mirowski’s example of educated neoliberals who believe scientists need to feel market discipline before they can be trusted was particularly striking) but it’s a broad trend which can’t be wished away as a product of a reversible ignorance. This section reminded me a lot of the arguments Will Davies makes in Nervous States about the evisceration of representation as intermediaries are no longer trusted to act impersonally.
  2. Science suffers a democracy deficit: he suggests this fails to recognise how ‘science’ and ‘democracy’ have both been transformed since figures like Dewey first made this argument in the early 20th century. The freedom of scientists, won in relation to a military-industrial complex in which they were embedded, came at the cost of the freedom of the public to influence science. The former apparatus has given way to a market complex such that “science has been recast as a primarily commercial endeavor distributed widely across many different corporate entities and organizations, and not confined to disciplinary or academic boundaries” (pg 176). What it is taken to mean to democratise science has changed radically in this context, reducing it to a ‘scripted participation’ (citizen social science) in the research process as part of an extended marketplace of ideas, as opposed to meaningful participation in the governance of science. In fact I wonder if populist attacks on ‘wasteful research’ and ‘mickey mouse subjects’ should be interpreted as a (pathological) attempt to democratise science? He is scathing about equating “a greater quantity of people enrolled in minor (and unremunerated) support roles with a higher degree of democratic participation, when, in fact, they primarily serve as the passive reserve army of labor in the marketplace of ideas” (pg 177).
  3. The slowdown in scientific productivity: the promise suggested in open science to counteract a fall in actionable scientific outcomes (if I’ve glossed that correctly?) is belied by the form which openness takes within the regime of knowledge production found within commercial scientific research. If I understand him correctly, he’s saying that the organisational apparatus of contemporary science can’t support the openness advocated (e.g. intellectual property restrictions get in the way, the proletarianised condition of bench scientists within commercial organisations) and the “stunted and shriveled” openness it can support doesn’t seem to work anyway. Though I’m not sure I’ve interpreted this section correctly.
  4. The explosion of retractions and the falling rate of falsification: many epistemic problems are ascribed by advocates of openness to the perverse incentives of the existing journal system. These problems can be seen most dramatically in the huge growth of retractions by journals of work which had passed the peer review process, with Retraction Watch currently identifying 600-700 retractions per year. A parallel problem is the bias against publishing falsifications in favour of positive additions to the knowledge system. The hope has been that the shift to a different business model might solve both problems.

If I understand correctly, his point is that a focus upon the deficiencies of science imputes to scientific practice what has its origins elsewhere. He offers a powerful indictment of the role of neoliberalism in producing the pathologies of contemporary science, listed on pg 188. But it’s unclear to me why this is either/or, because the criticisms which open science advocates raise could themselves be outgrowths of neoliberalism’s influence. The point can be overstressed: in some cases there is an active misdiagnosis, correctly identified in his appraisal of these critiques, but such cases are not universal and he seemingly misses the possibility of both/and:

The ailments and crises of modern science described in this paper were largely brought about by neoliberal initiatives in the first place. First off, it was neoliberal think tanks that first stoked the fires of science distrust amongst the populace that have led to the current predicament, a fact brought to our attention by Oreskes and Conway (2011), among others. It was neoliberals who provided the justification for the strengthening of intellectual property; it was neoliberals who drove a wedge between state funding of research and state provision of findings of universities for the public good; it was neoliberal administrators who began to fragment the university into ‘cash cows’ and loss leader disciplines; it was neoliberal corporate officers who sought to wrest clinical trials away from academic health centers and towards contract research organizations to better control the disclosure or nondisclosure of the data generated. In some universities, students now have to sign nondisclosure agreements if they want initiation into the mysteries of faculty startups. It is no longer a matter of what you know; rather, success these days is your ability to position yourself with regard to the gatekeepers of what is known. Knowledge is everywhere hedged round with walls, legal prohibitions, and high market barriers breached only by those blessed with riches required to be enrolled into the elect circles of modern science. Further, belief in the Market as the ultimate arbiter of truth has served to loosen the fetters of more conscious vetting of knowledge through promulgation of negative results and the need to reprise research protocols.

But he’s certainly correct that these overstatements legitimise platform initiatives which aim to reengineer science from the bottom up. The apparent diversity of these spaces is likely to decline over time, as a few platforms come to dominate. This opens up the worrying possibility that “Google or some similar corporate entity or some state-supported public/private partnership will come along with its deep pockets, and integrate each segment into one grand proprietary Science 2.0 platform” (pg 190). This platformization is likely to have unintended consequences, such as rendering science an individualised pursuit (he cites ORCID ID as an example of this – unfairly?) and setting up data repositories to fail if they are insufficiently successful in attracting the data donors on whom their ultimate viability will depend.

He correctly identifies these platforms as facilitating a form of managerless control, but I have an issue with the claim that “one automatically learns to internalize these seemingly objective market-like valuations, and to abjure (say) a tenacious belief in a set of ideas, or a particular research program” (pg 191). How automatic is the process really? If he means it as a shorthand to say that it tends to happen to most users over time, then I withdraw my objection. But if it happens in different ways and to different degrees, we need to open up the black box of automaticity in order to see what causal mechanisms are operating within it.

He closes the paper by concretely laying out his case for why the platformization of science is a neoliberal process. Firstly, it breaks up the research process into distinct segments which permit of rationalisation. Secondly, the focus upon radical collaboration gradually subsumes the author into the collaboration, in apparent contradiction of his earlier point about the individualisation of science. Thirdly, the openness for the user goes hand in hand with an opaque surveillance by the platform provider, with monetisation assumed to follow further down the line. The most interesting part of this paper is its description of the ambition to build a unified platform portfolio (mega platform?) for research, and how this fits into the longer-term strategy of publishers. There’s a lot to think about here and I suspect this is a paper I will come back to multiple times.

My notes on Liboiron, M., Tironi, M., & Calvillo, N. (2018). Toxic politics: Acting in a permanently polluted world. Social Studies of Science, 48(3), 331-349.

The authors of this paper take “a permanently polluted world” as their starting point. It is one where toxicity is ubiquitous, even if unevenly distributed. Unfortunately, “[t]he tonnage, ubiquity and longevity of industrial chemicals and their inextricable presence in living systems means that traditional models of action against toxicants such as clean up, avoidance, or antidote are anachronistic approaches to change” (pg 332). This pervasiveness is such that we need to move beyond the traditional repertoire of management (separation, containment, clean up, immunisation), which is premised on a return to purity while depoliticising the production of that toxicity by treating it as a technical problem to be managed. In doing so, we can begin to see how toxic harm can work to maintain systems, rather than being a pathology which ensues from systemic failure.

There is conceptual work required if we are to grasp the politics of toxicity, encompassing how we conceptualise toxic harm, provide evidence for it, formulate responses to it and grasp the interests reflected in its production and management. This involves rejecting a view of toxicity as “wayward particles behaving badly” (pg 333). As they explain on pg 334, toxicity is relational:

Toxicity is a way to describe a disruption of particular existing orders, collectives, materials and relations. Toxicity and harm, in other words, are not settled categories (Ah-King and Hayward, 2013; Chen, 2012) because what counts as a good and right order is not settled.

They suggest a distinction between toxins and toxicants. The former occur naturally in cells, whereas the latter are “characterized by human creation via industrial processes, compositional heterogeneity, mass tonnage, wide economic production and distribution processes, temporal longevity, both acute and latent effects, and increasing ubiquity in homes, bodies and environments” (pg 334). This includes naturally occurring minerals which are rendered problematic through industrial processes that lead them to occur in specific forms, locations and scales productive of harm.

Laws surrounding toxicants are based upon threshold limits, usually in relation to effects on human bodies. These are supplemented by cost-benefit principles based around the avoidance of ‘excessive costs’ given available technologies. In this sense, the breakdown of order on one level (enabling toxicants to spread because it wouldn’t be ‘feasible’ to prevent it) facilitates the reproduction of order on another level (ensuring viable conditions for the continued reproduction of the commercial sector involved). I really like this insight and it’s one which can be incorporated into the morphogenetic approach in an extremely productive way.

This focus on toxicity enables us to link these levels together, providing a multi-scalar politics of life. There is a temporality to toxicity in which a slow disaster is not easily apprehended. For this reason agents seek to make it legible as an event through actions like photography or protest actions. But this easily gives rise to a politics of representation, seeing the claims of environmentalists as (at best) on a par with the claims of commercial firms. Rendering these processes legible through mechanisms like sensational images can reproduce existing differences between centre and periphery, the heard and the unheard.

Their interest is in modes of action “beyond governance-via-policy, in-the-streets-activism and science-as-usual” (pg 337). I’m not sure what their motivation is for this beyond the drive to “no longer privilege the modern humanist political subject and epistemologies based in claims and counter claims”: are they saying that a narrow politics of evidence and judgement has its corollary in public activism around public issues which have been established evidentially? I can see the analytical case for trying to get beyond this dichotomy but I’m not sure I see what is at stake politically in doing so. Their interest in actions such as “the everyday, obligatory practices of tending to plants and others as toxic politics that do not necessarily result in scaled-up material change” doesn’t seem politically fruitful to me precisely because of the multi-scalar mode of analysis they offer (pg 341). Why should we challenge “activism as heroic, event-based and coherent” (pg 341)? Again I can see an analytical case for this, even if I disagree with it, but I don’t see what is at stake in this politically. It might be that there are unintended consequences to thinking in terms of ‘effective outcomes’ but the force of this argument rests on an implicit claim about outcomes. Why is it important to “make room in dominant political imaginations for multiple forms of local, low resolution, uneventful, uneven, frustrated, desireful, ethical, appropriated and incommensurate forms of justice” (pg 343)?

My notes on Lawson, T. (2009). Cambridge social ontology: an interview with Tony Lawson. Erasmus Journal for Philosophy and Economics, 2(1), 100-122.

Tony Lawson is a key figure in critical realism, having led the Cambridge Social Ontology Group for over twenty-five years and played a primary role in establishing the International Association for Critical Realism, as well as producing decades of work on social ontology and its relationship to economic thought. He was originally a mathematician, and I was intrigued by this interview’s insight that it was student activism which left him interested in economics, specifically the capacity of economic jargon to get in the way of political discussion. His bewilderment at ubiquitous economic modelling began as soon as he moved into an economics department, leaving him scathing in his critique of those who “are rather pedestrian in their approach to, and often very poor at, mathematics, though seemingly in awe of it, or perhaps in awe of mathematicians” (101). As he puts it, “there are limits to the uses of any specific form of mathematics”, limits which economists seem largely unaware of. The uses and abuses of mathematics have thus been central to his work on social ontology, particularly the character of social reality obscured by techniques which sought no connection with it. This line of argument led him to connect with others in the nascent intellectual movement of critical realism:

I produced stuff criticising economics from an explicitly realist perspective for ten years or so before coming across Roy. At some point, I discovered that a number of us were making similar or anyway related critiques of current social scientific practice, but situated in different disciplines. Margaret Archer was doing it in sociology; Andrew Sayer in human geography, and so on. Roy was doing a similar thing in philosophy and had the philosophical language. Eventually, we all sort of came together picking up especially on Bhaskar’s philosophical language—and the rest of his contribution, of course. (102)

However his interest in social ontology predates philosophical ontology. As he puts it on pg 102, “when I first came into economics at the LSE, my basic concern was that the methods we were taught presupposed a world of a sort very different to the one in which we actually seem to live”. These methods presuppose event regularities (if A then B), atomism (factors which operate uniformly in any context) and a non-processual social reality. The focus of this argument is upon the kind of reality presupposed, features which can be concretely manifested in different ways, as opposed to there being specific claims entailed by specific methods. It is paralleled by the question of what the world must be like for everyday social practices to work in the way that they do.

It follows from this that one can’t build ‘up’ from ontological reasoning into empirical claims and substantive theorising. Its value is rather that it “helps avoid inappropriate reductionist stances and aids explanatory and ethical work” (104). This is why he stresses his primary interest is in ontology rather than critical realism, with the former leading him to the latter rather than being reducible to it. This encompasses philosophical ontology (“the practice of seeking to uncover shared properties of phenomena of a given domain”) and scientific ontology (“to explore the specifics of a phenomenon in a domain”). His work is tied up with the rejection of monism in economic method, described on pg 112:

What I take to be essential to mainstream economics is the insistence that methods of mathematical modelling be everywhere and always employed in economic analysis. I emphasise the word ‘insistence’. It is this insistence that I reject wholesale. I do not, of course, oppose economists using or experimenting with mathematical methods, though I am pessimistic about the likelihood of much insight being so gained. But I am opposed to the insistence that we must all use these, and only these, methods, that the use of these methods constitutes proper economics, that employment and promotion be restricted to those who use only mathematical models, that only modelling methods be taught to students, and so on

The thing I found most interesting about this interview was his account of the Cambridge Social Ontology Group as a form of collective method, responding to the growing impersonality of the Cambridge Realist Workshop on Monday nights. The same people attend each time, with discussion focused around particular topics and continuity from one meeting to the next. The focus of both is on questions rather than answers, though obviously the two cannot be separated. To what extent can this be seen as a method for doing ontology? The prevailing culture of the academy relegates organisation to a peripheral status, but actually there are some fields of inquiry where it can function as a primary method in its own right. Getting this right is getting scholarship right, as opposed to initiating something which simply allows scholarship to be refined or transmitted.

There’s a little aside on pg 107 which doesn’t really fit into the rest of these notes but which I don’t want to forget:

I believe the emphasis on prediction in a world that is clearly open, is ultimately an aberrant form of behaviour that itself requires an explanation, probably a psychological one. In fact I am quite susceptible to the suggestion that, in many cases, the over-concern with prediction is something of a coping mechanism resulting from earlier traumas in life.

My notes on Nash, K. (2018). Neo-liberalisation, universities and the values of bureaucracy. The Sociological Review, 0038026118754780.

It is too easy to frame neoliberalism in institutions as an outcome rather than a project. In this thoughtful paper, Kate Nash explores the space which this recognition opens up, the “competing and contradictory values in the everyday life of public sector organisations” which become apparent when we reject the supposition of “a fit between ideology, policy, political outcomes and practices” (178). Extending market competition into the university doesn’t automatically replace public goods, something which is important to grasp if we want to construct an adequate meso-social account of neoliberalisation. New Public Management, as a theory of administration, might be explicitly opposed to bureaucracy, but it is through a bureaucratic transformation that its tenets are woven into the fabric of an institution like the university. Nash begins her argument by revisiting Weber’s conception of the impartial promise of bureaucracy:

I adopt Weber’s definition of bureaucracy as enacting an ‘ethos of impartiality’, treating individuals as cases according to strict rules of professional and technical expertise. Each person in an organisation should follow correct procedures to guard against making personal judgements; to avoid using the authority of their office to exercise power according to their own personal decisions, whims or alternative values (Du Gay, 2000; Weber, 1948). For Weber, famously, instrumental values, the means rather than the ends, come to predominate in a modern capitalist economy and we are all caught in an ‘iron cage’ of technical evaluations (Beetham, 1987, pp. 60–61; Mommsen, 1989, pp. 109–120). (179)

However it is a mistake, argues Nash, to regard bureaucracy as a totality, framing it as leading to the displacement of all values other than administrative efficiency. Rejecting this view allows us to distinguish between “different kinds of bureaucracy, that which undermines and that which supports education in universities” (179). It allows us to identify the values which marketisation entrenches (entrepreneurship and consumer choice) and find others to protect. The allocation of research funding (through the RAE/REF and individualised competitions) and teaching funding (through the student fees and student loans system) in UK universities reflects the entrenchment of these values. It is against this backdrop that collegiality, drawing on the analysis of Malcolm Waters, becomes interesting:

Collegiality, he argues, is relevant to university life in that, firstly, as academics we understand ourselves to be experts in our different fields, and therefore as possessing insights into knowledge – scientific, of the humanities, of the arts – on which there are no higher authorities. As such, academics have a degree of expert authority; we expect, and to a large degree we maintain, our ability to ‘have the last word’ on what counts as a university education in our specialised disciplines through procedures of peer and student evaluation. Secondly, academics tend to think of the university as a ‘company of equals’. Where knowledge is ultimately what matters, other markers of status, wealth and power must be irrelevant. As Waters puts it, ‘if expertise is paramount, then each member’s area of competence may not be subordinated to other forms of authority’ (Waters, 1989, p. 955). Finally, Waters suggests that the value of ‘consensus’ is a norm of universities: only decisions that have the full support of the collectivity ‘carry the weight of moral authority’ (Waters, 1989, p. 955). (181)

For Waters this is not necessarily a good thing, as collegiality brings closure, i.e. the protection of insiders over outsiders, the defence of existing status against threats to it. This can make it appear to be a form of resistance to marketisation, but the intersection of the two can exacerbate their existing problems, e.g. superstar academics being able to exercise academic autonomy in a collegial mode, while others are left behind to aspire to collegial status (if I understand Nash’s point correctly). The fact that corporatism has displaced collegiality, to use McGettigan’s phrase, doesn’t mean collegiality is a solution to the problem of corporatism.

Even if the rise of audit culture and the end of contractual tenure have dented academic autonomy, there is still an entrenched expectation that we “should be free to research, to publish and to teach ‘the truth’, however inconvenient or troublesome for university administrators, governments and civil servants, without fear of losing our jobs”. This carries the associated expectation that we will develop this by “reading widely, with curiosity, developing capacities to think through different meanings of concepts, challenge fundamental assumptions, and design and use systematic methodologies, as well as to uncover facts through scholarship and empirical research” (182). Meeting this expectation requires temporal autonomy: free time in which nothing is being produced that can easily be registered.

Audit culture on Power’s account threatens this through twin processes: colonisation (transforming an organisation’s values through measuring its activity) and decoupling (the circularity of auditing which has paperwork produced for auditing as its sole object). The assumption underlying this is that “professionals cannot be trusted to do their jobs well; in particular, we cannot be trusted to deliver value for money” (183). However not all bureaucratic work is of this kind, and Nash draws attention to the bureaucratic work we engage in outside of audit, including those activities which support education and resist abuses of collegiality and marketisation. Nash reminds us that “we should not see bureaucracy solely as marketising, nor only as imposed from above” (184). These activities are described by Nash as socialising bureaucracy:

Socialising bureaucracy regularises collegiality in that it helps academics communicate what counts as good teaching and learning, what counts as research and learning that is of academic merit, and what assumptions and biases should not be allowed to make a difference in these judgements. It regulates collegiality in that documents and procedures help set limits on academics’ discretionary judgements. (185).

Against an exclusive focus on marketisation as a threat to education, Nash reminds us of those cases where professional power threatens it, e.g. academics acting in ways that serve their own private interests rather than those of education. The first example she gives is the formalisation of equal treatment, where mechanisms ensure staff and students are assessed on the relevant grounds of academic performance and other criteria are excluded. The contractualisation of learning formalises the reciprocal expectations placed upon teachers and learners, with mechanisms ensuring both parties have a working understanding of how the interaction will proceed.

Socialising bureaucracy in this sense mitigates the pathologies of both collegiality and marketisation. Recognising the critiques which see these mechanisms as killing spontaneity and charisma, Nash asks how we could otherwise secure the value of teaching and learning for everyone in a mass higher education system which has expanded dramatically over recent decades. Nonetheless, distinguishing marketising bureaucracy from socialising bureaucracy is difficult in practice. Both can contribute to the intensification of work and be experienced as destructive of autonomy. Furthermore, one kind of bureaucracy can stimulate the other.

What’s particularly interesting for my purposes is Nash’s analysis of the grey area opened up between the two by intensified competition within and between universities:

It includes dealing with the paperwork associated with the explosion of publishing, showcasing and promotion of academic work – from reviewing articles for journals and book manuscripts and editing journals to organising and publicising conferences and seminars; the bureaucracy of applying for and dealing with funded research, which can mean managing a team; designing, developing and publicising popular programmes and courses; reviewing new programmes for other Departments and universities; acting as external examiner for other universities; and writing references for colleagues and students. In virtually every case, these activities require hours of meetings and emails, as well as filling in forms, and they often require producing online as well as offline materials. In addition, there are also meetings, emails and paperwork associated with running a Department and a university as if it were a business: writing and re-writing ‘business plans’, ‘job descriptions’, ‘programme specifications’, ‘strategies’ to promote research, enhance student experience and so on (188)

It strikes me that social media is part of this grey area, but it is also something through which much of the grey area is inflected, i.e. it is an expectation in itself but also a way of undertaking these other activities. To use an example I talk about a lot: if social media makes it quicker to publicise seminars and conferences, then why do we constantly assume it will be a net drain on our time? This seems like the theoretical framework I’ve been looking for to help make sense of the institutionalisation of social media within the university.

My notes on this report by the Google Transparency Project

There are many reasons to be cautious about the educational ambitions of tech firms. If these firms seem likely to be the dominant actors of the global economy over the coming decades, what will shape the influence they exercise over education? To offer the most concrete example I can think of: if tech firms shape the curriculum for digital citizenship and digital safety, will they present themselves as sources of digital risk? I doubt it, and it’s one of many reasons why their projects and initiatives need to be carefully scrutinised. Capturing the Classroom by the Google Transparency Project is an important contribution to precisely this agenda.

It investigates how technology procurement has been upended in American schools, with “a rigorous and competitive process that carefully weighed factors including cost, usefulness and safeguards on children’s privacy” being radically transformed by Google “directly enlisting teachers to push their products into the classroom”. This has been undertaken through the recruitment of teacher evangelists and the organisation of teaching summits (pg 2), with existing professional development budgets bearing the cost of helping teachers adapt to this new technological infrastructure. It is a process which “focused on teachers and their power to spread the word about Google’s classroom potential—all while bypassing the administrators that typically make decisions about technology and other educational tools” (pg 7). In some cases, the teacher trainers win consultancy contracts with no disclosure terms attached, echoing the established practice of Big Pharma offering paid speaking gigs to doctors in the expectation they act as advocates for their products.

It has also sparked the proliferation of an ecosystem of blogs, resources and consultancies “among educators and administrators looking to cash in on school districts’ technology craze” (pg 12). In some cases, these businesses then work with other tech firms, creating a sustained mobilisation of big tech advocacy within education. Third party firms can place a distance between a teacher and Google, blunting the appearance of a conflict of interest.

The authors draw a contrast with Coke and Pepsi’s ambition to produce customers for life by placing vending machines in every school. They suggest Google have already seen considerable success:

Today, 25 million students worldwide use Google’s Chromebooks at school, 30 million teachers and students use Google Classroom, and more than 80 million people use G Suite for Education. (Pg 2)

The success of their initiatives has inspired other firms to follow their lead, described on pg 5:

Google isn’t the only technology company trying to push its products into the classroom. Microsoft, Amazon and Apple, as well as other device manufacturers and software developers, all have aggressive programs targeted at classrooms. Many, such as Amazon Inspire, Microsoft’s Certified Educator program and Apple’s Distinguished Educator program, take a page directly from Google’s playbook, also courting teachers and administrators with free trips, software and, increasingly, lucrative consulting gigs moonlighting for EdTech companies. (Pg 5)

However they note that Google has a crucial advantage, in that it can offer hardware as a loss leader in a way that its competitors cannot. Many questions remain unanswered about the commercial significance of this, including whether student profiles built up in school are ‘switched on’ when students enter adult life (pg 7).

My notes on Davies, W. (2017). Elites without hierarchies: Intermediaries, ‘agency’ and the super-rich. In Cities and the Super-Rich (pp. 19-38). Palgrave Macmillan, New York.

Who are the super-rich, and what do they want? This is the question with which a thought-provoking paper by Will Davies begins, and it’s one which has preoccupied me in recent years. Our statistical understanding of the super-rich has improved, but this increased knowledge leaves a range of sociological questions which need to be addressed:

What do they want to do with all that money, other than protect it, grow it and pass it on to their children? Do they want political power, and if so, of what kind and to what end? Or do they employ it culturally, to achieve their own modes of Bourdieusian distinction from the other 99.9%? (pg 2)

For a Millsian approach to elites, the question is which political, cultural or military institutions they are gravitating towards in pursuit of power. For the Marxist approach, it’s a question of shared interests, their collective consciousness of them and self-organisation in pursuit of them in relation to other classes, as well as the tools of exploitation leveraged in this process. Davies agrees with Mike Savage that these aren’t necessarily the right questions, summarising his argument that we need to take money seriously as money (rather than assume it is waiting to be converted into power, with the assumption that elites are intrinsically political) and must adequately describe capital before we can theorise it (rather than apply pre-existing categories to incomplete or outdated descriptions of our object).

What is this object? Is it a class? Is it a group? To what extent is it open or closed? To these challenges Davies adds another: “the need to avoid wholesale methodological individualism, while recognising the deeply personal and individualised nature of the relationships and strategies that appear to structure the lives of the super-rich” (pg 3). Piketty’s contribution is to reorientate analysis away from the labour market and towards the family. But this is difficult because knowledge is partial and the super-rich are secretive. In order to address these challenges, Davies suggests we study intermediaries: agents working on behalf of the super-rich who represent their interests. By focusing on agency, in the sense of one party being contracted to represent the interests of another, it is possible to respond to Savage’s challenges and move the study of the super-rich forward.

He draws on Simmel’s account of money as a teleological vacuum, a pure means which extends beyond every possible use to which it can be put, connecting this to the ambitions of the super-rich. Piketty’s insight about the increasing importance of unearned wealth in the economy, as well as Dorling’s recognition of the professional classes now being subsumed into the 99%, yields a sense of the super-rich as breaking away. As he puts it on pg 6, “To break free of the bounds of culture, politics or technological limits becomes a teleology in itself, the same anti-teleology that Simmel identified as the metaphysical nature of money”. This is tied to a phenomenology of valuing money as “a state of arbitrariness, where money can be experienced as perfect liquidity, without friction” and an “extreme form of negative liberty that lacks all normative restraint and relationship only to the future” (pg 16).

The problem of agency is key if we wish to avoid taking this analysis too far, with their insulation depending on the capacity of agents to represent the interests of the super-rich to the wider world. He summarises this as a theoretical approach on pg 8:

In this spirit, I want to propose a theoretical device which may help to shape a sociological approach to the super-rich – principal-agent problems. In particular, I suggest that we can think of the relationship of the super-rich to domains of power, culture and production as a series of principal-agent problems, in which they seek a form of representation which absolves them of the need to become involved in matters of public concern or controversy.

Principal-agent problems rest on the “paranoid methodological individualism” associated with game theory, with the primary challenge being to ensure the agent does not use their position to pursue their own private interests rather than those of the principal they are representing. Interestingly, this is the rationale for stock options for executives, theoretically encouraging them to act in pursuit of shareholder interests by making them shareholders. But as Davies notes, the fact that executive remuneration has risen more quickly than the stock market suggests it actually makes the agency problem worse.
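
To make the incentive logic concrete, here is a toy sketch of the alignment argument; it is entirely my own illustration with invented numbers, not Davies’s model.

```python
# Toy principal-agent illustration (hypothetical numbers): the agent
# (executive) picks whichever action maximises their personal payoff;
# an option grant changes which action that is, in theory aligning
# the agent's interests with the principal's (the shareholders').

def agent_payoff(action, option_shares, salary=1.0):
    """Agent's payoff = salary + private perks + value of option holdings."""
    return salary + action["private_perks"] + option_shares * action["share_value"]

actions = {
    "serve_self":         {"private_perks": 0.8, "share_value": 0.5},
    "serve_shareholders": {"private_perks": 0.0, "share_value": 1.5},
}

for shares in (0.0, 1.0):  # without and with a stock-option grant
    best = max(actions, key=lambda name: agent_payoff(actions[name], shares))
    print(f"option shares = {shares}: agent chooses {best}")
# option shares = 0.0: agent chooses serve_self
# option shares = 1.0: agent chooses serve_shareholders
```

Davies’s caveat maps onto the sketch directly: if the option grant itself becomes the fastest-growing part of remuneration, it is another channel for the agent’s private interest rather than a solution to it.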

This ties into a broader ambiguity about their position, as “symptoms of the deep-lying ambiguity surrounding the corporate form generally, which is neither a piece of private property nor a political association, but flips from one to the other as it suits” (pg 9). Training as professionals has been one solution, but managers lack the monopoly over a specific domain of knowledge typical of professionals, and their connection to the public interest is tentative and contestable. Techniques such as audit and credit rating were introduced to address this ambiguity, but these introduce their own problems of agency, at least if the rating agency is paid by the company it rates.

This sociological reframing of the principal-agent problem “is a particular way of representing the interface of politics and economics” (pg 11). If I understand him correctly, economics is insulated from politics by outsourcing normative evaluation to agents; capital can float free of controversy because the evaluation, justification and debate take place at a distance, through the mediation of ratings agencies, auditors, central bankers and policy makers. It is a form of “moral under-writing – declaring that activities are transparent and trustworthy, sometimes when they are not” (pg 15). The same analysis can be applied to the growth of family offices, whose purpose is to “save super-rich families from having to engage in public situations (getting a child into a school, handling tax, booking a restaurant table, managing property) which may involve any form of antagonism” (pg 11). Whereas professionals once anchored capital in the public sphere, now they facilitate its escape.

He uses this to make the fascinating argument that the super-rich may benefit from further neoliberalisation, but it’s unclear how actively they are supporting it. Agency in this sense allows them to avoid becoming a class-for-itself, highlighting a micro-social disjuncture between the economic and the political which prevailing concepts of ‘neoliberalism’ are unable to capture. As a project it “required considerable solidarity and reflexive self-understanding on the part of capitalists and ideologues themselves, through think tanks, lobbying bodies, political parties, philanthropic networks” (pg 14). But if I understand correctly, its success has eroded the conditions which made this possible while also making it less necessary than was once the case. In its place, we have increasingly complex webs of “non-hierarchical, non-exploitative dyadic contractual relations” (pg 15) which often overlap within super-rich networks in which intermediaries have become full members over the preceding decades. It follows from this that the problem is not wealth corrupting politics, as much as “how wealth is kept entirely separate from politics and public life, through strategic acts of delegation, where the delegate is also a delegator” (pg 15).

My notes on Hudson, M. (2018). Ending technocracy with a neologism? Avivocracy as a conceptual tool. Technology in Society, 55, 136-139.

What does it mean to call someone technocratic? In this intriguing paper, Marc Hudson observes that the term is “thrown about as a term of abuse, but without a clear alternative other than ritual(istic) invocations of the need for citizens to be involved in decision making” (136). The common understanding of the term is clear enough, “derived from Greek words τέχνη, tekhnē meaning skill and κράτος, kratos meaning power”:

Technocracy is commonly understood as a type of governing/administration where everything is built upon self-proclaimed fully rationalised (and ideally evidence-based policy) and methods, in which all decisions and actions claim to be based upon scientific and technological knowledge. (136)

But without an antonym, it’s hard to see what is at stake in using the designation, and it muddies the water rather than clarifying matters. It is a term that almost always has negative connotations and is used near-exclusively by critics of (what they define as) technocracy. A moralistic and pejorative term of this sort is likely to be dismissed by many of those who tend to be defined as technocrats, leading to the ironic state of affairs that it’s a framing which actually empowers those it intends to critique, “because it enables them to dismiss critics as merely moralistic” (136). Hudson therefore seeks an antonym purged of this moralism, able to demoralise claims about sustainability and position them as fully rational alternatives to the status quo. He lays out the case against technocrats on pg 137:

Technocrats are criticised as actors who – by preventing certain ideas, values and their advocates entering the rooms where decisions are being made – institutionalise epistemic injustice, and use ‘practicality’ as an intellectual baton

The core complaints about technocracy are familiar: the hubris of technocrats, their lack of accountability, their depoliticising effects. However this still leaves the question of the antonym of technocracy, with Hudson convincingly arguing that “the term democracy has become so emptied of meaning that on its own it does not act as an adequate antonym to technocracy”, even when qualified as monitory or deliberative (137). He considers a range of other possibilities: Luddism (rejected because of its pervasive, if inaccurate, connotations of technophobia), Holacracy (retaining the impulse towards control but channelling it through self-organised teams rather than a bureaucracy) and Permaculture (building stability through the modelling of natural processes, providing little vantage point from which to problematise technocracy). For this reason he reaches for a neologism:

With the existing possibilities inadequate, what is needed is a word that refers to a form of rule by capturing the importance of acknowledging irreducible uncertainty, ambiguity and uncontrollability, beyond the usual blandishments about a ‘risk society’. A word is needed which espouses cognitive humility, acceptance of limitations (something some policymakers struggle with) – and the need for “clumsy organisations” to deal with wicked problems and super wicked problems. (137)

Avivocracy is intended to capture “the need for an acute awareness of the limitations of our ability not merely to see the world, but to control it” (138). It encompasses “the efforts of reflexive governance, adaptive governance, flux ontology, grassroots resilience, monitory democracy, transitions management (rightly understood) and other ways of advocating reform of existing sclerotic and not fit-for-purpose institutions” (138). Democratising implications follow from this, but they are a consequence rather than a cause, with the weight of avivocracy resting in an orientation to “the permanent, irreducible and escalating uncertainties of twenty-first century human civilisation” (138). If technocracy seeks to control, shut down or transcend these uncertainties, avivocracy seeks to cope with them and grow through them. It is not anti-technological but rather suggests a different orientation towards technology.

My notes on Hashemi, M. (2019). Bedouins of Silicon Valley: A neo-Khaldunian approach to sociology of technology. The Sociological Review. https://doi.org/10.1177/0038026118822823 

This hugely original paper seeks to counteract what Morteza Hashemi sees as an excessive focus on technological development in accounts of Silicon Valley, looking beyond this macro-social (often Schumpeterian) approach to “the evolution of Silicon Valley as a technological, economic and institutional phenomenon” to the micro-social questions which are implicit within it (pg 2). This is undertaken through a contemporary rereading of Ibn Khaldun’s theory, originally applied to the “Bedouin tribes of his day” whose members would “learn to face daily crises without fear” because “[f]ailure to do this would put at stake their very survival” (pg 2). This was part of a hugely complex theory of social change, produced in the fourteenth century, until recently confined to historical work which sought to place it in context but increasingly being taken up by sociologists exploring its contemporary relevance and its capacity to be applied to issues like modern technology and technological innovation.

Ibn Khaldun developed an empirically-orientated social theory which sought to “distinguish between the series of events and their deep meanings, trajectories and recurring patterns” (pg 3) through a rational mentality, a rejection of rhetoric and an empirical examination of events. An important concept was asabiyya (group feeling), which Hashemi notes is often misdefined merely as solidarity. It refers to the “mutual emotional commitment, moral obligation and unity” which, arising from sustained interaction under harsh conditions, “transform[s] a simple interdependency into something more than that”: it is a “social mechanism able to create a powerful and functional unit which can survive and flourish under inhospitable conditions” (pg 4). He outlines on pg 4 the contrast Khaldun drew between the Bedouins and city-dwellers, as well as the social dynamics which flowed from it:

The Bedouins, living in the harsh conditions of the desert, had become both skilled and trained, and their religion magnified their strong asabiyya/group feeling. The city-dwellers, on the other hand, with their secure life inside the city walls were mostly inclined towards a luxurious lifestyle and the delights of civilization. This left their society fragile in the face of the attacks of the hardier Bedouins. The point is that once the Bedouins had conquered the cities and built their own empire they were soon themselves absorbed into the life of the civilized world, thereby losing their outstanding merits and qualities, including the essential element of asabiyya. Hence, they would in their turn be replaced by new tribes of Bedouin conquerors. His estimate was that each dynasty of Bedouin conquerors could survive up to four generations. After the fourth generation of rulers, the former Bedouins would have become so accustomed to the safe, sedentary life as to be in danger of a new invasion by another group of Bedouins.

Over time inherited tradition (which I assume encompasses institutions, as well as beliefs) comes to be relied upon more than the achieved qualities of the group, leaving them ill-equipped to deal with emerging challenges. Hashemi strips away the underlying environmental determinism and retains this core “notion of a cycle in which risk-takers replace risk-avoiders” (pg 4). Training is central to this because it cultivates a certain kind of group with certain kinds of orientations towards risk. It involves the accumulation of aptitudes, which Hashemi notes has affinities with Bourdieu’s concept of habitus. Their difference lies in moments of crisis and rupture, where Bourdieu understood the habitus would fail in its action-guiding capacity. In contrast Khaldun saw crisis as crucial for the development of these aptitudes. As Hashemi elegantly puts it on pg 6, “for Bourdieu the game almost stops when it comes to crisis, for Ibn Khaldun crisis is the very game”: it is the norm rather than the exception.

It is a conception with a collective focus, orientated towards how the group weathers the crisis and how they are changed in the process. If I understand correctly, it’s crucial to note this does not imply unity; some of these effects happen individually, forming group characteristics through aggregation, while remaining a collective process. Drawing on Sloterdijk’s work, Hashemi reads Khaldun as having identified two anthropotechnic systems, corresponding to Khaldun’s distinction between city-dwellers (relying on institutions) and Bedouins (relying on themselves):

The one is the luxurious way of shaping life that entails externalization and outsourcing of some vital skills. The other system is about cultivating those skills and relying on one’s inner abilities. (pg 9)

As he goes on to write on pg 10, Khaldun’s social theory is deeply relevant to a world characterised by risk, ‘disruption’, uncertainty and change:

For Ibn Khaldun, hazard, destruction and catastrophe are not the only results of a crisis. Crises are human-made, but they also make human beings. Crises are training camps. They are the source of construction as well as destruction. In the words of Nietzsche, that which does not kill us, makes us stronger.

He analyses the rise of the geeks in these terms, originally “an underground network of college students, university students and computer scientists who cared about the internet as an open and powerful infrastructure which can fundamentally transform aspects of our life”, bound together by a shared marginalisation and a faith in the transformative possibilities offered by technology (pg 11). There are four elements to Khaldun’s conception of training which we can see in the ascendancy of the geeks: “step-by-step training under conditions of hardship” (toiling in obscurity, in co-working spaces or incubators, without any guarantee of respite), “the power arising from the combination of Bedouin training and a charismatic leader who is an authority behind external law” (the role of the VCs or investors in transforming their fortunes), and risk-taking (the constant risk necessary to avoid being superseded, the source of organisational renewal). I felt it was a shame the paper stopped here, because the real force of this line of argument would be the subsequent cycle of decline and challenge likely to be faced by the now-ascended geeks. But it’s a fantastically original and thought-provoking paper which has left me eagerly anticipating a sequel.

My notes on Rowlands, I., Nicholas, D., Russell, B., Canty, N., & Watkinson, A. (2011). Social media use in the research workflow. Learned Publishing, 24(3), 183-195.

I was fascinated to stumble across this paper from 2011 which I’d somehow managed to miss in the past, reporting on a project funded by Emerald investigating social media use amongst academics. The authors reflect on what they see as a recent change in scholarly attitudes, noting that “[o]nce things change in the digital world they change unbelievably quickly”. As they write elsewhere on pg 183:

Researchers appear to have moved from outright scepticism, to pockets of scepticism to virtually no scepticism at all. Whereas it was cool to rubbish social media three years ago, it now appears to be cool to listen and praise

The research used a survey sent through a number of participating publishers (Wiley, Taylor & Francis, Emerald, Kluwer, and CUP) supplemented by an e-mail to staff across UCL and delegates at the 2010 Charleston Conference. They received 4,012 responses out of nearly 10,000 invitations to participate, including publishers, librarians and university administrators. Responses were received from 215 countries and included 1,923 respondents who were actively using social media. These were compared to a contrast group of 491 researchers who had yet to use social media, with questions raised by the research further explored through a focus group, but the relevant methodological details for either group are confusingly absent from the paper.

They asked about eight categories of tool: social networking; blogging; microblogging; collaborative authoring tools for sharing and editing documents; social tagging and bookmarking; scheduling and meeting tools; conferencing; and image or video sharing. What now seem like the most obvious examples of social media figure relatively marginally amongst their uses: 27% used social networking tools, 23.2% used image or video sharing tools, 14.6% blogged, 9.2% microblogged and 8.9% used social bookmarking services (pg 185). It’s interesting to note that 63% used tools in only one or two of the eight categories they inquired into, with a tiny few using 6 (2.6%), 7 (1%) or 8 (0.7%) (pg 186). Blogging/microblogging and social networking/microblogging were the two most common pairings of tools by researchers. Interestingly, they found that men tended to have a stronger preference for LinkedIn over other social networking services and younger respondents preferred Facebook to LinkedIn.

They note that familiar brands dominate the lists within each category, what they describe as “generic, popular services” on pg 186, and speculate that there might be a future market niche for more specialised tools designed for academics. It’s interesting to theorise about why this might be so: these services are familiar, widely used, easy to pick up, come with an existing social network, and promise access to a much broader audience beyond that network. As they put it later in the paper, these are tools which are “generally very intuitive and require little or no third-party maintenance” (pg 191). Researchers are demonstrably drawn to these tools and, as the authors note on pg 194, “it is worth investing time in these mass market tools as their research colleagues worldwide are committing to the same tools”. They stress this point again in the conclusion: researchers are “largely appropriating generic tools rather than using specialist or custom-built solutions and both publishers and librarians need to adapt to this reality”.

Scientists were the biggest users, something which they suggest can be partly explained by the team structures within which they work. It would be interesting to know whether these relatively minor divergences (e.g. 95% of earth science respondents vs 84% of social science respondents) have since closed as social scientists have ‘caught up’ digitally. Younger respondents were more likely to use microblogging, social tagging, and bookmarking, though the authors caution against age-based interpretations of social media uptake, suggesting that the significant difference is the “passion exhibited for social media by the young” rather than their choice of tools as such (pg 188). It’s important to meet people where they are: it might be more effective, as in their example, to participate in existing Facebook communities than to create one’s own branded spaces.

To make sense of the implications for the research process, they identify eight stages, while noting these are analytical constructs which simplify the messy reality of research: identifying research opportunities, finding collaborators, securing support, reviewing the literature, collecting research data, analysing research data, disseminating findings, and finally managing the research process (pg 190). Their findings provide some reason to believe that social media tends to be used across these stages, rather than being confined to any particular one. Their findings on the perceived benefits amongst these users, presented on pg 192, are also very interesting.

Social media was used to complement existing forms of dissemination, rather than displace them. It was interesting that when it came to perceived obstacles, a lack of clarity over the precise benefits was most pronounced: while many early adopters discovered the benefits “through personal curiosity, and trial and error”, the fact that these weren’t clear to others hindered wider adoption (pg 192).

My notes on Skeggs, B. (2019). The forces that shape us: The entangled vine of gender, race and class. The Sociological Review, 67(1), 28-35.

How do we make sense of the influence of Anthony Giddens? The first page of his Google Scholar profile shows 149,243 citations, with many more to be expected if one were inclined to dig into the long tail of his many publications. He defined the canon for an entire generation of social theorists, offering an account of the ‘founding fathers’ which became a shared reference point. His structuration theory drew together diverse strands in a way which directly and indirectly exercised a great influence over the landscape of social theory for decades. He wrote the best-selling textbook, now in its eighth edition, which has introduced sociology to successive cohorts of A Level students and novice undergraduates. He co-founded Polity Press, which radically reshaped the terrain of social theory and introduced continental philosophy into the Anglophone theoretical mainstream. He was Director of the LSE, one of the leading research universities in the world. He was the architect of the New Labour notion of the ‘third way’, exercising an enormous influence over the self-understanding of that government and its subsequent trajectory. However I find it hard to write this without thinking back to Tony Benn’s observation that “Anthony Giddens just hovers round trying to put an ideological cloak around whatever is being discussed”. A blistering critique from Bev Skeggs in a new paper made me think back to Benn’s comment:

I think sociology lost its critical edge when a nationalist, individualist, presentist analysis was offered by the likes of Giddens and Beck. Sociology became a source of legitimation, not a force of critique. We should never forget that Giddens was an architect of New Labour’s ‘third way’, an apologist for the institutional structures that enabled neoliberal policies to be implemented. Through his publishing enterprises Giddens has saturated sociology with this apologist perspective. Most sociologists encounter Giddens from A-level, often throughout their degrees. Giddens and Beck both proposed the denigration of class as a key unit of analysis for sociologists; yet, analysis of class can only be wilfully ignored by those with enough privilege to do so. The occlusion of attention to the processes, structures and forces that produce class (and gender, race, sexuality), i.e. those of capital, capitalism and colonialism, I would argue, was not a conspiracy but a complacency of the comfortable, a perspective of privilege.

Even if it’s a matter of political gossip, I feel we should take Benn’s remark seriously. To what extent did Giddens move across sectors in pursuit of political influence, and what did this mean for the work he produced? The discursive armoury fashioned in his early 1990s work on late modernity surely provided all the instruments he needed to “put an ideological cloak” around whatever was being discussed in New Labour circles: an epochal, justificatory, exciting framing which lifted discussion out of the quagmire of politics and policy, making it seem as if history was whispering in the ear of those present.

Skeggs supports the call of Satnam Virdee, to which this essay was originally a response at the Undisciplining conference, for an end to this complacency and a return to the critique of ‘progress’, the question of ‘in whose interests?’, the reclamation of an historical frame of reference, and the recognition of over-determination and “the contradictions between race, class and gender”. If we reclaim the past in this way, rejecting what Mike Savage has elsewhere characterised as epochal sociology, it becomes easier to see how it continues on in the present. As Skeggs writes of financialisation and digital capitalism:

Rent seeking is a major form and force of capital value. Just think of digital companies who extract billions per year through rent, e.g. for cloud computing (Amazon), extracting rent through monetizing your personal data (Facebook), extracting rent though monetizing your search data (Google). Rent as profit is now a major force, existing alongside surplus value production from labour. Interest from debt (rent from money lending) is another source of expropriation that continues to expand as capital is reorganized through financialization (Lapavitsas, 2013). And technology labour platforms such as Deliveroo extract rent whilst also exploiting labour, and Uber extracts rent, exploits through labour and also generates interest on debt through car purchase. Connecting expropriation to exploitation is now more easily identified and absolutely necessary to understanding contemporary capitalism, and how it shapes our daily lives.

Classifications ossify; they circulate and undergo institutionalisation, becoming part of the order of things as “they are used by capitalists and their managers over time” and enforced through the actions of the state. As Skeggs cautions, “Never underestimate the power of managers and state officials to enforce difference”. In the absence of a historical understanding, our conceptual apparatus will be ill-equipped to grasp either the present or the future. We lapse into complacency because we lack the tools to see what is urgent, even when it is right in front of our faces. Skeggs offers an evocative description of the analytical and political challenge our present conjuncture poses:

Devices beyond our control or even understanding are giving money and trade a life of their own. The world of finance is heavily invested in high frequency trading, which only algorithms that machine learn understand. Huge investments are made in block chain technology which even fewer people understand. These are the instruments that shape our daily lives, determine whether we can pay our bills, rent, mortgages, whether our national currency stays afloat and whether trading between nations can occur. Alongside deregulated political manipulation of the Brexit kind, there is a huge distribution of wealth upwards enabled by investment vehicles (and for the conspiracy theorists amongst you – Robert Mercer is key to both worlds). Repeating historical legacies, a huge amount of violence is lived by vulnerable populations, designated as disposable and deportable. People struggle to stay alive against militarization, against structural adjustment policies in the Global South and austerity in the Global North.

Recognising how historical conditions “enabled our existence as particular types of potential value, as property, as rent, as the lubricant of social reproduction that enables capital to continue its travels” is crucial if we wish to avoid remaining “entrenched in privileged provincial perspectives”. She ends by asking how sociology got so side-tracked, reflecting on what it is for when so many crucial turnings have been missed:

How did we get so distracted? Why did sociology refuse to engage with the crucial anti-racist analysis of Cultural Studies, from Stuart Hall, Paul Gilroy, Les Back, Erol Lawrence, Hazel Carby? Or the wonderful feminists from history: Catherine Hall, Anna Davin, Carole Dyhouse, Ann McClintock, Vron Ware and many more from History Workshop Journal? What happened to the resistance detailed by the historical studies of power? Do we know about the motley crew? The pirates, the many-headed hydra? The many refusals against becoming surplus and disposable? Or the struggles together as the working class recognizes that divide and rule only benefits those with power, that Satnam identifies. When sociology turned its back on the state, away from education and social policy into the world of legitimation, it lost its traction. All those battles between anti-racism and multiculturalism were overlooked.