I’ve been reading the psychoanalyst Josh Cohen’s Not Working: Why We Have To Stop for the last few days, during a week in which I have been forced to stop by a chest infection which prevented me from making a trip to Sweden I’d been looking forward to for months. It’s a useful time to read the book because my mood this week embodies its core concern, as I realise how ill-equipped I am to stop. The most success I’ve had has been through the narcotising effect of Netflix, surrendering myself to auto-play in order to watch the entirety of You and The People Vs OJ Simpson.

In the process I’ve been struck by how little space I experienced between action out there and compulsion in here, with the former providing a rhythm to my days which the latter obliterates. If I understand Cohen correctly, the problem is the limited character of that rhythm, as we come to find security through a constant motion orientated towards external factors, forever escalating in the demands we place on ourselves and encourage in others. He has a vivid description on loc 119 of the fidgety activity which comes to substitute for rest in this state:

The emblematic image of our culture is the panicky phonechecker, filling in any interval of rest or silence, on the train or at the family dinner table or in bed, with emails and work documents, or else with social media updates, addictive games and viral videos. Nervous distraction provides the only relief from a perpetually incomplete to-do list. Not working becomes at least as tiring and incessant as working. We know we have no choice but to stop, yet doing so makes us so fearful, scornful and guilty.

It occurs to me there’s something of the death drive in binge watching: an embrace of what he describes in his patients as “a wish for the world, or themselves, to dissolve” (loc 162). It’s well known that Netflix see ‘sleep as the enemy’. But I wonder if they also see rest as the enemy, capitalising on the anxieties of inertia in order to ensure a surrender to the algorithmic void in lieu of a winding down and recovery of rhythm.

If the breakdown of an established rhythm is something we increasingly struggle to cope with, how are persuasive technologies capitalising on this and entrenching it in the process? I’m still not well but I’ve deleted Netflix from my iPad, as much as I want to watch the OJ sequel. There can be something profoundly deadening about extended binge watching, even though it entails a degree of sustained immersion which would seem cultivated if performed in relation to other media. 

This is a suggestion which Roger McNamee makes on loc 2041 of Zucked. Note that he’s not suggesting corporate espionage but rather the inference of trends from superficially innocuous data which Amazon have privileged access to as platform provider:

Amazon can use its cloud services business to monitor the growth of potential competitors, though there is little evidence that Amazon has acted on this intelligence the way it has leveraged data about bestselling products in its marketplace.

This is something which Amazon do with increasing frequency on their main shopping site, producing generic versions of popular products that can be retailed more cheaply. In some cases, it is hard to discern you are buying an Amazon product. It’s not corporate espionage, but Amazon’s god’s-eye view of the marketplace, as well as their capacity to privilege their own products in search, gives them an unassailable advantage over competitors.

How might AWS figure into this strategy? Are there any safeguards against it? Where might this go in the longer term? Amazon are depending on research and development taking place elsewhere, whose effectiveness they track through their platform. But what happens if their anti-competitive practice eviscerates this capacity? Would they be able to originate development rather than simply piggybacking on it? I think this will be the key challenge Amazon faces in its increasingly likely transition to unprecedented monopoly.

My notes on Strathern, M., & Latimer, J. (2019). A conversation. The Sociological Review, 67(2), 481–496. https://doi.org/10.1177/0038026119832424

In this interesting conversation with Marilyn Strathern, who I had the pleasure to meet when Jana Bacevic organised a masterclass with her at our department, Joanna Latimer explores the act of writing and the influence Strathern’s writing has had on her own. Joanna explains her experience of how Strathern’s writing “has this kind of extraordinary way of entering into one” such that “your parts become my own, and then I discover I can’t think without your parts”. As Strathern explains, her writing is intensely conversational even if the reader might not be aware of exactly who she is having the conversation with:

And it may be that this sense of always being in conversation contributes to that. There’s an ethical side to it, and of course when I was doing my work on intellectual property I sort of touched on it, which is that, you know, nothing actually ever sprang from Zeus’s head fully formed. I mean one is in debt, one is incredibly in debt, one is always taking what other people have done, whether one knows it or not. It’s not always that I have a particular person in mind, or I’m writing for people who’ve provided me with the means to do so. Rather, you stand on, stand on the shoulders of giants and all the rest of it. I’m very conscious, that one is just simply turning the soil until the next person comes along. So there’s that aspect. There’s also the intellectual chase that one gets into, getting into somebody’s argument. It does its work, it sparks you off, and you really want to pull it apart or you want to put it back together again or you want to take bits out. There are things that you think you could do otherwise. And so forth. And that’s very often in relation to specific arguments.

It is writing which seeks to “turn your reader over”, as Joanna puts it, by upending the conventional and the assumed. Marilyn describes her object as “recurrent habits of thought people just get into, time and again”, some of which provoke “real anger, I mean I’m cross”. It left me with a strong sense of the intimacy of writing, almost as a vector of entanglement through which the concerns of the writer spill over their boundaries and into the reader. There’s a really interesting section connected to this about Marilyn’s preference for the word person over terms like identity or individual. The latter are bound into an imaginary which needs to be critiqued, and other choices create the opportunity to get out from under it:

Person is a term that I get from orthodox classical British social anthropology. A person is a social configuration. It’s always a relational construct. It doesn’t have the [vernacular] implications of individuality that identity has. I think that’s where the preference is. […] But because person is slightly unusual in English, after all we do use it, everyone knows what we mean, and there are contexts where we use it on an everyday basis – like ‘a person in their own right’ – but actually we don’t use it as much as we would use the word individual for example, or human being, or whatever. Slightly unusual. And it tends to be in legal language, doesn’t it? Person of no fixed abode. Whereas we’d [ordinarily] say man or woman, or whatever.

There’s a micro-podcast here in which I respond to Joanna Latimer’s presentation of an early version of this paper at a workshop last year; my talk is at 40 minutes in.

This weekend I went back to my CV for the first time in a year and a half, condensing it down from nine pages into two for a particular application. Any work on it is always a strange and alienating experience. As Barbara Ehrenreich has put it, CVs “should have an odd, disembodied tone, as if [your] life had been lived by some invisible Other” (pg 28). This is even more the case when it comes to condensing what you have done: selecting some things to make the cut while abandoning others, in a process that abstracts even further from why you did these things and what they meant to you. A CV is a uniquely alienated and alienating form of life writing.

It once more made me think about how social media can function as a living CV, documenting the qualities which structure and convention obscure in the formal document. It provides background and context, reconstructing the lived engagements which are lost when they are reduced to a single item in a list. The ensuing living CV isn’t a document which can be scanned in one go, hence the problem when it is treated like this and superficial judgements of someone’s recent posts stand in for a sustained engagement with their online activity. But there’s something interesting here about how we represent ourselves in professional settings, as well as how others interpret those representations.

I’m slowly making my way through Shoshana Zuboff’s The Age of Surveillance Capitalism and I thought I’d benefit from a quick recap of where I’d got to so far. In essence the first part of the book is an account of behavioural surplus: data about user behaviour left over after narrowly technical requirements have been met, which can be leveraged for commercial benefit. Transactional data is produced as a by-product of activity mediated through digital systems, including by firms and actors who existed prior to the nascent tech giants. But Google, as one of the early firms which had no relationship to users other than through that digital system, began to realise the commercial significance of transactional data. They were far from unique in seeing technological and engineering value in this data; even early-stage Yahoo was interested in ‘click trails’ as a way of optimising their directory and finding ideas for new services.

What made Google unique, argues Zuboff, arose from their capacity to build a sustained business model around the insights into future behaviour which could be inferred from past data. This has to be understood against the background of the dot com crash and the institutional pressure which a nascent firm like Google was subject to. The takeaway point is not only that it’s a matter of business model rather than technology, but also the contingency of how this business model came about in the first place. It’s a story about the institutionalisation of behavioural surplus much more than one about the technology which made that institutionalisation possible.
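To make the behavioural surplus idea concrete, here is a minimal sketch of the distinction as I read it. Every field name below is my own invention for illustration, not anything drawn from Google or from Zuboff’s text:

```python
# A minimal sketch of the behavioural surplus distinction: a logged search
# event carries fields the service strictly needs to answer the query, plus
# a residue that can be repurposed for prediction. All field names are
# invented for illustration.

OPERATIONAL_FIELDS = {"query", "results_served"}           # needed to run the service
SURPLUS_FIELDS = {"dwell_time", "click_trail", "location",
                  "time_of_day", "misspellings"}           # left over, commercially exploitable

def split_surplus(event: dict) -> tuple[dict, dict]:
    """Separate what the service needs from what can feed prediction."""
    operational = {k: v for k, v in event.items() if k in OPERATIONAL_FIELDS}
    surplus = {k: v for k, v in event.items() if k in SURPLUS_FIELDS}
    return operational, surplus

event = {"query": "flu symptoms", "results_served": 10, "dwell_time": 42.0,
         "click_trail": [2, 5], "location": "Cambridge", "time_of_day": "02:13",
         "misspellings": ["symtoms"]}
operational, surplus = split_surplus(event)
print(surplus)  # the raw material for prediction products, in Zuboff's terms
```

The point of the sketch is simply that the surplus fields are not needed to serve the user; their value only appears once a business model exists to exploit them.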

My notes on What image types do universities post online?

Twitter has become a mainstream activity for universities in the UK and the US, with most institutions now having a presence. In common with social media more broadly, the platform has taken an image-based turn over the last few years, since native photo sharing was introduced in 2011 and Twitpic et al vanished. This presents us with a question: what types of images do universities tweet? Emma Stuart, Mike Thelwall and David Stuart analyse the use of images by university Twitter feeds in the UK and consider what this can tell us about how universities see the platform and how they seek to relate to the audiences found through it.

This Twitter activity is connected to rising competition, as universities compete against each other to increase enrolment following the reduction of government support. Social media offers a means for universities to differentiate themselves, including through the use of images which express a visual identity. Platforms differ in what they offer for this. As Stuart et al observe, Instagram images tend to “focus more on the aesthetics of individual images, whereas images on Twitter tend to supplement or complement the text of a tweet”.

Their study is a companion to a 2016 investigation in which 51 UK universities (out of 128 with multiple units of assessment in REF 2014) were found to have an Instagram account. It focuses on the Twitter presence of the same 51 in order to facilitate comparison. A random sample of 20 images was taken from a date range overlapping with Instagram activity (I presume for each university) to produce a final sample of 1,020 images. They undertook a content analysis using a coding scheme, given below, developed by McNely (2012) in a previous study of Instagram use within organisations; a toy sketch of the tallying this involves follows the list. Images were classified based on their content, accompanying text and the interaction they generated.

  1. Orientating: “The primary focus of the image is of specific and unique university (and university associated) locations, landmarks, or artefacts (e.g., buildings/public areas/statues/university affiliated objects)” (4.8% of Twitter images, 14.3% of Instagram images)
  2. Humanising: “The primary focus of the image is of things that add more of a human character or element of warmth/humour/or amusement to the university’s identity” (20.9% of Twitter images, 31% of Instagram images)
  3. Interacting: “The primary focus of the image is centered around people interacting at university (and university associated) events rather than people merely posing for a staged photograph” (2.1% of Twitter images, 5.7% of Instagram images) 
  4. Placemaking: “The primary focus of the image is concerned with the university ‘placing’ their identity within locations or events” (2.7% of Twitter images, 12.8% of Instagram images)
  5. Showcasing: highlighting some event, success, course, service or product of the university (61% of Twitter images, 28.8% of Instagram images)
  6. Crowdsourcing: “The primary purpose of the image is that it has been posted with the intention of generating feedback, interaction, engagement, and online interaction with viewers/followers” (7.7% of Twitter images, 7.5% of Instagram images) 
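The reported shares give a sense of how such a content analysis is tabulated. As a toy illustration, a minimal sketch in Python; the coded list is fabricated and only the six category names come from the paper:

```python
from collections import Counter

# A toy version of the tallying involved: each sampled image receives one
# code from McNely's scheme, and percentages are computed over the sample.
CATEGORIES = ["orientating", "humanising", "interacting",
              "placemaking", "showcasing", "crowdsourcing"]

coded_images = ["showcasing", "humanising", "showcasing", "crowdsourcing",
                "showcasing", "orientating", "humanising", "showcasing"]

counts = Counter(coded_images)
total = len(coded_images)
for category in CATEGORIES:
    print(f"{category:>13}: {100 * counts[category] / total:.1f}% of sampled images")
```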

They found that 41.8% of images had no retweets, with an average of 2.7 retweets per image. It was interesting that showcasing images (the most popular type) were significantly more likely to be retweeted than humanising ones (the second most popular type), but I wonder how much of each can be explained by staff and students at the university retweeting as an expression of support or loyalty, rather than as an endorsement from those outside the institution. They found far more Twitter images than Instagram images overall from the time period under investigation (7,583 to 3,615), yet a few universities shared more images on Instagram. Does this suggest the influence of an Instagram enthusiast on a university’s comms team? They suggest the discrepancy has its roots in the norm of posting less on Instagram, the service being newer and the restrictions on how one can post to it.

They suggest the popularity of showcase images on Twitter accords with its role as an information source rather than a networking tool. The two most popular categories of humanising and showcasing seem to be externally orientated towards potential students. Interestingly, they suggest that not only might universities benefit from posting more of the other categories, doing so “could be aligned with the practice of content curation, whereby the staff member(s) in charge of the Twitter account would specifically attempt to highlight a range of interesting and meaningful content that they think would appeal to their followers”.

One of my obsessions in the last year has been how firms seeking to optimise their platforms influence user behaviour in the process. On one level, influencing users in this way is the goal, as real-time data allows continuous optimisation to increase user engagement, i.e. encouraging users to engage with more content, spend longer on the platform and return more frequently. But on another level, the growth hacking methodologies which dominate this activity produce unintended consequences. The approach is described in Roger McNamee’s Zucked on loc 1190:

From late 2012 to 2017, Facebook perfected growth hacking. The company experimented constantly with algorithms, new data types, and small changes in design, measuring everything. Every action a user took gave Facebook a better understanding of that user—and of that user’s friends—enabling the company to make tiny improvements in the “user experience” every day, which is to say they got better at manipulating the attention of users. The goal of growth hacking is to generate more revenue and profits, and at Facebook those metrics blocked out all other considerations.

In terms of mapping the contours of user behaviour, the metrics available to platforms are sophisticated. But from a hermeneutical point of view, they are a remarkably crude instrument. What matters to the growth hacker using these tools is the accumulation of attention, not how it is deployed. In my recent work, I’ve offered the idea of amplification-itis to make sense of where this can lead: a condition in which pursuit of online popularity becomes an end in itself, pursued through an overriding concern with how widely what you share circulates on a platform.
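To illustrate that crudeness, here is a sketch of the growth-hacking loop McNamee describes, reduced to its barest form: expose users to two variants, measure one engagement metric, ship the winner. The numbers are simulated and nothing here reflects any platform’s actual code:

```python
import random

# An illustrative growth-hacking loop: expose users to two variants of a
# design tweak, measure a single crude engagement metric, keep whichever
# wins. All numbers are simulated.
random.seed(1)

def session_minutes(variant: str) -> float:
    """Simulated minutes of attention a user gives one feed variant."""
    base = 12.0 if variant == "A" else 12.6  # pretend B nudges attention slightly
    return max(0.0, random.gauss(base, 4.0))

def run_experiment(n_users: int = 10_000) -> str:
    """Pick the variant with the higher mean engagement, nothing else."""
    mean = {v: sum(session_minutes(v) for _ in range(n_users)) / n_users
            for v in ("A", "B")}
    # The metric is accumulated attention; *how* that attention is spent
    # never enters the decision -- the hermeneutic crudeness noted above.
    return max(mean, key=mean.get)

print("ship variant:", run_experiment())
```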

This is a condition anyone could catch, simply because human agents have a generic concern for social standing. But growth hacking produces conditions which incubate amplification-itis in an unprecedented way and it has spread to the status of a pandemic. Hence Twitter’s recent concern with ‘conversational health’ and the introduction of Terms of Service changes which clamp down on some of the behaviours this condition can give rise to. They’ve recognised the harm which the pursuit of amplification as an end in itself causes, with its tendency to undermine the reasons why people use the platform in the first place. But do they recognise how their own activity has led it to spread as widely and as virulently as it has?

An overview of what’s new in the second edition of Social Media for Academics:

  1. Rewritten line by line, including new ideas, insights, suggestions, references
  2. One huge new chapter (the dark side of social media)
  3. One 50% rewritten chapter
  4. New foreword with lots of substantive content & overview of my approach to SMA
  5. Large new section on podcast, videocasting, live streaming, working with freelancers
  6. Large new section on using social media to help inform and support research projects
  7. New section on hybrid forms of publication
  8. New advice on Facebook groups, Live blogging, planning hashtags
  9. New section on planning an impact strategy
  10. New conversations featured with Dave Beer, Roger Burrows, Petra Boynton, Emma Jackson and Deborah Lupton.
  11. New section on e-mail marketing and newsletters
  12. Extended section on using images and how it can go wrong
  13. Extended section on finding your voice online, including with multimedia projects
  14. Additional recommended reading for every chapter
  15. Somehow consumed more intellectual and emotional energy than the first edition did and is being submitted six months late 😬

There’s an interesting extract in Roger McNamee’s Zucked about how Facebook have strategically reduced the significance of organic reach (i.e. unpaid distribution of content) on the platform. The promise of being able to communicate directly with a vast audience through Facebook pages has been central to the motivation of individuals, networks and organisations who have directed their resources towards engaging on the site. But there has been a steady decline in the percentage of followers who will see a post for free. As he describes on loc 1147-1161:

Every year or so, Facebook would adjust the algorithm to reduce organic reach. The company made its money from advertising, and having convinced millions of organizations to set up shop on the platform, Facebook held all the cards. The biggest beneficiaries of organic reach had no choice but to buy ads to maintain their overall reach. They had invested too much time and had established too much brand equity on Facebook to abandon the platform. Organic reach declined in fits and starts until it finally bottomed at about 1 percent or less. Fortunately, Facebook would periodically introduce a new product—the Facebook Live video service, for example—and give those new products greater organic reach to persuade people like us to use them.

There’s a difficult balance to strike here between nudging people into using paid advertising and obliterating the economic rationale for using Facebook in the first place. Once there is a long-term trajectory of engagement, the effectiveness of the platform will still beat the alternatives even with declining organic reach. For many actors organic reach is what brings them to the platform in the first place, in spite of the fact the business model relies on soliciting advertising income. But will the long, slow decline of organic reach ultimately end with its demise? Almost certainly not, but this tension is at the heart of the business model and the dynamic it gives rise to is extremely subtle.
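A back-of-envelope sketch, with entirely invented numbers, of why declining organic reach nudges pages towards paid advertising: as the organic rate decays towards the 1 percent floor McNamee mentions, a page must buy ever more impressions just to hold its total reach constant:

```python
# Back-of-envelope arithmetic with invented numbers: as the organic rate
# decays towards the ~1% floor, a page must buy ever more impressions to
# hold its total reach constant.
FOLLOWERS = 500_000
TARGET_REACH = 50_000          # impressions the page wants to sustain per post
COST_PER_IMPRESSION = 0.005    # assumed ad price in dollars, purely illustrative

for organic_rate in (0.16, 0.08, 0.04, 0.02, 0.01):
    organic = FOLLOWERS * organic_rate
    paid_needed = max(0, TARGET_REACH - organic)
    print(f"organic {organic_rate:.0%}: {organic:,.0f} free impressions, "
          f"buy {paid_needed:,.0f} more (~${paid_needed * COST_PER_IMPRESSION:,.0f})")
```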

I’m enjoying Zucked by Roger McNamee more than I expected to. What’s useful about his account is the stress it places on the engineering capacity of startups. What makes cloud computing significant is not just enabling growth without major capital investment in infrastructure, but also “eliminating what had previously been a very costly and time-consuming process of upgrading individual PCs and servers” (loc 674). What makes open source software significant is not just the reduced cost of free off-the-shelf components, but the time saved when everything can be built on open source stacks. I’d tended to see these things as a matter of reduced costs and increasing flexibility, without grasping their significance for how limited resources could be deployed within the firm.

Both mean that the engineering capacity of a startup can be directed mainly at “the valuable functionality of their app, rather than building infrastructure from the ground up” (loc 660). This led to the lean startup model and the capacity to go live then iterate in response to what happens. Without this modus operandi and the infrastructure supporting it, would the harvesting and use of what Zuboff calls behavioural surplus have even been able to happen? A little earlier he frames this in terms of an epochal shift in the engineering constraints which tech startups had been subject to:

For the fifty years before 2000, Silicon Valley operated in a world of tight engineering constraints. Engineers never had enough processing power, memory, storage, or bandwidth to do what customers wanted, so they had to make trade-offs. Engineering and software programming in that era rewarded skill and experience. The best engineers and programmers were artists. Just as Facebook came along, however, processing power, memory, storage, and bandwidth went from being engineering limits to turbochargers of growth. The technology industry changed dramatically in less than a decade, but in ways few people recognized. What happened with Facebook and the other internet platforms could not have happened in prior generations of technology.

Under these conditions, it’s much easier to experiment and it’s easier to fail. As he points out, Facebook’s infamous motto Move Fast and Break Things is emblematic of the freedom which decreasing engineering constraints offer a nascent firm. But what does this mean culturally? I think this is a crucial question to explore, both in terms of the firm itself and in terms of the expectations which investors then have of what constitutes adequate growth for a lean startup.

To what extent does the obsessive drive for increased user engagement have its origins in the changing expectations of investors and the challenge of meeting them? McNamee observes it makes investors more inclined to identify what they perceive as losers and kill a startup before it burns through too much money. The rewards on offer for successes are so immense that many failures can be tolerated if they lead to one success.

It also makes it much less onerous for the firm to scale, with important consequences for a nascent corporate culture. Inexperienced staff can be a plus under these circumstances, as long as they have the relevant technical skills. McNamee argues that the success of Facebook’s staffing policy had a big impact on human resources culture in Silicon Valley.

The fascination with the propensity of tech founders to go silent reminds me of how the earliest philosophers were framed as unworldly due to their capacity to go into thought trances. From Roger McNamee’s Zucked, loc 269-284.

This little speech took about two minutes to deliver. What followed was the longest silence I have ever endured in a one-on-one meeting. It probably lasted four or five minutes, but it seemed like forever. Zuck was lost in thought, pantomiming a range of Thinker poses. I have never seen anything like it before or since. It was painful. I felt my fingers involuntarily digging into the upholstered arms of my chair, knuckles white, tension rising to a boiling point. At the three-minute mark, I was ready to scream. Zuck paid me no mind. I imagined thought bubbles over his head, with reams of text rolling past. How long would he go on like this? He was obviously trying to decide if he could trust me. How long would it take? How long could I sit there?

I’ve read similar observations about Musk and Thiel. Even if there might be differences drawn between the reasons for silence, these stories seem to share an interest in that silence as a sign of how the founders are different from others.

One of the most interesting aspects of the Bezos story earlier this month was the insight it offered into the security apparatus he surrounds himself with, particularly his instruction for Gavin De Becker “to proceed with whatever budget he needed to pursue the facts”. There’s something oddly thrilling in reading this, inviting us to imagine that we too had unlimited resources which could be brought to bear on the righting of wrongs. But the fact that such a relationship with this category of service providers exists at all is itself interesting.

De Becker was apparently a leading figure in modernising the security industry, moving it beyond ex-military personnel (who presumably still have a role, at least if descriptions of the entourage Zuckerberg sometimes travels with are accurate) into what appears to be a more ‘full spectrum’ approach to the security of his clients. But even if De Becker seems to be an interesting chap without any real sins, it’s hard not to wonder who else inhabits this space. I’m particularly curious about its murkier and more opaque corners, with the possibility that ‘fixers’ can now be hired in a modernised manner, as if they were providing any other service to the rich and famous.

My notes on Mirowski, P. (2018). The future(s) of open science. Social Studies of Science, 48(2), 171–203.

In this provocative paper, Philip Mirowski takes issue with the “taken-for-granted premise that modern science is in crying need of top-to-bottom restructuring and reform” which underpins much of the open science movement, as well as its tendency to obscure the key question of the sense in which science was ever closed and who is now intent on opening it up (pg 172). Doing so runs contrary to a popular teleology in which a fixed scientific method is now being forced open by the inherent promise of digital technology. If we instead treat science historically, with distinct periods defined by specific orientations, it becomes possible to see that “the open science movement is an artefact of the current neoliberal regime of science, one that reconfigures both the institutions and the nature of knowledge so as to better conform to market imperatives” (pg 172).

Doing so cuts through the semantic ambiguity of openness, which allows distinct phenomena (open access, open data, citizen science, different formats for publication etc.) to coalesce in a quasi-unified way and makes it possible for advocates to slide between these various expressions of an open science which is rarely, if ever, precisely defined as an integrated project. He argues that this new regime combines an ethos of radical collaboration with the infrastructure of platform capitalism. Its moral force rests upon a whole range of indictments of modern science:

  1. Distrust of science is rampant in the general population: he takes issue in an interesting way with the assumption that more contact with scientists and more exposure to the practice of science will reverse this trend. Could it not do the opposite by personalising science through the mechanisms of blogging and social media, making it even harder to convince the sceptical that it’s a disinterested pursuit? The precise form this scepticism takes varies (Mirowski’s example of educated neoliberals who believe scientists need to feel market discipline before they can be trusted was particularly striking) but it’s a broad trend which can’t be wished away as a product of a reversible ignorance. This section reminded me a lot of the arguments Will Davies makes in Nervous States about the evisceration of representation as intermediaries are no longer trusted to act impersonally.
  2. Science suffers a democracy deficit: he suggests this fails to recognise how ‘science’ and ‘democracy’ have both been transformed since figures like Dewey first made this argument in the early 20th century. The freedom of scientists, won in relation to a military-industrial complex in which they were embedded, came at the cost of the freedom of the public to influence science. The former apparatus has given way to a market complex such that “science has been recast as a primarily commercial endeavor distributed widely across many different corporate entities and organizations, and not confined to disciplinary or academic boundaries” (pg 176). What it is taken to mean to democratise science has changed radically in this context, reducing it to a ‘scripted participation’ (citizen social science) in the research process as part of an extended marketplace of ideas, as opposed to meaningful participation in the governance of science. In fact I wonder if populist attacks on ‘wasteful research’ and ‘mickey mouse subjects’ should be interpreted as a (pathological) attempt to democratise science? He is scathing about equating “a greater quantity of people enrolled in minor (and unremunerated) support roles with a higher degree of democratic participation, when, in fact, they primarily serve as the passive reserve army of labor in the marketplace of ideas” (pg 177).
  3. The slowdown in scientific productivity: the promise suggested in open science to counteract a fall in actionable scientific outcomes (if I’ve glossed that correctly?) is belied by the form which openness takes within the regime of knowledge production found within commercial scientific research. If I understand him correctly, he’s saying that the organisational apparatus of contemporary science can’t support the openness advocated (e.g. intellectual property restrictions get in the way, the proletarianised condition of bench scientists within commercial organisations) and the “stunted and shriveled” openness it can support doesn’t seem to work anyway. Though I’m not sure I’ve interpreted this section correctly.
  4. The explosion of retractions and the falling rate of falsification: many epistemic problems are ascribed by advocates of openness to the perverse incentives of the existing journal system. These problems can be seen most dramatically in the huge growth of retractions by journals of work which had passed the peer review process, with Retraction Watch currently identifying 600-700 retractions per year. A parallel problem is the bias against publishing falsifications in favour of positive additions to the knowledge system. The hope has been that the shift to a different business model might solve both problems.

If I understand correctly, his point is that a focus upon the deficiencies of science imputes to scientific practice what has its origins elsewhere. He offers a powerful indictment of the role of neoliberalism in producing the pathologies of contemporary science, listed on pg 188. But it’s unclear to me why this is either/or, because the criticisms which open science advocates raise could be outgrowths of neoliberalism’s influence. The point can be overstressed: in some cases his appraisal of these critiques correctly identifies an active misdiagnosis, but such cases are not universal and he seemingly misses the possibility of both/and:

The ailments and crises of modern science described in this paper were largely brought about by neoliberal initiatives in the first place. First off, it was neoliberal think tanks that first stoked the fires of science distrust amongst the populace that have led to the current predicament, a fact brought to our attention by Oreskes and Conway (2011), among others. It was neoliberals who provided the justification for the strengthening of intellectual property; it was neoliberals who drove a wedge between state funding of research and state provision of findings of universities for the public good; it was neoliberal administrators who began to fragment the university into ‘cash cows’ and loss leader disciplines; it was neoliberal corporate officers who sought to wrest clinical trials away from academic health centers and towards contract research organizations to better control the disclosure or nondisclosure of the data generated. In some universities, students now have to sign nondisclosure agreements if they want initiation into the mysteries of faculty startups. It is no longer a matter of what you know; rather, success these days is your ability to position yourself with regard to the gatekeepers of what is known. Knowledge is everywhere hedged round with walls, legal prohibitions, and high market barriers breached only by those blessed with riches required to be enrolled into the elect circles of modern science. Further, belief in the Market as the ultimate arbiter of truth has served to loosen the fetters of more conscious vetting of knowledge through promulgation of negative results and the need to reprise research protocols.

But he’s certainly correct that these overstatements legitimise platform initiatives which aim to reengineer science from the bottom up. The apparent diversity of this space is likely to decline over time, as a few platforms come to dominate. This opens up the worrying possibility that “Google or some similar corporate entity or some state-supported public/private partnership will come along with its deep pockets, and integrate each segment into one grand proprietary Science 2.0 platform” (pg 190). This platformization is likely to have unintended consequences, such as rendering science an individualised pursuit (he cites Orcid ID as an example of this – unfairly?) and setting up data repositories to fail if they are insufficiently successful in attracting the data donors on whom their ultimate viability will depend.

He correctly identifies these platforms as facilitating a form of managerless control, but I have an issue with the claim that “one automatically learns to internalize these seemingly objective market-like valuations, and to abjure (say) a tenacious belief in a set of ideas, or a particular research program” (pg 191). How automatic is the process really? If he means it as shorthand for saying that this tends to happen to most users over time, then I withdraw my objection. But if it happens in different ways and to different degrees, we need to open up the black box of automaticity in order to see what causal mechanisms are operating within it.

He closes the paper by concretely laying out his case for why the platformization of science is a neoliberal process. Firstly, it breaks up the research process into distinct segments which permit of rationalisation. Secondly, the focus upon radical collaboration gradually subsumes the author into the collaboration, in apparent contradiction of his earlier point about the individualisation of science. Thirdly, the openness experienced by the user goes hand in hand with an opaque surveillance by the platform provider, with monetisation assumed to follow further down the line. The most interesting part of this paper is its description of the ambition towards building a unified platform portfolio (mega-platform?) for research and how this fits into the longer-term strategy of publishers. There’s a lot to think about here and I suspect this is a paper I will come back to multiple times.

My notes on Liboiron, M., Tironi, M., & Calvillo, N. (2018). Toxic politics: Acting in a permanently polluted world. Social Studies of Science, 48(3), 331–349.

The authors of this paper take “a permanently polluted world” as their starting point. It is one where toxicity is ubiquitous, even if unevenly distributed. Unfortunately, “[t]he tonnage, ubiquity and longevity of industrial chemicals and their inextricable presence in living systems means that traditional models of action against toxicants such as clean up, avoidance, or antidote are anachronistic approaches to change” (pg 332). The pervasiveness is such that we need to move beyond the traditional repertoire of management (separation, containment, clean up, immunisation), which is premised on a return to purity while depoliticising the production of that toxicity by treating it as a technical problem to be managed. In doing so, we can begin to see how toxic harm can work to maintain systems rather than being a pathology which ensues from systemic failure.

There is conceptual work required if we are to grasp the politics of toxicity, encompassing how we conceptualise toxic harm, provide evidence for it, formulate responses to it and grasp the interests reflected in its production and management. This involves rejecting a view of toxicity as “wayward particles behaving badly” (pg 333). As they explain on pg 334, toxicity is relational:

Toxicity is a way to describe a disruption of particular existing orders, collectives, materials and relations. Toxicity and harm, in other words, are not settled categories (Ah-King and Hayward, 2013; Chen, 2012) because what counts as a good and right order is not settled.

They suggest a distinction between toxins and toxicants. The former occurs naturally in cells, whereas the latter are “characterized by human creation via industrial processes, compositional heterogeneity, mass tonnage, wide economic production and distribution processes, temporal longevity, both acute and latent effects, and increasing ubiquity in homes, bodies and environments” (pg 334). This includes naturally occurring minerals which are rendered problematic through industrial processes that lead them to occur in specific forms, locations and scales productive of harm.

Laws surrounding toxicants are based upon threshold limits, usually in relation to effects on human bodies. These are supplemented by cost benefit principles based around the avoidance of ‘excessive costs’ given available technologies. In this sense, the breakdown of order on one level (enabling toxicants to spread because it wouldn’t be ‘feasible’ to prevent it) facilitates the reproduction of order on another level (ensuring viable conditions for the continued reproduction of the commercial sector involved). I really like this insight and it’s one which can be incorporated into the morphogenetic approach in an extremely productive way.

This focus on toxicity enables us to link together these levels, providing a multi-scalar politics of life. There is a temporality to toxicity in which a slow disaster is not easily apprehended. For this reason agents seek to make it legible as an event through actions like photography or protest. But this easily gives rise to a politics of representation, seeing the claims of environmentalists as (at best) on a par with the claims of commercial firms. Rendering these processes legible through mechanisms like sensational images can reproduce existing differences between centre and periphery, the heard and the unheard.

Their interest is in modes of action “beyond governance-via-policy, in-the-streets-activism and science-as-usual” (pg 337). I’m not sure what their motivation is for this beyond the drive to “no longer privilege the modern humanist political subject and epistemologies based in claims and counter claims”: are they saying that a narrow politics of evidence and judgement has its corollary in public activism around public issues which have been established evidentially? I can see the analytical case for trying to get beyond this dichotomy but I’m not sure I see what is at stake politically in doing so. Their interest in actions such as “the everyday, obligatory practices of tending to plants and others as toxic politics that do not necessarily result in scaled-up material change” doesn’t seem politically fruitful to me, precisely because of the multi-scalar mode of analysis they offer (pg 341). Why should we challenge “activism as heroic, event-based and coherent” (pg 341)? Again I can see an analytical case for this, even if I disagree with it, but I don’t see what is at stake in it politically. It might be that there are unintended consequences to thinking in terms of ‘effective outcomes’, but the force of this argument rests on an implicit claim about outcomes. Why is it important to “make room in dominant political imaginations for multiple forms of local, low resolution, uneventful, uneven, frustrated, desireful, ethical, appropriated and incommensurate forms of justice” (pg 343)?


This is such an important point in Tim Carmody’s (highly recommended) Amazon newsletter. Not only is Amazon enormously popular but critics of the firm fail to understand the basis of this popularity, as opposed to the insight they have into the popularity of a firm like Apple:

One study last year showed that Amazon was the second most trusted institution in American life, behind only the military. If you only poll Democrats? Amazon is number one. People love Amazon. Most of them don’t know about and have never thought they needed a mesh router for their house. But they will now.

It suggests criticism of big tech could remain a marginal pursuit, embedded to the point of doxa within certain segments of the population while others remain oblivious to it. It also suggests the need for caution in how we treat ‘big tech’. It’s not a monolithic block and people have different relationships to these firms. I’ve assumed there’s a political value in using the designation, as it defines an issue in a way orientated towards action, but perhaps it’s a mistake while there’s a divergence in public affection between the firms in question.

I’m increasingly hopeful that I’ll submit the second edition of Social Media for Academics to Sage next week, meeting a deadline which I suspect my editor had expected I would break. The book is six months overdue, I’ve broken countless deadlines and the impending date was only agreed after a period in which we agreed to withdraw any deadline in order to counter the anxiety which was making it hard for me to write. Sitting in my office on a Saturday afternoon, I’m finishing the final chapter which needs copy editing before I turn to the two chapters and introduction that require substantive work. It seems like a good time to reflect on what went wrong, with a second edition that has been objectively massively behind schedule and subjectively a nightmare to produce.

It was a project I thought would be easy. I feel ridiculous admitting that I’d effectively set aside two weeks of work to prepare the second edition. The first edition had been so well reviewed that I felt all I needed was to insert some new material to take account of what had changed on social media in the preceding years. I had been blogging regularly using a category on my blog, I’d given over 60 talks in which I’d developed new ideas and I’d had countless suggestions from people who had read the first edition. My plan was to spend time producing copy on each new topic before going through the text line by line to find where I could fit these in.

This was a big mistake because I rapidly generated vast amounts of text which didn’t fit substantively or stylistically into the existing book. In turn the process of going through it line by line eroded its structure, leaving me feeling as if I was sitting weeping in the ruins of a neatly turned out home that I’d caused to collapse in a reckless act of home repair. The project quickly became unmanageable because I had little traction upon (a) the scope and remit of the new material I’d produced to go into the book (b) the structure into which this material had to be fitted. By traction I mean a sense of how what was in front of me as I wrote or edited connected to a broader context.

I’ve often been fascinated by the experience of producing a text as a totality. That moment when you hold a thesis in your hands for the first time and this project which had dominated your life suddenly becomes an object you can easily manipulate. The second edition of Social Media for Academics has involved that process in reverse, as the objectivity of the text evaporated into a dispiriting horizon of unmet deadlines and postponed commitments. By simply piling up the new material in blog posts, Scrivener notes and artefact cards without any attempt to link these together, it was inevitable I was going to find myself drowning in ideas. I gripped onto the existing structure of the book in the hope it could keep me afloat but I simply pulled it into the ocean as well.

It occurs to me now that my mistake was a simple one: I should have read through the whole text, making free-form notes, before trying to do anything else. Furthermore, in the last few years of being increasingly busy, I’d become adept at producing ephemera: short talks, blog posts, fragments of writing. But I’d gradually lost the habit of connecting these things together and making sense of what I’d been producing. My thoughts didn’t condense in the way they used to, as a consequence of both less mental bandwidth and less inclination to do the connective work upon which creativity depends. Jumping straight into editing without having imposed any order on what I was trying to incorporate goes much of the way to explaining the disaster which has been my experience of producing this second edition.

The only thing that made it tractable in the end was printing out each chapter and going through it with a pen to rewrite, restructure and extend. Not all of my new material has survived and I’m still nervous that there are topics I’ve missed out. But it’s a much tighter book as a result of this process, in spite of being substantially longer. It’s also left me reflecting on the approach I take to my work and made me realise the importance of ordering what I do. This used to happen automatically, as I thought and reflected in the course of days which had a quantity of unstructured time that now seems a distant memory to me. It’s also something I did through blogging, using this as a mechanism to elaborate upon my ideas in order to connect things together. I never stopped blogging but something about the process has changed. It became more efficient, producing units of thought across a diverse range of topics, while leaving these fragmented from each other. I’d rarely stop to write a blog post when I was seized by what C. Wright Mills called the feel of an idea, something which I used to do regularly and which inevitably left me with the experience of a connection between things that had previously seemed disconnected.

This blog post is an example of this process and I feel much clearer about what went wrong as a result. It’s taken more time and energy than would have been involved in writing a new book. I now know how not to produce the second edition of a book. But I’m quite proud of the result and I hope people like it.

But you can recognize me because I’m you, mate
It’s never too late to see deeper than the surface.
Trust me, there’s so much more to it.
There’s a world beyond this one
That creeps in when your wits have gone soft
And all your edges start shifting
I mean it
A world that is breathing
Heaving its shoulders and weeping
Bleeding through open wounds
That’s why I’m grieving.
Down on my knees and I am feeling everything that I’m feeling.
So come here
Give me your hand
Because I know how to hold it.
I will write every single one of you a poem
And then I’ll set them all on fire
Because I am stunned by how the light in your eyes resembles
Brightening skies.
Mate, I would fight for your life like it was mine


This ECPR panel looks superb. Saving here to follow up later:

Please find attached the call for papers for a panel at the ECPR General Conference in Wrocław (4–7 September).

Title of the panel: The Relationship Between Digital Platforms and Government Agencies in Surveillance: Oversight of or by Platforms?

If you are interested in participating please submit an abstract (500 words maximum) no later than 15 February via the ECPR website (ecpr.eu).

Abstract

Revelations of surveillance practices like those of the National Security Agency or Cambridge Analytica have shown that the digital age is developing into an age of surveillance. What these revelations have also shown is that digital platforms are significantly contributing to this development. As intermediaries between communications and business partners, platforms enjoy a privileged position (Trottier 2011). Platforms increasingly use this position by surveilling and manipulating end users for the sake of profit maximization (Fuchs 2011, Zuboff 2015). Platforms with a business model of surveillance and manipulation seem to have become the most successful type of corporation today: already two of the three most valuable corporations operate as such platforms. While platforms are emerging and expanding in ever more established as well as new markets, and thus gain influence on large parts of society, the question arises of how states are dealing with these new actors and their capabilities. The panel is intended to provide answers to this question by studying the spectrum of state-platform relations.

As empirical examples show, the relationship between digital platforms and states is multi-faceted. On the one hand, public institutions are partnering with private platforms. Data from platforms is used, for example, by intelligence agencies to combat terrorist groups, by police departments to search for criminal suspects, and by regulatory agencies to counter hate speech or violations of copyright. On the other hand, the capabilities of platforms can also be turned against the state. As the last US presidential election showed, platforms can be utilized to influence the electorate or to compromise political actors.

From the point of view of the platforms, the state represents on the one hand an instance that may restrict their actions by declaring specific types of business activity illegal; the new General Data Protection Regulation of the EU is one example. At the same time, states provide the legal basis for the platforms’ activities. In order to promote e-commerce, for example, many European states liberalized their privacy regulation at the beginning of the new millennium.

These examples illustrate the diversity in platform-state relations. The panel will acknowledge this diversity and will bring together work considering various empirical cases as well as theoretical frameworks. We welcome contributions focusing on different political systems as well as different platforms, for example social media, retail, transport or cloud computing platforms.
Exemplary questions that may be addressed are:

• Which major privacy, anti-trust or media regulations of platforms were enacted at the national level recently? Which types of platforms were addressed and which were not? To what extent do these regulations reflect a general trend? To what degree do they affect surveillance practices?
• In which areas and by which means of surveillance are platforms already enforcing public policies? Which kinds of data are provided by platforms for predictive policing? How are platforms identifying and depublishing illegal content? When are platforms collaborating with intelligence agencies?
• How can platforms be regulated efficiently? Which forms of regulation between hierarchical regulation and self-regulation exist, and how did these forms emerge? To what extent is oversight of platforms comparable to oversight by platforms?
• Are policies of platform regulation diffusing? If so, which states are setting the standards?
• Which international institutions in the field of platform regulation have been created so far? Is an international regime of platform regulation evolving?

This looks like an interesting job at a new institute I’d like to keep track of:

The Department of Science & Technology Studies at Cornell University seeks a Postdoctoral Researcher to play a major role in a two-year project on Data Science & Society. We invite applications from scholars with a recent Ph.D. in science & technology studies (STS) or related fields (e.g., sociology, anthropology, law, media studies, information science) and an active research agenda on the social aspects of data science.

The Postdoctoral Researcher will be expected to devote 50% time to his or her own research agenda and 50% time to working with S&TS Department faculty on developing the Data Science & Society Lab, a new and innovative course that is part of the Cornell Data Science Curriculum Initiative. The lab will engage undergraduate students in two components: instruction in theoretical tools and practical skills for analyzing social and ethical problems in contemporary data science (e.g., data science as ethical practice; fairness, justice, discrimination; privacy; openness, ownership, and control; or credibility of data science); and participation in interdisciplinary project teams that work with external partners to address a real-world data science & society problem.

The Postdoctoral Researcher will have the opportunity to help launch and shape the initiative, to develop curriculum and engagement projects, build relationships with external partners and participate in teaching the course. S/he will work with two S&TS Department faculty members, Malte Ziewitz and Stephen Hilgartner, who will have primary responsibility for teaching the course.

Applicants should send:

– Cover letter summarizing the candidate’s relevant background, accomplishments, and fit with the position
– CV
– Up to two publications (or writing samples)
– Three letters of recommendation
– A transcript of graduate work (unofficial is acceptable)

Required Qualifications:

PhD in science & technology studies (STS) or related fields (e.g., sociology, anthropology, law, media studies, information science) and an active research agenda on the social aspects of data science. ABD students are eligible to apply, but proof of completion of the Ph.D. degree must be obtained prior to beginning the position. Recent graduates who received their Ph.D. during the last five years are especially encouraged to apply.

The position is available for a Summer 2019 start (as early as July 1). We will begin to review applications on February 28. Apply at https://academicjobsonline.org/ajo/jobs/13236. For further information, please contact Sarah Albrecht, saa9@cornell.edu.

Diversity and inclusion are a part of Cornell University’s heritage. We are a recognized employer and educator valuing AA/EEO, Protected Veterans, and Individuals with Disabilities.

Using the communal kitchen at the Faculty of Education last Friday, I noticed that the lid had fallen off the bin and was sitting on the floor. In the middle of something and keen to get home, I didn’t stop to pick it up. I just came back from the same kitchen on Monday afternoon and noticed it was still on the floor. “Ah the tragedy of the commons” I said internally while stroking my chin and nodding sagely, before beginning to walk out of the room. At which point I realised how absurd I was being and stopped to pick the lid up from the floor, immediately wishing I’d done it on Friday.

It left me wondering how certain forms of abstraction, stepping back from a concrete phenomenon and subsuming it into a general category, make action less likely. There’s something about that moment of understanding, recognising a fragment of the general in the mundanity of the particular, liable to induce passivity. It’s hard to argue a counterfactual, but I suspect I would have immediately picked up the lid if I hadn’t experienced that moment of abstract recognition. However I’m aware I’m doing exactly the same thing in writing this blog post, recognising a general propensity in a particular instance and encouraging others to do the same by raising it discursively in a public forum.