Updates from February, 2019

  • Mark 8:00 pm on February 26, 2019 Permalink | Reply
    Tags: amplificationitis, platform optimisation

    How growth hacking is leading to a pandemic of amplification-itis on social media platforms 

    One of my obsessions in the last year has been how firms seeking to optimise their platforms influence user behaviour in the process. On one level, influencing users in this way is the goal, as real-time data allows continuous optimisation to increase user engagement, i.e. encouraging users to engage with more content, spend longer on the platform and return more frequently. But on another level, the growth hacking methodologies which dominate this activity produce unintended consequences. The approach is described in Roger McNamee’s Zucked on loc 1190:

    From late 2012 to 2017, Facebook perfected growth hacking. The company experimented constantly with algorithms, new data types, and small changes in design, measuring everything. Every action a user took gave Facebook a better understanding of that user—and of that user’s friends—enabling the company to make tiny improvements in the “user experience” every day, which is to say they got better at manipulating the attention of users. The goal of growth hacking is to generate more revenue and profits, and at Facebook those metrics blocked out all other considerations.

    In terms of mapping the contours of user behaviour, the metrics available to platforms are sophisticated. But from a hermeneutical point of view, they are a remarkably crude instrument. What matters to the growth hacker using these tools is the accumulation of attention, not how it is deployed. In my recent work, I’ve offered the idea of amplification-itis to make sense of where this can lead: a condition in which pursuit of online popularity becomes an end in itself, pursued through an overriding concern with how widely what you share circulates on a platform.

    This is a condition anyone might catch, simply because human agents have a generic concern for social standing. But growth hacking produces conditions which incubate amplification-itis in an unprecedented way, and it has spread to the status of a pandemic. Hence Twitter’s recent concern with ‘conversational health’ and the introduction of Terms of Service changes which clamp down on some of the behaviours this condition can give rise to. They’ve recognised the harm which the pursuit of amplification as an end in itself causes, with its tendency to undermine the reasons why people use the platform in the first place. But do they recognise how their own activity has led it to spread as widely and as virulently as it has?

     
    • Patrick Ainley 5:55 pm on February 28, 2019 Permalink

      Have you read Shoshana Zuboff’s ‘The Age of Surveillance Capitalism’? I would be interested in your (re)view if so.

    • Mark 3:03 pm on March 1, 2019 Permalink

      just wrote a quick post! Still only 15% of the way through. You?

  • Mark 10:46 am on February 26, 2019 Permalink | Reply

    I’ve finally finished the second edition of Social Media for Academics 

    1. Rewritten line by line, including new ideas, insights, suggestions, references
    2. One huge new chapter (the dark side of social media)
    3. One 50% rewritten chapter
    4. New foreword with lots of substantive content & an overview of my approach to SMA
    5. Large new section on podcast, videocasting, live streaming, working with freelancers
    6. Large new section on using social media to help inform and support research projects
    7. New section on hybrid forms of publication
    8. New advice on Facebook groups, Live blogging, planning hashtags
    9. New section on planning an impact strategy
    10. New conversations featured with Dave Beer, Roger Burrows, Petra Boynton, Emma Jackson and Deborah Lupton.
    11. New section on e-mail marketing and newsletters
    12. Extended section on using images and how it can go wrong
    13. Extended section on finding your voice online, including with multimedia projects
    14. Additional recommended reading for every chapter
    15. Somehow consumed more intellectual and emotional energy than the first edition did and is being submitted six months late 😬
     
    • Kai 8:57 pm on March 5, 2019 Permalink

      Excellent news! Any idea when this will hit the shops?

    • Mark 5:00 pm on March 8, 2019 Permalink

      Just confirmed today: October 😀

    • Kai 10:37 am on March 12, 2019 Permalink

      Super – got it in my diary 😉

  • Mark 5:51 pm on February 25, 2019 Permalink | Reply
    Tags: organic reach

    The long, slow decline of organic reach on social media 

    There’s an interesting extract in Roger McNamee’s Zucked about how strategically Facebook have reduced the significance of organic reach (i.e. unpaid distribution of content) on the platform. The promise of being able to communicate directly to a vast audience through Facebook pages has been central to the motivation of individuals, networks and organisations who have directed their resources towards engaging on the site. But there has been a steady decline in the percentage of followers who will see a post for free. As he describes on loc 1147-1161:

    Every year or so, Facebook would adjust the algorithm to reduce organic reach. The company made its money from advertising, and having convinced millions of organizations to set up shop on the platform, Facebook held all the cards. The biggest beneficiaries of organic reach had no choice but to buy ads to maintain their overall reach. They had invested too much time and had established too much brand equity on Facebook to abandon the platform. Organic reach declined in fits and starts until it finally bottomed at about 1 percent or less. Fortunately, Facebook would periodically introduce a new product—the Facebook Live video service, for example—and give those new products greater organic reach to persuade people like us to use them.

    There’s a difficult balance to strike here between nudging people into using paid advertising and obliterating the economic rationale for using Facebook in the first place. Once there is a long-term trajectory of engagement, the effectiveness of the platform will still beat the alternatives even with declining organic reach. For many actors organic reach is what brings them to the platform in the first place, in spite of the fact the business model relies on soliciting advertising income. But will the long, slow decline of organic reach ultimately end with its demise? Almost certainly not, but this tension sits at the heart of the business model, even as the dynamic it gives rise to remains extremely subtle.

     
  • Mark 4:52 pm on February 24, 2019 Permalink | Reply
    Tags: open source, Roger McNamee, user engagement

    The tightness of engineering constraints and the organisational sociology of tech startups 

    I’m enjoying Zucked by Roger McNamee more than I expected to. What’s useful about his account is the stress it places on the engineering capacity of startups. What makes cloud computing significant is not just enabling growth without major capital investment in infrastructure, but also “eliminating what had previously been a very costly and time-consuming process of upgrading individual PCs and servers” (loc 674). What makes open source software significant is not just the reduced cost of free off-the-shelf components, but the time saved when everything can be built on open source stacks. I’d tended to see these things as a matter of reduced costs and increased flexibility, without grasping their significance for how limited resources could be deployed within the firm.

    Both mean that the engineering capacity of a startup can be directed mainly at “the valuable functionality of their app, rather than building infrastructure from the ground up” (loc 660). This led to the lean startup model and the capacity to go live then iterate in response to what happens. Without this modus operandi and the infrastructure supporting it, would the harvesting and use of what Zuboff calls behavioural surplus have even been able to happen? A little earlier he frames this in terms of an epochal shift in the engineering constraints which tech startups had been subject to:

    For the fifty years before 2000, Silicon Valley operated in a world of tight engineering constraints. Engineers never had enough processing power, memory, storage, or bandwidth to do what customers wanted, so they had to make trade-offs. Engineering and software programming in that era rewarded skill and experience. The best engineers and programmers were artists. Just as Facebook came along, however, processing power, memory, storage, and bandwidth went from being engineering limits to turbochargers of growth. The technology industry changed dramatically in less than a decade, but in ways few people recognized. What happened with Facebook and the other internet platforms could not have happened in prior generations of technology.

    Under these conditions, it’s much easier to experiment and it’s easier to fail. As he points out, Facebook’s infamous motto Move Fast and Break Things is emblematic of the freedom which loosening engineering constraints offers a nascent firm. But what does this mean culturally? I think this is a crucial question to explore, both in terms of the firm itself and in terms of the expectations which investors then have of what constitutes adequate growth for a lean startup.

    To what extent does the obsessive drive for increased user engagement have its origins in the changing expectations of investors and the challenge of meeting them? McNamee observes that it makes investors more inclined to identify what they perceive as losers and kill the startup before it consumes too much money. The rewards on offer for successes are so immense that many failures can be tolerated if they lead to one success.

    It also makes it much less onerous for the firm to scale, with important consequences for a nascent corporate culture. Inexperienced staff can be a plus under these circumstances, as long as they have the relevant technical skills. McNamee argues that the success of Facebook’s staffing policy had a big impact on human resources culture in Silicon Valley.

     
  • Mark 8:15 am on February 24, 2019 Permalink | Reply
    Tags: mark zuckerberg

    The awkward silences of digital elites 

    The fascination with the propensity of tech founders to go silent reminds me of how the earliest philosophers were framed as unworldly due to their capacity to go into thought trances. From Roger McNamee’s Zucked, loc 269-284.

    This little speech took about two minutes to deliver. What followed was the longest silence I have ever endured in a one-on-one meeting. It probably lasted four or five minutes, but it seemed like forever. Zuck was lost in thought, pantomiming a range of Thinker poses. I have never seen anything like it before or since. It was painful. I felt my fingers involuntarily digging into the upholstered arms of my chair, knuckles white, tension rising to a boiling point. At the three-minute mark, I was ready to scream. Zuck paid me no mind. I imagined thought bubbles over his head, with reams of text rolling past. How long would he go on like this? He was obviously trying to decide if he could trust me. How long would it take? How long could I sit there?

    I’ve read similar observations about Musk and Thiel. Even if there might be differences drawn between the reasons for silence, these stories seem to share an interest in that silence as a sign of how the founders are different from others.

     
    • landzek 3:46 pm on February 24, 2019 Permalink

      Jesus leaned down and drew in the sand the image of the fish.

      Though it’s off topic, you might enjoy Nick Cave’s thoughts:

  • Mark 2:27 pm on February 19, 2019 Permalink | Reply

    The security apparatus of digital elites 

    One of the most interesting aspects of the Bezos story earlier this month was the insight it offered into the security apparatus he surrounds himself with, particularly his instruction for Gavin De Becker “to proceed with whatever budget he needed to pursue the facts”. There’s something oddly thrilling about reading this, inviting us to imagine that we too had unlimited resources which could be brought to bear on the righting of wrongs. But the fact that this category of service provider exists at all is itself interesting.

    De Becker was apparently a leading figure in modernising the security industry, moving it beyond ex-military personnel (who presumably still have a role, at least if descriptions of the entourage Zuckerberg sometimes travels with are accurate) into what appears to be a more ‘full spectrum’ approach to the security of his clients. But even if De Becker seems to be an interesting chap without any real sins, it’s hard not to wonder who else inhabits this space. I’m particularly curious about its murkier and more opaque areas, with the possibility that ‘fixers’ can now be hired in a modernised manner, as if they were providing any other service to the rich and famous.

     
  • Mark 12:54 pm on February 19, 2019 Permalink | Reply
    Tags: platform science, science 2.0

    Open science and platform capitalism: a love story? 

    My notes on Mirowski, P. (2018). The future(s) of open science. Social Studies of Science, 48(2), 171-203.

    In this provocative paper, Philip Mirowski takes issue with the “taken-for-granted premise that modern science is in crying need of top-to-bottom restructuring and reform” which underpins much of the open science movement, as well as its tendency to obscure the key questions of whether science was ever closed and who is now intent on opening it up (pg 172). Doing so runs contrary to a popular teleology in which a fixed scientific method is now being forced open by the inherent promise of digital technology. If we instead treat science historically, with distinct periods defined by specific orientations, it becomes possible to see that “the open science movement is an artefact of the current neoliberal regime of science, one that reconfigures both the institutions and the nature of knowledge so as to better conform to market imperatives” (pg 172).

    Doing so cuts through the semantic ambiguity of openness, which allows distinct phenomena (open access, open data, citizen science, different formats for publication etc.) to coalesce in a quasi-unified way, making it possible for advocates to slide between these various expressions of an open science which is rarely, if ever, precisely defined as an integrated project. He argues that this new regime combines an ethos of radical collaboration with the infrastructure of platform capitalism. Its moral force rests upon a whole range of indictments of modern science:

    1. Distrust of science is rampant in the general population: he takes issue in an interesting way with the assumption that more contact with scientists and more exposure to the practice of science will reverse this trend. Could it not do the opposite by personalising science through the mechanisms of blogging and social media, making it even harder to convince the sceptical that it’s a disinterested pursuit? The precise form this scepticism takes varies (Mirowski’s example of educated neoliberals who believe scientists need to feel market discipline before they can be trusted was particularly striking) but it’s a broad trend which can’t be wished away as a product of a reversible ignorance. This section reminded me a lot of the arguments Will Davies makes in Nervous States about the evisceration of representation as intermediaries are no longer trusted to act impersonally.
    2. Science suffers a democracy deficit: he suggests this fails to recognise how ‘science’ and ‘democracy’ have both been transformed since figures like Dewey first made this argument in the early 20th century. The freedom of scientists, won in relation to a military-industrial complex in which they were embedded, came at the cost of the freedom of the public to influence science. The former apparatus has given way to a market complex such that “science has been recast as a primarily commercial endeavor distributed widely across many different corporate entities and organizations, and not confined to disciplinary or academic boundaries” (pg 176). What it is taken to mean to democratise science has changed radically in this context, reducing it to a ‘scripted participation’ (citizen social science) in the research process as part of an extended marketplace of ideas, as opposed to meaningful participation in the governance of science. In fact I wonder if populist attacks on ‘wasteful research’ and ‘mickey mouse subjects’ should be interpreted as a (pathological) attempt to democratise science? He is scathing about equating “a greater quantity of people enrolled in minor (and unremunerated) support roles with a higher degree of democratic participation, when, in fact, they primarily serve as the passive reserve army of labor in the marketplace of ideas” (pg 177).
    3. The slowdown in scientific productivity: the promise suggested in open science to counteract a fall in actionable scientific outcomes (if I’ve glossed that correctly?) is belied by the form which openness takes within the regime of knowledge production found within commercial scientific research. If I understand him correctly, he’s saying that the organisational apparatus of contemporary science can’t support the openness advocated (e.g. intellectual property restrictions get in the way, the proletarianised condition of bench scientists within commercial organisations) and the “stunted and shriveled” openness it can support doesn’t seem to work anyway. Though I’m not sure I’ve interpreted this section correctly.
    4. The explosion of retractions and the falling rate of falsification: many epistemic problems are ascribed by advocates of openness to the perverse incentives of the existing journal system. These problems can be seen most dramatically in the huge growth of retractions by journals of work which had passed the peer review process, with Retraction Watch currently identifying 600-700 retractions per year. A parallel problem is the bias against publishing falsifications in favour of positive additions to the knowledge system. The hope has been that the shift to a different business model might solve both problems.

    If I understand correctly, his point is that a focus upon the deficiencies of science imputes to scientific practice what has its origins elsewhere. He offers a powerful indictment of the role of neoliberalism in producing the pathologies of contemporary science, listed on pg 188. But it’s unclear to me why this is either/or, because the criticisms which open science advocates raise could themselves be outgrowths of neoliberalism’s influence. The point can be overstressed: in some cases his appraisal of these critiques correctly identifies an active misdiagnosis, but these cases are not universal, and he seemingly misses the possibility of both/and:

    The ailments and crises of modern science described in this paper were largely brought about by neoliberal initiatives in the first place. First off, it was neoliberal think tanks that first stoked the fires of science distrust amongst the populace that have led to the current predicament, a fact brought to our attention by Oreskes and Conway (2011), among others. It was neoliberals who provided the justification for the strengthening of intellectual property; it was neoliberals who drove a wedge between state funding of research and state provision of findings of universities for the public good; it was neoliberal administrators who began to fragment the university into ‘cash cows’ and loss leader disciplines; it was neoliberal corporate officers who sought to wrest clinical trials away from academic health centers and towards contract research organizations to better control the disclosure or nondisclosure of the data generated. In some universities, students now have to sign nondisclosure agreements if they want initiation into the mysteries of faculty startups. It is no longer a matter of what you know; rather, success these days is your ability to position yourself with regard to the gatekeepers of what is known. Knowledge is everywhere hedged round with walls, legal prohibitions, and high market barriers breached only by those blessed with riches required to be enrolled into the elect circles of modern science. Further, belief in the Market as the ultimate arbiter of truth has served to loosen the fetters of more conscious vetting of knowledge through promulgation of negative results and the need to reprise research protocols.

    But he’s certainly correct that these overstatements legitimise platform initiatives which aim to reengineer science from the bottom up. The apparent diversity of this space is likely to decline over time, as a few platforms come to dominate. This opens up the worrying possibility that “Google or some similar corporate entity or some state-supported public/private partnership will come along with its deep pockets, and integrate each segment into one grand proprietary Science 2.0 platform” (pg 190). This platformization is likely to have unintended consequences, such as rendering science an individualised pursuit (he cites ORCID iD as an example of this – unfairly?) and setting up data repositories to fail if they are insufficiently successful in attracting the data donors on whom their ultimate viability will depend.

    He correctly identifies these platforms as facilitating a form of managerless control, but I have an issue with the claim that “one automatically learns to internalize these seemingly objective market-like valuations, and to abjure (say) a tenacious belief in a set of ideas, or a particular research program” (pg 191). How automatic is the process really? If he means it as a shorthand to say that it tends to happen to most users over time, then I withdraw my objection. But if it happens in different ways and to different degrees, we need to open up the black box of automaticity in order to see what causal mechanisms are operating within it.

    He closes the paper by concretely laying out his case for why the platformization of science is a neoliberal process. Firstly, it breaks up the research process into distinct segments which permit of rationalisation. Secondly, the focus upon radical collaboration gradually subsumes the author into the collaboration, in apparent contradiction of his earlier point about the individualisation of science. Thirdly, the openness for the user goes hand in hand with an opaque surveillance by the platform provider, with monetisation assumed to follow further down the line. The most interesting part of this paper is its description of the ambition towards building a unified platform portfolio (mega-platform?) for research, and how this fits into the longer-term strategy of publishers. There’s a lot to think about here and I suspect this is a paper I will come back to multiple times.

     
  • Mark 10:57 am on February 14, 2019 Permalink | Reply
    Tags: Anthropocene, toxic, toxicity

    The ontology of toxicity 

    My notes on Liboiron, M., Tironi, M., & Calvillo, N. (2018). Toxic politics: Acting in a permanently polluted world. Social Studies of Science, 48(3), 331-349.

    The authors of this paper take “a permanently polluted world” as their starting point. It is one where toxicity is ubiquitous, even if unevenly distributed. Unfortunately, “[t]he tonnage, ubiquity and longevity of industrial chemicals and their inextricable presence in living systems means that traditional models of action against toxicants such as clean up, avoidance, or antidote are anachronistic approaches to change” (pg 332). This pervasiveness is such that we need to move beyond the traditional repertoire of management (separation, containment, clean up, immunisation), which is premised on a return to purity while depoliticising the production of that toxicity by treating it as a technical problem to be managed. In doing so, we can begin to see how toxic harm can work to maintain systems, rather than being a pathology which ensues from systemic failure.

    There is conceptual work required if we are to grasp the politics of toxicity, encompassing how we conceptualise toxic harm, provide evidence for it, formulate responses to it and grasp the interests reflected in its production and management. This involves rejecting a view of toxicity as “wayward particles behaving badly” (pg 333). As they explain on pg 334, toxicity is relational:

    Toxicity is a way to describe a disruption of particular existing orders, collectives, materials and relations. Toxicity and harm, in other words, are not settled categories (Ah-King and Hayward, 2013; Chen, 2012) because what counts as a good and right order is not settled.

    They suggest a distinction between toxins and toxicants. The former occurs naturally in cells, whereas the latter are “characterized by human creation via industrial processes, compositional heterogeneity, mass tonnage, wide economic production and distribution processes, temporal longevity, both acute and latent effects, and increasing ubiquity in homes, bodies and environments” (pg 334). This includes naturally occurring minerals which are rendered problematic through industrial processes that lead them to occur in specific forms, locations and scales productive of harm.

    Laws surrounding toxicants are based upon threshold limits, usually in relation to effects on human bodies. These are supplemented by cost benefit principles based around the avoidance of ‘excessive costs’ given available technologies. In this sense, the breakdown of order on one level (enabling toxicants to spread because it wouldn’t be ‘feasible’ to prevent it) facilitates the reproduction of order on another level (ensuring viable conditions for the continued reproduction of the commercial sector involved). I really like this insight and it’s one which can be incorporated into the morphogenetic approach in an extremely productive way.

    This focus on toxicity enables us to link together these levels, providing a multi-scalar politics of life. There is a temporality to toxicity in which a slow disaster is not easily apprehended. For this reason agents seek to make it legible as an event through actions like photography or protest. But this easily gives rise to a politics of representation, seeing the claims of environmentalists as (at best) on a par with the claims of commercial firms. Rendering these processes legible through mechanisms like sensational images can reproduce existing differences between centre and periphery, the heard and the unheard.

    Their interest is in modes of action “beyond governance-via-policy, in-the-streets-activism and science-as-usual” (pg 337). I’m not sure what their motivation is for this beyond the drive to “no longer privilege the modern humanist political subject and epistemologies based in claims and counter claims”: are they saying that a narrow politics of evidence and judgement has its corollary in public activism around issues which have been established evidentially? I can see the analytical case for trying to get beyond this dichotomy but I’m not sure I see what is at stake politically in doing so. Their interest in actions such as “the everyday, obligatory practices of tending to plants and others as toxic politics that do not necessarily result in scaled-up material change” doesn’t seem politically fruitful to me precisely because of the multi-scalar mode of analysis they offer (pg 341). Why should we challenge “activism as heroic, event-based and coherent” (pg 341)? Again I can see an analytical case for this, even if I disagree with it, but I don’t see what is at stake in this politically. It might be there are unintended consequences to thinking in terms of ‘effective outcomes’ but the force of this argument rests on an implicit claim about outcomes. Why is it important to “make room in dominant political imaginations for multiple forms of local, low resolution, uneventful, uneven, frustrated, desireful, ethical, appropriated and incommensurate forms of justice” (pg 343)?

     

     
  • Mark 9:42 am on February 14, 2019 Permalink | Reply

    Critics of Amazon don’t understand how popular it is 

    This is such an important point in Tim Carmody’s (highly recommended) Amazon newsletter. Not only is Amazon enormously popular but critics of the firm fail to understand the basis of this popularity, as opposed to the insight they have into the popularity of a firm like Apple:

    One study last year showed that Amazon was the second most trusted institution in American life, behind only the military. If you only poll Democrats? Amazon is number one. People love Amazon. Most of them don’t know about and have never thought they needed a mesh router for their house. But they will now.

    It suggests criticism of big tech could remain a marginal pursuit, embedded to the point of doxa within certain segments of the population while others remain oblivious to it. It also suggests the need for caution in how we treat ‘big tech’. It’s not a monolithic block and people have different relationships to these firms. I’ve assumed there’s a political value in using the designation, as it defines an issue in a way orientated towards action, but perhaps it’s a mistake while there’s a divergence in public affection between the firms in question.

     
  • Mark 2:27 pm on February 9, 2019 Permalink | Reply

    😪 How not to prepare the second edition of a book 😪 

    I’m increasingly hopeful that I’ll submit the second edition of Social Media for Academics to Sage next week, meeting a deadline which I suspect my editor had expected I would break. The book is six months overdue, I’ve broken countless deadlines and the impending date was only agreed after a period in which we agreed to withdraw any deadline in order to counter the anxiety which was making it hard for me to write. Sitting in my office on a Saturday afternoon, I’m finishing the final chapter which needs copy editing before I turn to the two chapters and introduction that require substantive work. It seems like a good time to reflect on what went wrong, with a second edition that has been objectively massively behind schedule and subjectively a nightmare to produce.

    It was a project I thought would be easy. I feel ridiculous admitting that I’d effectively set aside two weeks of work to prepare the second edition. The first edition had been so well reviewed that I felt all I needed was to insert some new material to take account of what had changed on social media in the preceding years. I had been blogging regularly using a category on my blog, I’d given over 60 talks in which I’d developed new ideas and I’d had countless suggestions from people who had read the first edition. My plan was to spend time producing copy on each new topic before going through the text line by line to find where I could fit these in.

    This was a big mistake because I rapidly generated vast amounts of text which didn’t fit substantively or stylistically into the existing book. In turn the process of going through it line by line eroded its structure, leaving me feeling as if I was sitting weeping in the ruins of a neatly turned out home that I’d caused to collapse in a reckless act of home repair. The project quickly became unmanageable because I had little traction upon (a) the scope and remit of the new material I’d produced to go into the book (b) the structure into which this material had to be fitted. By traction I mean a sense of how what was in front of me as I wrote or edited connected to a broader context.

    I’ve often been fascinated by the experience of producing a text as a totality: that moment when you hold a thesis in your hands for the first time, and this project which had dominated your life suddenly becomes an object you can easily manipulate. The second edition of Social Media for Academics has involved that process in reverse, as the objectivity of the text evaporated into a dispiriting horizon of unmet deadlines and postponed commitments. By simply piling up the new material in blog posts, Scrivener notes and artefact cards, without any attempt to link these together, it was inevitable that I would find myself drowning in ideas. I gripped onto the existing structure of the book in the hope it could keep me afloat, but I simply pulled it into the ocean as well.

    It occurs to me now that my mistake was a simple one: I should have read through the whole text, making free-form notes, before trying to do anything else. Furthermore, in the last few years of being increasingly busy, I’d become adept at producing ephemera: short talks, blog posts, fragments of writing. But I’d gradually lost the habit of connecting these things together and making sense of what I’d been producing. My thoughts didn’t condense in the way they used to, as a consequence of both less mental bandwidth and less inclination to do the connective work upon which creativity depends. Jumping straight into editing, without having imposed any order on what I was trying to incorporate, goes much of the way to explaining the disaster which has been my experience of producing this second edition.

    The only thing that made it tractable in the end was printing out each chapter and going through it with a pen to rewrite, restructure and extend it. Not all of my new material has survived and I’m still nervous that there are topics I’ve missed out. But it’s a much tighter book as a result of this process, in spite of being substantially longer. It’s also left me reflecting on the approach I take to my work and made me realise the importance of ordering what I do. This used to happen automatically, as I thought and reflected in the course of days which had a quantity of unstructured time that now seems a distant memory to me. It’s also something I did through blogging, using it as a mechanism to elaborate upon my ideas and connect things together. I never stopped blogging, but something about the process changed: it became more efficient, producing units of thought across a diverse range of topics, while leaving these fragmented from each other. I’d rarely stop to write a blog post when I was seized by what C. Wright Mills called the feel of an idea, something which I used to do regularly and which inevitably left me with the experience of a connection between things that had previously seemed disconnected.

    This blog post is an example of that process, and I feel much clearer about what went wrong as a result. The second edition has taken more time and energy than writing a new book from scratch would have. I now know how not to produce the second edition of a book. But I’m quite proud of the result and I hope people like it.

     
    • Sourav Roy 2:44 pm on February 9, 2019 Permalink

      “sitting weeping in the ruins of a neatly turned out home that I’d caused to collapse in a reckless act of home repair. …printing out …going through it with a pen to rewrite, restructure and extend” :O Going through this right now. Not for any second edition though, second chapter of a first draft. :O

    • Mark 4:25 pm on February 9, 2019 Permalink

      good luck!

    • Sourav Roy 2:49 am on February 10, 2019 Permalink

      💜

  • Mark 1:47 pm on February 9, 2019 Permalink | Reply  

    But you can recognise me because I’m you 

    But you can recognize me because I’m you, mate
    It’s never too late to see deeper than the surface.
    Trust me, there’s so much more to it.
    There’s a world beyond this one
    That creeps in when your wits have gone soft
    And all your edges start shifting
    I mean it
    A world that is breathing
    Heaving its shoulders and weeping
    Bleeding through open wounds
    That’s why I’m grieving.
    Down on my knees and I am feeling everything that I’m feeling.
    So come here
    Give me your hand
    Because I know how to hold it.
    I will write every single one of you a poem
    And then I’ll set them all on fire
    Because I am stunned by how the light in your eyes resembles
    Brightening skies.
    Mate, I would fight for your life like it was mine

     

     
  • Mark 9:18 am on February 8, 2019 Permalink | Reply

    The governance of platforms 

    This ECPR panel looks superb. Saving here to follow up later:

    Please find attached the call for papers for a panel at the ECPR General Conference in Wrocław (4–7 September).

    Title of the panel: The Relationship Between Digital Platforms and Government Agencies in Surveillance: Oversight of or by Platforms?

    If you are interested in participating, please submit an abstract (500 words maximum) no later than 15 February via the ECPR website (ecpr.eu).

    Abstract
    Revelations of surveillance practices like those of the National Security Agency or Cambridge Analytica have shown that the digital age is developing into an age of surveillance. These revelations have also shown that digital platforms are contributing significantly to this development. As intermediaries between communication and business partners, platforms enjoy a privileged position (Trottier 2011). Platforms increasingly use this position to surveil and manipulate end users for the sake of profit maximization (Fuchs 2011, Zuboff 2015). Platforms with a business model of surveillance and manipulation seem to have become the most successful type of corporation today: two of the three most valuable corporations already operate as such platforms. As platforms emerge and expand in ever more established as well as new markets, and thus gain influence over large parts of society, the question arises of how states are dealing with these new actors and their capabilities. The panel is intended to provide answers to this question by studying the spectrum of state-platform relations.

    As empirical examples show, the relationship between digital platforms and states is multi-faceted. On the one hand, public institutions are partnering with private platforms: data from platforms is used by intelligence agencies to combat terrorist groups, by police departments to search for criminal suspects and by regulatory agencies to counter hate speech or copyright violations. On the other hand, the capabilities of platforms can also be turned against the state. As the last US presidential election showed, platforms can be utilized to influence the electorate or to compromise political actors.

    From the point of view of the platforms, the state is on the one hand an instance that may restrict their actions by declaring specific types of business activity illegal; the EU’s new General Data Protection Regulation is one example. At the same time, states provide the legal basis for the platforms’ activities: in order to promote e-commerce, for example, many European states liberalized their privacy regulation at the beginning of the new millennium.

    These examples illustrate the diversity of platform-state relations. The panel will acknowledge this diversity and bring together work considering various empirical cases as well as theoretical frameworks. We welcome contributions focusing on different political systems as well as different platforms: for example social media, retail, transport or cloud computing platforms.
    Exemplary questions that may be addressed are:

    • Which major privacy, anti-trust or media regulations of platforms were enacted at the national level recently? Which types of platforms were addressed and which were not? To what extent do these regulations reflect a general trend? To what degree do they affect surveillance practices?
    • In which areas, and by which means of surveillance, are platforms already enforcing public policies? What kind of data is provided by platforms for predictive policing? How are platforms identifying and removing illegal content? When are platforms collaborating with intelligence agencies?
    • How can platforms be regulated effectively? Which forms of regulation between hierarchical regulation and self-regulation exist, and how did these forms emerge? To what extent is oversight of platforms comparable to oversight by platforms?
    • Are policies of platform regulation diffusing? If so, which states are setting the standards?
    • Which international institutions in the field of platform regulation have been created so far? Is an international regime of platform regulation evolving?

     
  • Mark 8:32 am on February 5, 2019 Permalink | Reply
    Tags: social data science

    The Data Science & Society Institute 

    This looks like an interesting job at a new institute I’d like to keep track of:

    The Department of Science & Technology Studies at Cornell University seeks
    a Postdoctoral Researcher to play a major role in a two-year project on
    Data Science & Society. We invite applications from scholars with a recent
    Ph.D. in science & technology studies (STS) or related fields (e.g.,
    sociology, anthropology, law, media studies, information science) and an
    active research agenda on the social aspects of data science.

    The Postdoctoral Researcher will be expected to devote 50% time to his or
    her own research agenda and 50% time to working with S&TS Department
    faculty on developing the Data Science & Society Lab, a new and innovative
    course that is part of the Cornell Data Science Curriculum Initiative. The
    lab will engage undergraduate students in two components: instruction in
    theoretical tools and practical skills for analyzing social and ethical
    problems in contemporary data science (e.g., data science as ethical
    practice; fairness, justice, discrimination; privacy; openness, ownership,
    and control; or credibility of data science); and participation in
    interdisciplinary project teams that work with external partners to address
    a real-world data science & society problem.

    The Postdoctoral Researcher will have the opportunity to help launch and
    shape the initiative, to develop curriculum and engagement projects, build
    relationships with external partners and participate in teaching the
    course. S/he will work with two S&TS Department faculty members, Malte
    Ziewitz and Stephen Hilgartner, who will have primary responsibility for
    teaching the course.

    Applicants should send:

    • Cover letter summarizing the candidate’s relevant background, accomplishments, and fit with the position
    • CV
    • Up to two publications (or writing samples)
    • Three letters of recommendation
    • A transcript of graduate work (unofficial is acceptable)

    Required Qualifications:

    PhD in science & technology studies (STS) or related fields (e.g.,
    sociology, anthropology, law, media studies, information science) and an
    active research agenda on the social aspects of data science. ABD students
    are eligible to apply, but proof of completion of the Ph.D. degree must be
    obtained prior to beginning the position. Recent graduates who received
    their Ph.D. during the last five years are especially encouraged to apply.

    The position is available for a Summer 2019 start (as early as July 1). We
    will begin to review applications on February 28. Apply at
    https://academicjobsonline.org/ajo/jobs/13236. For further information,
    please contact Sarah Albrecht, saa9@cornell.edu.

    Diversity and inclusion are a part of Cornell University’s heritage. We are
    a recognized employer and educator valuing AA/EEO, Protected Veterans, and
    Individuals with Disabilities.

     
  • Mark 2:23 pm on February 4, 2019 Permalink | Reply
    Tags: interpassivity, passivity

    The tragedy of the academic commons: when abstraction makes us passive 

    Using the communal kitchen at the Faculty of Education last Friday, I noticed that the lid had fallen off the bin and was sitting on the floor. In the middle of something and keen to get home, I didn’t stop to pick it up. Coming back to the same kitchen on Monday afternoon, I noticed it was still on the floor. “Ah, the tragedy of the commons,” I said internally, stroking my chin and nodding sagely, before beginning to walk out of the room. At which point I realised how absurd I was being and stopped to pick the lid up from the floor, immediately wishing I’d done it on Friday.

    It left me wondering how certain forms of abstraction, stepping back from a concrete phenomenon and subsuming it into a general category, make action less likely. There’s something about that moment of understanding, recognising a fragment of the general in the mundanity of the particular, that is liable to induce passivity. It’s hard to argue a counterfactual, but I suspect I would have immediately picked up the lid if I hadn’t experienced that moment of abstract recognition. However, I’m aware I’m doing exactly the same thing in writing this blog post: recognising a general propensity in a particular instance, and encouraging others to do the same by raising it discursively in a public forum.

     
    • Mehret Biruk 7:09 pm on February 4, 2019 Permalink

      Living with roommates, this is a particularly common occurrence. Oh, everything that remains untouched for an infinite period of time as we all make room around it to carry on with our daily tasks.

  • Mark 8:25 pm on February 2, 2019 Permalink | Reply  

    Breathe Me 

     
    • Sourav Roy 2:10 am on February 3, 2019 Permalink

      Perfect song when you are flailing with the structure of chapter on a Sunday morning.

    • Mark 1:48 pm on February 4, 2019 Permalink

      perfect for many things 🙂

  • Mark 8:53 pm on February 1, 2019 Permalink | Reply

    Mark Fisher on using social media rather than being used by it 

    “In sum, the obsession with the web, its monopolisation of any idea of the new, has served capitalist realism rather than undermined it. Which does not mean, naturally, that we should abandon the web, only that we should find out how to develop a more instrumental relationship with it. Put simply, we should use it – as a means of dissemination, communication and distribution – but not live inside it. The problem is that this goes against the tendencies of handhelds. We all recognise the by now cliched image of a train carriage full of people pecking at their tiny screens, but have we really registered how miserable this really is, and how much it suits capital for these pockets of socialisation to be closed down?” – Mark Fisher, Abandon hope (summer is coming)

     
  • Mark 6:07 pm on February 1, 2019 Permalink | Reply

    Experts, knowledge and criticality in the age of ‘alternative facts’: re-examining the contribution of higher education 

    This event looks fantastic. More details and registration here.

    Chair: Dr Neil Harrison, University of Oxford

    In their seminal works of the early 1990s, both Ulrich Beck and Anthony Giddens predicted that one manifestation of late modernity would be a popular suspicion of experts and scepticism about expertise.  Since then, the rise of the individual’s ability to have their voice heard through mass social media has eroded traditional patterns of cognitive authority – including in academia.

    On the one hand, this democratisation of knowledge is to be welcomed, as it has enabled new critical voices to emerge and new discourses to develop, especially among groups that have historically been voiceless. However, it has also created an environment of confusion – a crowded forum of competing voices where volume, integrity and quality are often out of balance.  This confusion has allowed those with power to obfuscate, especially when the weight of evidence is against them.  In recent times, we have seen former UK Education Secretary Michael Gove claim that the public are ‘tired of experts’, while US President Donald Trump’s infamous refrain of ‘fake news’ is used to sideline inconvenient facts and opinions.

    Universities have traditionally been seen as authoritative sites for both the creation and transmission of knowledge.  Academics are positioned as experts whose work enriches public life through scientific, social and cultural advances, with expertise that is passed to students through a variety of teaching practices as part of a consensual corpus of knowledge. More recently, universities have increasingly promoted the idea of their graduates as globally-aware and values-led problem-solvers, with the knowledge to tackle ‘wicked issues’ like climate change, public health crises and economic instability.

    This event will showcase a diverse collection of papers from a special issue of Teaching in Higher Education journal. They are bound together by a focus on how universities can and should respond to the ‘post-truth’ world where experts and expertise are under attack, but where knowledge and theory-based practice continue to offer the hope of a fairer, safer and more rewarding world.  Specifically, the papers touch on the contributions that can be made by information literacies, public intellectualism, curriculum reform, interdisciplinarity and alternative pedagogies.

     

    Presenters:

    Elizabeth Hauke (Imperial College, London): “Understanding the world today: the roles of knowledge and knowing in higher education”

    Gwyneth Hughes (University College London): “Developing student research capability for a ‘post-truth’ world: three challenges for integrating research across taught programmes”

    Rita Hordósy (University of Manchester) and Tom Clark (University of Sheffield): “Undergraduate experiences of the research/teaching nexus across the whole student lifecycle”

    Mark Brooke (National University of Singapore): “The analytical lens: developing undergraduate students’ critical dispositions in English for Academic Purposes writing courses”

    Alison MacKenzie (Queen’s University, Belfast): “Just Google it: digital literacy and the epistemology of ignorance”

     