Social media is a technology for asshole amplification. This is Jaron Lanier’s memorable description of it in his new book Ten Arguments for Deleting Your Social Media Accounts Right Now, and it can be seen clearly in the fact that “since social media took off, assholes are having more of a say in the world” (pg 43). His point is not simply that social media is a haven for trolls, because it’s “not helpful to think of the world as being divided into assholes and non-assholes or, if you prefer, trolls and victims”. On pg 44 he cautions that each of us has our own inner troll:

It’s like an ugly alien living inside you that you long ago forgot about. Don’t let your inner troll take control! If it happens when you’re in a particular situation, avoid that situation! It doesn’t matter if it’s an online platform, a relationship, or a job. Your character is like your health, more valuable than anything you can buy. Don’t throw it away.

But why, why is the inner troll there at all? It’s such a common problem that it must be a deep, primal business, a tragedy of our inheritance, a stupid flaw at the heart of the human condition. But saying that doesn’t get us anywhere. What exactly is the inner troll? Sometimes the inner troll takes charge, sometimes it doesn’t. My working hypothesis has long been that there’s a switch deep in every human personality that can be set in one of two modes. We’re like wolves. We can either be solitary or members of a pack of wolves. I call this switch the Solitary/Pack switch.

When we’re solitary wolves, we’re more free. We’re cautious, but also capable of more joy. We think for ourselves, improvise, create. We scavenge, hunt, hide. We howl once in a while out of pure exuberance.

When we’re in a pack, interactions with others become the most important thing in the world. I don’t know how far that goes with wolves, but it’s dramatic in people. When people are locked in a competitive, hierarchical power structure, as in a corporation, they can lose sight of the reality of what they’re doing because the immediate power struggle looms larger than reality itself.

The evolutionary language here can seem off-putting to a sociologist, but it can be recast in terms of internal and external goods: sometimes we are driven by rewards internal to what we are doing, at other times by rewards external to it. What makes social media platforms so insidious is their tendency to, as Lanier puts it, make “social status and intrigues become more immediate than the larger reality” (pg 49). I don’t agree with his account of why this is so, but I think the underlying direction of his argument is correct. Social media is asshole amplification technology because it lends such force and vivacity to external goods, particularly recognition and reputation, leaving internal goods hard to sustain.

We often do sustain our relationship with these internal goods, as can be seen in the continued existence of thoughtful and intelligent exchange online. But we do so in spite of, rather than because of, the asshole amplification architecture of social media. Grasping the bivalent nature of this relationship, as internal and external goods co-mingle within platform architectures which are continually modulating in response to our (ambivalent) actions, is crucial if we want to understand and perhaps even overcome the asshole amplification propensities of social media.

There’s also a fascinating mea culpa in the book. On loc 411 Lanier describes how early design decisions, inspired by the libertarian ethos taking hold within the tech community, created the openings for the global monopolies we now see emerging:

Originally, many of us who worked on scaling the internet hoped that the thing that would bring people together—that would gain network effect and lock-in—would be the internet itself. But there was a libertarian wind blowing, so we left out many key functions. The internet in itself didn’t include a mechanism for personal identity, for instance. Each computer has its own code number, but people aren’t represented at all. Similarly, the internet in itself doesn’t give you any place to store even a small amount of persistent information, any way to make or receive payments, or any way to find other people you might have something in common with. Everyone knew that these functions and many others would be needed. We figured it would be wiser to let entrepreneurs fill in the blanks than to leave that task to government. What we didn’t consider was that fundamental digital needs like the ones I just listed would lead to new kinds of massive monopolies because of network effects and lock-in. We foolishly laid the foundations for global monopolies. We did their hardest work for them. More precisely, since you’re the product, not the customer of social media, the proper word is “monopsonies.” Our early libertarian idealism resulted in gargantuan, global data monopsonies.

If I understand him correctly, he is suggesting that these functions could have been built into the infrastructure of the internet itself rather than becoming services fulfilled by corporate providers. This passage reminded me of a recent keynote by danah boyd, reflecting on how utopian dreams concerning digital technology have come to seem untenable with time:

A decade ago, academics that I adore were celebrating participatory culture as emancipatory, noting that technology allowed people to engage with culture in unprecedented ways. Radical leftists were celebrating the possibilities of decentralized technologies as a form of resisting corporate power. Smart mobs were being touted as the mechanism by which authoritarian regimes could come crashing down.

Now, even the most hardened tech geek is quietly asking:

What hath we wrought?

This intellectual utopianism concerned the products of the original digital utopians themselves, innovators who sought to “disrupt the status quo, but weren’t at all prepared for what it would mean when they controlled the infrastructure underlying democracy, the economy, the media, and communication”. Recognising the role of dreams in shaping technology isn’t just a matter of seeing how they inspire people to create, but also of recognising what happens when those dreams go wrong. They aren’t just a froth of naiveté on the surface of a dark materiality lurking beneath. They are a force in their own right, changing the world they sought to improve as the ambitions underlying them curdle in the darkening reality they have contributed to.

In the last year, I have found myself obsessing ever more frequently about agency and platforms. Given that I spent six years writing a PhD about human agency, it is inevitable that this would be the lens I bring to the analysis of platforms. But it also reflects a sustained weakness in how the role of agency in platforms is conceptualised, as well as in the political implications which are seen to flow from this. It is a weakness we can make sense of using Margaret Archer’s notion of conflation, developed to explain how different theorists have sought to solve the problem of structure and agency.

I want to suggest there is a fundamental ambiguity about platforms which plays out at both political and ontological levels. This ambiguity reflects a failure to make sense of how platforms exercise a causal influence over human beings and how human beings exercise a causal influence over platforms. Platform structuralism takes many forms, but it fundamentally sees human behaviour as moulded by platforms, which leverage insights into the social, psychological and/or neurological constitution of human beings to condition their behaviour in predictable and explicable ways. It takes the platform as the horizon of human action, framing human beings as responding to the incentives and disincentives to be found within its architecture. It is often tied to a politics which sees platforms as generating pathological, even addictive, behaviours. It conflates downwards, taking agency as an epiphenomenon of (platform) structure.

Critiques informed by platform structuralism often seem to have put their finger on something important, while remaining overstated in a way that is hard to pin down. My suggestion is that this overstatement reflects a failure to come to terms with the fundamental relation between the platform and the user. How do platforms exercise a causal influence over their users? Their interventions are individualised in a statistical way rather than a substantive one. These are instruments which are simultaneously precise and blunt: while they might be cumulatively influential, particular instances are liable to be crude and ineffective, often passing unnoticed in the life of the user. For this reason we have to treat claims about the causal powers of platforms over their users extremely carefully. This power also varies immensely between platforms, and the ontology of platforms designed for multi-sided markets is a more complex issue for another post.

Platform voluntarism is often a response to the overstatement of platform structuralism. Denying the capacity of platforms to mould their users, it frames platforms as simply providing incentives and disincentives, able to be ignored by users as readily as they are embraced. The platform is simply a stage upon which actors act, perhaps facilitating new categories of action but doing nothing to shape the characteristics of the agents themselves. It conflates upwards, treating platform (structure) as a straightforward expression of the aggregate intentions of its users. Both platform voluntarism and platform structuralism tend to reify platforms, cutting them off in different ways from both their users and the wider social contexts in which they are used. What gets lost is human agency and the ways in which these infrastructures shape and are shaped by human agents.

Another reason it is so crucial to retain agency as a category is that these platforms are designed in purposive ways. Unless we have an account of how they came to have the characteristics they do because people have sought to develop them in specific ways, we risk lapsing into a form of platform structuralism in which we take platforms as an a priori horizon within which human beings act. They are simply given. We might inquire into the characteristics of platforms in other capacities, including as business models, but we won’t link this to our account of how platforms condition the social action of users taking place within and through them. We will miss the immediate reactivity of platforms to their users, as well as the many human, rather than merely algorithmic, mechanisms at work. More broadly, we will take the conditioning influences as a given rather than as something to be explained. In such a case, we treat user agency and engineering agency as unrelated to each other and fragment a phenomenon which we need to treat in a unified way.

If we want to draw out these connections, it becomes necessary to understand how engineers design platforms in ways that encode an understanding of users and seek to influence their action. If we can provide thick descriptions of these projects, capturing the perspective of engineers as they go about their jobs, it becomes much easier to avoid the oscillation between platform structuralism and platform voluntarism. Central to this is the question of how platform engineers conceive of their users and how they act on these conceptions. What are the vocabularies through which they make sense of how their users act and how their actions can be influenced? Once we recover these considerations, it becomes harder to support the politics which often flows from platform structuralism. As Lanier writes on loc 282:

There is no evil genius seated in a cubicle in a social media company performing calculations and deciding that making people feel bad is more “engaging” and therefore more profitable than making them feel good. Or at least, I’ve never met or heard of such a person. The prime directive to be engaging reinforces itself, and no one even notices that negative emotions are being amplified more than positive ones. Engagement is not meant to serve any particular purpose other than its own enhancement, and yet the result is an unnatural global amplification of the “easy” emotions, which happen to be the negative ones.

He suggests we must replace terms like “engagement” with terms like “addiction” and “behavior modification”. Only then can we properly confront the political ramifications of this technology, because our description of the problems will no longer be sanitised by the now familiar discourse of Silicon Valley. But this political vocabulary would be unhelpful for sociological analysis, because it takes us further away from the lifeworld of big tech. It is only if we can establish a rich understanding of the agency underlying the reproduction and transformation of platforms that we can overcome the contrasting tendencies towards platform structuralism and platform voluntarism, and the conflationism they embody in our approach to platforms.

To me a book is not just a particular file. It’s connected with personhood. Books are really, really hard to write. They represent a kind of a summit of grappling with what one really has to say. And what I’m concerned with is that when Silicon Valley looks at books, they often think of them really differently, as just data points that you can mush together. They’re divorcing books from their role in personhood.

I’m quite concerned that in the future someone might not know what author they’re reading. You see that with music. You would think in the information age it would be the easiest thing to know what you’re listening to. That you could look up instantly the music upon hearing it so you know what you’re listening to, but in truth it’s hard to get to those services.

– Jaron Lanier