Over the last year, I have found myself obsessing ever more frequently about agency and platforms. Given that I spent six years writing a PhD about human agency, it was perhaps inevitable that this would be the lens I bring to the analysis of platforms. But it also reflects a sustained weakness in how the role of agency in platforms is conceptualised, as well as in the political implications which are seen to flow from this. It is a weakness we can make sense of using Margaret Archer’s notion of conflation, developed to explain how different theorists have sought to solve the problem of structure and agency.
I want to suggest we can find a fundamental ambiguity about platforms which plays out at both political and ontological levels. This ambiguity reflects a failure to make sense of how platforms exercise a causal influence over human beings and how human beings exercise a causal influence over platforms. Platform structuralism takes many forms, but it fundamentally sees human behaviour as moulded by platforms, leveraging insights into the social, psychological and/or neurological constitution of human beings to condition their behaviour in predictable and explicable ways. It takes the platform as the horizon of human action, framing human beings as responding to the incentives and disincentives to be found within its architecture. It is often tied to a politics which sees platforms as generating pathological, even addictive, behaviours. It conflates downwards, taking agency as an epiphenomenon of (platform) structure.
Critiques informed by platform structuralism often seem to have put their finger on something important, while remaining overstated in a way that is hard to pin down specifically. My suggestion is that this overstatement reflects a failure to come to terms with the fundamental relation between the platform and the user. How do platforms exercise a causal influence over their users? Their interventions are individualised in a statistical way, rather than a substantive one. These are instruments which are simultaneously precise yet blunt. While they might be cumulatively influential, particular instances are liable to be crude and ineffective, often passing unnoticed in the life of the user. For this reason we have to treat the causal powers of platforms over their users extremely carefully. It is also something which varies immensely between platforms, and the ontology of platforms designed for multi-sided markets is a more complex issue for another post.
Platform voluntarism is often a response to the overstatement of platform structuralism. It denies the capacity of platforms to mould their users, framing platforms as simply providing incentives and disincentives, able to be ignored by users as readily as they are embraced. The platform is simply a stage upon which actors act, perhaps facilitating new categories of action but doing nothing to shape the characteristics of the agents themselves. It conflates upwards, treating platform (structure) as a straightforward expression of the aggregate intentions of their users. Both platform voluntarism and platform structuralism tend to reify platforms, cutting them off in different ways from both users and the wider social context in which they are used. What gets lost is human agency and the ways in which these infrastructures shape and are shaped by human agents.
Another reason it is so crucial to retain agency as a category is that these platforms are designed in purposive ways. Unless we have an account of how they have the characteristics they do because people have sought to develop them in specific ways, we risk lapsing into a form of platform structuralism in which we take platforms as an a priori horizon within which human beings act. They are simply given. We might inquire into the characteristics of platforms in other capacities, including as business models, but we won’t link this to our account of how platforms condition the social action of users taking place within and through them. We will miss the immediate reactivity of platforms to their users, as well as the many human, rather than merely algorithmic, mechanisms at work. But more broadly, we will take the conditioning influences as a given rather than as something to be explained. In such a case, we treat user agency and engineering agency as unrelated to each other, fragmenting a phenomenon which we need to treat in a unified way.
If we want to draw out these connections, it becomes necessary to understand how engineers design platforms in ways encoding an understanding of users and seeking to influence their action. If we can provide thick descriptions of these projects, capturing the perspective of engineers as they go about their jobs, it becomes much easier to avoid the oscillation between platform structuralism and platform voluntarism. Central to this is the question of how platform engineers conceive of their users and how they act on these conceptions. What are the vocabularies through which they make sense of how their users act and how their actions can be influenced? Once we recover these considerations, it becomes harder to support the politics which often flows from platform structuralism. As Jaron Lanier writes on loc 282 of his Ten Arguments for Deleting Your Social Media Accounts Right Now:
There is no evil genius seated in a cubicle in a social media company performing calculations and deciding that making people feel bad is more “engaging” and therefore more profitable than making them feel good. Or at least, I’ve never met or heard of such a person. The prime directive to be engaging reinforces itself, and no one even notices that negative emotions are being amplified more than positive ones. Engagement is not meant to serve any particular purpose other than its own enhancement, and yet the result is an unnatural global amplification of the “easy” emotions, which happen to be the negative ones.
He suggests we must replace terms like “engagement” with terms like “addiction” and “behavior modification”. Only then can we properly confront the political ramifications of this technology, because our description of the problems will no longer be sanitised by the now familiar discourse of Silicon Valley. But this political vocabulary would be unhelpful for sociological analysis because it takes us further away from the lifeworld of big tech. It is only if we can establish a rich understanding of the agency underlying the reproduction and transformation of platforms that we can overcome the contrasting tendencies towards platform structuralism and platform voluntarism, and with them conflationism in our approach to platforms.