I found Nicholas Carr’s review of Trump and the Media in the LA Review of Books immensely thought-provoking. His focus is on the book’s historical contribution, contextualising the enthusiasm with which social media was greeted in terms of long-term concerns about the centralisation of mass media. We can’t understand the ideal of a radically decentralised media without understanding the anxieties provoked by its initial centralisation:

Trump’s Twitter stream may be without precedent, but the controversy surrounding social media’s political impact has a history stretching back nearly a century. During the 1930s, the spread of mass media was accompanied by the rise of fascism. To many observers at the time, the former helped explain the latter. By consolidating control over news and other information, radio networks, movie studios, and publishing houses enabled a single voice to address and even command the multitudes. The very structure of mass media seemed to reflect and reinforce the political structure of the authoritarian state.

It is against this backdrop that social scientists began to “imagine a decentralized, multimedia communication network that would encourage the development of a ‘democratic personality,’ providing a bulwark against fascist movements and their charismatic leaders”. Fred Turner traces these initial speculations from their originators, through the 1960s counterculture and the incipient computer industry, to their becoming an article of faith within present-day Silicon Valley:

In the early years of this century, as the internet subsumed traditional media, the ideal became a pillar of Silicon Valley ideology. The founders of companies like Google and Facebook, Twitter and Reddit, promoted their networks as tools for overthrowing mass-media “gatekeepers” and giving individuals control over the exchange of information. They promised, as Turner writes, that social media would “allow us to present our authentic selves to one another” and connect those diverse selves into a more harmonious, pluralistic, and democratic society.

Carr frames Trump and the Media as “orbiting” around “the wreckage of techno-progressive orthodoxy”. These are the terms in which I’ve recently tried to analyse ‘fake news’ and ‘post-truth’, as solutionist framings by technological, media and political elites which circumscribe a much broader set of transformations and shape likely responses to them. It’s often struck me that these represent a peculiarly populist form of reasoning in their own right: isolating an incoming element which is seen to undermine a previously stable system, whether this is ‘populism’ or ‘social media’ itself. In the process, the claims of populists and social media firms are taken at face value, vastly inflating the power they have:

One contentious question is whether social media in general and Twitter in particular actually changed the outcome of the vote. Keith N. Hampton, of Michigan State University, finds “no evidence” that any of the widely acknowledged malignancies of social media, from fake news to filter bubbles, “worked in favor of a particular presidential candidate.” Drawing on exit polls, he shows that most demographic groups voted pretty much the same in 2016 as they had in the Obama-Romney race of 2012. The one group that exhibited a large and possibly decisive shift from the Democratic to the Republican candidate were white voters without college degrees. Yet these voters, surveys reveal, are also the least likely to spend a lot of time online or to be active on social media. It’s unfair to blame Twitter or Facebook for Trump’s victory, Hampton suggests, if the swing voters weren’t on Twitter or Facebook.

This is not to say that social media doesn’t exercise influence, only to dispute the assumption that it works through one-to-many communication. The media elites bemoaning the rise of fake news and filter bubbles in the dawning post-truth age are themselves complicit in the dynamic they see as being ‘out there’:

What Hampton overlooks are the indirect effects of social media, particularly its influence on press coverage and public attention. As the University of Oxford’s Josh Cowls and Ralph Schroeder write, Trump’s Twitter account may have been monitored by only a small portion of the public, but it was followed, religiously, by journalists, pundits, and policymakers. The novelty and frequent abrasiveness of the tweets — they broke all the rules of decorum for presidential campaigns — mesmerized the chattering class throughout the primaries and the general election campaign, fueling a frenzy of retweets, replies, and hashtags. Social media’s biggest echo chamber turned out to be the traditional media elite.

What this short review suggested to me is the necessity of revisiting basic concepts (such as centralisation, gatekeepers, publics and influence) in response to the wreckage of techno-progressive orthodoxy. We need a bleak social theory for bleak times, and if it doesn’t begin by examining the assumptions inherited in core concepts, as well as their implications for making sense of the present conjuncture, it is unlikely to get very far.

There’s an interesting anecdote at loc 3960-3972 of Bad Blood, John Carreyrou’s gripping account of the Theranos scandal, recounting a follow-up meeting between Rupert Murdoch and Elizabeth Holmes which sealed the former’s investment in the latter’s company. It struck me as a vivid account of the distinctive corporate culture which had emerged within Theranos, and of how this ran contrary to the expectations found in other sectors, even amongst international elites who might be expected to have seen a great deal of the personal entourages of fellow elites and the business expectations they bring with them into potential collaborations:

They met again a few weeks later at the media mogul’s Northern California ranch. Murdoch, who had only one bodyguard, was surprised by the size of the security detail Holmes arrived with. When he asked her why she needed it, she replied that her board insisted on it. Over a lunch served by the ranch’s staff, Holmes pitched Murdoch on an investment, emphasizing that she was looking for long-term investors. Don’t expect any quarterly reports for a while, she warned him, and certainly not an initial public offering. The investment packet that was later delivered to Murdoch’s Manhattan office reiterated that message. Its cover letter stated in the first paragraph that Theranos planned to remain private for the “long term” and went on to repeat those two words no fewer than fifteen times.

There are financial consequences to start-ups remaining private for longer, in some cases seeking to avoid going public altogether. Furthermore, this is only possible because of the ready availability of capital driven by low interest rates and the (illusory) promise of exponential growth. But what are the cultural consequences of firms remaining private for longer? How were the cultural pathologies of Theranos facilitated by its private status? How did this corporate culture shape how the firm was seen by outsiders? Was the corporate culture of Theranos exceptional, or can we see in it extreme manifestations of tendencies which can be identified in other firms?

To frame the commercialisation of space as being somehow related to ‘platform capitalism’ risks misunderstanding. It is certainly the case that Jeff Bezos, owner of Blue Origin, owes his wealth to Amazon, but this became a platform over time rather than being founded as one. Elon Musk, owner of SpaceX, owes his early success to PayPal, a finance platform which was purchased at great expense by a peer-to-peer commerce platform, but he is far from the quintessential platform capitalist. Meanwhile, there are other players in the commercial space industry, such as Microsoft co-founder Paul Allen and brand-for-hire Richard Branson, who have little to do with what we talk about when we use a term like platform capitalism.

Therefore what I mean when I talk about the interplanetary horizons of platform capitalism is not the commercial history of the founders of these companies, though they have accumulated their wealth over the period in which platforms have become ubiquitous and tech firms have become the most highly valued commercial entities on the planet. This has certainly facilitated their development, with Bezos largely self-financing his company until recently and Musk cross-fertilising his reputation and leveraging the Silicon Valley cult of the founder to win attention, overcome incumbents and force his way into the lucrative field of state contracts. But we miss what is most interesting about the commercialisation of space if we focus exclusively on these figures.

What interests me is how the platform, as an operable business model but also a heuristic working analogically to collapse the vast array of future opportunities into specifiable strategies, frames the new phase of space travel we are beginning to enter. This is something Microsoft co-founder Paul Allen explicitly invokes on pg 266-267 of Christian Davenport’s The Space Barons:

Allen also saw parallels between the space frontier and the Internet. “When such access to space is routine, innovation will accelerate in ways beyond what we can currently imagine,” he said. “That’s the thing about new platforms: when they become easily available, convenient, and affordable, they attract and enable other visionaries and entrepreneurs to realize more new concepts.… Thirty years ago, the PC revolution put computing power into the hands of millions and unlocked incalculable human potential. Twenty years ago, the advent of the web and the subsequent proliferation of smartphones combined to enable billions of people to surmount the traditional limitations of geography and commerce. Today, expanding access to LEO [low Earth orbit] holds similar revolutionary potential.”

The same case is made by Jeff Bezos in terms of infrastructure. These firms are building the infrastructure which makes commercial innovation in space feasible, creating facilities and crafting pipelines which other players will be able to use. The ambition here is vast, seeking to save capitalism from itself by moving it into space. For Musk, hope lies with Mars and the extension of technological civilisation there, moving beyond the confines of a dying earth. For Bezos, we must move industry beyond Earth and preserve our habitat as the place to live, while commerce, mining and manufacturing expand outwards to the stars. There is a civilisational vision in both cases, which it is necessary to recognise even if we don’t take it seriously.

It is easy to dismiss this as hubris, the outsized dreams of billionaires with too few restraints on how they spend their vast wealth. It is perhaps fairer, even if still inaccurate, to see it as an ideological front covering expansion into the largest area of state spending which until recently remained untouched by private commerce. But I’m increasingly convinced there’s more going on here than either explanation can recognise. Platform capitalism has interplanetary horizons which we should take seriously because they make a difference, even if they prove logistically or technologically unfeasible in the longer term. This is the frontier of how digital elites think about capitalism and its future, liable to exercise an enormous influence upon our collective world, in which these figures have near untrammelled power.

There’s a fascinating mea culpa in Jaron Lanier’s new book Ten Arguments for Deleting Your Social Media Accounts Right Now. On loc 411 he describes how early design decisions, inspired by the libertarian ethos taking hold within the tech community, created the openings for the global monopolies we now see emerging:

Originally, many of us who worked on scaling the internet hoped that the thing that would bring people together—that would gain network effect and lock-in—would be the internet itself. But there was a libertarian wind blowing, so we left out many key functions. The internet in itself didn’t include a mechanism for personal identity, for instance. Each computer has its own code number, but people aren’t represented at all. Similarly, the internet in itself doesn’t give you any place to store even a small amount of persistent information, any way to make or receive payments, or any way to find other people you might have something in common with. Everyone knew that these functions and many others would be needed. We figured it would be wiser to let entrepreneurs fill in the blanks than to leave that task to government. What we didn’t consider was that fundamental digital needs like the ones I just listed would lead to new kinds of massive monopolies because of network effects and lock-in. We foolishly laid the foundations for global monopolies. We did their hardest work for them. More precisely, since you’re the product, not the customer of social media, the proper word is “monopsonies.” Our early libertarian idealism resulted in gargantuan, global data monopsonies.

If I understand him correctly, he is suggesting that these functions could have been built into the infrastructure of the internet itself rather than becoming services fulfilled by corporate providers. This passage reminded me of a recent keynote by danah boyd, reflecting on how utopian dreams concerning digital technology have come to seem untenable with time:

A decade ago, academics that I adore were celebrating participatory culture as emancipatory, noting that technology allowed people to engage with culture in unprecedented ways. Radical leftists were celebrating the possibilities of decentralized technologies as a form of resisting corporate power. Smart mobs were being touted as the mechanism by which authoritarian regimes could come crashing down.

Now, even the most hardened tech geek is quietly asking:

What hath we wrought?

This intellectual utopianism concerned the products of the original digital utopians themselves, innovators who sought to “disrupt the status quo, but weren’t at all prepared for what it would mean when they controlled the infrastructure underlying democracy, the economy, the media, and communication”. Recognising the role of dreams in shaping technology isn’t just a matter of how they inspire people to create; it also means recognising what happens when they go wrong. These dreams aren’t just a froth of naiveté on the surface of a dark materiality lurking beneath. They are rather a force in their own right, changing the world they sought to improve as the ambitions underlying them curdle in the darkening reality they have contributed to.

How good does this look? So much of this chimes with the paper I’m currently struggling to finish:

The Cultural Life of Machine Learning: An Incursion into Critical AI Studies
Preconference Workshop, #AoIR2018 Montréal, Canada
Urbanisation Culture Société Research Centre, INRS (Institut national de la recherche scientifique)
Wednesday October 10th 2018

Machine learning (ML), deep neural networks, differentiable programming and related contemporary novelties in artificial intelligence (AI) are all leading to the development of an ambiguous yet efficient narrative promoting the dominance of a scientific field—as well as a ubiquitous business model. Indeed, AI is very much in full hype mode. For its advocates, it represents a ‘tsunami’ (Manning, 2015) or ‘revolution’ (Sejnowski, 2018)—terms indicative of a very performative and promotional, if not self-fulfilling, discourse. The question, then, is: how are the social sciences and humanities to dissect such a discourse and make sense of all its practical implications? So far, the literature on algorithms and algorithmic cultures has been keen to explore both their broad socio-economic, political and cultural repercussions, and the ways they relate to different disciplines, from sociology to communication and Internet studies. The crucial task ahead is understanding the specific ways by which the new challenges raised by ML and AI technologies affect this wider framework. This would imply not only closer collaboration among disciplines—including those of STS for instance—but also the development of new critical insights and perspectives. Thus a helpful and precise pre-conference workshop question could be: what is the best way to develop a fine-grained yet encompassing field under the name of Critical AI Studies? We propose to explore three regimes in which ML and 21st-century AI crystallize and come to justify their existence: (1) epistemology, (2) agency, and (3) governmentality—each of which generates new challenges as well as new directions for inquiries.

In terms of epistemology, it is important to recognize that ML and AI are situated forms of knowledge production, and thus worthy of empirical examination (Pinch and Bijker, 1987). At present, we only have internal accounts of the historical development of the machine learning field, which increasingly reproduce a teleological story of its rise (Rosenblatt, 1958) and fall (Minsky and Papert, 1968; Vapnik, 1998) and rise (Hinton, 2006), concluding with the diverse if as-yet unproven applications of deep learning. Especially problematic in this regard is our understanding of how these techniques are increasingly hybridized with large-scale training datasets, specialized graphics-processing hardware, and algorithmic calculus. The rationale behind contemporary ML finds its expression in a very specific laboratory culture (Forsythe, 1993), with a specific ethos or model of “open science”. Models trained on the largest datasets of private corporations are thus made freely available, and subsequently détourned for the new AI’s semiotic environs of image, speech, and text—promising to make the epistemically recalcitrant landscapes of unruly and ‘unstructured’ data newly “manageable”.

As the knowledge-production techniques of ML and AI move further into the fabric of everyday life, they create a distinctly new form of agency. Unlike the static, rule-based systems critiqued in a previous generation by Dreyfus (1972), modern AI models pragmatically unfold as a temporal flow of decontextualized classifications. What then does agency mean for machine learners (Mackenzie, 2017)? Performance in this particular case relates to the power of inferring and predicting outcomes (Burrell, 2016); new kinds of algorithmic control thus emerge at the junction of meaning-making and decision-making. The implications of this question are tangible, particularly as ML becomes more unsupervised and begins to impact numerous aspects of daily life. Social media, for instance, are undergoing radical change, as insightful new actants come to populate the world: Echo translates your desires into Amazon purchases, and Facebook is now able to detect suicidal behaviours. In the general domain of work, too, these actants leave permanent traces—not only on repetitive tasks, but on broader intellectual responsibility.

Last but not least, the final regime to explore in this preconference workshop is governmentality. The politics of ML and AI are still largely to be outlined, and the question of power for these techniques remains largely unexplored. Governmentality refers specifically to how a field is organised—by whom, for what purposes, and through which means and discourses (Foucault, 1991). As stated above, ML and AI are based on a model of open science and innovation, in which public actors—such as governments and universities—are deeply implicated (Etzkowitz and Leydesdorff, 2000). One problem, however, is that while the algorithms themselves may be openly available, the datasets on which they rely for implementation are not—hence the massive advantages for private actors such as Google or Facebook who control the data, as well as the economic resources to attract the brightest students in the field. But there is more: this same open innovation model makes possible the manufacture of military AI with little regulatory oversight, as is the case for China, whose government is currently helping to fuel an AI arms race (Simonite, 2017). What alternatives or counter-powers could be imagined in these circumstances? Could ethical considerations stand alone without a proper and fully developed critical approach to ML and AI? This workshop will try to address these pressing and interconnected issues.

We welcome all submissions which might profitably connect with one or more of these three categories of epistemology, agency, and governmentality; but we also welcome other theoretically and/or empirically rich contributions.

Interested scholars should submit proposal abstracts, of approximately 250 words, by 11:59pm EDT on June 30th, 2018 to CriticalAI2018 [at] gmail [dot] com. Proposals may represent works in progress, short position papers, or more developed research. The format of the workshop will focus on paper presentations and keynotes, with additional opportunities for group discussion and reflection.

This preconference workshop will be held at the Urbanisation Culture Société Research Centre of INRS (Institut national de la recherche scientifique). The Centre is located at 385 Sherbrooke St E, Montreal, QC, and is about a 20-minute train ride from the Centre Sheraton on the STM Orange Line (enter at the Bonaventure stop, exit at Sherbrooke), or about a 30-minute walk along Rue Sherbrooke.

For information on the AoIR (Association of Internet Researchers) conference, see https://aoir.org/aoir2018/ ; for other preconference workshops at AoIR 2018, see https://aoir.org/aoir2018/preconfwrkshop/.

Organizers: Jonathan Roberge (INRS), Michael Castelle (University of Warwick), and Thomas Crosbie (Royal Danish Defence College).

In a fascinating account of the private space programs of Jeff Bezos and Elon Musk, Christian Davenport explains how the Advanced Research Projects Agency (ARPA) has its origins in the geopolitics of the Cold War. From pg 59:

Eisenhower entered the room at 10:31 a.m., and decided to get right to it, asking, “Do you have any questions for me?” The very first question he faced, from United Press International, was blunter than he was used to: “Mr. President, Russia has launched an Earth satellite. They also claim to have had a successful firing of an intercontinental ballistic missile, none of which this country has done. I ask you, sir, what are we going to do about it?” In the midst of the Cold War, the Soviets’ launches were seen as acts of aggression, expressions of military superiority. In a memo to the White House, C. D. Jackson, a former special assistant to the president who had served in the Office of Strategic Services, wrote that it was “an overwhelmingly important event—against our side.… This will be the first time they have achieved a big scientific jump on us, ostensibly for peaceful scientific purposes, yet with tremendous military overtones. Up to now, it has generally been the other way around.” If the Soviet Union could put a satellite into orbit, it held the ultimate high ground and could, many feared, rain down missiles on American cities from space. Life magazine compared Sputnik to the shots fired at Lexington and Concord and urged the country to “respond as the Minutemen had done then.” Then-Texas senator Lyndon Johnson fretted that “soon they will be dropping bombs on us from space like kids dropping rocks onto cars from freeway overpasses.”

The agency emerged from what it called the “traumatic experience of technological surprise” and constituted “a sort of elite special force within the Pentagon made of its best and brightest scientists and engineers” which cut across the entrenched barriers of the established services within the military. I would like to better understand the significance of DARPA in this context, as well as what it might tell us about how techno-nationalism might in future lead to the condensation of funding priorities into new agencies. As Davenport describes it on pg 128:

DARPA was tasked with looking into the future to envision what sorts of technologies the United States would need for the future of war: “To cast a javelin into the infinite spaces of the future” was its motto, a quote from Hungarian composer Franz Liszt. Walled off from the rest of the giant Pentagon bureaucracy so that it could innovate freely, the agency strove for nothing short of revolutionary advancement and “engineering alchemy” that would pierce the realm of science fiction. It had been given the authority to hire as it needed, as it sought “extraordinary individuals who are at the top of their fields and are hungry for the opportunity to push the limits of their disciplines.”

It has contributed to the development of a remarkable range of technologies, as detailed by Davenport on pg 128:

During Gise’s time, DARPA, then known as ARPA, was focused on preventing nuclear war and winning the space race. It even helped develop NASA’s Saturn V rocket, which took the Apollo astronauts to the moon. Since then, its reach and influence had broadened. In the late 1960s it started work on what would become ARPANET (Advanced Research Projects Agency Network), a network of computers in different geographic locations that became a precursor to the Internet.

Over the years, it helped develop all sorts of technological advancements that have transformed war, and, in some cases, everyday life. DARPA helped give birth to the Global Positioning System (GPS), stealth technology, cloud computing, early versions of artificial intelligence, and autonomous aerial vehicles. As early as the late 1970s, it was working on a “surrogate travel system” that created something like a Google Street View map of Aspen, Colorado. More recently, its work was focused on underwater drones, geckolike gloves designed to enable soldiers to climb walls, humanoid robots, bullets that can change direction, and a blood-cleansing “artificial spleen” to help treat sepsis.

What does this tell us about the future? Probably not very much in itself, though it is interesting to note that the DARPA budget is growing, from $2.97 billion in 2015 to a budget request of $3.44 billion for 2019. If anyone has suggestions of good places to read about developing trends in government funding of technology research, particularly in relation to national security, I’d be grateful for them. My point in writing this post is not to lionise ARPA or call for the ‘disruption’ of the military but simply to observe the relationship between geopolitical concerns and technological innovation. If developments such as artificial intelligence, crypto-currency and platformization have increasingly vast geopolitical ramifications, then what will this mean for the climate of state investment in emerging technologies? In many ways, the point is an obvious one, but making it leaves us squarely within a terrain so mired in ideology (concerning free markets and technology on the one hand, national security interests on the other) that the full significance of the observation will often be lost.

These notes are for the fifth and final week of the CPGJ platform capitalism intensive reading group. One of the themes running through the readings over the five weeks has been the political valence of platforms and its relationship to our analysis of them. My own instinct is that valorising platforms in an a priori way impedes our analysis of them, but that an apolitical framing of platform capitalism is neither possible nor desirable. Rather than being an outright contradiction, I believe this leaves a small space for analysis which I hoped the readings for this week would help open up. The essay by Helen Margetts takes issue with the gloomy interpretations of recent developments in social media, contrasting them with the now antiquated sense of excitement with which these platforms were once greeted. As she put it in a lecture in Cambridge I helped organise in November, “social media have had a bad press recently”:

They are held responsible for pollution of the democratic environment through fake news, junk science, computational propaganda and aggressive micro-targeting. In turn, these phenomena have been blamed for the rise of populism, political polarization, far-right extremism and radicalisation, waves of hate against women and minorities, post-truth, the end of representative democracy, fake democracy and ultimately, the death of democracy. It feels like the tirade of relatives of the deceased at the trial of the murderer. It is extraordinary how much of this litany is taken almost as given, the most gloomy prognoses as certain visions of the future.

Her point is not to reassert tech-utopianism but simply to stress that “we know rather little about the relationship between social media and democracy”. After ten years in which the internet has challenged our previous assumptions about democracy, it is imperative that we do not rush to judgement in lieu of understanding how social media have “injected volatility and instability into political systems, bringing a continual cast of unpredictable events”. There is barely a feature of political life that has been untouched by these changes, posing profound questions for our conceptual, empirical and normative understanding of democracy. But as much as these platforms generate transactional data which could in principle help us to understand these changes, in reality “Most of this data is proprietary and inaccessible to researchers – the revolution in big data and data science has passed by democracy research”.

Her essay responds to this epistemic void by laying out a concise though systematic account of what we _do_ know about social media and its relationship to politics. The positive part of this account rests on the value of what she terms “tiny acts”, such as “Following, liking, tweeting, retweeting, sharing text or images relating to a political issue or signing up to a digital campaign”, which have no equivalent prior to social media and extend “below the bottom rung of the ladder of participation, which stretches from small acts such as signing a petition, through voting, to attending a political meeting, and donating money to a political cause, right up to political violence or armed struggle”. These tiny acts bring new people into politics, but the same characteristics which enable political activity to take place outside of organised groups render the ensuing actions unstable and unpredictable. The resulting pattern is akin to that of earthquakes, argues Margetts, with many trivial eruptions and a few enormous ones. These patterns of engagement challenge two features of democracy (political identity and institutions) and render politics more unpredictable than ever before. Drawing an analogy with the stages of grief, Margetts identifies Denial, Bargaining, Anger, Depression and Resistance as stages of response to the profound change which has been brought about in democratic politics. This includes the interesting contradiction that ‘clicktivism’ is disdained while social media is also claimed to have massive pathological effects upon organised politics. Which is it? The final stage of acceptance entails the recognition that social media are here to stay, and the ensuing difficult work of institutionalising them:

There is an alternative response to the role of social media in politics – to accept that they are part of our democratic system, the political weather, and that political systems must accommodate the change, through a process of institutional catch up. Most social media platforms did not exist 10 years ago, and they have been at the heart of our political systems for far less than that. So it is understandable that political institutions have failed to adjust, and the new institutions of democracy – social media corporations – have proceeded unchecked and unregulated, particularly given the power of the original cyber-utopian dream.


We have been using the terminology of ‘platforms’ throughout this reading group, but have we paid enough attention to the implications of this? A number of the readings we have used make a strong case about the analytical value of the term, identifying it as a mode of organisation with ramifications for capitalism as a whole. But what should we make of the readiness with which companies adopt the terminology to describe their own services? Should this make us suspicious? This is the argument Tarleton Gillespie makes in ‘The Politics of Platforms’. This is a term which, as Gillespie puts it, is an “increasingly familiar term in the description of the online services of content intermediaries, both in their self-characterizations and in the broader public discourse of users, the press, and commentators”. Understood as a discursive strategy, it is a crucial part of how these firms “establish a long-term position in a fluctuating economic and cultural terrain”. Gillespie insists we must unpack these strategic considerations, in order to analyse how firms seek “to position themselves both to pursue current and future profits, to strike a regulatory sweet spot between legislative protections that benefit them and obligations that do not, and to lay out a cultural imaginary within which their service makes sense”. To put it crudely: it is part of the self-branding of platforms, and this should surely give us pause for thought. Nonetheless, analysing this self-positioning can help us make sense of how these firms understand themselves, what they see as their interests and how they intend to develop their businesses over the coming years.

Platform is a structural metaphor akin to ‘network,’ ‘broadcast,’ or ‘channel’ which “depends on a semantic richness that, though it may go unnoticed by the casual listener or even the speaker, gives the term discursive resonance”. Gillespie identifies four senses in which the term platform is used, expressed through fifteen entries in the dictionary: computational (providing an infrastructure), architectural (surfaces upon which people can stand), figurative (a foundation upon which we can build) and political (a body of commitments upon which a party and/or individual seeks election). These senses intermingle, such that “being raised, level, and accessible are ideological features as much as physical ones”, conveying certain qualities in the system or entity which is designated as a platform. The computational meaning of platform precedes the current preoccupation with social media. This tracks a shift in meaning, such that services come to be identified as platforms “not necessarily because they allow code to be written or run, but because they afford an opportunity to communicate, interact, or sell”. Reflecting on the case of YouTube, Gillespie explains how the increasingly dominant sense of platform uses the discursive force of the trope to politicise the facilitation of user-generated content:

This more conceptual use of ‘platform’ leans on all of the term’s connotations: computational, something to build upon and innovate from; political, a place from which to speak and be heard; figurative, in that the opportunity is an abstract promise as much as a practical one; and architectural, in that YouTube is designed as an open-armed, egalitarian facilitation of expression, not an elitist gatekeeper with normative and technical restrictions. This fits neatly with the long-standing rhetoric about the democratizing potential of the Internet, and with the more recent enthusiasm for user-generated content (UGC), amateur expertise, popular creativity, peer-level social networking, and robust online commentary.

This positions YouTube as “unlike the mainstream broadcasters, film studios, and publishers”, rejecting the “role of gatekeeper, not even curators: they would be mere facilitators, supporters, hosts”. In spite of the prominence of their advertising model, much of the user-generated content cannot be paired with ads, because advertisers’ concern about being paired with the wrong content is so widespread and YouTube itself is concerned about accidentally profiting from copyright infringement. YouTube has therefore sought commercial partnerships from the outset, with partnered content dominating the platform in spite of constituting a minority of the content to be found on it. This entails a delicate balancing act, and the terminology of the platform can help unify what might otherwise be competing accounts of YouTube and its role:

The business of being a cultural intermediary is a complex and fragile one, oriented as it is to at least three constituencies: end users, advertisers, and professional content producers. This is where the discursive work is most vital. Intermediaries like YouTube must present themselves strategically to each of these audiences, carve out a role and a set of expectations that is acceptable to each and also serves their own financial interests, while resolving or at least eliding the contradictions between them.

In the case of YouTube, it allows them to “make a bid to be the new television, convincing media producers to provide their valuable content and advertisers to buy valuable consumer attention, on the back of user-generated content and all its democratic, egalitarian connotations, offered to them as television’s antidote”.

These discursive strategies have a legal as well as a marketing component. As Gillespie observes, “what we call such things, what precedents we see as most analogous, and how we characterize its technical workings drives how we set conditions for it”. Firms seek “a regulatory paradigm that gives them the most leeway to conduct their business, imposes the fewest restrictions on their service provision, protects them from liability for things they hope not to be liable for, and paints them in the best light in terms of the public interest”, with self-characterisation being a potent means through which this can be pursued. He deftly illustrates how the terminology of the platform can be used to avoid responsibility, with firms defining themselves as technical companies rather than publishers. This has crucial significance within US law because, under the Digital Millennium Copyright Act, as “long as you are a neutral distributor of information and are not aware of specific violations, you are not liable for the violations of users”. He draws an important comparison with the regulatory environment to which the telephone companies used to be subject:

For instance, before their deregulation the telephone companies were bound by two obligations: first, they must act as a ‘common carrier,’ agreeing to provide service to the entire public without discrimination. Second, they can avoid liability for the information activities of their users, to the extent that they serve as ‘conduit,’ rather than as producers of content themselves. Both metaphors, common carrier and conduit, make a similar (but not identical) semantic claim as does platform. Both suggest that the role of distributing information is a neutral one, where the function is merely the passage of any and all content without discrimination.

The business model of YouTube doesn’t leave them with the traditional interests of publishers, but it does leave them with interests in what they publish. They unavoidably make choices which shape the production, circulation and reception of material accessible through the service, and these choices have implications beyond the scope of the service itself. The terminology of the platform obfuscates in the face of this responsibility, and this is why we must recognise the strategic conduct underpinning it:

A term like ‘platform’ does not drop from the sky, or emerge in some organic, unfettered way from the public discussion. It is drawn from the available cultural vocabulary by stakeholders with specific aims, and carefully massaged so as to have particular resonance for particular audiences inside of particular discourses. These are efforts not only to sell, convince, persuade, protect, triumph, or condemn, but to make claims about what these technologies are and are not, and what should and should not be expected of them. In other words, they represent an attempt to establish the very criteria by which these technologies will be judged, built directly into the terms by which we know them.

If we do this, it becomes easier to recognise the similarities between platform businesses and traditional media, as well as the interest such businesses have in obscuring this commonality. Gillespie’s argument is that the discourse of ‘platform’ actively works against us when we try to analyse their position and how they represent their actions.

Call for Papers – (In)Equalities and Social (In)Visibilities in the Digital Age – Journal Interações

The influence of new technologies in the public and private spheres of society has given rise to a new social field, rather than a mere reformulation of existing ones, and directly interferes with how we perceive the world and relate to it and to others. In Pierre Bourdieu’s (2001) theory, a field arises as a configuration of socially distributed relations.

Progressively, a universe of socialisation has emerged and consolidated: cyberspace. Although virtual, it exists and produces effects. It can be defined as the space boosted by the different digital communication platforms, and it assumes the form of an individual communication model, allowing the receiver to be simultaneously an emitter. A space of flows (Castells, 1996), cyberspace translates the social dimension of the Internet, enabling the diffusion of communication and information on a global scale. This causes an intense process of inclusion and exclusion of people in the network.

Reference to the info-inclusive and info-excluded societies of the digital scenario is imperative when reflecting on the geography of the new socio-technological spaces. The dynamics of these territories are directly associated with the way social, demographic, economic and technological variables condition each other, revealing the potential for the dissemination of information and knowledge through technologies.

In this special issue of the journal Interações we propose a reflection on (In)Equalities and Social (In)Visibilities in the Digital Age. Unpublished works presenting research results and/or theoretical reflection on this theme are accepted, although the special issue is not limited to the following topics:

– Digital, social and economic inequalities in different geographical contexts;
– Promoting equality through digital means;
– Visibilities and social invisibilities created by movements of exclusion or inclusion (social, digital, media, economic, etc.);
– Invisible social groups in the digital age;
– Digital literacy and vulnerable social groups;
– The digital as a geographical barrier;
– Conditioning created by technology for the individual in a social context.

Deadline for submission of articles: June 25
Notification of acceptance: July 10
Publication: July 31

Articles must be sent via email to: interacoes@ismt.pt

Any questions should be addressed to the same email.

Guidelines and other instructions for authors can be found on the journal’s website: http://www.interacoes-ismt.com/index.php/revista

This looks fascinating:

FROM THE HRC-SCHOLARS LISTSERV:

Dear members,

Please find attached the call for papers from my institution’s anniversary conference. My institution being TILT (The Institute for Law, Technology and Society in Tilburg, The Netherlands), you might find this one a bit out there, but we have several tracks for which we specifically hope to bring together a very interdisciplinary crowd. The track that I wanted to bring to your attention is “AI, Robotics and Responsibility”; I copy-pasted the text below.

This is the website: https://www.tilburguniversity.edu/upload/11d6299f-4ea0-4b39-bc88-a4631c328875_CALL_FOR_PAPERS_TILTing%202019.pdf

Please feel free to spread the word!

Aviva.

Track: AI, Robotics and Responsibility

The complexity and autonomous character of Artificial Intelligence (AI) technologies and robotics challenge conventional ideas about responsibility. How will responsibility be distributed if self-driving cars no longer require the active attention of the driver? Can operators or commanders be held responsible for the targeting decisions of autonomous lethal weapons? To what extent can human beings be accountable for administrative decisions made by machine-learning algorithms? Some scholars have argued that the more autonomous technologies become, the less we can reasonably hold human beings responsible. This conclusion is perhaps premature, but it does underline that these (envisioned) technologies require a rethinking of our conceptions of responsibility and associated concepts, such as accountability, liability, trust, autonomy, agency, and control.

In this track we want to explore how developments in AI and robotics affect established ways of distributing responsibility and how concerns about responsibility can be addressed. We consider responsibility in a broad sense as pertaining to various kinds of responsibility, including accountability, liability, role responsibility, professional responsibility and moral responsibility. As such, AI and robotics have raised a range of questions and concerns. Are our existing concepts of liability and accountability equipped to deal with machine learning algorithms? Should artificial agents and robots at some point in the future be held liable or be considered moral agents? To what extent can and should the outputs of AI algorithms be explained, for example to hold human beings accountable for automated decisions? What does it mean to have meaningful control over an AI technology? How do increasingly autonomous technologies mediate how we experience our (moral) responsibility, for instance in terms of how they interact with feelings of guilt, regret or duty? These different questions bring together a number of current and related discussions that we want to connect in this track, to examine how the changing relationship between human beings and digital technologies affects the role of responsibility in the governance and regulation of AI and robotics. We therefore welcome contributions from a range of different disciplines, including law, philosophy, social science, cognitive science and computer science, on topics related to AI, robotics and responsibility.

For questions about possible presentation topics for this track, please contact Dr. Merel Noorman: M.E.Noorman@uvt.nl

Notes for week 4 of the CPGJ Platform Capitalism Reading Group

I thought this short talk by danah boyd was really powerful in linking the utopian dreams of internet radicals to the anxieties and outcomes of work. Framing the future of work in terms of automation, as if that says everything which needs to be said, obscures “the broader anxiety about identities that’s shaping both technology and work”. It’s important we reclaim this as a focus of our analysis, because people who can no longer “find their identity through their working environment” and realise they are in a situation “where institutions and information intermediaries no longer have their back” will not stand inertly as the rug is pulled out from beneath their feet. Their responses may be self-destructive (the opioid crisis), socially destructive (religious extremism) or socially transformational (activism). However, it’s important to recognise how the activism through which people find this meaning might come to be destructive (and disruptive) in turn:

People often find themselves by engaging with others through collective action, but collective action isn’t always productive. Consider this in light of the broader conversation about media manipulation: for those who have grown up gaming, running a raid on America’s political establishment is thrilling. It’s exhilarating to game the media to say ridiculous things. Hacking the attention economy produces a rush. It doesn’t matter whether or not you memed the president into being if you believe you did. It doesn’t even matter if your comrades were foreign agents with a much darker agenda.

These people are responding to an environment which looks the way it does because of past activism, intended to “create a public that was more broadly accessible, but ended up enabling a new wave of corrosive populism to take hold”. Those earlier activists wanted to “disrupt the status quo, but weren’t at all prepared for what it would mean when they controlled the infrastructure underlying democracy, the economy, the media, and communication”. Platform capitalism was “birthed out of idealism” yet became something profoundly different, now “emblematic of corrosive neoliberalism and libertarianism run amok”. Early adopters saw themselves as marginal (“geeks, freaks, and queers”) and “turned to technology to build solidarity and feel less alone”. As boyd observes, it wasn’t so long ago that this utopianism seemed tenable to many:

A decade ago, academics that I adore were celebrating participatory culture as emancipatory, noting that technology allowed people to engage with culture in unprecedented ways. Radical leftists were celebrating the possibilities of decentralized technologies as a form of resisting corporate power. Smart mobs were being touted as the mechanism by which authoritarian regimes could come crashing down.

Now, even the most hardened tech geek is quietly asking:

What hath we wrought?

I thought this talk set up questions rather than answering them. How do the cultural frames promulgated by technologists lock in the outcomes their innovations have made possible? How do we politicise technology in a way that recognises the ever-present possibility of corruption and abuse? How can we ensure technologists take responsibility for what they produce? Can the instinct to disrupt the status quo through technology take a positive form, or should the lesson of the last couple of decades be that this will inevitably lead us to dark places? The talk also does something foundational to how I approach platform capitalism: it brings the agents back in without losing the focus on the technology.

In this week’s CPGJ platform capitalism reading group, we turn towards education for the first time with a paper by José van Dijck and Thomas Poell looking at the influence of social media platforms on education, particularly within schools. Much of the literature has addressed social media as tools, with varying interpretations offered about how these might help or hinder teaching and learning. The ubiquity of social media is often cited as a reason to try and integrate their use into the curriculum, with some arguing they could play a crucial role in helping with particular tasks such as information retrieval. Others frame social media as a disruptive force within the classroom, undermining existing routines and creating problems for teachers. Optimists and pessimists are united in their “social media-as-tools approach: social media are considered as technical tools that may either enhance or disrupt learning experiences”. In contrast, van Dijck and Poell insist on framing these as platforms, which are “driven by a complex interplay between technical architectures, business models, and mass user activity” and “introduce new mechanisms in social life”.

This helps broaden the focus of our analysis, away from “student behaviour and teaching practices” towards “the organization of schools and universities and, one might argue, (public) education as such”. Their analysis rests upon two distinct mechanisms: datafication and commodification. In doing so, they draw on work which has explored social media in terms of a transformation of the landscape within which young people become civic actors, creating a range of possibilities for how education might change. The development of this perspective by van Dijck and Poell involves seeing social media as “more than mere technical facilitators: they are simultaneously technological, economic, and socio-cultural frameworks for managing online social traffic”. The main focus of their paper is upon how datafication and commodification reshape the organisation of education at primary and secondary levels.

  • Datafication is “the tendency to quantify all aspects of social interaction and turn them into code”. This incorporates two aspects: quantification and digitisation. The affordances of digital technology facilitate quantification to an extent that would not otherwise be possible. This can have descriptive and predictive dimensions: tracking developments in real time but also producing predictions which feed back into practice. In a sector like education, “emerging digital policy instruments transfer the assessment of didactic and pedagogical values from teachers and classrooms to (commercial) online platforms deploying real-time and predictive analytics techniques”. But datafication will have a similar tendency in other sectors, because it circumvents the situational judgement of professionals by creating an analytic apparatus which operates in the background. There might be a degree of variability in how much leeway the professional continues to enjoy (consider for instance the way data can be used to enhance the performance of elites) but the broader trend is towards the diminution of agential prerogative. In the educational context, mechanisms of datafication include data trackers and dashboards, facilitating personalisation of a sort similar to that found in content-streaming platforms like Netflix. As they write of AltSchool, it “favors technology over teachers; online personalized learning takes over classroom instruction; and the primacy of predictive analytics downgrades teachers’ professional judgment”. Digitalising a process, rendering it as quantitative data, imposes epistemic constraints on the ensuing knowledge, creating a bias towards the immediate and the atomistic. The specificity of education is eviscerated by a generic architecture of likes and upvotes.
  • Commodification involves the “monetization of online social traffic through business models and governance structures” and is closely connected to datafication. A limited number of business models all revolve around how data can be used to generate profit, incentivising the continual expansion of datafication, with economies of scale giving rise to fewer and larger data actors. It is hoped that what is datafied can be commodified. Data-driven commodification facilitates the unbundling of education. As the authors write, “[t]he conventional business model reflects the ideology of higher education as a curriculum-based, comprehensive experience that offers an education at a price that includes not only lectures or course content but certification, advising, tutoring, and testing”. The market for educational data, coupled with the near-zero marginal costs of digital communications, means that the curriculum can (technically) be delivered purely as content and there is a (financial) motivation for doing so. The potential implications of this educational data have barely been recognised, with the authors plausibly suggesting it might in future replace CVs in the eyes of employers.

Their analysis refuses to separate off education platforms from the wider ecosystem in which they emerge, dominated as it is by the major actors of Facebook, Google, Apple, Amazon and Microsoft. While education platforms might not threaten existing institutions in the manner of Uber and taxi firms, van Dijck and Poell identify three features which might lead to systemic change:

  1. Principles of social media architecture have primacy over pedagogical principles on educational platforms. Young people are “growing up immersed in the compelling social interaction these platforms offer in terms of connecting, liking, rating and following each other”, and free education services (e.g. Google Scholar, Google Docs, Gmail) offered by major players like Google already play a prominent role in their educational lives. This ubiquity is liable to be reinforced by continued growth in use amongst young people, and by funding shortfalls leaving organisations looking to free services which enable costs to be cut. The result is that “corporate platforms such as Google, Facebook, LinkedIn, and Microsoft are able to position themselves strategically, at the gateways of educational infrastructures such as libraries, administrative and communication systems”.
  2. The capacity of education platforms to solve what are perceived as endemic problems of education is widely assumed yet little demonstrated. This reflects the broader influence of ‘solutionism’ (seeing technical fixes to social problems) and the narrative of sustained failures within the education system. These platforms are presented as emerging from off-stage to solve the problem, riding to the rescue of a crisis whose creation their own emergence is intimately bound up with, as part of the political economy of late capitalism.
  3. The growth of online education globally might lead to a cultural shift in the understanding of education as a public good. They suggest we can identify “how education is increasingly defined as a technological challenge developed by tech companies and decreasingly as a service carried out by dedicated teachers and funded by taxes”. The scalable and free logic of digital education seems enticing against a backdrop of austerity politics and a drive towards the retrenchment of the welfare state.

The second paper analyses the platform as evaluative infrastructure. Platforms are evaluative in the sense of deploying a wide array of ranking mechanisms to establish orders of worth. They are infrastructure because they provide the background conditions which make interaction possible. An infrastructure combines “technical artefacts, institutional arrangements, cultural habits and social conventions” (“people, language, numbers, categories, cultures, practices, artefacts but also pipes and hard-wired circuits”) to produce material forms which facilitate exchange over time and space. Power within them operates through protocols (rules and standards governing behaviour within networks) rather than familiar hierarchical forms of influence. Evaluative infrastructure “consists of an ecology of devices that disclose values of actions, events and objects in heterarchically organized systems (such as platforms) through the maintenance of protocol”. Its mechanisms co-ordinate and condition the interaction which takes place between distributed parties, with the platform being the means through which the platform owner facilitates that interaction and seeks to profit from it. Evaluative infrastructures enable platform owners to operate distinctive types of platform organisation. The evaluative infrastructure is what makes platform capitalism possible.

An immense amount of activity takes place on them: “as of 2014 eBay had 165 million active users, Uber was hosting over 1 million rides per day, and Airbnb was facilitating 155 million guest stays annually, surpassing the Hilton Worldwide by 22 percent”. The evaluative infrastructure establishes shared orders of worth which make this interaction meaningful, stabilising expectations and generating trust between parties who do not stand in a prior relation to each other or have much context in common. In doing so, platforms “relate and recombine people, ideas, and things” through “the invisible infrastructures that coordinate and control platform activities”. Their operation rests on “an ecology of accounting devices in the form of rankings, lists, classifications, stars and other symbols (‘likes’, ‘links’, tags, and other traces left through clicks) which relate buyers, sellers, and objects”. The value creation this gives rise to takes place horizontally across the platform, defying any traditional vertical attempt by the platform owner to organise it, necessitating a new accounting regime on the part of platform owners and new concepts for social scientists to analyse their operation. Part of the challenge stems from the capacity of these infrastructures to bring new worlds into being rather than merely capturing the traces of what is already there.
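By way of fixing ideas, here is a minimal sketch of the sort of accounting device this describes, in Python with invented ratings and an arbitrary smoothing rule (my own toy example, not one from the paper): traces left by transactions are aggregated into a reputation score which orders sellers and stabilises expectations between strangers.

```python
from collections import defaultdict

# Hypothetical traces: (buyer, seller, stars out of 5) left after each transaction.
ratings = [
    ("b1", "sellerA", 5), ("b2", "sellerA", 4),
    ("b1", "sellerB", 2), ("b3", "sellerB", 3), ("b4", "sellerA", 5),
]

def reputation(ratings, prior=3.0, prior_weight=2):
    """Toy evaluative device: a smoothed average, so new sellers start
    from a neutral prior rather than an empty record."""
    totals = defaultdict(lambda: [prior * prior_weight, prior_weight])
    for _, seller, stars in ratings:
        totals[seller][0] += stars
        totals[seller][1] += 1
    return {seller: round(s / n, 2) for seller, (s, n) in totals.items()}

# An order of worth which no individual buyer or seller authored.
print(sorted(reputation(ratings).items(), key=lambda kv: -kv[1]))
# [('sellerA', 4.0), ('sellerB', 2.75)]
```

Even in this toy form, the prior and its weight are parameters set by the platform owner: power exercised through protocol, imposing what counts and how to count.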

Community plays a significant role in this, with the eBay founder once saying that “eBay’s success as a company depends upon the success of the community”. What I take them to be saying, in slightly different theoretical lingo to the one I’d use, concerns the capacity of platforms to generate relationality within groups: the platform produces thick relations through mechanisms designed to counter the fact that thin relations are the starting point. In doing so, the interests of the platform are effectively baked into the relational web, even if it remains possible for the web’s evaluative orientation to run counter to the platform in exceptional cases. Users can resist a platform but they do so in spite of their status as users. Recognising this will be crucial to understanding the lived experience of platform participation, generating thick descriptions of actions within and through infrastructures which “constantly link events, actions, behaviours, decisions (clicks), assessments and other traces left unintentionally and unconsciously (such as speed of typing, time of access, or browser used to access site) all of which are used to build a web of context around objects and subjects”. Under these conditions, the power of platform owners operates “through its infrastructural design, maintaining standards, imposing what counts and how to count, excluding users, and introducing rules” so as to structure the field of possibilities, rather than guiding actors within it.

Questions for discussion:

  1. What is at stake in whether we define social media as platforms or tools?
  2. What does it mean to say “All platforms are equally defined by a set of mechanisms”?
  3. Where are the agents behind evaluative infrastructures?

On Sept. 27–28 the Cultural Matters Group at the Department of Sociology, Uppsala University, is organizing a symposium called Dis/Connection: Conflicts, Activism and Reciprocity Online and Beyond, and we look forward to receiving your papers!

Deadline for submissions is June 18, 2018.

The symposium focuses on a fundamental aspect of social relationships, namely the idea of connection. We invite abstract submissions on the possibilities of connectivity, but also the problems and promises of the act of disconnection. Digital networks embedded in everyday lives have transformed virtually every aspect of social life – from intimate relations with family and friends to the collective acts of digital activism. Digital relations and connections are our starting point for a broader discussion of notions of connectivity and how they are developing, failing, or simply being reproduced. Therefore, we also focus on the idea of disconnection as a voluntary act to take control over one’s use of digital technologies, as an act of resistance and of saying no to the opaque structures of power and control in the networked society. The goal is to further the discussion on the gains, costs and possibilities of ethical life in the culture of hyperconnectivity. The symposium also aims to address destructive functions of connected living, such as surveillance, trolls, selfies, fake news, fake news accusations, sexual harassment, click baiting, commercialism, terrorism, viruses, spam, and the colonization of private life. In other words, we are interested in how connectivity and disconnectivity can give rise to and facilitate social inclusion and democratic processes, as well as exclusion, isolation and conflict.

Confirmed keynote speaker Adam Fish is a cultural anthropologist and Senior Lecturer in the Sociology Department at Lancaster University. He will give a talk entitled “Anthropology, Atmosphere, Anthropocene: Drones, Disruptive Justice, and the Disruption of the Earth”, drawing on his latest research on the use of drones in various contexts: surveillance, environmental protection, and war. Fish is the author of Technoliberalism and the End of Participatory Culture in the United States (Palgrave, 2017) and, with Ramesh Srinivasan, After the Internet (Polity, 2017), and is currently working on a book called Hacker States.

Further speakers to be announced.

Mako Ishizuka, a Japanese artist based in Paris, will conduct a performance art piece in parallel with the talks and paper sessions. She has exhibited her work in solo and group exhibitions internationally, including in France, Japan and the Netherlands. For more info see: http://makois.com

Papers could address but are not limited to the following themes:

*   What is ‘connectivity’ and how can we analyze it?
*   Sexism, peer pressure and online harassment
*   Digital activism / hacktivism
*   Bad connection – communication networks gone awry
*   The digital underclass – outsourced journalists, click farm and troll factory workers, and networking refugees
*   “Someone is very wrong on the internet” – risk, edgework, and the rise of the internet as a polarized political space
*   Digital afterlives and post-humanism
*   Digital intimacies
*   Online radicalization and extremism
*   Epistemic enclosures (so called “Google Bubbles”) and the future of representative democracy
*   The colonization of attention and attention economy
*   The self promotion video as a technique of the self
*   1337 h4x0rz
*   Politicians’ tweets, dick pics, inappropriate tagging and other downsides of being connected
*   Analog relationships offline – embodied relationships between humans (and other animals) as a way to opt out of the digital
*   Voluntary disconnection

Abstract submission and fees: The symposium will be free of charge, including coffee and a dinner on the evening of 27th September. The number of participants is limited.

Please submit your abstract to disconnection@soc.uu.se together with your name, affiliation and contact details. Use the same address for queries, and if you want to attend the conference without presenting a paper. Abstracts should not exceed 250 words. You will be notified concerning your participation at the beginning of July.

Deadline for submissions is June 18, 2018.

We’re looking forward to receiving your papers!

On behalf of the Cultural Matters Group,

/Magdalena

Notes for week 2 of the CPGJ Platform Capitalism reading group 

Both readings for this week treat utopian hopes of the internet bolstering democracy as anachronistic relics, looking in different ways to the murky reality of the politics which platform capitalism is giving rise to. Tufekci accepts some of the claims made about the affordances of digital technology while stressing the new inequalities which come with these developments, as we operate within a “data–analytic environment that favors the powerful, data–rich incumbents, and the technologically adept”. This is what Mark Andrejevic has elsewhere described as the ‘big-data divide’. New strategies take advantage of this divide, entrenching it in the process through the collection of data and the development of techniques, giving rise to “more effective — and less transparent — ‘engineering of consent’ (Bernays, 1947) in the public sphere”. This is neatly conceptualised by Tufekci as computational politics:

As a normative (but contested) ideal, the public sphere is envisioned by Habermas (1989) as the location and place in which rational arguments about matters concerning the public, especially regarding issues of governance and the civics can take place, freed from constraints of status and identity. The public sphere should be considered at once a “normative ideal” as well as an institutional analysis of historical practice (Calhoun, 1993). As actual practice, the public sphere pertains to “places” — intersections and commons — where these civic interactions take place, and which are increasingly online. This shift to a partially online public sphere, which has brought about the ability to observe, surveil and collect these interactions in large datasets, has given rise to computational politics, the focus of this paper.

Computational politics refers [to] applying computational methods to large datasets derived from online and off–line data sources for conducting outreach, persuasion and mobilization in the service of electing, furthering or opposing a candidate, a policy or legislation. Computational politics is informed by behavioral sciences and refined using experimental approaches, including online experiments, and is often used to profile people, sometimes in the aggregate but especially at the individual level, and to develop methods of persuasion and mobilization which, too, can be individualized. Thus, computational politics is a set of practices the rise of which depends on, but is not solely defined by, the existence of big data and accompanying analytic tools and is defined by the significant information asymmetry — those holding the data know a lot about individuals while people don’t know what the data practitioners know about them (U.S. Federal Trade Commission, 2014).
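To make concrete what individual-level profiling and targeting amounts to in practice, here is a minimal sketch in Python with entirely synthetic data and a toy model (my own illustration, not anything from Tufekci’s paper): behavioural traces are used to score every individual, and only the most persuadable slice is contacted, privately and asymmetrically.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical voter file: each row is one individual's behavioural traces
# (e.g. pages liked, hours online), joined from online and offline sources.
X = rng.normal(size=(1000, 2))
# Synthetic ground truth, for illustration only: responsiveness to a message.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Profile in the aggregate...
model = LogisticRegression().fit(X, y)

# ...then individualize: score everyone and contact only the top slice.
scores = model.predict_proba(X)[:, 1]
target_list = np.argsort(scores)[-50:]  # the 50 most persuadable individuals
print(target_list[:10])
```

The asymmetry Tufekci describes is visible even here: the modeller knows each individual’s score, while the individuals know nothing of the model.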

Tufekci is careful to note that the use of ‘big data’ for politics and marketing predates the internet. But her concern is that digital data facilitates “significantly more individualized profiling and modeling, much greater data depth, and can be collected in an invisible, latent manner and delivered individually”. The possibilities for interacting individually, privately and asymmetrically increase enormously, leading to a qualitative change in how public the public sphere will tend to be. Engineering consent is not a new ambition, but the tools now available to undertake it are radically different from what has come before:

  1. availability of big data
  2. shift to individual targeting
  3. the potential and opacity of modeling
  4. the rise of behavioral science in the service of persuasion
  5. dynamic experimentation
  6. and the growth of new power brokers on the Internet who control the data and algorithms

I suggested Kate Crawford’s essay for this week because it highlights public understanding of computational politics. What happens when there is direct and indirect awareness of this? We might hope it will take the form of organised political action, but her suggestion that we will instead see a cultural response is deeply plausible. What other reactions can we imagine? How might recent revelations about Cambridge Analytica contribute to this? I haven’t got time to write proper notes about her essay now, but I’ve written about it in the past, albeit briefly.

  1. Will computational politics necessarily erode the public sphere? What action can we take to prevent this? Will technocratic solutions to problems defined as ‘fake news’ and ‘computational propaganda’ help the situation or make it worse? Is there any way to put computational politics back in the box? Can we have platform capitalism without computational politics?
  2. Tufekci wrote in 2014 that “there has been fairly little conceptual theory–building especially about the political and civic consequences of big data” and mainstream media “rarely goes beyond exploring big data as a hot, new topic and an exciting new tool, and rarely consider issues of power”. Is this still the case? What would such work look like? Is it ‘remastering’ existing concepts to make them digitally adequate or developing new ones?
  3. Is a coherent public understanding of computational politics taking shape? What are the consequences of this? What’s the relationship between cultural responses to computational politics and political responses to it?

On pg 57 of George Packer’s Unwinding, he describes how Oprah Winfrey’s rhetoric of authenticity and openness co-exists with a pronounced tendency to exercise control over representations of herself:

She exalted openness and authenticity, but she could afford them on her own terms. Anyone allowed into her presence had to sign away freedom of speech for life. She bought the rights to every photograph of herself and threatened to sue anyone who infringed the inviolability of her image. She withdrew her autobiography just weeks before publication after friends warned that it revealed too much about some parts of her life even as it falsified others. Her face underwent drastic alterations year by year.

At risk of stating the obvious, the capacity to make these demands is unevenly distributed. But so, at least pre-digitalisation, was the impulse to make them: only celebrities incited a sufficient proliferation of representations to make such a demand coherent. Once digitalisation becomes ubiquitous, however, the predicament of the celebrity begins to generalise throughout society for two reasons:

  1. More representations of individuals will tend to be produced
  2. Past representations are progressively less likely to decay

What might have been purely the prerogative of the celebrity under past conditions becomes a predicament faced by increasing numbers of people, still with wildly different capacities to exercise control over representations of themselves. The capacity to exercise such control is going to become ever more important with each passing year, as well as ever more unequally distributed.

One of the prevailing motifs of the Trumpist era has been the recognition on all sides of the social and political costs of deindustrialization, even if this recognition is typically subsumed into a prior political stance. There’s a really powerful account on pg 52 of George Packer’s Unwinding which conveys the scale of this change and the curious manner in which it remained obscure, a profound change for the worse in the lives of a vast aggregate which stubbornly resisted becoming the object of contention one might otherwise have expected:

John Russo, a former auto worker from Michigan and professor of labor studies, started teaching at Youngstown State University in 1980. When he arrived, he could look down almost every city street straight into a mill and the fire of a blast furnace. He came just in time to watch the steel industry vanish before his eyes. Russo calculated that during the decade between 1975 and 1985, fifty thousand jobs were lost in the Mahoning Valley—an economic catastrophe on an unheard-of scale. Yet, Russo said, “The idea that this was systemic didn’t occur.” As a resident expert, he would get a call from Time or Newsweek every six months, with a reporter on the line asking if Youngstown had turned the corner yet. Apparently it was impossible to imagine that so much machinery and so many men were no longer needed. It was happening in Cleveland, Toledo, Akron, Buffalo, Syracuse, Pittsburgh, Bethlehem, Detroit, Flint, Milwaukee, Chicago, Gary, St. Louis, and other cities across a region that in 1983 was given a new name: the Rust Belt. But it happened in Youngstown first, fastest, and most completely, and because Youngstown had nothing else, no major-league baseball team or world-class symphony, the city became an icon of deindustrialization, a song title, a cliché. “It was one of the quietest revolutions we’ve ever had,” Russo said. “If a plague had taken away this many people in the Midwest, it would be considered a huge historical event.” But because it was caused by the loss of blue-collar jobs, not a bacterial infection, Youngstown’s demise was regarded as almost normal.

It highlights the relationship between deindustrialisation as a socio-economic process and individualisation as a cultural phenomenon. A growing tendency to resist structural explanation, interpreting causes and consequences in terms of individuals and their lives, constrains attempts to collectively force these changes onto the political agenda as a matter of contestation.

Notes for week 1 of the CPGJ Platform Capitalism reading group. The notes relate to the lecture by Evgeny Morozov below:

The question of ‘who owns digital capitalism?’ was posed for the conference, but it was one Morozov felt uncomfortable with because it implied a separation between ‘digital capitalism’ and financialised capitalism. To illustrate the problem with this assumption, he cites Apple’s status as the largest trader of private corporate bonds in the world, currently holding $180 billion of privately issued corporate bonds. Apple is not seen as a finance company, yet it has its own financial trading firm. In parallel, we can see public stock and private investments in technology firms coming primarily from financial institutions, particularly sovereign wealth funds. Some of the largest owners of tech firms are the sovereign wealth funds of countries such as Norway, the country most exposed to the technology industry of the United States.

These firms are now central to capitalism. The total increase in the stock value of the big five (Apple, Microsoft, Google, Amazon and Facebook) from January 1st 2017 to November 2017 was $950 billion, bigger than the GDP of Norway, Denmark and Sweden combined. In China, two big firms added around $450 billion to their valuations over a similar timescale. Every sovereign wealth fund on the planet is trying to get in on the act, buying into established companies through the stock markets and into unlisted companies through private investment. Investment funds are now raising money from governments, with the intention of channelling this money into new ventures which are far from being listed on public markets. These investments sometimes involve national collaboration, such as the Russia–China investment fund, lending this activity a geopolitical significance. Morozov stresses the importance of understanding these flows of capital because of the corporate possibilities they entail. He cites the example of Uber’s valuation of $60 billion and the freedom which comes from being able to attract such huge investments without going to capital markets. For this reason, firms are increasingly uninterested in going public at all, representing a significant transformation of the financial system. Underlying this trend is the lack of viable returns from investments in other domains. Attending to these flows of capital offers a mechanism through which we can begin to recover the political economy of digital capitalism, which has been suppressed by epochal thinking and technological hype.

Morozov argues there are three deficiencies in how we understand digital capitalism at present:

  1. We are quick to imagine digital capitalism as something recent, unique, exceptional and driven primarily by technological change. In contrast, Morozov argues that the capacity of platform organisations to scale to a global level, extracting value from all corners of the world without significant capital investment, has to be traced back to two previous crises of capitalism: the crisis of profitability in the 1970s and the financial crisis of 2007/8. Previous crises invited a solution (financialisation: a move away from an economy based on manufacturing and production towards one centred on finance) which has accumulated problems of its own, leading to the present predicament following 2007/8. Asset-based welfare, or asset Keynesianism, has sought to compensate for welfare retrenchment and stagnant wages by inflating private assets. It’s in this context that something like Airbnb can operate as a lifeline, generating stable income for many in Southern Europe. Digital technology has introduced savings in consumer expenses (facilitating the enjoyment of services without paying their full costs, because vast influxes of capital represent a near-term subsidy by institutional investors) while also offering people ways to make a living, often by leveraging assets like homes and cars. Our entire digital infrastructure rests on firms which use advertising to fund the delivery of free services which would otherwise have to be paid for. How would things look if we could add these costs into our national accounting?
  2. We have a very hard time periodising the history of digital capitalism. We tend to treat its current trends as permanent, projecting factors such as advertising forward into our expectations of how the corporation will make money in future. However, this model is a vulnerable one, susceptible to disruption through ad blockers, national regulation or global stagnation. Given the centrality of advertising-based services to digital capitalism in its current form, we can expect a transition in business models. Morozov makes a plausible case that this will pivot on redeploying the data created for advertising towards the development of artificial intelligence services which can be offered to other sectors of the economy. The competitive advantage these companies enjoy in artificial intelligence is pretty much unassailable on a number of levels, not least their obvious capacity to hoover up artificial intelligence researchers and startups, and this has geopolitical implications. There are far higher profit margins on these services than there are on advertising. We can see a precursor to this movement in IBM’s transition into providing consultancy services. The consequences of this for the public sphere are enormous.
  3. We need a much more ambitious approach to political and policy interventions. How do we address the ownership of the data collected for purposes of advertising? How do we address who owns the artificial intelligence services that have been built with this data? This involves moving beyond simply thinking about privacy concerns, important though they are. It likewise means moving beyond banning firms, which misses the structural dependence that platforms like Airbnb have inculcated, and beyond city-level regulation. What is it that we can do if we move beyond this? Given the centrality of data to the future model of tech firms, getting the regulatory regime for data right becomes more crucial than ever. Might this involve collective rights to data? Municipal ownership of data?

Questions for discussion:

  • Should we talk about digital capitalism or platform capitalism? Does it matter?
  • When did platform capitalism start? How do we contextualise it in terms of a longer history of capitalism?
  • Is advertising the primary business model of platform capitalism? What other business models are there? Under what conditions might they be superseded and what might replace them?
  • What might a political agenda for regulating platform capitalism look like? How might this vary regionally and why? What are the geopolitical implications of this?

ALW2: 2nd Workshop on Abusive Language Online
EMNLP 2018 (Brussels, Belgium), October 31st or November 1st, 2018
Submission deadline: July 20th, 2018
Website: https://sites.google.com/view/alw2018
Submission link: https://www.softconf.com/emnlp2018/ALW2/

Overview
Interaction amongst users on social networking platforms can enable constructive and insightful conversations and civic participation; however, on many sites that encourage user interaction, verbal abuse has become commonplace, leading to negative outcomes such as cyberbullying, hate speech, and scapegoating. In online contexts, aggressive behavior may be more frequent than in face-to-face interaction, which can poison the social climates within online communities. The last few years have seen a surge in such abusive online behavior, leaving governments, social media platforms, and individuals struggling to deal with the consequences.

For instance, in 2015, Twitter’s CEO publicly admitted that online abuse on their platform was resulting in users leaving the platform, and in some cases even having to leave their homes. More recently, Facebook, Twitter, YouTube and Microsoft pledged to remove hate speech from their platforms within 24 hours in accordance with the EU commission code of conduct and face fines of up to €50M in Germany if they systematically fail to remove abusive content within 24 hours. While governance demands the ability to respond quickly and at scale, we do not yet have effective human or technical processes that can address this need. Abusive language can often be extremely subtle and highly context dependent. Thus we are challenged to develop scalable computational methods that can reliably and efficiently detect and mitigate the use of abusive language online within variable and evolving contexts.

As a field that works directly with computational analysis of language, NLP (Natural Language Processing) is in a unique position to address this problem. Recently there have been a greater number of papers dealing with abusive language in the computational linguistics community. Abusive language is not a stable or simple target: misclassification of regular conversation as abusive can severely impact users’ freedom of expression and reputation, while misclassification of abusive conversations as unproblematic on the other hand maintains the status quo of online communities as unsafe environments. Clearly, there is still a great deal of work to be done in this area. More practically, as research into detecting abusive language is still in its infancy, the research community has yet to agree upon a suitable typology of abusive content as well as upon standards and metrics for proper evaluation, where research in media studies, rhetorical analysis, and cultural analysis can offer many insights.

In this second edition of this workshop, we continue to emphasize the computational detection of abusive language as informed by interdisciplinary scholarship and community experience. We invite paper submissions describing unpublished work from relevant fields including, but not limited to: natural language processing, law, psychology, network analysis, gender and women’s studies, and critical race theory.

Paper Topics
We invite long and short papers on any of the following general topics.

Related to developing computational models and systems:

NLP models and methods for detecting abusive language online, including, but not limited to, hate speech and cyberbullying
Application of NLP tools to analyze social media content and other large data sets
NLP models for cross-lingual abusive language detection
Computational models for multi-modal abuse detection
Development of corpora and annotation guidelines
Critical algorithm studies with a focus on abusive language moderation technology

Human-Computer Interaction for abusive language detection systems
Best practices for using NLP techniques in watchdog settings

Or related to legal, social, and policy considerations of abusive language online:

The social and personal consequences of being the target of abusive language and targeting others with abusive language
Assessment of current non-NLP methods of addressing abusive language
Legal ramifications of measures taken against abusive language use
Social implications of monitoring and moderating unacceptable content
Considerations of implemented and proposed policies for dealing with abusive language online and the technological means of dealing with it.

In addition, in this one-day workshop, we will have a multidisciplinary panel discussion and a forum for plenary discussion on the issues that researchers and practitioners face in efforts to work on abusive language detection. We are also looking into the possibility of publishing a special journal issue linked to this iteration of the workshop.

We seek to have a greater focus on policy aspects of online abuse through invited speakers and panels.

Submission Information
We will be using the EMNLP 2018 Submission Guidelines. Authors are invited to submit a full paper of up to 8 pages of content with up to 2 additional pages for references. We also invite short papers of up to 4 pages of content with up to 2 additional pages for references.

Accepted papers will be given an additional page of content to address reviewer comments. We also invite papers which describe systems. If you would like to present a demo in addition to presenting the paper, please make sure to select either “full paper + demo” or “short paper + demo” under “Submission Category” in the START submission page.

Previously published papers cannot be accepted. The submissions will be reviewed by the program committee. As reviewing will be blind, please ensure that papers are anonymous. Self-references that reveal the author’s identity, e.g., “We previously showed (Smith, 1991) …”, should be avoided. Instead, use citations such as “Smith previously showed (Smith, 1991) …”.

We have also included conflict of interest in the submission form. You should mark all potential reviewers who have been authors on the paper, are from the same research group or institution, or who have seen versions of this paper or discussed it with you.
We will be using the START conference system to manage submissions.

Important Dates
Submission due: July 20, 2018
Author Notification: August 18, 2018
Camera Ready: August 31, 2018
Workshop Date: Oct 31st or Nov 1st, 2018
Submission link: https://www.softconf.com/emnlp2018/ALW2/

One of the most prominent tropes of social media is the crowd. As the cyber-utopian Clay Shirky put it: here comes everybody. This endlessly repeated motif sees social media in terms of the people. Where once a few commentators dominated the airwaves, now everybody has their say online. Where once there were a few musical superstars, now we have a ‘long tail’ of productive musicians. Where once a few critics exploited their position, now everybody has their voice heard through online review sites. In any given sector, we can see the same motif repeated, with digital platforms seen to have replaced the few with the many in a way implicitly assumed to be democratic. In many cases, we can be reasonably certain that these claims are empirically false: musical superstars, for instance, dominate more than ever in a global culture mediated by algorithmic discovery.

But there’s more to their plausibility than the mistaken belief that they are true. They embody what I think of as the demotic imaginary: the conviction that introducing more people into a sphere, in a loud and noisy way, represents a democratic gain. There can be a kernel of truth to these claims, and the affective force of the demotic imaginary helps the complexity fade away. Thus the ambiguities which should qualify our judgements retreat into the background, as we are taken in by superficial realities. For instance, as Leigh Gallagher describes on loc 2004 of The Airbnb Story:

When the attorney general’s report came out, it said that 72 percent of Airbnb’s “private” listings in New York were in violation of state law. And it said that while 94 percent of hosts had just 1 or 2 listings, the other 6 percent were so-called commercial hosts—those who had 3 or more listings regularly through Airbnb—and they accounted for more than a third of bookings and revenue. It said that one hundred hosts had 10 or more listings. The top dozen hosts had anywhere from 9 to 272 listings and made more than $1 million per year each. The biggest user, at 272 listings, had revenue of $6.8 million. It wasn’t so much the illegal activity that was new—after all, given the 2010 law, any Airbnb listing for a full apartment was illegal (unless it was in a house with fewer than three units), and both then and now, thousands of hosts and guests either don’t know about the law or willfully ignore it. What was new was that this report—marking the first time a party outside Airbnb had any access to the company’s data—revealed the scope of the multiproperty activity on the site. It dovetailed with previous reports that suggested a small percentage of hosts was responsible for a disproportionate share of the company’s New York business. Airbnb called the data incomplete and outdated. It said that New York’s current rules lacked clarity, and it wanted to work together with the city on creating new regulations to stop bad actors while putting in place “clear, fair rules for home sharing.”

It is undeniable that Airbnb has introduced more providers into the rental sector. But the demotic imaginary leads us to conceive of this influx as intrinsically democratic, as a disaggregated mass of equally situated actors. The reality is rather more complex.
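A quick back-of-the-envelope calculation, using only the proportions quoted above, makes the skew vivid: if 6 percent of hosts account for at least a third of revenue, the average ‘commercial’ host generates roughly eight times the revenue of an ordinary one.

```python
# Proportions quoted in the attorney general's report.
commercial_hosts = 0.06      # share of hosts with 3+ listings
commercial_revenue = 1 / 3   # their (lower-bound) share of revenue

ordinary_hosts = 1 - commercial_hosts
ordinary_revenue = 1 - commercial_revenue

# Average revenue per commercial host relative to an ordinary host.
ratio = (commercial_revenue / commercial_hosts) / (ordinary_revenue / ordinary_hosts)
print(round(ratio, 1))  # ~7.8x: hardly a mass of equally situated actors
```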

What is the relationship between platforms and their users? I’ve been thinking about this all morning while reading The Know‑It‑Alls: The Rise of Silicon Valley as a Political Powerhouse and Social Wrecking Ball, by Noam Cohen. On loc 277 he writes:

In fact, tech companies believe that through artificial intelligence tools they understand their users’ state of mind in a way few other companies can, and far better than any regulator. They can track, measure, and analyze the billions of decisions their users make, and they can detect even the most minor feature that may be turning them off. And rather than wait for problems, these companies can compel their users to express a preference by staging so-called A/B testing, which involves showing groups of users slightly different versions of the site and measuring which group stays longer and is thus happier with the experience. Google famously went so far as to prepare forty-one shades of blue to test which was the best color for displaying links in its Gmail service.

This epistemic privilege is inflated but it nonetheless has to be taken seriously. There are forms of knowledge about users which platforms have unique access to, discerning real-time behaviour (including responses to planned stimuli) with a degree of granularity that would be difficult to match in any other context. What matters is how this epistemic relation is raised into a political claim: if we know our users better than any external party, how could regulation be anything other than incompetent?
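The ‘planned stimuli’ in question are typically A/B tests of the kind Cohen describes. Here is a minimal sketch with invented retention counts; the two-proportion z-test is a standard way of making the comparison, not a claim about any particular firm’s internal tooling.

```python
from math import sqrt

# Hypothetical experiment: users randomly assigned to two versions of a page.
a_users, a_retained = 5000, 1510   # variant A
b_users, b_retained = 5000, 1630   # variant B

p_a, p_b = a_retained / a_users, b_retained / b_users
p_pool = (a_retained + b_retained) / (a_users + b_users)

# Two-proportion z-test: is B's retention rate reliably higher than A's?
se = sqrt(p_pool * (1 - p_pool) * (1 / a_users + 1 / b_users))
z = (p_b - p_a) / se
print(f"A={p_a:.3f} B={p_b:.3f} z={z:.2f}")  # |z| > 1.96 -> significant at 5%
```

Nothing in this procedure requires knowing why one variant retains users better, which is precisely what lends the resulting knowledge its peculiar character.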

This relies on a reduction of the salient characteristics of the user to those actions which register within the confines of the platform, representing the core of what I’ve written about in an upcoming chapter as the evisceration of the human: the reduction of real agency to its empirical traces. Furthermore, it involves the conviction of transcending the murky mess of self-interpretation, offering apparent insight into what OkCupid data scientist Christian Rudder memorably described as Who We Are (When We Think No One’s Looking) in the subtitle to his book Dataclysm. This is bound up in a political economy which Mark Andrejevic identifies on loc 870 of his InfoGlut:

In this regard the digital era opens up a new form of digital divide: that between those with access to the databases and those without. For those with access, the way in which data is understood and used will be fundamentally transformed. There will be no attempt to read and comprehend all of the available data – the task would be all but impossible. Correlations can be unearthed and acted upon, but only by those with access to the database and the processing power. Two different information cultures will come to exist side by side: on the one hand, the familiar, “old-fashioned” one in which people attempt to make sense of the world based on the information they can access: news reports, blog posts, the words of others and the evidence of their own experience. On the other hand, computers equipped with algorithms that can “teach” themselves will advance the instrumental pragmatics of the database: the ability to use tremendous amounts of data without understanding it.
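Andrejevic’s ‘instrumental pragmatics of the database’ can be illustrated in a few lines: correlations are unearthed mechanically and acted upon without any model of why they hold. A sketch in Python, with random data standing in for a real behavioural database:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical behavioural database: 10,000 users x 200 tracked signals.
data = rng.normal(size=(10_000, 200))
outcome = data[:, 7] * 0.4 + rng.normal(size=10_000)  # signal 7 secretly matters

# Unearth correlations in bulk; no attempt to understand any of them.
corrs = np.array([np.corrcoef(data[:, j], outcome)[0, 1] for j in range(200)])
best = int(np.argmax(np.abs(corrs)))
print(best, round(float(corrs[best]), 3))  # act on whichever signal correlates most
```

The script ‘finds’ signal 7 without offering any account of what it means, which is the point: the second information culture acts on correlation alone.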

Does anyone know of ethnographic work which looks at how this epistemic relation is talked about in everyday labour within these firms? It must presumably be invoked constantly, in an everyday manner, during user interface design and similar activities. This could help elucidate the micro-structure of the inflation of epistemic privilege which I suspect Cohen is correct to identify as one source of hostility to regulation.