The singularity is a speculative notion referring to the point at which exponential innovation generates a fundamental transformation of human civilisation. As Murray Shanahan puts it on loc 78 of his book The Technological Singularity:

In physics, a singularity is a point in space or time, such as the center of a black hole or the instant of the Big Bang, where mathematics breaks down and our capacity for comprehension along with it. By analogy, a singularity in human history would occur if exponential technological progress brought about such dramatic change that human affairs as we understand them today came to an end. The institutions we take for granted—the economy, the government, the law, the state—these would not survive in their present form. The most basic human values—the sanctity of life, the pursuit of happiness, the freedom to choose—these would be superseded. Our very understanding of what it means to be human—to be an individual, to be alive, to be conscious, to be part of the social order—all this would be thrown into question, not by detached philosophical reflection, but through force of circumstances, real and present.

How we should interpret this notion remains controversial. My own instinct is to see it as a form of techno-religion, delineating the point at which we transcend through our technological creations. But it is also something I feel we need to take seriously in order to understand it, particularly as a framework for the future shaped by the conditions of late capitalism. It is in this sense that I was intrigued to see acceleration so explicitly invoked as a force which could be harnessed to drive this innovation. From pg 44 of the same book:

The last of these options raises the possibility of a whole virtual society of artificial intelligences living in a simulated environment. Liberated from the constraints of real biology and relieved of the need to compete for resources such as food and water, certain things become feasible for a virtual society that are not feasible for a society of agents who are confined to wetware. For example, given sufficient computing resources, a virtual society could operate at hyper-real speeds. Every millisecond that passed in the virtual world could be simulated in, say, one-tenth of a millisecond in the real world. 

If a society of AIs inhabiting such a virtual world were to work on improving themselves or on creating even more intelligent successors, then from the standpoint of the real world their progress would be duly accelerated. And if they were able to direct their technological expertise back out to the real world and help improve the computational substrate on which they depended, then the rate of this acceleration would in turn be accelerated. This is one route to a singularity-like scenario. The result would be explosive technological change, and the consequences would be unpredictable.

My point is not to dispute the scientific plausibility of this but rather to ask how the notions in play come to acquire the resonance they do for those advocating and exploring the prospect of the singularity. 
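For what it’s worth, the feedback loop Shanahan sketches needs remarkably little machinery to produce something singularity-like. If a virtual society runs s times faster than real time, and its research improves the substrate at a constant rate k per unit of virtual time, then in real time ds/dt = k·s², which diverges at t = 1/(k·s₀): a blow-up in finite time, not mere exponential growth. A toy simulation of this dynamic (entirely my own construction, not from the book):

```python
# Toy model of the recursive speed-up scenario (my construction, not Shanahan's).
# A virtual society runs s times faster than real time. If its work improves the
# substrate at constant rate k per unit of *virtual* time, then in *real* time
# ds/dt = k * s**2, which blows up at t = 1 / (k * s0).

def speedup_trajectory(s0=10.0, k=0.01, dt=0.001, t_max=12.0):
    """Integrate ds/dt = k*s^2 with a simple Euler step."""
    t, s = 0.0, s0
    trajectory = [(t, s)]
    while t < t_max and s < 1e9:  # stop once the speed-up becomes absurd
        s += k * s * s * dt
        t += dt
        trajectory.append((t, s))
    return trajectory

traj = speedup_trajectory()
print(f"speed-up exceeds 10^9 at t ≈ {traj[-1][0]:.2f} "
      f"(analytic blow-up: {1 / (0.01 * 10.0):.2f})")
```

The particular numbers are arbitrary; the point is that the recursion itself, not any assumption about how fast progress is, is what generates the explosive behaviour.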

There’s a fascinating mea culpa in Jaron Lanier’s new book Ten Arguments for Deleting Your Social Media Accounts Right Now. On loc 411 he describes how early design decisions, inspired by the libertarian ethos taking hold within the tech community, created the openings for the global monopolies we now see emerging:

Originally, many of us who worked on scaling the internet hoped that the thing that would bring people together—that would gain network effect and lock-in—would be the internet itself. But there was a libertarian wind blowing, so we left out many key functions. The internet in itself didn’t include a mechanism for personal identity, for instance. Each computer has its own code number, but people aren’t represented at all. Similarly, the internet in itself doesn’t give you any place to store even a small amount of persistent information, any way to make or receive payments, or any way to find other people you might have something in common with. Everyone knew that these functions and many others would be needed. We figured it would be wiser to let entrepreneurs fill in the blanks than to leave that task to government. What we didn’t consider was that fundamental digital needs like the ones I just listed would lead to new kinds of massive monopolies because of network effects and lock-in. We foolishly laid the foundations for global monopolies. We did their hardest work for them. More precisely, since you’re the product, not the customer of social media, the proper word is “monopsonies.” Our early libertarian idealism resulted in gargantuan, global data monopsonies.

If I understand him correctly, he is suggesting that these functions could have been built into the infrastructure of the internet itself rather than becoming services fulfilled by corporate providers. This passage reminded me of a recent keynote by danah boyd, reflecting on how utopian dreams concerning digital technology have come to seem untenable with time:

A decade ago, academics that I adore were celebrating participatory culture as emancipatory, noting that technology allowed people to engage with culture in unprecedented ways. Radical leftists were celebrating the possibilities of decentralized technologies as a form of resisting corporate power. Smart mobs were being touted as the mechanism by which authoritarian regimes could come crashing down.

Now, even the most hardened tech geek is quietly asking:

What hath we wrought?

This intellectual utopianism concerned the products of the original digital utopians themselves, innovators who sought to “disrupt the status quo, but weren’t at all prepared for what it would mean when they controlled the infrastructure underlying democracy, the economy, the media, and communication”. Recognising the role of dreams in shaping technology isn’t just a matter of how they inspire people to create but also of what happens when they go wrong. These aren’t just a froth of naiveté on the surface of a dark materiality lurking beneath. They are rather a force in their own right, changing the world they sought to improve as the ambitions underlying them curdle in the darkening reality they have contributed to.

How good does this look? So much of this chimes with the paper I’m currently struggling to finish:

The Cultural Life of Machine Learning: An Incursion into Critical AI Studies
Preconference Workshop, #AoIR2018 Montréal, Canada
Urbanisation Culture Société Research Centre, INRS (Institut national de la recherche scientifique)
Wednesday October 10th 2018

Machine learning (ML), deep neural networks, differentiable programming and related contemporary novelties in artificial intelligence (AI) are all leading to the development of an ambiguous yet efficient narrative promoting the dominance of a scientific field—as well as a ubiquitous business model. Indeed, AI is very much in full hype mode. For its advocates, it represents a ‘tsunami’ (Manning, 2015) or ‘revolution’ (Sejnowski, 2018)—terms indicative of a very performative and promotional, if not self-fulfilling, discourse. The question, then, is: how are the social sciences and humanities to dissect such a discourse and make sense of all its practical implications? So far, the literature on algorithms and algorithmic cultures has been keen to explore both their broad socio-economic, political and cultural repercussions, and the ways they relate to different disciplines, from sociology to communication and Internet studies. The crucial task ahead is understanding the specific ways by which the new challenges raised by ML and AI technologies affect this wider framework. This would imply not only closer collaboration among disciplines—including those of STS for instance—but also the development of new critical insights and perspectives. Thus a helpful and precise pre-conference workshop question could be: what is the best way to develop a fine-grained yet encompassing field under the name of Critical AI Studies? We propose to explore three regimes in which ML and 21st-century AI crystallize and come to justify their existence: (1) epistemology, (2) agency, and (3) governmentality—each of which generates new challenges as well as new directions for inquiries.

In terms of epistemology, it is important to recognize that ML and AI are situated forms of knowledge production, and thus worthy of empirical examination (Pinch and Bijker, 1987). At present, we only have internal accounts of the historical development of the machine learning field, which increasingly reproduce a teleological story of its rise (Rosenblatt, 1958) and fall (Minsky and Papert, 1968; Vapnik, 1998) and rise (Hinton, 2006), concluding with the diverse if as-yet unproven applications of deep learning. Especially problematic in this regard is our understanding of how these techniques are increasingly hybridized with large-scale training datasets, specialized graphics-processing hardware, and algorithmic calculus. The rationale behind contemporary ML finds its expression in a very specific laboratory culture (Forsythe, 1993), with a specific ethos or model of “open science”. Models trained on the largest datasets of private corporations are thus made freely available, and subsequently détourned for the new AI’s semiotic environs of image, speech, and text—promising to make the epistemically recalcitrant landscapes of unruly and ‘unstructured’ data newly “manageable”.

As the knowledge-production techniques of ML and AI move further into the fabric of everyday life, they create a distinctly new form of agency. Unlike the static, rule-based systems critiqued in a previous generation by Dreyfus (1972), modern AI models pragmatically unfold as a temporal flow of decontextualized classifications. What then does agency mean for machine learners (Mackenzie, 2017)? Performance in this particular case relates to the power of inferring and predicting outcomes (Burrell, 2016); new kinds of algorithmic control thus emerge at the junction of meaning-making and decision-making. The implications of this question are tangible, particularly as ML becomes more unsupervised and begins to impact on numerous aspects of daily life. Social media, for instance, are undergoing radical change, as insightful new actants come to populate the world: Echo translates your desires into Amazon purchases, and Facebook is now able to detect suicidal behaviours. In the general domain of work, too, these actants leave permanent traces—not only on repetitive tasks, but on broader intellectual responsibilities.

Last but not least, the final regime to explore in this preconference workshop is governmentality. The politics of ML and AI are still largely to be outlined, and the question of the power of these techniques remains mostly unexplored. Governmentality refers specifically to how a field is organised—by whom, for what purposes, and through which means and discourses (Foucault, 1991). As stated above, ML and AI are based on a model of open science and innovation, in which public actors—such as governments and universities—are deeply implicated (Etzkowitz and Leydesdorff, 2000). One problem, however, is that while the algorithms themselves may be openly available, the datasets on which they rely for implementation are not—hence the massive advantages for private actors such as Google or Facebook, who control the data as well as the economic resources to attract the brightest students in the field. But there is more: this same open innovation model makes possible the manufacture of military AI with little regulatory oversight, as is the case in China, whose government is currently helping to fuel an AI arms race (Simonite, 2017). What alternatives or counter-powers could be imagined in these circumstances? Could ethical considerations stand alone without a proper and fully developed critical approach to ML and AI? This workshop will try to address these pressing and interconnected issues.

We welcome all submissions which might profitably connect with one or more of these three categories of epistemology, agency, and governmentality; but we also welcome other theoretically and/or empirically rich contributions.

Interested scholars should submit proposal abstracts, of approximately 250 words, by 11:59pm EDT on June 30th, 2018 to CriticalAI2018 [at] gmail [dot] com. Proposals may represent works in progress, short position papers, or more developed research. The format of the workshop will focus on paper presentations and keynotes, with additional opportunities for group discussion and reflection.

This preconference workshop will be held at the Urbanisation Culture Société Research Centre of INRS (Institut national de la recherche scientifique). The Centre is located at 385 Sherbrooke St E, Montreal, QC, and is about a 20-minute train ride from the Centre Sheraton on the STM Orange Line (enter at the Bonaventure stop, exit at Sherbrooke), or about a 30-minute walk along Rue Sherbrooke.

For information on the AoIR (Association of Internet Researchers) conference, see https://aoir.org/aoir2018/ ; for other preconference workshops at AoIR 2018, see https://aoir.org/aoir2018/preconfwrkshop/.

Organizers: Jonathan Roberge (INRS), Michael Castelle (University of Warwick), and Thomas Crosbie (Royal Danish Defence College).

It occurs to me when confronted with this that there are ever more contexts in which contemporary capitalism undermines the ability to plan ahead. This is striking because much of financialised capitalism is predicated on ensuring the calculability of the future through instruments like futures and securities which lock in certain expectations of future outcomes (and cause chaos when those outcomes can no longer be ensured, as happened in 07/08 when trust evaporated in the face of obviously untenable securities). If we accept the classical Weberian thesis about capitalism, its emergence was dependent upon an orientation which saw present actions as leading to future outcomes. But what happens when this relation to the future is broken? For those on zero-hours contracts, each week becomes a unit unto itself, disconnected from the past and the future. For those on fixed-term contracts, it will be a matter of years rather than weeks but it remains episodic. As consumers, flexibility about when we act is induced through apparent discounts against stagnant wages and declining purchasing power. Until we can’t even preserve predictability by paying more.

In the last few years, I’ve fallen into the habit of using the term chronopolitics without properly defining what I mean by it. But the sketchy thoughts above are at the nub of my concern. For instance, the strategic planning of the rail company concerning their ‘operational complications’ curtails the strategic planning of their travellers. Or to give another example which often comes to my mind during the slow/fast scholarship debate, the desire of a senior professor to avoid spending their time on tedious paperwork leaves a junior colleague or a clerical assistant spending their time on that task instead. Power can operate through temporality, in the mundane sense of imposed tasks having temporal extension but also in the more subtle sense of imposing one party’s temporal horizon to the exclusion of the other’s. The exercise of temporal power is a powerful means through which existing social arrangements can be locked in (what we might call temporal hegemony), but if capitalism relies upon temporality for its own legitimation then this lock-in might ultimately undercut itself and provide the conditions within which its demise could be fomented. However it might also simply lead to ever busier and more distracted people, dimly aware something is fundamentally wrong but too occupied by the intensity of their own lives to have the capacity to act on it.

This looks fantastic! 

CALL FOR PAPERS:

MORAL MACHINES? THE ETHICS AND POLITICS OF THE DIGITAL WORLD

6-8 March 2019, Helsinki Collegium for Advanced Studies, University of Helsinki

With confirmed keynotes from N. Katherine Hayles (Duke University, USA) and Bernard Stiegler (IRI: Institut de Recherche et d’Innovation at the Centre Pompidou de Paris)

As our visible and invisible social reality becomes increasingly digital, the question of the ethical, moral and political consequences of digitalization is ever more pressing. Such an issue is too complex to be met only with instinctive digiphilia or digiphobia. No technology is just a tool; all technologies mark their users and environments. Digital technologies, however, mark them much more intimately than any previous ones have done, since they promise to think in our place – so that they not only enhance homo sapiens’ most distinctive feature but also relieve us of it. We entrust computers with more and more functions, and their help is indeed invaluable, especially in science and technology. Some fear or dream that in the end they will become so invaluable that a huge Artificial Intelligence or Singularity will take control of the whole affair that humans deal with so messily.

The symposium “Moral Machines? The Ethics and Politics of the Digital World” welcomes contributions addressing the various aspects of the contemporary digital world. We are especially interested in the idea that despite everything they can do, the machines do not really think, at least not like us. So, what is thinking in the digital world? How does the digital machine “think”? Both of our confirmed keynote speakers, N. Katherine Hayles and Bernard Stiegler, have approached these fundamental questions in their work, and one of our aims within this symposium is to bring their approaches together for a lively discussion. Hayles has shown that, for a long time, computers were built on the assumption that they imitate human thought – while in fact, the machine’s capability for non-embodied and non-conscious cognition sets it apart from everything we call thinking. For his part, Bernard Stiegler has shown how technics in general and digital technologies in particular are specific forms of memory that are externalized and made public – and that, at the same time, become very different from and alien to individual human consciousness.

We are seeking submissions from scholars studying different aspects of these issues. Prominent work is being done in many fields, ranging from philosophy and literary studies to political science and sociology, not forgetting the wide umbrella of digital humanities. We hope that the symposium can bring together researchers from multiple fields and thus address the ethics and politics of the digital world in an interdisciplinary and inspiring setting. In addition to the keynotes, our confirmed participants already include Erich Hörl, Frédéric Neyrat and François Sebbah.

We encourage approaching our possible list of topics (see below) from numerous angles, from philosophical and theoretical to more practical ones. For example, the topics could be approached from the viewpoint of how they have been addressed within the realm of fiction, journalism, law or politics, and how these discourses possibly frame or reflect our understanding of the digital world.

The possible list of topics, here assembled under three main headings, includes but is not limited to:

  • Thinking in the digital world:
    • What kind of materiality conditions digital cognition?
    • How does the nonhuman and nonconscious digital world differ from embodied human thought?
    • How do digital technologies function as technologies of memory and thought?
    • What kind of consequences might their usage in this capacity have in the long run?
  • The morality of machines:
    • Is a moral machine possible?
    • Have thinking machines invalidated the old argument according to which a technology is only as truthful and moral as its human user? Or can truthfulness and morals be programmed (as the constructors of self-driving cars apparently try to do)?
    • How is war affected by new technologies?
  • The ways of controlling and manipulating the digital world:
    • Can and should the digital world be politically controlled, given that digital technologies are efficient means of both emancipation and manipulation?
    • How can we control our digital traces and the data gathered about us?
    • On what assumptions are national and global systems (e.g., financial systems, global commerce, national systems of administration, health and defense) designed, and do we trust them?
    • What does it mean that public space is increasingly administered by technical equipment made by very few private companies whose copyrights are secret?

“Moral Machines? The Ethics and Politics of the Digital World” is a symposium organized by two research fellows, Susanna Lindberg and Hanna-Riikka Roine at the Helsinki Collegium for Advanced Studies, University of Helsinki. The symposium is free of charge, and there will also be a public evening programme with artists engaging the digital world. Our aim is to bring together researchers from all fields addressing the many issues and problems of the digitalization of our social reality, and possibly contribute towards the creation of a research network. It is also possible that some of the papers will be invited to be further developed for publication either in a special journal issue or an edited book.

The papers to be presented will be selected based on abstracts which should not exceed 300 words (plus references). Add a bio note (max. 150 words) that includes your affiliation and email address. Name your file [firstname lastname] and submit it as a pdf. If you wish to propose a panel of 3-4 papers, include a description of the panel (max. 300 words), the papers (max. 200 words each), and bio notes (max. 150 words each).

Please submit your proposal to moralmachines2019@gmail.com by 31 August 2018. Decisions on the proposals will be made by 31 October 2018.

For further information about the symposium, feel free to contact the organizers Susanna Lindberg (susanna.e.lindberg@gmail.com) and Hanna-Riikka Roine (hanna.roine@helsinki.fi).

The symposium web site: https://blogs.helsinki.fi/moralmachines/.

This looks fascinating:

FROM THE HRC-SCHOLARS LISTSERV:

Dear members,

Please find attached the call for papers from my institution’s anniversary conference. My institution being TILT (The Institute for Law, Technology and Society in Tilburg, The Netherlands), you might find this one a bit out there, but we have several tracks for which we specifically hope to bring together a very interdisciplinary crowd. The track that I wanted to bring to your attention is “AI, Robotics and Responsibility”; I have copy-pasted the text below.

This is the website: https://www.tilburguniversity.edu/upload/11d6299f-4ea0-4b39-bc88-a4631c328875_CALL_FOR_PAPERS_TILTing%202019.pdf

Please feel free to spread the word!

Aviva.

Track: AI, Robotics and Responsibility

The complexity and autonomous character of Artificial Intelligence (AI) technologies and robotics challenge conventional ideas about responsibility. How will responsibility be distributed if self-driving cars no longer require the active attention of the driver? Can operators or commanders be held responsible for the targeting decisions of autonomous lethal weapons? To what extent can human beings be accountable for administrative decisions made by machine-learning algorithms? Some scholars have argued that the more autonomous technologies become, the less we can reasonably hold human beings responsible. This conclusion is perhaps premature, but it does underline that these (envisioned) technologies require a rethinking of our conceptions of responsibility and associated concepts, such as accountability, liability, trust, autonomy, agency, and control.

In this track we want to explore how developments in AI and robotics affect established ways of distributing responsibility and how concerns about responsibility can be addressed. We consider responsibility in a broad sense as pertaining to various kinds of responsibility, including accountability, liability, role responsibility, professional responsibility and moral responsibility. As such, AI and robotics have raised a range of questions and concerns. Are our existing concepts of liability and accountability equipped to deal with machine learning algorithms? Should artificial agents and robots at some point in the future be held liable or be considered moral agents? To what extent can and should the outputs of AI algorithms be explained, for example to hold human beings accountable for automated decisions? What does it mean to have meaningful control over an AI technology? How do increasingly autonomous technologies mediate how we experience our (moral) responsibility, for instance in terms of how they interact with feelings of guilt, regret or duty? These different questions bring together a number of current and related discussions that we want to connect in this track, to examine how the changing relationship between human beings and digital technologies affects the role of responsibility in the governance and regulation of AI and robotics. We therefore welcome contributions from a range of different disciplines, including law, philosophy, social science, cognitive science and computer science, on topics related to AI, robotics and responsibility.

For questions about possible presentation topics for this track, please contact Dr. Merel Noorman: M.E.Noorman@uvt.nl

Notes for week 4 of the CPGJ Platform Capitalism Reading Group

I thought this short talk by danah boyd was really powerful in linking the utopian dreams of internet radicals to the anxieties and outcomes of work. Framing the future of work in terms of automation, as if that says everything which needs to be said, obscures “the broader anxiety about identities that’s shaping both technology and work”. It’s important we reclaim this as a focus of our analysis because people who can no longer “find their identity through their working environment” and realise they are in a situation “where institutions and information intermediaries no longer have their back” will not stand inertly as the rug is pulled out from beneath their feet. Their responses may be self-destructive (the opioid crisis), socially destructive (religious extremism) or socially transformational (activism). However it’s important to recognise how the activism through which people find this meaning might come to be destructive (and disruptive) in turn:

People often find themselves by engaging with others through collective action, but collective action isn’t always productive. Consider this in light of the broader conversation about media manipulation: for those who have grown up gaming, running a raid on America’s political establishment is thrilling. It’s exhilarating to game the media to say ridiculous things. Hacking the attention economy produces a rush. It doesn’t matter whether or not you memed the president into being if you believe you did. It doesn’t even matter if your comrades were foreign agents with a much darker agenda.

These people are responding to an environment which looks the way it does because of past activism, intended to “create a public that was more broadly accessible, but ended up enabling a new wave of corrosive populism to take hold”. These people wanted to “disrupt the status quo, but weren’t at all prepared for what it would mean when they controlled the infrastructure underlying democracy, the economy, the media, and communication”. Platform capitalism was “birthed out of idealism” yet became something profoundly different, now “emblematic of corrosive neoliberalism and libertarianism run amok”. Early adopters saw themselves as marginal (“geeks, freaks, and queers”) and “turned to technology to build solidarity and feel less alone”. As boyd observes, it wasn’t so long ago that this utopianism seemed tenable to many:

A decade ago, academics that I adore were celebrating participatory culture as emancipatory, noting that technology allowed people to engage with culture in unprecedented ways. Radical leftists were celebrating the possibilities of decentralized technologies as a form of resisting corporate power. Smart mobs were being touted as the mechanism by which authoritarian regimes could come crashing down.

Now, even the most hardened tech geek is quietly asking:

What hath we wrought?

I thought this talk set up questions rather than answered them. How do the cultural frames promulgated by technologists lock in the outcomes their innovations have made possible? How do we politicise technology in a way that recognises the ever-present possibility of corruption and abuse? How can we ensure technologists take responsibility for what they produce? Can the instinct to disrupt the status quo through technology take a positive form, or should the lesson of the last couple of decades be that this will inevitably lead us to dark places? The talk also does something foundational to how I approach platform capitalism: it brings the agents back in without losing the focus on the technology.

In a recent paper, I’ve argued we find a cultural project underpinning ‘big data’: a commitment to reducing human being, in all its embodied affective complexity, stripping it of any reality beyond the behavioural traces which register through digital infrastructure. Underlying method, methodology and theory there is a vision of how human beings are constituted, as well as how they can be influenced. In some cases, this is explicitly argued but it is often simply implicit, lurking beneath the surface of careful choices which nonetheless exceed their own stated criteria.

It’s an argument I’m keen to take further than I have at present and reading Who Cooked Adam Smith’s Dinner by Katrine Marçal has left me interested in exploring the parallels between homo economicus (and why we are invested in him) and the emerging homo digitalis. Marçal writes on pg 162 of the allure of the former, misunderstood if we see it as nothing more than an implausible theoretical construct or a mechanism to exercise influence over political decision-making:

Many have criticized economic man’s one-dimensional perspective. He lacks depth, emotions, psychology and complexity, we think. He’s a simple, selfish calculator. A caricature. Why do we keep dragging this paper doll around? It’s ridiculous. What does he have to do with us? But his critics are missing something essential. He isn’t like us, but he clearly has emotions, depth, fears and dreams that we can completely identify with. Economic man can’t just be a simple paper doll, a run-of-the-mill psychopath or a random hallucination. Why, if he were, would we be so enchanted? Why would we so desperately try to align every part of existence with his view of the world, even though collected research shows that this model of human behaviour doesn’t cohere with reality? The desperation with which we want to align all parts of our lives with the fantasy says something about who we are. And what we are afraid of. This is what we have a hard time admitting to ourselves. Economic man’s parodically simple behaviour doesn’t mean that he isn’t conjured from deep inner conflicts

What makes homo economicus so compelling? This allure has its roots in a denial of human dependence, describing on pg 155 how our fascination with “his self-sufficiency, his reason and the predictable universe that he inhabits” reflect discomfort with our once having been utterly dependent on others, “at the mercy of their hopes, demands, love, neuroses, traumas, disappointments and unrealized lives”, as well as the inevitability that we will be so again at the other end of the life-course. But he also embodies a vision of what life should be like between the two poles of dependency, as she writes on pg 163:

His identity is said to be completely independent of other people. No man is an island, we say, and think that economic man’s total self-sufficiency is laughable. But then we haven’t understood his nature. You can’t construct a human identity except in relation to others. And whether economic man likes it or not – this applies to him as well. Because competition is central to his nature, his is an identity that is totally dependent on other people. Economic man is very much bound to others. But bound to them in a new way. Bound to them. Downright chained to them. In competition. If economic man doesn’t compete, he is nothing, and to compete he needs other people. He doesn’t live in a world without relationships. He lives in a world where all relationships are reduced to competition. He is aggressive and narcissistic. And he lives in conflict with himself. With nature and with other people. He thinks that conflict is the only thing that creates movement. Movement without risk. This is his life: filled with trials, tribulations and intense longing. He is a man on the run.

If I’m right about the existence of homo digitalis, a clear vision of human constitution underpinning ‘big data’*, we can ask similar questions about this truncated, eviscerated, predictable monad. So complex when we look up close, so simple when we gaze down from on high. Our individuality melts away in the aggregate, leaving us no longer overwhelming but simply overwhelmed. Manageable, knowable, stripped back. Why might this be an appealing vision of humankind? Who might it be appealing to? I’m sure many can guess where I’m going with this, but it’s a topic for another post.

*A term I use to encompass digital social science, commercial and academic, as well as the organisations and infrastructures which it facilitates.

Digital Infrastructures: Poetics, Politics and Personhood – AAA San Jose 14-18 November 2018
Lorraine Weekes (Stanford University)
Gertjan Plets (Utrecht University)

Government databases, digital archives, online voting systems, and e-portals enabling the submission of everything from insurance claims to income tax returns increasingly define mundane engagements between citizen-users and a suite of public and private institutions across social arenas. Because of the efficiency and transparency digital technologies are seen as affording, reliance on digital infrastructures has become widely supported on the ground. At the same time, the sociopolitical structures and assumptions encoded in many of these infrastructures—and the entanglements they produce—have received little attention. The tendency of infrastructure to remain invisible until something goes wrong is perhaps especially acute in digital and high-tech contexts, where scale, technological complexity, and physical diffusion encourage black boxing. By putting the politics and poetics of digital infrastructure into the limelight, this panel will consider the historical and ethnographic dimensions of digital infrastructures and how they produce individual subjectivities, mediate power relationships and further existing reifications of the social across the globe. By bringing the theoretical insights of the burgeoning anthropology of infrastructure and bureaucracy to bear on digital networks and assemblages, the papers in this panel endeavor to make the materiality, social embeddedness, and historical contingency of digital infrastructure visible.

Please submit an abstract before April 2 or send enquiries to g.f.j.plets@uu.nl

One of the most prominent tropes of social media is the crowd. As the cyber-utopian Clay Shirky put it: here comes everybody. This endlessly repeated motif sees social media in terms of the people. Where once there were a few commentators who dominated the airwaves, now everybody has their say online. Where once there were a few musical superstars, now we have a ‘long tail’ of productive musicians. Where once a few critics exploited their position, now everybody has their voice heard through online review sites. In any given sector, we can see the same motif repeated, with digital platforms seen to have replaced the few with the many in a way implicitly assumed to be democratic. In many cases, we can be reasonably certain that these claims are empirically false, e.g. musical superstars dominate more than ever in a global culture mediated by algorithmic discovery.

But there’s more to their plausibility than the mistaken belief they are true. They embody what I think of as the demotic imaginary: the conviction that introducing more people into a sphere, in a loud and noisy way, represents a democratic gain. There can be a kernel of truth to these claims, supplemented by the affective force of the demotic imaginary to help the complexity fade away. Thus the ambiguities which qualify our judgements retreat into the background, as we are taken in by superficial realities. For instance, as Leigh Gallagher describes on loc 2004 of The Airbnb Story:

When the attorney general’s report came out, it said that 72 percent of Airbnb’s “private” listings in New York were in violation of state law. And it said that while 94 percent of hosts had just 1 or 2 listings, the other 6 percent were so-called commercial hosts—those who had 3 or more listings regularly through Airbnb—and they accounted for more than a third of bookings and revenue. It said that one hundred hosts had 10 or more listings. The top dozen hosts had anywhere from 9 to 272 listings and made more than $1 million per year each. The biggest user, at 272 listings, had revenue of $6.8 million. It wasn’t so much the illegal activity that was new—after all, given the 2010 law, any Airbnb listing for a full apartment was illegal (unless it was in a house with fewer than three units), and both then and now, thousands of hosts and guests either don’t know about the law or willfully ignore it. What was new was that this report—marking the first time a party outside Airbnb had any access to the company’s data—revealed the scope of the multiproperty activity on the site. It dovetailed with previous reports that suggested a small percentage of hosts was responsible for a disproportionate share of the company’s New York business. Airbnb called the data incomplete and outdated. It said that New York’s current rules lacked clarity, and it wanted to work together with the city on creating new regulations to stop bad actors while putting in place “clear, fair rules for home sharing.”

It is undeniable that Airbnb has introduced more providers into the rental sector. But the demotic imaginary leads us to conceive of this influx as intrinsically democratic, as a disaggregated mass of equally situated actors. The reality is rather more complex.
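To see why an influx of providers need not be democratic in its structure, here is a hypothetical illustration of the kind of concentration the attorney general’s report describes, with all numbers invented for the sketch rather than drawn from Airbnb’s data:

```python
# Hypothetical sketch: a small share of "commercial" hosts with many listings
# captures a disproportionate share of revenue. All numbers are invented.
import random

rng = random.Random(42)
revenues = []
for _ in range(10_000):
    if rng.random() < 0.06:              # ~6% commercial hosts, 3+ listings
        listings = rng.randint(3, 50)
    else:                                # ~94% of hosts with 1 or 2 listings
        listings = rng.randint(1, 2)
    revenues.append(listings * rng.uniform(5_000, 15_000))  # rough annual revenue

revenues.sort(reverse=True)
top_slice = revenues[: len(revenues) * 6 // 100]
print(f"top 6% of hosts take {sum(top_slice) / sum(revenues):.0%} of revenue")
```

The point is not the particular numbers but the shape: a market can add thousands of small actors while its proceeds remain dominated by a handful of large ones.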

What is the relationship between platforms and their users? I’ve been thinking about this all morning while reading The Know‑It‑Alls: The Rise of Silicon Valley as a Political Powerhouse and Social Wrecking Ball, by Noam Cohen. On loc 277 he writes:

In fact, tech companies believe that through artificial intelligence tools they understand their users’ state of mind in a way few other companies can, and far better than any regulator. They can track, measure, and analyze the billions of decisions their users make, and they can detect even the most minor feature that may be turning them off. And rather than wait for problems, these companies can compel their users to express a preference by staging so-called A/B testing, which involves showing groups of users slightly different versions of the site and measuring which group stays longer and is thus happier with the experience. Google famously went so far as to prepare forty-one shades of blue to test which was the best color for displaying links in its Gmail service.
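To make the mechanics of the passage concrete, here is a minimal sketch of the A/B-testing logic it describes: users are deterministically bucketed into variants and an engagement metric is compared across groups. Every name and number below is hypothetical, not any platform’s actual code:

```python
# Minimal A/B-testing sketch: stable bucketing plus a comparison of means.
# The experiment name, metric and effect size are all invented for illustration.
import hashlib
import random
import statistics

def assign_variant(user_id: int, experiment: str = "link-colour",
                   n_variants: int = 2) -> int:
    """Hash the user into a stable bucket so they always see the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_variants

rng = random.Random(0)
sessions = {0: [], 1: []}  # session lengths (minutes) per variant
for user_id in range(10_000):
    variant = assign_variant(user_id)
    base = 5.0 if variant == 0 else 5.3   # pretend variant 1 is slightly "stickier"
    sessions[variant].append(max(0.0, rng.gauss(base, 2.0)))

for variant, lengths in sorted(sessions.items()):
    print(f"variant {variant}: n={len(lengths)}, "
          f"mean session {statistics.mean(lengths):.2f} min")
```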

This epistemic privilege is inflated but it nonetheless has to be taken seriously. There are forms of knowledge about users which platforms have unique access to, discerning real-time behaviour (including responses to planned stimuli) with a degree of granularity that would be difficult to match in any other context. What matters is how this epistemic relation is raised into a political claim: if we know our users better than any external party, how could regulation be anything other than incompetent?

This relies on a reduction of the salient characteristics of the user to their actions which register within the confines of the platform, representing the core of what I’ve written about in an upcoming chapter as the evisceration of the human: the reduction of real agency to its empirical traces. Furthermore, it is bound up with the conviction of transcending the murky mess of self-interpretation, offering apparent insight into what OK Cupid data scientist Christian Rudder memorably described as Who We Are (When We Think No One’s Looking) in the subtitle to his book Dataclysm. This is bound up in a political economy which Mark Andrejevic identifies on loc 870 of his InfoGlut:

In this regard the digital era opens up a new form of digital divide: that between those with access to the databases and those without. For those with access, the way in which data is understood and used will be fundamentally transformed. There will be no attempt to read and comprehend all of the available data – the task would be all but impossible. Correlations can be unearthed and acted upon, but only by those with access to the database and the processing power. Two different information cultures will come to exist side by side: on the one hand, the familiar, “old-fashioned” one in which people attempt to make sense of the world based on the information they can access: news reports, blog posts, the words of others and the evidence of their own experience. On the other hand, computers equipped with algorithms that can “teach” themselves will advance the instrumental pragmatics of the database: the ability to use tremendous amounts of data without understanding it.
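A toy version of what Andrejevic calls the instrumental pragmatics of the database might look like the following: dredge a dataset for its strongest correlation and act on it, with no attempt at explanation. The data and column names are invented for the sketch:

```python
# Invented data; the point is the workflow, not the findings: surface the
# strongest correlation and "use" it without understanding it.
import itertools
import random
import statistics

rng = random.Random(1)
n = 1_000
sessions = [rng.gauss(4, 1.5) for _ in range(n)]
notifications = [1.0 if rng.random() < 0.6 else 0.0 for _ in range(n)]
# A hidden relationship the "analyst" never needs to understand:
purchases = [0.5 * s + nf + rng.gauss(0, 1)
             for s, nf in zip(sessions, notifications)]
columns = {"sessions_per_day": sessions,
           "notifications_on": notifications,
           "purchases": purchases}

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

best = max(itertools.combinations(columns, 2),
           key=lambda pair: abs(corr(columns[pair[0]], columns[pair[1]])))
print("strongest correlation:", best)  # acted upon without being 'understood'
```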

Does anyone know of ethnographic work which looks at how this epistemic relation is talked about in everyday labour within these firms? It must presumably be invoked constantly, in an everyday manner, during user interface design and similar activities. This could help elucidate the micro-structure for the inflation of epistemic privilege which I suspect Cohen is correct to identify as one source of hostility to regulation.

What is the relationship between social media and individualism? It is often claimed that these platforms engender a preoccupation with the self, easily cast in terms of individualism. But it is a preoccupation which is just as often claimed to be profoundly social, in so far as it involves a concern with how the many facets of the self are perceived by others, as mediated through social media platforms. It occurs to me that de Tocqueville’s distinction between individualism and egotism could be useful in helping clarify this issue. Though egotism and individualism are commonly assumed to go together, such that individualism as a cultural force will foster egotism in individuals, de Tocqueville saw the distinction rather differently:

Our [European] fathers were only acquainted with egotism. Egotism is a passionate and exaggerated love of self, which leads a man to connect everything with his own person, and prefer himself to everything in the world. Individualism is a mature and calm feeling, which disposes each member of the community to sever himself from [others] so that, after he has thus formed a little circle of his own, he willingly leaves society at large to itself. Egotism originates in blind instinct: individualism proceeds from erroneous judgment … Egotism blights the germ of all virtue; individualism, at first, only saps the virtues of public life; but, in the long run, it attacks and destroys all others, and is at length absorbed in downright egotism.

A few weeks ago, I saw a collaborator of mine give a talk in which he outlined a position on social media which was roundly cast as anti-technological by those in the room, i.e. as reflecting an unsustainable blanket judgment of social media as a category of technology. I could see where they were coming from and my point in writing this isn’t to criticise them, only to observe the performativity of these judgments. His argument overlapped hugely with one I’ve made myself in public situations, unsurprising as it has emerged from a collaboration between the two of us. No one has ever accused me of being anti-technological when making it. It is rather as if being pro-technological or anti-technological were a property of how an argument is performed, and of how that performance is received, rather than part of the conceptual logic of the argument itself.

In her wonderful weekly newsletter, Audrey Watters writes about how these categories play out in media coverage of educational technology and how people respond to her work:

# Data Worlds? Public Imagination and Public Experimentation with Data Infrastructures

## Convenors

– Jonathan Gray (King’s College London)
– Noortje Marres (University of Warwick)
– Carolin Gerlitz (University of Siegen)
– Tommaso Venturini (École Normale Supérieure Lyon)

## Short abstract

How do data infrastructures distribute participation across society and culture? Do they participate in world-making, and if so how? Could they be utilised not just to close discussions, but also to open up public debate, imagination and experimentation?

## Long abstract

This panel is about the world-making capacities of emerging data infrastructures – including their epistemic, social and political possibilities and limitations. It examines how emerging data infrastructures may distribute and redistribute participation in knowledge and world-making across society and culture – from the platform data of big technology companies to open data from public institutions, citizen data, sensor data and associated forms of journalism and activism. How are digital technologies entangled with social practices of classifying, counting, reasoning, narrating and making decisions? Rather than just extending the reach of certain pre-authorised ways of knowing, seeing and dealing with things, how might digital technologies support more substantive forms of interactivity and participation in order to open up public conversations, imagination and experimentation about how data is made and put to work?

One of the key strengths of research methods and design approaches developed across STS, participatory design and digital social research has always been their experimentality – the ways in which they seek to combine knowing and doing – representing and intervening in social life – in potentially new, creative ways. This session asks: What distinctive forms of engagement with data infrastructures do these methods and approaches enable, and what is their capacity to contribute to data world-making? This panel organised by the Public Data Lab (http://publicdatalab.org) will explore the capacity of STS, design and digital methods to take on the challenges outlined above, with the aim of identifying priorities for exploring and intervening around emerging data infrastructures today.

Link to panel: https://nomadit.co.uk/easst/easst2018/conferencesuite.php/panels/6260
Call for papers: https://easst2018.easst.net/call-for-papers
EASST2018
https://easst2018.easst.net/home/

Call for Papers: AMCIS 2018
Minitrack: Social Theory in Information Systems Research (STIR ’18)
Track: Social Inclusion (SIGSI)
24th Americas Conference on Information Systems (AMCIS), Aug. 16-18, 2018
New Orleans, LA, USA

This minitrack solicits papers using social theory to critically examine ICTs and their roles in enabling and constraining social inclusion. What can be done to improve access to computing for underrepresented groups? In what ways do new technologies impact digital divides? What are the social, cultural, political, and economic implications of the Internet of things? These are examples of the kinds of questions we are interested in exploring in this minitrack. We are particularly interested in completed or emerging research using social theory to address the conference theme, Digital Disruption, critically examining the ways in which emerging technologies are changing the sociotechnical landscape in ways that narrow or widen the digital divide.

This will be the 18th consecutive year for STIR, and we hope to continue a tradition of high quality papers, and thought-provoking and lively discussion for IS researchers using social theory in their work.

In addition to research aligned with the conference theme we are also interested in high quality empirical and conceptual work that uses social theory to investigate issues such as (but not limited to):

• Improving access to computing for underrepresented minorities
• Reengineering the pipeline in STEM education for greater inclusiveness and diversity
• Critically assessing the ways in which ICTs and information systems can be used to privilege some and exclude others
• Understanding the impacts of the Internet of Things on the digital divide
• Assessing the unintended consequences of technology implementation and use in organizations and in social life
• Reflecting on the ways in which ICT assemblages support and challenge political, cultural, and economic hegemonies.

Mini-Track chairs

Howard Rosenbaum, Indiana University  hrosenba@indiana.edu
Pnina Fichman, Indiana University  fichman@indiana.edu

Submission Instructions:

https://amcis2018.aisnet.org/submissions/call-for-papers/

Important Dates:

January 15, 2018: Manuscript submissions open
February 28, 2018: Deadline for paper submissions
April 18, 2018: Authors will be notified of decisions
April 25, 2018: Camera-ready submissions due

ICA Pre-Conference “ARTICULATING VOICE. THE EXPRESSIVITY AND PERFORMATIVITY OF MEDIA PRACTICES”

Sponsored by the Philosophy, Theory and Critique (PTC) Division of the International Communication Association

Event date: 24 May 2018, 9:00 AM to 5:00 PM, Prague, Czech Republic

Deadline for proposals: 10 January 2018 (300-500 word abstract)

Location: Main Conference Hotel

Organizers: Christian Pentzold (University of Bremen), Kenzie Burchell (University of Toronto), Olivier Driessens (University of Cambridge), Alice Mattoni (Scuola Normale Superiore), John Postill (RMIT University), Cara Wallis (Texas A&M University)

“Media matter most when they seem not to matter at all.” (Wendy Chun) But how can we understand the practices through which innovations in media and digital data move from being unexpected, novel, and impactful to the negotiated, embedded, and habitual?

The pre-conference takes issue with the mundane yet pervasive nature of media habits, rituals, and customs. It assesses the purchase of practice-based approaches in order to see under what conditions and with what consequences they enter studies in communication and media. In particular, we invite participants to consider the expressive and performative dimension of what people actually do and say in relation to media and to the wider communication ecologies in which these articulations take place. We are especially interested in contributions that examine how voices are expressed, represented, or muted and that study the ways practices of voice combine, overlap, or collide with other mediated activities in contemporary societies. With this, we strive for an explanation and critical appreciation of media practices whose accomplishment is a perennial exercise in which we find ourselves immersed.

We welcome theoretical and/or empirical contributions on questions including:

How can we theorize and study the interplay between media-related practices and technologies, discourses, or institutions? How are these constellations created, maintained, and transformed? How do praxeological approaches correspond to other inquiries into speech acts, media rituals, or media habits?
What resources and skills are mobilized in order to perform voices? What is the meaning of the work that goes into activities of voicing? How do they contribute to or undermine the constitution of public spheres, privacy, and civic life in past and contemporary societies?
How do we grasp media practices empirically, and how do we analyze them across modes of expression, across cultures, different times, and ages? How can we challenge and advance the kinds of translation and transformation happening in-between the situated enactment of media practices and the descriptions and stories of scholarly accounts?
How can we understand the ways through which media practices are accomplished in social fields? How are they deployed in struggles for gaining voice and visibility as in political communication and journalism, participation and mobilization, health communication, or science communication? How have media practices changed over time and in relation to innovations in digitization and datafication?
Responses to the contributions will be given by Elisenda Ardèvol (Universitat Oberta de Catalunya); Maria Bakardjieva (University of Calgary), S. Elizabeth Bird (University of Southern Florida); Nick Couldry (London School of Economics and Political Science).

Please email a 300-500 word proposal to Christian Pentzold (christian.pentzold[at]uni-bremen.de) by January 10, 2018.

Authors will be notified of their acceptance before January 31, 2018.

Please direct any questions to: Alice Mattoni (alice.mattoni[at]sns.it) or Christian Pentzold (christian.pentzold[at]uni-bremen.de).

More info can be found here: http://www.zemki.uni-bremen.de/de/veranstaltungen/tagungen/articulating-voice-the-expressivity-and-performativity-of-media-practices.html

This is a wonderful expression I just picked up from Machine, Platform, Crowd by Andrew McAfee and Erik Brynjolfsson. As they describe on pg 112-113, suitcase words jumble together multiple meanings in a way which renders questions more obscure than they would otherwise be:

Is generative-design software really “creative?” It’s a hard question because creativity is a prime example of what AI pioneer Marvin Minsky called a “suitcase word.” As he put it, “Most words we use to describe our minds (like ‘consciousness,’ ‘learning,’ or ‘memory’) are suitcase-like jumbles of different ideas.” We see just such a jumble in different definitions of creativity. The Oxford English Dictionary, for example, states that creativity is “the use of imagination or original ideas, especially in the production of an artistic work.”

In a lecture today I argued that our debates about the meaning of the human are prone to this, relying on contested terms without properly defining them. It’s when we confront suitcase words that social ontology becomes invaluable, offering us techniques for unpacking these terms and ensuring the debate proceeds in terms of the contents of the suitcase rather than the suitcase itself. If we are clear about this purpose then it invites us to undertake ontological reasoning in a focused way, orientated towards the clarification of questions through the delineation of entities and characteristics.

Earlier today at the British Academy’s Social Listening event, Paul Crayston used this extract from Edmund Burke to illustrate a point about the tendency of social media users to mistake the noise they make within their own milieux for the activity taking place on the platform as a whole.

Because half a dozen grasshoppers under a fern make the field ring with their importunate chink, whilst thousands of great cattle, reposed beneath the shadow of the British oak, chew the cud and are silent, pray do not imagine that those who make the noise are the only inhabitants of the field

We often take the activity proximate to us as indicative of a much broader sweep of social reality, implicitly framing our own experience as a reliable guide to wider processes. This is a mistake on social media and it is a mistake beyond it. How we imagine the world beyond our own microcosm, particularly our immediate expectations rather than considered impressions, usually reflects our own preconceptions more than it does a wider reality.