Socialising the machines

I’m saving these two CfPs for my own reference but I suspect others might find them interesting:

Call for Papers

Envisioning Social Robotics: Current Challenges and New Interdisciplinary Methodologies
Special issue of Interaction Studies
Guest Editors: Glenda Hannibal & Astrid Weiss

Submission Deadline: November 1st, 2018

We find in social robotics many so-called “wicked problems” – problems that are extremely complex and resist complete definition and resolution. To address these problems, it is necessary to critically discuss the underpinning logic, or line of reasoning, that motivates social robotics and to develop new interdisciplinary methods to make social robots more “socially robust”. This special issue aims to address current challenges in social robotics by bringing methodological discussions to the foreground, and therefore calls for submissions that reflect on, develop, and demonstrate new interdisciplinary methodologies for social robotics. Researchers with various disciplinary backgrounds and professions (e.g. engineering, physiotherapy, philosophy, law, biology, art, STS, psychology, medicine, HRI, sociology, computer science, nursing, industry, education, anthropology, HCI, management etc.) are strongly encouraged to contribute.

Successful submissions to the special issue should therefore focus on (a) defining current challenges from a methodological point of view and (b) presenting, in response, different models, approaches or methods for interdisciplinary social robotics. If possible, (c) they should also illustrate how the suggested model, approach or method has been implemented, with a critical discussion thereof. All articles will be peer-reviewed.

Authors should consider the interdisciplinary readership of INTERACTION STUDIES and prepare their manuscripts accordingly. Authors may either submit full papers or short research reports. See the journal’s homepage for guidelines on the suggested length and formatting of the manuscripts: https://www.benjamins.com/catalog/is/guidelines

Please submit the journal manuscript(s) via the online submission platform provided by Interaction Studies: https://www.editorialmanager.com/is/default.aspx

Remember to select the name of the special issue when submitting the manuscript(s).

For more information, please contact Glenda Hannibal at Vienna University of Technology (TU Wien) at glenda.hannibal@tuwien.ac.at.

And the other one:

The Cultural Life of Machine Learning: An Incursion into Critical AI Studies
Preconference Workshop, #AoIR2018 Montréal, Canada
Urbanisation Culture Société Research Centre, INRS (Institut national de la recherche scientifique)
Wednesday October 10th 2018

Machine learning (ML), deep neural networks, differentiable programming and related contemporary novelties in artificial intelligence (AI) are all leading to the development of an ambiguous yet efficient narrative promoting the dominance of a scientific field—as well as a ubiquitous business model. Indeed, AI is very much in full hype mode. For its advocates, it represents a ‘tsunami’ (Manning, 2015) or ‘revolution’ (Sejnowski, 2018)—terms indicative of a very performative and promotional, if not self-fulfilling, discourse. The question, then, is: how are the social sciences and humanities to dissect such a discourse and make sense of all its practical implications? So far, the literature on algorithms and algorithmic cultures has been keen to explore both their broad socio-economic, political and cultural repercussions, and the ways they relate to different disciplines, from sociology to communication and Internet studies. The crucial task ahead is understanding the specific ways by which the new challenges raised by ML and AI technologies affect this wider framework. This would imply not only closer collaboration among disciplines—including those of STS for instance—but also the development of new critical insights and perspectives. Thus a helpful and precise pre-conference workshop question could be: what is the best way to develop a fine-grained yet encompassing field under the name of Critical AI Studies? We propose to explore three regimes in which ML and 21st-century AI crystallize and come to justify their existence: (1) epistemology, (2) agency, and (3) governmentality—each of which generates new challenges as well as new directions for inquiries.

In terms of epistemology, it is important to recognize that ML and AI are situated forms of knowledge production, and thus worthy of empirical examination (Pinch and Bijker, 1987). At present, we only have internal accounts of the historical development of the machine learning field, which increasingly reproduce a teleological story of its rise (Rosenblatt, 1958) and fall (Minsky and Papert, 1969; Vapnik, 1998) and rise (Hinton, 2006), concluding with the diverse if as-yet unproven applications of deep learning. Especially problematic in this regard is our understanding of how these techniques are increasingly hybridized with large-scale training datasets, specialized graphics-processing hardware, and algorithmic calculus. The rationale behind contemporary ML finds its expression in a very specific laboratory culture (Forsythe, 1993), with a specific ethos or model of “open science”. Models trained on the largest datasets of private corporations are thus made freely available, and subsequently détourned for the new AI’s semiotic environs of image, speech, and text—promising to make the epistemically recalcitrant landscapes of unruly and ‘unstructured’ data newly “manageable”.

As the knowledge-production techniques of ML and AI move further into the fabric of everyday life, they create a distinctly new form of agency. Unlike the static, rule-based systems critiqued in a previous generation by Dreyfus (1972), modern AI models pragmatically unfold as a temporal flow of decontextualized classifications. What then does agency mean for machine learners (Mackenzie, 2017)? Performance in this particular case relates to the power of inferring and predicting outcomes (Burrell, 2016); new kinds of algorithmic control thus emerge at the junction of meaning-making and decision-making. The implications of this question are tangible, particularly as ML becomes more unsupervised and begins to affect numerous aspects of daily life. Social media, for instance, are undergoing radical change, as insightful new actants come to populate the world: Echo translates your desires into Amazon purchases, and Facebook is now able to detect suicidal behaviours. In the general domain of work, too, these actants leave permanent traces—not only on repetitive tasks, but also on broader intellectual responsibilities.

Last but not least, the final regime to explore in this preconference workshop is governmentality. The politics of ML and AI have yet to be fully outlined, and the question of power in these techniques remains largely unexplored. Governmentality refers specifically to how a field is organised—by whom, for what purposes, and through which means and discourses (Foucault, 1991). As stated above, ML and AI are based on a model of open science and innovation, in which public actors—such as governments and universities—are deeply implicated (Etzkowitz and Leydesdorff, 2000). One problem, however, is that while the algorithms themselves may be openly available, the datasets on which they rely for implementation are not—hence the massive advantages for private actors such as Google or Facebook, who control the data as well as the economic resources to attract the brightest students in the field. But there is more: this same open innovation model makes possible the manufacture of military AI with little regulatory oversight, as is the case in China, whose government is currently helping to fuel an AI arms race (Simonite, 2017). What alternatives or counter-powers could be imagined in these circumstances? Could ethical considerations stand alone without a proper and fully developed critical approach to ML and AI? This workshop will try to address these pressing and interconnected issues.

We welcome all submissions that connect with one or more of these three categories (epistemology, agency, and governmentality), but we also welcome other theoretically and/or empirically rich contributions.

Interested scholars should submit proposal abstracts, of approximately 250 words, by 11:59pm EDT on June 30th, 2018 to CriticalAI2018 [at] gmail [dot] com. Proposals may represent works in progress, short position papers, or more developed research. The format of the workshop will focus on paper presentations and keynotes, with additional opportunities for group discussion and reflection.

This preconference workshop will be held at the Urbanisation Culture Société Research Centre of INRS (Institut national de la recherche scientifique). The Centre is located at 385 Sherbrooke St E, Montreal, QC, and is about a 20-minute train ride from the Centre Sheraton on the STM Orange Line (enter at the Bonaventure stop, exit at Sherbrooke), or about a 30-minute walk along Rue Sherbrooke.

For information on the AoIR (Association of Internet Researchers) conference, see https://aoir.org/aoir2018/ ; for other preconference workshops at AoIR 2018, see https://aoir.org/aoir2018/preconfwrkshop/.

Organizers: Jonathan Roberge (INRS), Michael Castelle (University of Warwick), and Thomas Crosbie (Royal Danish Defence College).

 
