CfP: AI, Robotics and Responsibility

This looks fascinating:

FROM THE HRC-SCHOLARS LISTSERV:

Dear members,

Please find attached the call for papers for my institution’s anniversary conference. My institution being TILT (The Institute for Law, Technology and Society in Tilburg, The Netherlands), you might find this one a bit out there, but we have several tracks for which we specifically hope to bring together a very interdisciplinary crowd. The track that I wanted to bring to your attention is “AI, Robotics and Responsibility”; I copy-pasted the text below.

This is the website: https://www.tilburguniversity.edu/upload/11d6299f-4ea0-4b39-bc88-a4631c328875_CALL_FOR_PAPERS_TILTing%202019.pdf

Please feel free to spread the word!

Aviva.

Track: AI, Robotics and Responsibility

The complexity and autonomous character of Artificial Intelligence (AI) technologies and robotics challenge conventional ideas about responsibility. How will responsibility be distributed if self-driving cars no longer require the active attention of the driver? Can operators or commanders be held responsible for the targeting decisions of autonomous lethal weapons? To what extent can human beings be accountable for administrative decisions made by machine-learning algorithms? Some scholars have argued that the more autonomous technologies become, the less we can reasonably hold human beings responsible. This conclusion is perhaps premature, but it does underline that these (envisioned) technologies require a rethinking of our conceptions of responsibility and associated concepts, such as accountability, liability, trust, autonomy, agency, and control.

In this track we want to explore how developments in AI and robotics affect established ways of distributing responsibility and how concerns about responsibility can be addressed. We consider responsibility in a broad sense as encompassing various kinds of responsibility, including accountability, liability, role responsibility, professional responsibility, and moral responsibility. AI and robotics have raised a range of questions and concerns in this regard. Are our existing concepts of liability and accountability equipped to deal with machine-learning algorithms? Should artificial agents and robots at some point in the future be held liable or be considered moral agents? To what extent can and should the outputs of AI algorithms be explained, for example to hold human beings accountable for automated decisions? What does it mean to have meaningful control over an AI technology? How do increasingly autonomous technologies mediate how we experience our (moral) responsibility, for instance in terms of how they interact with feelings of guilt, regret, or duty? These questions bring together a number of current and related discussions that we want to connect in this track, in order to examine how the changing relationship between human beings and digital technologies affects the role of responsibility in the governance and regulation of AI and robotics. We therefore welcome contributions from a range of disciplines, including law, philosophy, social science, cognitive science, and computer science, on topics related to AI, robotics, and responsibility.

For questions about possible presentation topics for this track,
please contact Dr. Merel Noorman: M.E.Noorman@uvt.nl
