This looks fascinating:

FROM THE HRC-SCHOLARS LISTSERV:

Dear members,

Please find attached the call for papers for my institution’s anniversary conference. My institution being TILT (The Institute for Law, Technology and Society in Tilburg, The Netherlands), you might find this one a bit out there, but we have several tracks for which we specifically hope to bring together a very interdisciplinary crowd. The track that I wanted to bring to your attention is “AI, Robotics and Responsibility”; I have copy-pasted the text below.

This is the website: https://www.tilburguniversity.edu/upload/11d6299f-4ea0-4b39-bc88-a4631c328875_CALL_FOR_PAPERS_TILTing%202019.pdf

Please feel free to spread the word!

Aviva.

Track: AI, Robotics and Responsibility

The complexity and autonomous character of artificial intelligence (AI) technologies and robotics challenge conventional ideas about responsibility. How will responsibility be distributed if self-driving cars no longer require the active attention of the driver? Can operators or commanders be held responsible for the targeting decisions of autonomous lethal weapons? To what extent can human beings be accountable for administrative decisions made by machine-learning algorithms? Some scholars have argued that the more autonomous technologies become, the less we can reasonably hold human beings responsible. This conclusion is perhaps premature, but it does underline that these (envisioned) technologies require a rethinking of our conceptions of responsibility and associated concepts, such as accountability, liability, trust, autonomy, agency, and control.

In this track we want to explore how developments in AI and robotics affect established ways of distributing responsibility and how concerns about responsibility can be addressed. We consider responsibility in a broad sense, as pertaining to various kinds of responsibility, including accountability, liability, role responsibility, professional responsibility, and moral responsibility. Understood this way, AI and robotics raise a range of questions and concerns. Are our existing concepts of liability and accountability equipped to deal with machine-learning algorithms? Should artificial agents and robots at some point in the future be held liable or be considered moral agents? To what extent can and should the outputs of AI algorithms be explained, for example in order to hold human beings accountable for automated decisions? What does it mean to have meaningful control over an AI technology? How do increasingly autonomous technologies mediate how we experience our (moral) responsibility, for instance in terms of how they interact with feelings of guilt, regret or duty? These questions bring together a number of current and related discussions that we want to connect in this track, in order to examine how the changing relationship between human beings and digital technologies affects the role of responsibility in the governance and regulation of AI and robotics. We therefore welcome contributions from a range of disciplines, including law, philosophy, social science, cognitive science and computer science, on topics related to AI, robotics and responsibility.

For questions about possible presentation topics for this track, please contact Dr. Merel Noorman: M.E.Noorman@uvt.nl

Soon after becoming Finance Minister of Greece, Yanis Varoufakis found himself surrounded by civil servants whose loyalties he could not assume and staff parachuted in by a political party with which he had little prior affiliation. In his political memoir, Adults in the Room, he recounts his impulse to find “a minder whose loyalties would not be shared with any of my new Syriza comrades, let alone the deputy PM”. He turned to an old friend from university to serve this purpose, describing, at loc 2873, the risks he sought protection from:

‘To keep me out of jail, Wassily,’ I replied. He understood. Ministers of finance are at the mercy of their minders. They sign dozens of documents, decrees, contracts and appointments daily. It is humanly impossible to examine closely everything they sign. All it takes is a hostile or absent-minded aide, and suddenly the minister faces the wrath of the public or a summons to court.

What is the danger here? The pace at which he is forced to work and the number of documents he must formally assess preclude meaningful engagement with their content. This is something that could be exploited by those able to exercise influence over what goes into his in-tray. The specific risks he faced were unique to his role as Finance Minister, as well as to the times and circumstances under which he served.

However, is there a broader lesson here about distraction and culpability? To what extent do our moral and legal notions of culpability rest on an assumption that we evaluate our actions in a considered way? If they do, it follows that distraction is something political philosophers ought to take seriously. It has consequences at the moral level, in terms of how we attribute responsibility to persons. But it is also something we should consider in legal terms, if the attribution of culpability rests on assumptions about the socio-temporal conditions for evaluation which may be absent in practice.

There’s a helpful summary on Wikipedia of the degrees of culpability recognised in criminal law in the United States:

  • A person causes a result purposely if the result is his/her goal in doing the action that causes it,
  • A person causes a result knowingly if he/she knows that the result is virtually certain to occur from the action he/she undertakes,
  • A person causes a result recklessly if he/she is aware of and disregards a substantial and unjustifiable risk of the result occurring from the action, and
  • A person causes a result negligently if there is a substantial and unjustifiable risk he/she is unaware of but should be aware of.

https://en.wikipedia.org/wiki/Culpability

If we accept the argument that distraction is socially and culturally produced, should this lead us to qualify the third and fourth degrees of culpability? I want to sustain the argument that recklessness and negligence are, in an important sense, liable to be produced systematically, even if such a claim remains extremely difficult to quantify. What does distraction mean for political theory and political philosophy?

At an event in Liverpool last week, I was asked by Steve Fuller what I understood responsibility to mean in a sociological sense. He was sceptical that I could support claims of responsibility given my understanding of human agency as situationally performative but biographically continuous. In essence, I understood him to be asking: do I think there is something about the human being in relation to which responsibility can be assigned? This is a question I’d never really thought about explicitly, though once I began to, I realised that it actually knits together the full range of my interests.

Part of my difficulty with the question is that I think ‘responsibility’ encompasses a number of different things which we need to unpack:

  1. Responsibility as moral agency: how an individual comes, through internal and/or external conversation, to assume a stance of responsibility towards their own actions. To me it seems obvious that this is a matter of what Charles Taylor calls disengaged agency. It’s a mode of engagement with the world that usually involves stepping back from social encounters in order to reflect on one’s own actions within them, though I do believe we sometimes confront these questions while in the flow of a social situation.
  2. Responsibility as interpersonal ascription: how an individual comes, through social interaction, to be held accountable for their actions. This can, but by no means necessarily does, lead to the first sense of responsibility as moral agency. This is about social judgement, holding someone to account in terms of putatively shared standards in relation to which their behaviour can be evaluated.
  3. Responsibility as structural enforcement: how an individual comes to be formally held responsible for their actions, in relation to codified rules and regulations which are sufficiently durable to be both enforceable and recognised as binding. Legal systems are the obvious example of this, but I’d also include workplace disciplinary proceedings in this category. The point is that the process is formalised and the rules are codified. It’s not tied to the social situation, a term I use in Goffman’s sense, in the same way as the earlier forms of responsibility.

These are interconnected in complex ways. But by analytically distinguishing between them, we’re able to recognise how they can vary independently. Under contemporary social conditions, I would argue that we have seen the following changes:

  1. People are more likely to over-actively exercise moral agency, often to the point of blaming themselves for personal outcomes that are systemically produced. This individualisation contributes to the fragmentation of normative consensus, as individual reasoning acts as a vector of deviance amplification: the more intensively people think about these things, through the filter provided by their own particularity, the less likely they are to straightforwardly reproduce ‘common sense’.
  2. The interpersonal ascription of responsibility is becoming more contentious because of this fragmentation of normative consensus. If we can’t take ‘common sense’ for granted, interventions of this sort will tend to be experienced as arbitrary impositions of power. This leaves them feeling inherently contentious, something I’ve written about as the ‘paradox of incivility’: when consensus breaks down, attempts to enforce civility are themselves experienced as rude and aggressive.
  3. ‘Common sense’ supplies the intuitions upon which enforcement is grounded. In its absence, normativity comes to seem less binding, incentivising alternative penalty-based forms of enforcement that don’t attempt to seek grounding in moral agency. Margaret Archer describes this as ‘anormative regulation’ in a forthcoming paper.

Having only recently grasped quite how interesting case law is, thanks to the conversation with Steve and Joseph, I’d now like to start to refine the outline I’ve sketched above and apply it to thinking through the challenges posed by emerging technologies.