The birth of machinology

My notes on Rahwan, I. et al. (2019) Machine Behaviour. Nature, 568, 477–486

The proliferation of intelligent machines, ranging from machine learning systems through to their embodiment in robotics, raises the question of how their behaviour should be studied and understood. In this agenda-setting paper, the team of authors suggest this now requires the deliberate formation of a new field, bringing together many strands which have already taken shape. From 477:

This field overlaps with, but is distinct from, computer science and robotics. It treats machine behaviour empirically. This is akin to how ethology and behavioural ecology study animal behaviour by integrating physiology and biochemistry—intrinsic properties—with the study of ecology and evolution—properties shaped by the environment. Animal and human behaviours cannot be fully understood without the study of the contexts in which behaviours occur. Machine behaviour similarly cannot be fully understood without the integrated study of algorithms and the social environments in which algorithms operate

These AI agents, virtual and embodied, tend to be studied by their creators with a narrow focus on whether they are performing their intended tasks. However, a broader approach is necessary as they “increasingly integrate into our society and are already involved in a variety of activities, such as credit scoring, algorithmic trading, local policing, parole decisions, driving, online dating and drone warfare” (477). This would respond to growing concerns over the lack of predictability surrounding their consequences, the loss of human oversight and the potential harms arising from once human-led tasks being increasingly taken over by AI agents. They cite three main motivations for this:

  1. The ubiquity of algorithms has reached an unprecedented point, operating across fields as diverse as news, credit, entertainment, travel, security and dating. They are increasingly likely to play a role in raising the young, caring for the old and coordinating collective behaviour.
  2. The complexity and opacity of algorithms is increasing rapidly, leading to black boxes in which the mechanism linking inputs and outputs is uncertain, even if the initial code was well understood, because the training data and training process produce uncertain results. This is compounded by the proprietary character of the work underlying them. Their feedback loops with diverse environments add another layer of complexity to this process.
  3. Their capacity to have beneficial or detrimental effect on humanity necessitates scrutiny because “with the power to nudge human behaviours in positive or intended ways comes the risk that human behaviours may be nudged in costly or unintended ways” (478).

This is a necessarily interdisciplinary field which is currently in a nascent period of integration, operating in an ad hoc way. There is a skills gap on both sides of the groups with a stake in this, described on 478:

These scientists may be expert mathematicians and engineers; however, they are typically not trained behaviourists. They rarely receive formal instruction on experimental methodology, population-based statistics and sampling paradigms, or observational causal inference, let alone neuroscience, collective behaviour or social theory. Conversely, although behavioural scientists are more likely to possess training in these scientific methods, they are less likely to possess the expertise required to proficiently evaluate the underlying quality and appropriateness of AI techniques for a given problem domain or to mathematically describe the properties of particular algorithms.

But their work will necessarily move beyond the optimisation of AI agents against benchmarks, necessitating “a broader set of indicators, much as social scientists explore a wide range of human behaviours in the realm of social, political or economic interactions” (479). It’s notable how little room is left here for the interpretation of AI agents, human responses to them and the relation between the two, despite the potential value which a broader methodological repertoire could offer. The fact that they later cite a 2018 paper to say that lab-based research on human–machine interaction has begun speaks volumes: even on the most charitable reading, they are aware of HCI but see it as a different endeavour. From 479:

As such, scholars of machine behaviour spend considerable effort in defining measures of micro and macro outcomes to answer broad questions such as how these algorithms behave in different environments and whether human interactions with algorithms alter societal outcomes. Randomized experiments, observational inference and population-based descriptive statistics—methods that are often used in quantitative behavioural sciences—must be central to the study of machine behaviour. Incorporating scholars from outside of the disciplines that traditionally produce intelligent machines can provide knowledge of important methodological tools, scientific approaches, alternative conceptual frameworks and perspectives on the economic, social and political phenomena that machines will increasingly influence.

The theoretical approach they suggest is a taxonomic one, drawing on a parallel with ethology. As they put it on 480, “Machines have mechanisms that produce behaviour, undergo development that integrates environmental information into behaviour, produce functional consequences that cause specific machines to become more or less common in specific environments and embody evolutionary histories through which past environments and human decisions continue to influence machine behaviour”.

They see mechanisms for generating behaviour as emerging from the interplay between an algorithm and its environment, even if a particular algorithm is embedded in an interlocking structure to constitute a complex AI agent. Machinology (my term, not theirs) would ask how particular behaviours emerge from this interplay, whether from the choices of human engineers, training, interaction or feedback from the environment. These behaviours serve a function, or fail to, for human stakeholders, leading to a selective force which might make some more common, e.g. if useful behaviours are reproduced in further AI agents. They reference institutions as sources of incentives which shape the behaviour of human stakeholders. Institutional incentives can cause socially pathological machine behaviours to spread, for instance systems which maximise user engagement on social media, leading to the proliferation of ‘fake news’ while ensuring the success of the platform itself.

These are two parallel levels of explanation, one explaining ontogeny (how the entity has the characteristics it does) through the identification of causal mechanisms leading to the development of behaviour, the other explaining phylogeny (why the entity has developed these characteristics rather than others) through the analysis of adaptive value. The transmission of these characteristics is varied and open, though mediated through institutions like intellectual property laws and regulatory regimes. As they put it, “machines may exhibit very different evolutionary trajectories, as they are not bound by the mechanisms of organic evolution” (481).

They suggest three levels of analysis: individual machines, collectives of machines and groups of machines embedded in an environment. The first is currently undertaken by computer scientists and engineers, often looking at an individual machine solely in terms of properties arising from its source code and design. These studies can be conducted in a within-machine or between-machine manner: examining how one machine’s behaviour varies across contexts, or how the same behaviours vary between machines. The second looks at “the interactive and system-wide behaviours of collections of machine agents” (482), as can be seen in approaches such as multi-agent systems and computational game theory. Machines using simple algorithms for local interactions can aggregate into complex behaviours at a collective level, studied as aggregation dynamics, but also into forms of social learning which might take place when humans and institutions provide a potent source of environmental feedback. Thirdly, machines shape human behaviour and vice versa, necessitating study of these hybrid and embedded realities. These dynamics are likely to operate simultaneously, in complex and hybridising ways.

They end with a consideration of some of the methodological, theoretical and logistical obstacles to machinology. The most urgent is the uncertainty over legal penalties for reverse engineering algorithms and violating terms of service, alongside the risks involved in anthropomorphising machines (with their potentially alien intelligences), particularly in a way which distracts from the crucial human operations involved in providing the training data.
