Tagged: machine learning

  • Mark 8:01 pm on November 11, 2019 Permalink | Reply
    Tags: anti-hermeneutic, evisceration, human agency, machine learning, platform epistemics

    Humans as blackboxes, machines as transparent 

    From Automating Inequality by Virginia Eubanks pg 167:

    Parents in Allegheny County helped me articulate an inchoate idea that had been echoing in my head since I started my research. In Indiana, Los Angeles, and Allegheny County, technologists and administrators explained to me that new high-tech tools in public services increase transparency and decrease discrimination. They claimed that there is no way to know what is going on in the head of a welfare caseworker, a homeless service provider, or an intake call screener without using big data to identify patterns in their decision-making. I find the philosophy that sees human beings as unknowable black boxes and machines as transparent deeply troubling. It seems to me a worldview that surrenders any attempt at empathy and forecloses the possibility of ethical development. The presumption that human decision-making is opaque and inaccessible is an admission that we have abandoned a social commitment to try to understand each other. Poor and working-class people in Allegheny County want and deserve more: a recognition of their humanity, an understanding of their context, and the potential for connection and community.

     
  • Mark 7:56 pm on November 11, 2019 Permalink | Reply
    Tags: automating inequality, machine learning, public administration

    A machinery for producing rationalisations 

    I thought this passage by Virginia Eubanks in Automating Inequality was extremely powerful. She explains on pg 121-122 how machine learning systems can operate as a form of triage, sorting people in order to distribute scarce resources in a seemingly more rational fashion:

    COINTELPRO (the COunter INTELligence PROgram of the FBI), for example, focused on civil rights activists for both their race and their political activism. But wiretaps, photography, tailing, and other techniques of old surveillance were individualized and focused. The target had to be identified before the watcher could surveil. In contrast, in new data-based surveillance, the target often emerges from the data. The targeting comes after the data collection, not before. Massive amounts of information are collected on a wide variety of individuals and groups. Then, the data is mined, analyzed, and searched in order to identify possible targets for more thorough scrutiny. Sometimes this involves old-school, in-person watching and tracking. But increasingly, it only requires finer sifting of data that already exists. If the old surveillance was an eye in the sky, the new surveillance is a spider in a digital web, testing each connected strand for suspicious vibrations.

    Surveillance is not only a means of watching or tracking, it is also a mechanism for social sorting. Coordinated entry collects data tied to individual behavior, assesses vulnerability, and assigns different interventions based on that valuation. “Coordinated entry is triage,” said Molly Rysman, the Housing and Homeless deputy for LA’s Third District. “All of us have thought about it like a natural disaster. We have extraordinary need and can’t meet all of that need at once. So you’ve got to figure out: How do we get folks who are going to bleed to death access to a doctor, and folks who have the flu to wait? It’s unfortunate to have to do that, but it is the reality of what we’re stuck with.”

    In his prescient 1993 book, The Panoptic Sort, communication scholar Oscar Gandy of the University of Pennsylvania also suggests that automated sorting of digital personal information is a kind of triage. But he pushes further, pointing out that the term is derived from the French trier, which means to pick over, cull, or grade marketable produce. “Although some metaphors speak for themselves, let me be clear,” he writes. In digital triage, “individuals and groups of people are being sorted according to their presumed economic or political value. The poor, especially poor people of color, are increasingly being treated as broken material or damaged goods to be discarded.”

    But as she goes on to write on pg 122, those systems support moral judgements which can operate as rationalisations for those we don’t help and actions we don’t take:

    But if homelessness is a human tragedy created by policy decisions and professional middle-class apathy, coordinated entry allows us to distance ourselves from the human impacts of our choice to not act decisively. As a system of moral valuation, coordinated entry is a machine for producing rationalization, for helping us convince ourselves that only the most deserving people are getting help. Those judged “too risky” are coded for criminalization. Those who fall through the cracks face prisons, institutions, or death.
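
    To make the sorting mechanism concrete, here is a minimal sketch of what score-based triage can look like in code. It is my illustration, not Eubanks’ description of any actual coordinated entry system: the weights, thresholds and field names are all hypothetical. The point is that the moral valuation lives in the weights and cut-offs, which read as neutral arithmetic once written down.

      # A minimal sketch (my illustration, not Eubanks') of score-based triage.
      # All weights, thresholds and field names are hypothetical.
      HYPOTHETICAL_WEIGHTS = {"er_visits": 0.4, "nights_unsheltered": 0.3, "prior_evictions": 0.3}

      def vulnerability_score(record):
          # Weighted sum of recorded behaviours; the weights encode policy choices.
          return sum(HYPOTHETICAL_WEIGHTS[k] * record.get(k, 0) for k in HYPOTHETICAL_WEIGHTS)

      def assign_intervention(record):
          # Thresholds turn a continuous valuation into a rationing decision.
          score = vulnerability_score(record)
          if score >= 8:
              return "permanent supportive housing"
          if score >= 4:
              return "rapid re-housing"
          return "no intervention"  # those who fall through the cracks

      applicants = [
          {"er_visits": 12, "nights_unsheltered": 10, "prior_evictions": 2},
          {"er_visits": 2, "nights_unsheltered": 5, "prior_evictions": 1},
      ]
      for a in applicants:
          print(f"score={vulnerability_score(a):.1f} -> {assign_intervention(a)}")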

     
    • landzek 12:51 am on November 13, 2019 Permalink

      Don’t people do that anyways without the assistance of machines?

      Couldn’t we even say that individual cultural groups are themselves a machine that sorts out who is worthy and where they belong in a hierarchical organization of the group/not group?

      It’s interesting to me how some types of analysis speak to things as if it’s something new, just because it’s appearing differently.

      It’s like capitalism creating an antagonist that itself is capitalistic.

      For example, war. Does it really matter, or does it have any significance at all, if there are two groups of say 500,000 people who get in a war and 20,000 of them get killed, compared to two groups of 2 million people where the same percentage die?

      Sometimes I wonder what the implicit morality or implicit messages are in some of these analyses.

      I’m not sure what is so horrible about a machine designating and enforcing, through intellectual capital, which individuals are not valued or valuable, beyond the fact that human beings do that all the time without machines, that is, deciding who is valuable and who is not.

      Sometimes I wonder what the world would look like if everyone was valued equally and there was no war.

      The question itself seems never to be asked, because the answer then goes back to emphasizing how any particular project is almost useless, basically generated just by the individual attempting to place some self in a society that is itself a machine valuing and devaluing individuals and groups in particular ways.

      And I’m not being nihilistic or pessimistic, but when I read some of these kinds of analyses, what strikes me is what is implicit in the analysis that no one wants to talk about.

      So I guess I’m kind of asking you, since you posted the segment, what do you think is being left out? What is the implicit agenda, the endgame, what is the point of this segment? What, for example, are you trying to tell me?

  • Mark 7:51 pm on November 11, 2019 Permalink | Reply
    Tags: bias, machine learning

    How machine learning veils human bias 

    The promise of introducing machine learning into public administration is that it can counteract human bias. The latent promise of bureaucracy can be realised by systems that won’t be up-ended by the messy imperfections of their human operators. However, Virginia Eubanks makes clear in Automating Inequality that the reality is something much more worrying, as the operation of machinic systems does what Andrew Pickering calls ontological veiling: taking us on a detour away from aspects of reality and thereby rendering them unrepresentable. As Eubanks recalls on pg 166:

    Human bias has been a problem in child welfare since the field’s inception. In its earliest days, Charles Loring Brace’s orphan trains carried away so many Catholic sons and daughters that the religious minority had to create an entirely parallel system of child welfare organizations. Scientific charity workers had religious biases that tended to skew their decision-making. They believed that the children of Protestants could be redeemed by their families, but Catholics were incorrigible and had to be sent to labor on (mostly Protestant) farms in the Midwest. Today, racial disproportionality shatters the bonds of too many Black and Native American families. Some of that disproportion can certainly be traced to human discretion in child welfare decision-making. But human bias is a built-in feature of the predictive risk model, too.

    Compare to the contemporary reality depicted on pg 167:

    Once the big blue button is clicked and the AFST runs, it manifests a thousand invisible human choices. But it does so under a cloak of evidence-based objectivity and infallibility. Intake screeners reflect a variety of experiences and life paths, from the suburban white Penn State postgraduate to an African American Pittsburgh native, like Pat Gordon, with over a decade of experience. The automated discretion of predictive models is the discretion of the few. Human discretion is the discretion of the many. Flawed and fallible, yes. But also fixable.
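
    The mechanism Eubanks describes is easy to reproduce in miniature. The sketch below is my illustration, not the AFST or any real model: it generates synthetic data in which past decisions were biased against one group, drops the group attribute, and trains a model on those decisions. A correlated proxy is enough for the score gap to persist under the veneer of objectivity.

      # A minimal sketch (my illustration, not the AFST) of how a model can
      # launder prior human bias: the labels are biased past decisions, and a
      # proxy variable lets the model reproduce the disparity even when the
      # sensitive attribute itself is excluded from the features.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 5000
      group = rng.integers(0, 2, n)              # sensitive attribute
      need = rng.normal(0, 1, n)                 # what the score claims to measure
      proxy = group + rng.normal(0, 0.5, n)      # e.g. neighbourhood, correlated with group

      # Historical decisions were biased against group 1, independently of need.
      past_flagged = (need + 1.5 * group + rng.normal(0, 1, n)) > 1.0

      # Train on past decisions, *excluding* the sensitive attribute itself.
      X = np.column_stack([need, proxy])
      model = LogisticRegression().fit(X, past_flagged)

      scores = model.predict_proba(X)[:, 1]
      print(f"mean risk score, group 0: {scores[group == 0].mean():.2f}")
      print(f"mean risk score, group 1: {scores[group == 1].mean():.2f}")
      # The gap persists: the bias in the training labels is now 'evidence-based'.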

     
  • Mark 9:29 am on April 26, 2019 Permalink | Reply
    Tags: computer science, machine learning

    The birth of machinology 

    My notes on Rahwan, I. et al. (2019) Machine Behaviour. Nature, 568, 477–486

    The proliferation of intelligent machines, ranging from machine learning systems through to their embodiment in robotics, raises the question of how their behaviour should be studied and understood. In this agenda-setting paper, the team of authors suggest this now requires the deliberate formation of a new field, bringing together many parts which have already taken shape. From 477:

    This field overlaps with, but is distinct from, computer science and robotics. It treats machine behaviour empirically. This is akin to how ethology and behavioural ecology study animal behaviour by integrating physiology and biochemistry—intrinsic properties—with the study of ecology and evolution—properties shaped by the environment. Animal and human behaviours cannot be fully understood without the study of the contexts in which behaviours occur. Machine behaviour similarly cannot be fully understood without the integrated study of algorithms and the social environments in which algorithms operate

    These AI agents, virtual and embodied, tend to be studied by their creators with a narrow focus on whether they are performing their intended tasks. However, a broader approach is necessary as they “increasingly integrate into our society and are already involved in a variety of activities, such as credit scoring, algorithmic trading, local policing, parole decisions, driving, online dating and drone warfare” (477). This would respond to growing concerns over the lack of predictability surrounding their consequences, the loss of human oversight and the potential harms arising from once human-led tasks being increasingly taken over by AI agents. They cite three main motivations for this:

    1. The ubiquity of algorithms has reached an unprecedented point, operating across fields as diverse as news, credit, entertainment, travel, security and dating. They are increasingly likely to play a role in raising the young, caring for the old and coordinating collective behaviour.
    2. The complexity and opacity of algorithms is increasing rapidly, leading to black boxes in which the mechanism linking inputs and outputs is uncertain even when the initial code was well understood, because the training data and training process produce uncertain results (the sketch after this list illustrates the point). This is compounded by the proprietary character of the work underlying them, and their feedback loops with diverse environments add another layer of complexity to the process.
    3. Their capacity to have beneficial or detrimental effect on humanity necessitates scrutiny because “with the power to nudge human behaviours in positive or intended ways comes the risk that human behaviours may be nudged in costly or unintended ways” (478).
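
    The opacity point in (2) is easy to demonstrate. Below is a minimal sketch, mine rather than the authors’: two classifiers built from identical code and trained on identical data, differing only in random initialisation, can still disagree on a noticeable share of new inputs. Reading the source code alone cannot tell you how a trained model will behave.

      # A minimal sketch (not from the paper): identical code and data,
      # different random seeds, divergent behaviour on new inputs.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      # Same architecture, same data: only the initialisation seed differs.
      model_a = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1).fit(X_train, y_train)
      model_b = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=2).fit(X_train, y_train)

      disagree = (model_a.predict(X_test) != model_b.predict(X_test)).mean()
      print(f"Share of test inputs where the 'identical' models disagree: {disagree:.1%}")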

    This is a necessarily interdisciplinary field which is currently in a nascent period of integration, operating in an ad hoc way. There is a skills gap on both sides among the groups with a stake in this, described on 478:

    These scientists may be expert mathematicians and engineers; however, they are typically not trained behaviourists. They rarely receive formal instruction on experimental methodology, population-based statistics and sampling paradigms, or observational causal inference, let alone neuroscience, collective behaviour or social theory. Conversely, although behavioural scientists are more likely to possess training in these scientific methods, they are less likely to possess the expertise required to proficiently evaluate the underlying quality and appropriateness of AI techniques for a given problem domain or to mathematically describe the properties of particular algorithms.

    But their work will necessarily move beyond optimisation of AI agents against benchmarks, necessitating “a broader set of indicators, much as social scientists explore a wide range of human behaviours in the realm of social, political or economic interactions” (479). It’s notable how little room is left here for interpretation of AI agents, human responses to them and the relation between the two, despite the potential value which a broader methodological repertoire could offer. The fact that they later cite a paper from 2018 to note that lab-based research on human-machine interaction has begun speaks volumes, even on the most charitable reading that they are aware of HCI but see it as a different endeavour. From 479:

    As such, scholars of machine behaviour spend considerable effort in defining measures of micro and macro outcomes to answer broad questions such as how these algorithms behave in different environments and whether human interactions with algorithms alter societal outcomes. Randomized experiments, observational inference and population-based descriptive statistics—methods that are often used in quantitative behavioural sciences—must be central to the study of machine behaviour. Incorporating scholars from outside of the disciplines that traditionally produce intelligent machines can provide knowledge of important methodological tools, scientific approaches, alternative conceptual frameworks and perspectives on the economic, social and political phenomena that machines will increasingly influence.

    The theoretical approach they suggest is a taxonomic one, drawing on a parallel with ethology. As they put it on 480, “Machines have mechanisms that produce behaviour, undergo development that integrates environmental information into behaviour, produce functional consequences that cause specific machines to become more or less common in specific environments and embody evolutionary histories through which past environments and human decisions continue to influence machine behaviour”.

    They see mechanisms for generating behaviour as emerging from the interplay between an algorithm and its environment, even if a particular algorithm is embedded in an interlocking structure to constitute a complex AI agent. Machinology (my term, not theirs) would ask how particular behaviours emerged from this interplay, whether from the choices of human engineers, training, interaction or feedback from the environment. These behaviours serve a function or otherwise for human stakeholders, leading to a selective force which might make some more common e.g. if useful behaviours are reproduced in further AI agents. The authors reference institutions as sources of incentives which shape the behaviour of human stakeholders. Institutional incentives can cause machine behaviours to spread which are socially pathological, for instance systems which maximise user engagement on social media and thereby lead to the proliferation of ‘fake news’ while ensuring the success of the platform itself.
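
    That engagement dynamic can be made concrete with a toy simulation, my illustration rather than anything in the paper. A ranker that simply reinforces whatever gets clicked, facing users who click sensational items more often, ends up dominated by sensational content even though no such preference was programmed in.

      # A toy simulation (my illustration, not the authors'): reinforcement
      # of clicks plus a click-rate gap is enough for sensational items to
      # dominate the ranking, with no explicit preference coded anywhere.
      import random

      random.seed(42)
      # Hypothetical items: the first 10 are 'sensational' and attract more clicks.
      items = [{"id": i, "sensational": i < 10, "score": 1.0} for i in range(50)]

      def click_probability(item):
          return 0.30 if item["sensational"] else 0.05

      for _ in range(10000):
          # Exposure is proportional to current score (rich get richer)...
          shown = random.choices(items, weights=[it["score"] for it in items])[0]
          # ...and every click reinforces the shown item.
          if random.random() < click_probability(shown):
              shown["score"] += 1.0

      top = sorted(items, key=lambda it: it["score"], reverse=True)[:5]
      print([(it["id"], it["sensational"]) for it in top])  # mostly sensational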

    These are two parallel levels of explanation, one explaining ontogeny (how the entity has the characteristics it does) through the identification of causal mechanisms leading to the development of behaviour, the other explaining phylogeny (why the entity has developed these characteristics rather than others) through the analysis of adaptive value. The transmission of these characteristics is varied and open, though mediated through institutions like intellectual property laws and regulatory regimes. As they put it, “machines may exhibit very different evolutionary trajectories, as they are not bound by the mechanisms of organic evolution” (481).

    They suggest three levels of analysis: individual machines, collective machines and groups of machines embedded in an environment. The first is currently undertaken by computer scientists and engineers, often looking at an individual machine solely in terms of properties arising from its source code and design. These studies could be conducted in a within-machine or between-machine manner, looking at variation in how one machine behaves across contexts or at how the same behaviours vary between machines. The second looks at “the interactive and system-wide behaviours of collections of machine agents” (482), as can be seen in approaches such as multi-agent systems and computational game theory. Machines using simple algorithms for local interactions can aggregate into complex behaviours at a collective level, studied as aggregation dynamics, alongside the forms of social learning which might take place when humans and institutions provide a potent source of environmental feedback. Thirdly, machines shape human behaviour and vice versa, necessitating study of these hybrid and embedded realities. These dynamics are likely to operate at the same time, in complex and hybridising ways.
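
    The within-machine/between-machine distinction can be glossed in a few lines of code. This is my sketch of the two study designs, not anything from the paper: the first loop watches one model’s behaviour drift as its environment shifts, the second compares different models in the same environment.

      # A minimal sketch (my gloss, not the authors' code) of the two designs.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X, y)

      # Within-machine: one model, environments drifting from the training data.
      for shift in [0.0, 1.0, 2.0]:
          X_env = X + np.random.default_rng(1).normal(0, shift, X.shape)
          print(f"shift={shift}: accuracy={model.score(X_env, y):.2f}")

      # Between-machine: different models, one shared environment.
      for m in [LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=3)]:
          m.fit(X, y)
          print(f"{type(m).__name__}: positive rate={m.predict(X).mean():.2f}")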

    They end with a consideration of some of the methodological, theoretical and logistical obstacles to machinology, the most urgent of which are the uncertainty over legal penalties for reverse engineering algorithms and violating terms of service, and the risks involved in anthropomorphising machines (with their potentially alien intelligences) in a way which distracts from the crucial human operations involved in providing the training data.

     
  • Mark 10:56 am on December 19, 2018 Permalink | Reply
    Tags: Alison Hearn, data mining, Helen Kennedy, machine learning

    Cultural studies of data mining 

    My notes on Andrejevic, M., Hearn, A., & Kennedy, H. (2015). Cultural studies of data mining: Introduction, European Journal of Cultural Studies 18(4-5), 379-394

    In this introduction to an important special issue, Mark Andrejevic, Alison Hearn and Helen Kennedy argue that the ubiquity of data infrastructure in everyday life means that “we cannot afford to limit our thinking about data analysis technologies by approaching them solely as communication media” and offer a list of questions which we need to address:

    what kinds of data are gathered, constructed and sold; how these processes are designed and implemented; to what ends data are deployed; who gets access to them; how their analysis is regulated (boyd and Crawford, 2012) and what, if any, possibilities for agency and better accountability data mining and analytics open up. (pg 380)

    This creates a problem for cultural studies because data mining challenges established forms of representation, “promising to discern patterns that are so complex that they are beyond the reach of human perception, and in some cases of any meaningful explanation or interpretation”. It is “not only a highly technical practice, it also tends to be non-transparent in its applications, which are generally privately owned and controlled”. It poses an ontological challenge to cultural studies, as well as epistemological and methodological ones. In the absence of access to the products of data mining, the authors suggest cultural studies is left theorising their effects.

    If we approach data analysis technologies as communicative media, we miss a “shift away from interpretive approaches and meaning-making practices towards the project of arranging and sorting people (and things) in time and space” (pg 381). Data mining isn’t undertaken to understand the communication taking place, as much as to “arrange and sort people and their interactions”. They suggest that recent developments in social theory mirror this changing reality (pg 381-382):

    Perhaps not coincidentally, recent forms of social and cultural theory mirror developments in big data analytics; new materialism, object-oriented ontology, post-humanism and new medium theory – all of which are coming to play an important role in digital media studies – de-centre the human and her attendant political and cultural concerns in favour of a ‘flat’ ontology wherein humans are but one node, and perhaps not the most important, in complex networks of interactions and assemblages. Thus, analysis of the circulation of affects and effects rather than of meanings, content or representations, connected as they are to human-centred forms of meaning-making, has become a dominant trope in some influential current approaches to media. Such analyses tend to fashion themselves as anti-discursive in their rejection of a focus on representation and cognition and their turn towards bodies and things in their materiality (rather than their signification).

    They make the compelling argument that to “remain within the horizon of interpretation, explanation and narrative” can be a “strategic critical resource in the face of theoretical tendencies that reproduce the correlational logic of the database by focusing on patterns and effects rather than on interpretations or explanations” (pg 382). The promise of these new approaches to correct an excessively discursive focus risks an “over-correction” and a “view from nowhere” in which “the goal of comprehensiveness (the inclusion of all components of an endless network of inter-relations) tends towards a politically inert process of specification in which structures of power and influence dissipate into networks and assemblages” (pg 383). Pushing beyond human concerns too easily leads to ever more specific analyses which collapse the substance of interactions into their effects, leaving us with “no way of generating a dynamics of contestation and argument in a flat ontology of ever proliferating relations or objects” (pg 384).

    This is not a claim that there is nothing beyond culture, but rather a reminder that invoking this beyond is intrinsically cultural, and a call for “an interrogation of the embrace of a post-cultural imaginary within contemporary media theory” (pg 384). This imaginary often obscures the political economy of data infrastructure, compounding the existing tendency for the ‘virtual’ character of digital phenomena to distract from their socio-economic materiality; for all their opacity, complexity and power they are just another phase in the technological development of human civilisation (pg 385). When we recognise this it becomes easier to reject the “celebratory presentism” and remember that “technological forms, and the rhetorics and analytic practices that accompany them, do not come from nowhere – they have histories, which shape and condition them, and inevitably bear the marks of the cultural, social and political conditions surrounding their production and implementation” (pg 385). They end this wonderful paper with a call to action which I’d like to explore in the digital public sociology book I’m writing with Lambros Fatsis (pg 393):

    We need to develop new methodologies and new intellectual and critical competencies to tackle the embedded assumptions buried in the code and their political and cultural implications. Our ability to accomplish these things will require more than isolated scholarly effort; collaborative, politically engaged activist sensibilities will no doubt be required in order to push past the privatized digital enclosures and open up access to the algorithms, analytics, distributive regimes and infrastructural monopolies that are increasingly coming to condition the contours and substance of our daily lives.

     
  • Mark 10:44 am on December 1, 2018 Permalink | Reply
    Tags: machine learning   

    Masterclass: An Introduction to Machine Learning 

    Tuesday December 4th 12pm
    Faculty of Education, University of Cambridge
    Everyone welcome! It’s a short journey from Cambridge train station

    We hear a lot about the coming ‘automation revolution’, but what might developments in machine learning and AI mean for researchers in the social sciences and humanities? In our next masterclass, Associate Professor Inger Mewburn (from the Australian National University and the Thesis Whisperer Blog) will talk about her forthcoming book on machine learning in the social sciences. What kinds of projects does machine learning make possible? What kind of collaborations can social scientists make to take advantage of these new tools and techniques? Do some of our PhD graduates have a future in bespoke algorithm design? Come along to this discussion on future social science practice.

    Register here: https://www.eventbrite.com/e/masterclass-an-introduction-to-machine-learning-with-dr-inger-mewburn-tickets-52665601231

     
    • TheSociologicalMail 3:16 pm on December 1, 2018 Permalink

      Very interesting!

    • landzek 5:42 pm on December 1, 2018 Permalink

      It’s the beginning of Skynet. From The Terminator. We should expect the dude coming back from the future any day now. 😄

  • Mark 10:22 am on October 31, 2018 Permalink | Reply
    Tags: disruptive innovation, machine learning, techno-nationalism

    Provincialising disruption 

    The robots are coming! The robots are coming! After watching More Human Than Human, I’ve woken up preoccupied by the rise of the robots narrative and how inadequate it is for making sense of the cultural politics and political economy of automation. The film is an engaging exploration of artificial intelligence and its social significance. While its analysis is often superficial, it foregrounds the agency of the roboticists and thinkers who are shaping emerging technologies, and this feels important to me. Nonetheless it sits uneasily with the film’s tendency to frame technological change as inexorable, able to be steered for good or evil but impossible to constrain. This is a tension at the heart of disruption rhetoric, which celebrates innovation as a form of creativity while holding it to be unavoidable. But this is just one way in which the film starkly embodies a broader trend.

    One reason it is important to see the figures shaping these developments is that it makes clear how white, male and Anglo-American they are. As Jana Bacevic observed, the film manifestly fails the Bechdel test. There are three women with speaking roles in the film, only one of whom talks about her own work, and she does so in a way framed through the lens of the man whose memory powers it. As far as I can recall, every single person in the film is white, mostly American with a few northern Europeans thrown in for good measure. The only exception is a Russian-born woman who now works as an entrepreneur in Silicon Valley. This is problematic for many reasons, not least of all because much cutting-edge work in artificial intelligence is taking place in China. By ignoring these developments, the film not only undermines its own investigative mission but further evacuates the political questions it raises by robbing them of their geopolitical dimension. Disruptive innovation is bound up in techno-nationalism, as machine learning becomes an arms race with epochal significance at a time when American power seemingly enters a state of terminal decline after years of domination without hegemony.

    The film ends in a contemplative mode, reiterating familiar ruminations about our future. Every sentence in the closing scene repeatedly invokes ‘we’ and ‘our’. Who are we? How does the white American author in his early 30s who provides the intellectual narration for the film come to articulate the agenda of this we? How does the older white American director who provides its substantive narration, with the film framed around his own personal project in disruptive innovation, come to articulate the agenda of this we? The ‘we’ here is devoid of politics. It is a we without a they, as Chantal Mouffe would put it. At a time when the liberal order is in chaos, we ought to be suspicious to the point of paranoia about the emergence of a powerful narrative of civilisational renewal in which we can save ourselves or we can doom ourselves. It is Anglo-American capitalism mystifying its own bleeding edge, making a religion out of its own products and celebrating them as world-making or fearing them as world-breaking. None of this is to deny that hugely significant technological advances are occurring. But the rise of the robots narrative actively frustrates our understanding of them, systematically shutting down the intellectual space in which it becomes possible to think through the cultural politics and political economy of automation. Provincialising disruption is unavoidable if we want to understand the reality of putatively disruptive technologies.

     
  • Mark 4:49 pm on June 6, 2018 Permalink | Reply
    Tags: machine learning

    CfP: The Cultural Life of Machine Learning: An Incursion into Critical AI Studies 

    How good does this look? So much of this chimes with the paper I’m currently struggling to finish.

    The Cultural Life of Machine Learning: An Incursion into Critical AI Studies
    Preconference Workshop, #AoIR2018 Montréal, Canada
    Urbanisation Culture Société Research Centre, INRS (Institut national de la recherche scientifique)
    Wednesday October 10th 2018

    Machine learning (ML), deep neural networks, differentiable programming and related contemporary novelties in artificial intelligence (AI) are all leading to the development of an ambiguous yet efficient narrative promoting the dominance of a scientific field—as well as a ubiquitous business model. Indeed, AI is very much in full hype mode. For its advocates, it represents a ‘tsunami’ (Manning, 2015) or ‘revolution’ (Sejnowski, 2018)—terms indicative of a very performative and promotional, if not self-fulfilling, discourse. The question, then, is: how are the social sciences and humanities to dissect such a discourse and make sense of all its practical implications? So far, the literature on algorithms and algorithmic cultures has been keen to explore both their broad socio-economic, political and cultural repercussions, and the ways they relate to different disciplines, from sociology to communication and Internet studies. The crucial task ahead is understanding the specific ways by which the new challenges raised by ML and AI technologies affect this wider framework. This would imply not only closer collaboration among disciplines—including those of STS for instance—but also the development of new critical insights and perspectives. Thus a helpful and precise pre-conference workshop question could be: what is the best way to develop a fine-grained yet encompassing field under the name of Critical AI Studies? We propose to explore three regimes in which ML and 21st-century AI crystallize and come to justify their existence: (1) epistemology, (2) agency, and (3) governmentality—each of which generates new challenges as well as new directions for inquiries.

    In terms of epistemology, it is important to recognize that ML and AI are situated forms of knowledge production, and thus worthy of empirical examination (Pinch and Bijker, 1987). At present, we only have internal accounts of the historical development of the machine learning field, which increasingly reproduce a teleological story of its rise (Rosenblatt, 1958) and fall (Minsky and Papert 1968; Vapnik 1998) and rise (Hinton 2006), concluding with the diverse if as-yet unproven applications of deep learning. Especially problematic in this regard is our understanding of how these techniques are increasingly hybridized with large-scale training datasets, specialized graphics-processing hardware, and algorithmic calculus. The rationale behind contemporary ML finds its expression in a very specific laboratory culture (Forsythe 1993), with a specific ethos or model of “open science”. Models trained on the largest datasets of private corporations are thus made freely available, and subsequently détourned for the new AI’s semiotic environs of image, speech, and text—promising to make the epistemically recalcitrant landscapes of unruly and ‘unstructured’ data newly “manageable”.

    As the knowledge-production techniques of ML and AI move further into the fabric of everyday life, it creates a particularly new form of agency. Unlike the static, rule-based systems critiqued in a previous generation by Dreyfus (1972), modern AI models pragmatically unfold as a temporal flow of decontextualized classifications. What then does agency mean for machine learners (Mackenzie, 2017)? Performance in this particular case relates to the power of inferring and predicting outcomes (Burell, 2016); new kinds of algorithmic control thus emerge at the junction of meaning-making and decision-making. The implications of this question are tangible, particularly as ML becomes more unsupervised and begins to impact on numerous aspects of daily life. Social media, for instance, are undergoing radical change, as insightful new actants come to populate the world: Echo translates your desires into Amazon purchases, and Facebook is now able to detect suicidal behaviours. In the general domain of work, too, these actants leave permanent traces—not only on repetitive tasks, but on the broader intellectual responsibility.

    Last but not least, the final regime to explore in this preconference workshop is governmentality. The politics of ML and AI are still largely to be outlined, and the question of power for these techniques remains largely unexplored. Governmentality refers specifically to how a field is organised—by whom, for what purposes, and through which means and discourses (Foucault, 1991). As stated above, ML and AI are based on a model of open science and innovation, in which public actors—such as governments and universities—are deeply implicated (Etzkowitz and Leydesdorff, 2000). One problem, however, is that while the algorithms themselves may be openly available, the datasets on which they rely for implementation are not—hence the massive advantages for private actors such as Google or Facebook who control the data, as well as the economic resources to attract the brightest students in the field. But there is more: this same open innovation model makes possible the manufacture of military AI with little regulatory oversight, as is the case for China, whose government is currently helping to fuel an AI arms race (Simonite 2017). What alternatives or counter-powers could be imagined in these circumstances? Could ethical considerations stand alone without a proper and fully developed critical approach to ML and AI? This workshop will try to address these pressing and interconnected issues.

    We welcome all submissions which might profitably connect with one or more of these three categories of epistemology, agency, and governmentality; but we welcome other theoretically and/or empirically rich contributions.

    Interested scholars should submit proposal abstracts, of approximately 250 words, by 11:59pm EDT on June 30th, 2018 to CriticalAI2018 [at] gmail [dot] com. Proposals may represent works in progress, short position papers, or more developed research. The format of the workshop will focus on paper presentations and keynotes, with additional opportunities for group discussion and reflection.

    This preconference workshop will be held at the Urbanisation Culture Société Research Centre of INRS (Institut national de la recherche scientifique). The Centre is located at 385 Sherbrooke St E, Montreal, QC, and is about a 20-minute train ride from the Centre Sheraton on the STM Orange Line (enter at the Bonaventure stop, exit at Sherbrooke), or about a 30-minute walk along Rue Sherbrooke.

    For information on the AoIR (Association of Internet Researchers) conference, see https://aoir.org/aoir2018/ ; for other preconference workshops at AoIR 2018, see https://aoir.org/aoir2018/preconfwrkshop/.

    Organizers: Jonathan Roberge (INRS), Michael Castelle (University of Warwick), and Thomas Crosbie (Royal Danish Defence College).

     