Tagged: discrimination

  • Mark 7:51 pm on November 11, 2019
    Tags: bias, discrimination

    How machine learning veils human bias 

    The promise of introducing machine learning into public administration is that it can counteract human bias. The latent promise of bureaucracy can be realised by systems that won't be upended by the messy imperfections of their human operators. However, as Virginia Eubanks makes clear in Automating Inequality, the reality is something much more worrying: the operation of machinic systems does what Andrew Pickering calls ontological veiling, rendering aspects of reality unrepresentable by taking us on a detour away from them. As Eubanks recalls on pg 166:

    Human bias has been a problem in child welfare since the field’s inception. In its earliest days, Charles Loring Brace’s orphan trains carried away so many Catholic sons and daughters that the religious minority had to create an entirely parallel system of child welfare organizations. Scientific charity workers had religious biases that tended to skew their decision-making. They believed that the children of Protestants could be redeemed by their families, but Catholics were incorrigible and had to be sent to labor on (mostly Protestant) farms in the Midwest. Today, racial disproportionality shatters the bonds of too many Black and Native American families. Some of that disproportion can certainly be traced to human discretion in child welfare decision-making. But human bias is a built-in feature of the predictive risk model, too.

    Compare this to the contemporary reality depicted on pg 167:

    Once the big blue button is clicked and the AFST runs, it manifests a thousand invisible human choices. But it does so under a cloak of evidence-based objectivity and infallibility. Intake screeners reflect a variety of experiences and life paths, from the suburban white Penn State postgraduate to an African American Pittsburgh native, like Pat Gordon, with over a decade of experience. The automated discretion of predictive models is the discretion of the few. Human discretion is the discretion of the many. Flawed and fallible, yes. But also fixable.

     
  • Mark 8:01 am on April 11, 2016
    Tags: discrimination

    The Ambivalence of the Abject 

    There's an interesting observation made by David Schultz in his American Politics In An Age of Ignorance concerning the stock character of 'the welfare queen', one which I think applies to other such abject characters. From loc 975:

    This image of the welfare queen as a shrewd, calculating, yet lazy individual seemed odd. She was smart enough to follow welfare laws across the country and migrate to the place with the best benefits. Yet she was also unskilled and not motivated enough to look for real work. The welfare queen was a composite of the best and worst in human nature.

    If this is an ontological dimension to abjection, can we also see an epistemological one? Listen to any call-in show for long enough and you'll find examples of such figures being recognisable from the slightest cues (e.g. their curtains are still drawn by 9am) or on the basis of placeless testimony (e.g. "I heard they got a new flat-screen TV").

     