A machinery for producing rationalisations

I found this passage from Virginia Eubanks's Automating Inequality extremely powerful. On pp. 121-122 she explains how machine learning systems can operate as a form of triage, sorting people in order to distribute scarce resources in a seemingly more rational fashion:

COINTELPRO (the COunter INTELligence PROgram of the FBI), for example, focused on civil rights activists for both their race and their political activism. But wiretaps, photography, tailing, and other techniques of old surveillance were individualized and focused. The target had to be identified before the watcher could surveil. In contrast, in new data-based surveillance, the target often emerges from the data. The targeting comes after the data collection, not before. Massive amounts of information are collected on a wide variety of individuals and groups. Then, the data is mined, analyzed, and searched in order to identify possible targets for more thorough scrutiny. Sometimes this involves old-school, in-person watching and tracking. But increasingly, it only requires finer sifting of data that already exists. If the old surveillance was an eye in the sky, the new surveillance is a spider in a digital web, testing each connected strand for suspicious vibrations. Surveillance is not only a means of watching or tracking, it is also a mechanism for social sorting.

Coordinated entry collects data tied to individual behavior, assesses vulnerability, and assigns different interventions based on that valuation. “Coordinated entry is triage,” said Molly Rysman, the Housing and Homeless deputy for LA’s Third District. “All of us have thought about it like a natural disaster. We have extraordinary need and can’t meet all of that need at once. So you’ve got to figure out: How do we get folks who are going to bleed to death access to a doctor, and folks who have the flu to wait? It’s unfortunate to have to do that, but it is the reality of what we’re stuck with.”

In his prescient 1993 book, The Panoptic Sort, communication scholar Oscar Gandy of the University of Pennsylvania also suggests that automated sorting of digital personal information is a kind of triage.
But he pushes further, pointing out that the term is derived from the French trier, which means to pick over, cull, or grade marketable produce. “Although some metaphors speak for themselves, let me be clear,” he writes. In digital triage, “individuals and groups of people are being sorted according to their presumed economic or political value. The poor, especially poor people of color, are increasingly being treated as broken material or damaged goods to be discarded.”

But as she goes on to write on pg 122, those systems support moral judgements which can operate as rationalisations for the people we don't help and the actions we don't take:

But if homelessness is a human tragedy created by policy decisions and professional middle-class apathy, coordinated entry allows us to distance ourselves from the human impacts of our choice to not act decisively. As a system of moral valuation, coordinated entry is a machine for producing rationalization, for helping us convince ourselves that only the most deserving people are getting help. Those judged “too risky” are coded for criminalization. Those who fall through the cracks face prisons, institutions, or death.

One response to “A machinery for producing rationalisations”

  1. Don’t people do that anyway, without the assistance of machines?

    Couldn’t we even say that individual cultural groups are themselves a machine that sorts out who is worthy and where they belong in a hierarchical organization of the group/not group?

    It’s interesting to me how some types of analysis speak of things as if they’re something new, just because they’re appearing differently.

    It’s like capitalism creating an antagonist that itself is capitalistic.

    For example, war. Does it really matter, or does it have any significance at all, if two groups of, say, 500,000 people get in a war and 20,000 of them get killed, compared with two groups of 2 million people where the same percentage die?

    Sometimes I wonder what the implicit morality or implicit messages in some of these analyses are.

    I’m not sure what is so horrible about a machine designating and enforcing, through intellectual capital, which individuals are not valued or valuable, beyond the fact that human beings do that all the time without machines, that is, deciding who is valuable and who is not.

    Sometimes I wonder what the world would look like if everyone was valued equally and there was no war.

    The question itself seems never to be asked, because the answer then goes back to emphasizing how any particular project is almost useless, generated basically just so the individual can attempt to place some self in a society that is itself a machine valuing and devaluing individuals and groups in particular ways.

    And I’m not being nihilistic nor pessimistic, but when I read some of these kinds of analyses, what strikes me is what is implicit in the analysis that no one wants to talk about.

    So I guess I’m kind of asking you, since you posted the segment, what do you think is being left out? What is the implicit agenda, the endgame, what is the point of this segment? What, for example, are you trying to tell me?
