This looks like a very interesting panel:

We are looking for a few additional people who might be interested in contributing to an AoIR panel exploring critical questions and issues surrounding algorithmic agency, power and publics.

Researchers and media commentators alike are seemingly fascinated with the magic-like and opaque properties of algorithms. Algorithms are touted as responsible for, or implicated in, a range of diverse outcomes and opportunities – from the mundane to the transformative – for individuals, corporations and communities.

Questions are increasingly being raised about how to critically frame and understand algorithmic agency in contemporary life, and about where and how to situate questions of power and accountability. This panel is interested in addressing and reframing some of these issues, including the challenges of locating agency in the first place, the politics of making agential claims, and the possible social, political and ethical implications of algorithmic agency (however defined) within and towards publics.

If you are interested in participating, please send a 1200-word abstract following the AoIR template to Michele (m.willson@curtin.edu.au) and Taina (wfg568@hum.ku.dk) by the 25th of February, outlining your specific contribution to a discussion of algorithmic agency, power and publics. Short queries about the suitability of a topic can be sent to either one of us before that date.

Those contributors whose topics fit the panel mix will be contacted by the 27th of February (given the impending deadlines, it is a tight turnaround). Depending on the range and mix of submissions, we may also explore a possible special issue publication. Please only submit if your attendance at the conference is a likely outcome.

Apologies for the very short notice; we tried sending this message out a week ago, but it somehow got stuck along the way.

From One Market Under God, by Thomas Frank, loc 2230:

For all the revulsion expressed by books like Liar’s Poker and Barbarians at the Gate, the dominant note was starstruck wonderment at these “masters of the universe,” at their millions and their manses, at their Gulfstream jets and Mercedes cars, at the high quality of the sex and luxuries they enjoyed. Occasional digressions to consider those shafted by the pros served only to heighten this sense, to establish just how satisfying it was to bring misfortune to some dope on the phone. The more monstrous the manipulation the merrier.

This was written almost two decades ago, but the trend has only intensified since then. As someone who has spent a lot of time reading these books, I've become curious about what exactly the appeal is. They're the kind of thing I inevitably buy when bored in airport and train station bookshops, before devouring them in a couple of journeys and feeling vaguely guilty afterwards.

I wonder if these representations of financiers make finance itself more tractable. Narratives about individuals give shape to diffuse systems which influence all aspects of our lives in spite of their distance and abstraction. What has struck me as odd is the cultural prominence of the banker-hedonist at precisely the time when such figures are in decline. Could there be something comforting about the wolf of Wall Street as we enter an era of algorithmic trading and flash crashes?

In our discussion of metrics systems, it's easy to treat subjectivity as a cipher, regarding people as passively moulded by algorithms or blindly governed by the incentives that operate through the institutionalisation of metrics. My objection to the former is not the claim that people are shaped by metrics, but rather the assumption that this process is basically passive. My interest is in how metrics come to matter to us. How are people shaped over time? How do their biographically accumulating dispositions and concerns influence the actions they take? How do these feed back into the metrics system and the organisations within which it is institutionalised?

The fictional portrayals of this that are starting to emerge – novels like Super Sad True Love Story, The Circle and Whiskey Tango Foxtrot, and films like Nerve – often struggle to represent this engaged subjectivity, because the imperatives of effective storytelling militate against it. What we really need is a novel or film that explores metricisation through the internal monologue of what I imagine would turn out to be an unreliable narrator.

From Misbehaving, by Richard Thaler, pg 134. Social norms hindered dynamic pricing in this instance. Why could the same not be true of online retail?

The CEO of Coca-Cola also discovered the hard way that violating the norms of fairness can backfire. Douglas Ivester, aged fifty-two, appeared to be on his way to the job of chairman when he abruptly resigned after a push from several board members including legendary investor Warren Buffett. Although several actions contributed to his downfall, one speech in Brazil attracted the most attention. At a press conference, Mr. Ivester was asked about tests Coke was running with vending machines that could change price dynamically. He replied: “Coca-Cola is a product whose utility varies from moment to moment. In a final summer championship, when people meet in a stadium to have fun, the utility of a cold Coca-Cola is very high. So it is fair that it should be more expensive. The machine will simply make this process automatic.” As the Wall Street Journal stated in a story about his downfall, Mr. Ivester seemed to have a “tin ear.” An editorial cartoon captured the feelings of the general public perfectly with an image of a customer walking away from a Coke vending machine with a can in his hand, looking back to see an arm reaching out of the machine and picking his pocket.

An absolutely fascinating account of developments in the news feed algorithm at Facebook since its introduction:

Adam Mosseri, Facebook’s 32-year-old director of product for news feed, is Alison’s less technical counterpart—a “fuzzie” rather than a “techie,” in Silicon Valley parlance. He traffics in problems and generalities, where Alison deals in solutions and specifics. He’s the news feed’s resident philosopher.

The push to humanize the news feed’s inputs and outputs began under Mosseri’s predecessor, Will Cathcart. (I wrote about several of those innovations here.) Cathcart started by gathering more subtle forms of behavioral data: not just whether someone clicked, but how long he spent reading a story once he clicked on it; not just whether he liked it, but whether he liked it before or after reading. For instance: Liking a post before you’ve read it, Facebook learned, corresponds much more weakly to your actual sentiment than liking it afterward.

After taking the reins in late 2013, Mosseri’s big initiative was to set up what Facebook calls its “feed quality panel.” It began in summer 2014 as a group of several hundred people in Knoxville whom the company paid to come in to an office every day and provide continual, detailed feedback on what they saw in their news feeds. (Their location was, Facebook says, a “historical accident” that grew out of a pilot project in which the company partnered with an unnamed third-party subcontractor.) Mosseri and his team didn’t just study their behavior. They also asked them questions to try to get at why they liked or didn’t like a given post, how much they liked it, and what they would have preferred to see instead. “They actually write a little paragraph about every story in their news feed,” notes Greg Marra, product manager for the news feed ranking team. (This is the group that’s becoming Facebook’s equivalent of Nielsen families.)

“The question was, ‘What might we be missing?’ ” Mosseri says. “‘Do we have any blind spots?’” For instance, he adds, “We know there are some things you see in your feed that you loved and you were excited about, but you didn’t actually interact with.” Without a way to measure that, the algorithm would devalue such posts in favor of others that lend themselves more naturally to likes and clicks. But what signal could Facebook use to capture that information?

http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.single.html
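
Purely to make the weighting idea in that passage concrete, here is a toy scoring function; the signal names and weights are entirely my own invention, not anything Facebook has disclosed:

```python
# Toy illustration of weighting behavioural signals; the names and weights
# are invented for illustration, not Facebook's actual model.

def engagement_score(clicked: bool, seconds_read: float,
                     liked: bool, liked_after_reading: bool) -> float:
    """Score one story, counting a like given after reading as a much
    stronger indicator of sentiment than a like given before reading."""
    score = 0.0
    if clicked:
        score += 1.0
        score += min(seconds_read, 120) / 120  # dwell time, capped at two minutes
    if liked:
        score += 2.0 if liked_after_reading else 0.5
    return score

# A story read for 90 seconds and then liked outscores a reflexive, unread like.
print(engagement_score(True, 90, True, True))    # 3.75
print(engagement_score(False, 0, True, False))   # 0.5
```

The interesting design question, which the article gestures at, is precisely which signals get a weight at all: anything the system cannot measure is implicitly weighted at zero.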

An interesting case discussed on pg 85 of Unforbidden Pleasures, by Adam Phillips:

We may live in the aftermath of the myth of the Fall, and the even longer aftermath of the myth of Oedipus, but the first traffic lights were invented in the United States after the First World War. The traditional mutual accommodation travellers had been making to each other on their bikes and cars and carts was replaced by a set of lights. ‘Its purpose,’ the anthropologist James C. Scott writes, ‘was to prevent accidents by imposing an engineered scheme of coordination.’ Partly through pressure of numbers and the proliferation of vehicles, and partly through the new scientific and bureaucratic fantasies of efficiency and productivity, familiar forms of cooperation gave way to a new, technologically implemented, set of rules. People’s practical judgement was delegated to a red light. They had known when to stop, but now they were being told when to stop.

The institutionalisation of algorithmic evaluation should be seen as continuous with this much longer-standing process of using technology to impose patterns upon, and often entirely substitute for, situated judgement. It's a new means towards a long-standing end, rather than something radically discontinuous with what has come before.

In fact, recognising this poses an apparent challenge for the argument I've been developing about the increasing 'cognitive load' of digital capitalism. Surely systems which support or substitute for reflexivity actually decrease the burden upon subjects? My argument will be that the experienced necessity for these systems develops in relation to their unintended consequence of ratcheting up the burden in other ways. But it's currently rather light on detail.

From InfoGlut, by Mark Andrejevic, loc 601:

The fictional portrayals envision a contradictory world in which individual actions can be predicted with certainty and effectively thwarted. They weave oracular fantasies about perfect foresight. Predictive analytics, by contrast, posits a world in which probabilities can be measured and resources allocated accordingly. Because forecasts are probabilistic, they never attain the type of certitude that would, for example, justify arresting someone for a crime he or she has not yet committed. Rather, they distribute probabilities across populations and scenarios.

The most pressing question this raises for me concerns the micro-sociology of algorithmic authority. To what extent are the algorithms black-boxed by those 'on the ground'? Does awareness of the probabilistic character of the forecast drop out of the picture in the social situations in which actors are intervening on the basis of these forecasts? How much implicit authority derives from the fact that 'the algorithm says so', even if those designing the underlying system would stress that the forecasts are probabilistic? How does this vary between different groups? It's easy to imagine securely embedded professionals (e.g. doctors) treating these forecasts with care, not least of all because many already do so as a routine part of working life, but what if algorithmic authority is a corollary of deliberate deskilling? What if interventions are made by those who are precariously employed? As much as I dislike the show in question, it's hard not to think of this when reflecting on these issues:


These are empirical questions which are hugely important for the study of algorithmic authority. I’m planning to start looking for ethnographic and ethnomethodological literature which can shed light on them, even if not directly addressed to questions of algorithms. Any reading suggestions are, as ever, much appreciated. 
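
In the meantime, here is a deliberately crude sketch (entirely my own construction, not anything from Andrejevic) of how the probability can drop out of the picture at the point of use:

```python
# Sketch of a probabilistic forecast collapsed into a binary alert at the
# point of use; the threshold and field names are invented for illustration.

ALERT_THRESHOLD = 0.7  # invented operational cut-off

def risk_forecast(features: dict) -> float:
    """Stands in for a real predictive model returning a probability."""
    return features["risk_score"]

def frontline_view(features: dict) -> str:
    """What the person 'on the ground' actually sees."""
    p = risk_forecast(features)
    # The probability is discarded here: 0.71 and 0.99 become identical,
    # and all that survives downstream is 'the algorithm says so'.
    return "FLAG" if p >= ALERT_THRESHOLD else "CLEAR"

print(frontline_view({"risk_score": 0.71}))  # FLAG
print(frontline_view({"risk_score": 0.99}))  # FLAG
```

Whether anything like this thresholding actually happens in a given organisation is, of course, exactly the empirical question.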

I just had my card stopped by HSBC for the second time in a month, and the seventh or eighth time this year (I've genuinely lost count). As with previous occurrences, I spent twenty minutes on hold and went through a tedious security-check process to confirm that my last ten transactions were indeed my transactions. I don't want to leave my bank. I'm both lazy and very busy. It's a massive hassle, the prospect of which fills me with no enthusiasm. But their fraud detection system is absurd. I'm told repeatedly that they have identified irregular transactions – in fact, I am told this regularly, and they are (roughly) the same transactions in each instance.

One upside to this frustration is that it's left me keen to read about data analysis techniques for fraud detection (the brief Wikipedia page is interesting), in order to understand why HSBC's are so idiotic. What seems IOTTMC (intuitively obvious to the most casual observer) is that routine transactions shouldn't be flagged as problematic: it's desirable to build a profile of routine activity from the available data, in order to detect transactions that deviate from that routine. As far as I can tell, HSBC instead uses a series of red-flag events in a rather mechanical fashion, presumably opting for a lowest-common-denominator approach because it's cheaper in the long run.
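
To sketch the contrast I have in mind, with field names and thresholds invented purely for illustration, a profile-based detector might score each transaction by its deviation from the account's own routine:

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical sketch of profile-based detection; every field name and
# threshold here is invented for illustration.

class AccountProfile:
    """Accumulates one account's routine activity from past transactions."""

    def __init__(self):
        self.amounts = defaultdict(list)  # merchant -> past amounts
        self.countries = set()            # countries seen before

    def update(self, tx):
        self.amounts[tx["merchant"]].append(tx["amount"])
        self.countries.add(tx["country"])

    def anomaly_score(self, tx) -> float:
        """Higher score = greater deviation from this account's routine."""
        score = 0.0
        past = self.amounts.get(tx["merchant"])
        if not past:
            score += 1.0  # never seen this merchant before
        elif len(past) >= 3:
            mu, sigma = mean(past), stdev(past)
            if sigma > 0 and abs(tx["amount"] - mu) > 3 * sigma:
                score += 1.0  # unusual amount for a known merchant
        if tx["country"] not in self.countries:
            score += 1.0  # first transaction from this country
        return score

# After a few months, a recurring film subscription becomes part of the
# routine, so repeated identical payments stop looking suspicious.
profile = AccountProfile()
for _ in range(6):
    profile.update({"merchant": "FilmService", "amount": 9.99, "country": "GB"})

print(profile.anomaly_score({"merchant": "FilmService", "amount": 9.99, "country": "GB"}))  # 0.0
print(profile.anomaly_score({"merchant": "NewShop", "amount": 500.0, "country": "RU"}))     # 2.0
```

A fixed red-flag system, by contrast, fires on the same clustered international micro-payments every month, however routine they have become.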

They really dislike the film service Mubi for some reason (despite the fact that I've been a subscriber for months) and generally find international micro-payments suspicious, all the more so when they are clustered. I can see why this might be a good idea on paper, but in practice it makes life exceptionally irritating if you're someone who uses iTunes (etc.) a lot. What's even more irritating is my inability to get any comment from HSBC beyond "I'm sorry, sir, I realise that must be frustrating". So I'm leaving HSBC, entirely because of their idiotic fraud detection system, though I thank the bank for helping sensitise me to a causal power of algorithms that I might otherwise have failed to recognise: inciting people to take action by routinely pissing them off.