I’m saving this here to come back to because I’m very interested in this theme.

Call for Workshop Participation
Algorithms on the Shop Floor: Data-driven Technologies in Organizational Context

Deadline for applications: April 19, 2019
Workshop date: June 14, 2019 in NYC at Data & Society (http://datasociety.net/)
Application link: http://datasociety.net/algorithms-on-the-shop-floor
For questions, email events@datasociety.net

On June 14, 2019, Data & Society will host a workshop in NYC on the intersection of technology and organizational theory and practice. The workshop arises from an increasing need to understand how automated, algorithmic, AI, or otherwise data-driven technologies are being integrated into organizational contexts and processes.

The workshop will convene researchers who study how new technologies are introduced, incorporated, resisted, or maintained within organized groups, and the changes this integration brings. Such changes might occur in processes (workflows, tasks, “re-skilling,” “changed” skills, augmentation), in structures (roles, jurisdictions, authority), or in other key sociological issues (such as power, culture, diversity, expertise, risk, rationality, legitimacy, and solidarity). In a world where new technologies are being integrated into organizations of all sizes and types, how can we make sense of what gets lost, what gets gained, and what gets changed? Many of these questions are long-standing themes in organizational studies and ethnographies examining the social complexities of working on the machine shop floor, to which the title of our workshop alludes. Still, how do such integrations provoke new shifts in power relations and social values?

The range of field sites and research questions appropriate for this event is wide. The only requirements for participation are that: 1) you must be a researcher (with or without an academic affiliation); 2) your research questions must address a dimension of socio-technical practice in the context of a formalized organization.

Relevant topics for this workshop might include:
How do formations of power, hierarchy, and discretionary decision-making change when automated and AI technologies are introduced?
How are issues of diversity and equity brought into and reconstituted when new technologies are introduced?
How does the integration of new technologies into organizations intersect with issues of access, inclusion, and disability?
What are sites of unintended use, resistance, or deviance with respect to technology in organizations?
How are new forms of expertise, skill, and training emerging to meet demands of using new technologies in the workplace?
How are new or existing labor organizations confronting the perceived threat of AI?
What are organizational formations or organizational processes that build on digital technologies to advance equity and social justice?
What lessons does the history of organizational theory and practice hold for contemporary dynamics?
How are bureaucratic forms of control (such as auditing or impact policy) integrated into the development of technology?

These examples are by no means exhaustive, but are intended to provide a flavor of the kinds of relevant research questions. We are especially interested in strange outliers and unexpected studies.

Key Dates
* Application Deadline: April 19, 2019
* Selection Decisions: May 1, 2019
* Full Paper Deadline: May 28, 2019
* Workshop: June 14, 2019

Participation Requirements
The structure of the Data & Society Workshop series is designed to maximize scholarly thinking about the evolving and societally important issues surrounding data-driven technologies. Participants will be asked to read three full papers in advance of the event and prepare comments for intensive discussion. Some participants will be asked to be discussants of papers, where they will lead the conversation and engage the room. Authors will not present their work, but rather participate in critical discussion with the assembled group about the paper, with explicit intent of making the work stronger and more interdisciplinary.

All participants are required to read three papers in advance of the event and come ready to offer constructively critical feedback. We want researchers to constructively spar with and challenge one another to strengthen ourselves across the board. This is not an event for passive attendance, but an opportunity to engage each other substantively.

This event is first and foremost an opportunity to collectively think and help construct a field. Although this event is designed to bring together 30-40 researchers, only 12 papers will be workshopped. Yet, everyone who attends is expected to be an active participant and contribute to rich conversations. We believe that it is through active engagement with other scholars around research that new insights can emerge. In other words, this event is designed to be the kind of intense intellectual engagement that made you fall in love with being a researcher in the first place.
Format

The day will be organized into three time slots, each 75 minutes long. One paper will be workshopped in each session. Multiple sessions will run in parallel so there will be a total of ~12 papers, but each participant will only be responsible for reading and engaging with 3. Within each group, a discussant will open with a critique of the paper before inviting participants to share their feedback. (If you participate in this event, you may be asked to be a discussant on one paper.) All are expected to share feedback, with author response towards the end of the session.

Logistics

The event will take place on June 14, 2019, and will run from 8:45am to 6pm. Paper sessions will run until 4:15pm; afterwards, there will be a reception for all participants.
All meals will be covered during the event. Unfortunately, we have limited funding to support travel for this workshop; however, we’re happy to provide a formal invitation for participation/“speaking” to anyone who may need it to secure their own funding.
Application Process (Deadline: April 19)

For this event, we are looking to bring together researchers from diverse disciplines studying technology in organizations. This can include management, organization studies, communications, information studies, computer-supported cooperative work, computer-human interaction, science and technology studies, ethics, labor, law, policy, anthropology, and design research. As a result, attendees should expect to engage with scholars who are outside of their field of study. We ask that attendees think of the Data & Society Workshop series as an opportunity to engage with a broader cross-disciplinary field, and to strengthen both relationships and research through participation in the workshop.

Because the paper submission date is only a few weeks after the application deadline, you should only apply as an author if you have a paper that you’re actively writing right now and will be ready to share a draft with others by May 28, 2019. If you aren’t already working on this paper, you probably aren’t in a good position to workshop it at this event. Appropriate papers may be a work-in-progress book chapter or a journal article. (Full-length books are a bit too much for this event, so if you’re writing a book, think about the chapter that you most want to get feedback on.)

To apply as an author, please submit the following:
* Name, affiliation, title, email address, discipline.
* Big research question you’re seeking to answer with your research.
* Paper title + 100-250 word abstract.
* The current half-baked, thick-outline, total mess of the paper.**
** We are asking for the disaster of a paper to understand where you are with the piece now, and the arguments you intend to make, so that we can appropriately match you to a discussant. We won’t share this version with anyone (we promise).
Please note: All co-authors for papers must apply separately. If your co-author doesn’t apply, we will assume that s/he is not interested in attending the workshop. It will be hard to add additional participants later, so make sure your co-authors apply if they want to attend.

To apply as a participant/discussant, please submit the following:
* Name, affiliation, title, email address, discipline.
* Big research question you’re seeking to answer with your research.
* 100-250 word description of your research.

Application link: http://datasociety.net/algorithms-on-the-shop-floor
For questions, email events@datasociety.net

My notes on Caplan, R., & Boyd, D. (2018). Isomorphism through algorithms: Institutional dependencies in the case of Facebook. Big Data & Society, 5(1), 2053951718757253.

Are data-driven technologies leading organisations to take on shared characteristics? This is the fascinating question addressed in this paper by Robyn Caplan and danah boyd, which they begin with the example of news media. The popularity of social media platforms as intermediaries has forced many news media producers to change their operations, increasingly producing with a view to popularity on these platforms. As they put it, “these platforms have upended the organizational practices of news-producing platforms, altering how both the newsroom and individual journalists operate” (2). They use the concept of isomorphism to understand how “algorithms structure disparate businesses and aims into an organizational field, leading them to change their goals and adopt new practices” (2). This is a process of homogenisation, as organisations reconstruct themselves into a field orientated around the assumptions embedded into the mediating platform. The ensuing ambiguity has regulatory consequences, as social media platforms are not straightforward media actors, but nor are they mere intermediaries. By theorising algorithmic mediation as akin to bureaucratisation, it becomes easier to identify the precise character of the role of platforms within it. It also makes clear the continuities with earlier isomorphic processes, for instance as corporate software platforms introduced common features to organisations.

The roots of this connection are deep. They argue that “algorithms that serve to pre-process, categorize, and classify individuals and organizations should be viewed as extensions of bureaucratic tools such as forms, that have been associated with the state in the past” (3). Software like Lotus 1-2-3 and Microsoft Office restructured business activity through the affordances they offered to digitalise bureaucratic processes, and algorithmic technologies should be seen as a further extension of this process. The neutrality which animated the promise of bureaucracy is also often expressed in the belief that algorithmic judgement will negate the role of subjectivity and bias in decision-making processes. This is obscured by the familiar black box of the algorithm but also by the mythology of its uniqueness, seeing it as something distinct from previous organisational processes. However, if we see algorithms as organisational phenomena then the problem comes to look quite different: simultaneously more straightforward but also more challenging, because the problems will likely spiral outwards across dependent organisations.

They use DiMaggio and Powell’s concept of isomorphism, which considers how a common environment can lead otherwise different units of a population facing that environment to come to resemble one another. For organisations this occurs through one organisation becoming dependent on another organisation, with the expected degree of resemblance tracking the degree of that dependence. For instance, in the case of Facebook’s newsfeed, the concept of what is ‘relevant’ has been redefined by the vast size of the audience whose access is mediated through this mechanism. The dependence of the news media on that mechanism means they come to reproduce its characteristics, increasingly operating with a view towards metrics like clicks, likes and shares. The early winners in the Facebook ecosystem were those publishers like Buzzfeed and Upworthy who “subsumed their own organizational practices to the logic of Facebook’s algorithms” (5). But Facebook’s attempts to modulate this mechanism in order to produce what they deemed better quality results inevitably lead the actors dependent upon it to make adaptive changes in response to these modulations. Mimesis thrives in this environment, as they explain on pg 6-7:

“Changes stemming from coercive forces, especially when frequent, lead to an environment of uncertainty that prompts dependent organizations to learn from other dependent organizations that have successfully conformed to the structuring mechanisms. This process of ‘‘mimesis,’’ or imitating models for success, is another process DiMaggio and Powell (1983: 151) argue will induce similarity across an organizational field. In this sense, the dominant organization’s incentives or goals become embedded across an industry through the borrowing of practices that lead to success over the network. In the case of Facebook, this was seen in the adoption of data-driven metrics and analytics into newsrooms, as well as the growth of a new set of intermediaries that were fed directly by the Facebook API, whose role it was to analyze and communicate Facebook metrics back to publishers”

A further ecosystem of intermediaries thrives under these circumstances, as new players emerge who help the firms concerned address their common problems. These responses to uncertainty are driven by a concern to “demonstrate to others that they are working to change their practices to be in-line with those of the dominant organization“ (7), as well as by increasing possibilities for success. The discussion of professionalisation is really important for my interests. The roles themselves changed as a result of isomorphism, with normative pressure to enact new functions and perform new skills which contribute to the success of the organisation. This is my concern about the institutionalisation of social media within higher education. There’s a lot here which I’m going to need to go back to and I think it’s crucial for my developing project on the digital university.

This looks like a very interesting panel:

We are looking for a few additional people who might be interested in contributing to an AoIR panel exploring critical questions and issues surrounding algorithmic agency, power and publics.

Researchers and media commentators alike are seemingly fascinated with the magic-like and opaque properties of algorithms. Algorithms are touted as responsible for, or implicated in, a range of diverse outcomes and opportunities – from the mundane to the transformative – for individuals, corporations and communities.

Questions around how to critically frame and understand algorithmic agency in contemporary life and where and how to situate questions about power and accountability are raised. This panel is interested in addressing and reframing some of these issues, including the challenges in locating agency in the first place, the politics of making agential claims, and the possible social, political and ethical implications of algorithmic agency (however defined) within and towards publics.

If you are interested in participating please send a 1200 word abstract following the AoIR template to Michele (m.willson@curtin.edu.au) and Taina (wfg568@hum.ku.dk) by the 25th February outlining your specific contribution to a discussion of algorithmic agency, power and publics. Short queries about the suitability of topic can be sent to either one of us before that date.

Those contributors whose topic fits the panel mix will be contacted by 27th Feb (given the impending deadlines, it is a tight turnaround). Depending on the range and mix of submissions, we may also explore a possible special issue publication. Please only submit if your attendance at the conference is a likely outcome.

Apologies for this very short notice, we tried sending this message out a week ago but it somehow got stuck along the way.

From One Market Under God, by Thomas Frank, loc 2230:

For all the revulsion expressed by books like Liar’s Poker and Barbarians at the Gate, the dominant note was starstruck wonderment at these “masters of the universe,” at their millions and their manses, at their Gulfstream jets and Mercedes cars, at the high quality of the sex and luxuries they enjoyed. Occasional digressions to consider those shafted by the pros served only to heighten this sense, to establish just how satisfying it was to bring misfortune to some dope on the phone. The more monstrous the manipulation the merrier.

This was written almost two decades ago but the trend has only intensified since then. As someone who has spent a lot of time reading these books, I’ve become curious as to what exactly the appeal is. They’re the kind of thing I inevitably buy when bored in airport and train station bookshops, before devouring them in a couple of journeys and feeling vaguely guilty afterwards.

I wonder if these representations of financiers make finance itself more tractable. Narratives about individuals give shape to diffuse systems which influence all aspects of our lives in spite of their distance and abstraction. What has struck me as odd is the cultural prominence of the banker-hedonist at precisely the time when such figures are in decline. Could there be something comforting about the wolf of Wall Street as we enter an era of algorithmic trading and flash crashes?

In our discussion of metrics systems, it’s easy to treat subjectivity as a cipher, regarding people as passively moulded by algorithms or blindly governed by the incentives that operate through the institutionalisation of the metrics. My objection to the former is not the claim that people are shaped by metrics, but rather the assumption that this process is basically passive. My interest is in how metrics come to matter to us. How are people shaped over time? How do their biographically accumulating dispositions and concerns influence the actions they take over time? How do these feed back into the metrics system and the organisations within which they are institutionalised?

The fictional portrayals that are starting to emerge of this – novels like Super Sad True Love Story, The Circle and Whiskey Tango Foxtrot, films like Nerve – often struggle to represent this engaged subjectivity because the imperatives of effective storytelling militate against it. What we really need is a novel or film that explores metricisation through the internal monologue of what I imagine would turn out to be an unreliable narrator.

From Misbehaving, by Richard Thaler, pg 134. Social norms hindered it in this instance. Why could the same not be true of online retail?

The CEO of Coca-Cola also discovered the hard way that violating the norms of fairness can backfire. Douglas Ivester, aged fifty-two, appeared to be on his way to the job of chairman when he abruptly resigned after a push from several board members including legendary investor Warren Buffett. Although several actions contributed to his downfall, one speech in Brazil attracted the most attention. At a press conference, Mr. Ivester was asked about tests Coke was running with vending machines that could change price dynamically. He replied: “Coca-Cola is a product whose utility varies from moment to moment. In a final summer championship, when people meet in a stadium to have fun, the utility of a cold Coca-Cola is very high. So it is fair that it should be more expensive. The machine will simply make this process automatic.” As the Wall Street Journal stated in a story about his downfall, Mr. Ivester seemed to have a “tin ear.” An editorial cartoon captured the feelings of the general public perfectly with an image of a customer walking away from a Coke vending machine with a can in his hand, looking back to see an arm reaching out of the machine and picking his pocket.

An absolutely fascinating account of developments in the newsfeed algorithm at Facebook since its introduction:

Adam Mosseri, Facebook’s 32-year-old director of product for news feed, is Alison’s less technical counterpart—a “fuzzie” rather than a “techie,” in Silicon Valley parlance. He traffics in problems and generalities, where Alison deals in solutions and specifics. He’s the news feed’s resident philosopher.

The push to humanize the news feed’s inputs and outputs began under Mosseri’s predecessor, Will Cathcart. (I wrote about several of those innovations here.) Cathcart started by gathering more subtle forms of behavioral data: not just whether someone clicked, but how long he spent reading a story once he clicked on it; not just whether he liked it, but whether he liked it before or after reading. For instance: Liking a post before you’ve read it, Facebook learned, corresponds much more weakly to your actual sentiment than liking it afterward.

After taking the reins in late 2013, Mosseri’s big initiative was to set up what Facebook calls its “feed quality panel.” It began in summer 2014 as a group of several hundred people in Knoxville whom the company paid to come in to an office every day and provide continual, detailed feedback on what they saw in their news feeds. (Their location was, Facebook says, a “historical accident” that grew out of a pilot project in which the company partnered with an unnamed third-party subcontractor.) Mosseri and his team didn’t just study their behavior. They also asked them questions to try to get at why they liked or didn’t like a given post, how much they liked it, and what they would have preferred to see instead. “They actually write a little paragraph about every story in their news feed,” notes Greg Marra, product manager for the news feed ranking team. (This is the group that’s becoming Facebook’s equivalent of Nielsen families.)

“The question was, ‘What might we be missing?’ ” Mosseri says. “‘Do we have any blind spots?’” For instance, he adds, “We know there are some things you see in your feed that you loved and you were excited about, but you didn’t actually interact with.” Without a way to measure that, the algorithm would devalue such posts in favor of others that lend themselves more naturally to likes and clicks. But what signal could Facebook use to capture that information?

http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.single.html

An interesting case discussed on pg 85 of Unforbidden Pleasures, by Adam Phillips:

We may live in the aftermath of the myth of the Fall, and the even longer aftermath of the myth of Oedipus, but the first traffic lights were invented in the United States after the First World War. The traditional mutual accommodation travellers had been making to each other on their bikes and cars and carts was replaced by a set of lights. ‘Its purpose,’ the anthropologist James C. Scott writes, ‘was to prevent accidents by imposing an engineered scheme of coordination.’ Partly through pressure of numbers and the proliferation of vehicles, and partly through the new scientific and bureaucratic fantasies of efficiency and productivity, familiar forms of cooperation gave way to a new, technologically implemented, set of rules. People’s practical judgement was delegated to a red light. They had known when to stop, but now they were being told when to stop.

The institutionalisation of algorithmic evaluation should be seen as continuous with this much longer-standing process of using technology to impose patterns upon, often entirely substituting for, situated judgement. It’s a new means towards a long-standing end, rather than something radically discontinuous with what has come before.

In fact recognising this poses an apparent challenge for the argument I’ve been developing about the increasing ‘cognitive load’ of digital capitalism. Surely systems which support or substitute for reflexivity actually decrease the burden upon subjects? My argument will be that the experienced necessity for these systems develops in relation to their unintended consequences towards ratcheting up the burden in other ways. But it’s currently rather light on detail.

From InfoGlut, by Mark Andrejevic, loc 601:

The fictional portrayals envision a contradictory world in which individual actions can be predicted with certainty and effectively thwarted. They weave oracular fantasies about perfect foresight. Predictive analytics, by contrast, posits a world in which probabilities can be measured and resources allocated accordingly. Because forecasts are probabilistic, they never attain the type of certitude that would, for example, justify arresting someone for a crime he or she has not yet committed. Rather, they distribute probabilities across populations and scenarios.

The most pressing question this raises for me concerns the micro-sociology of algorithmic authority. To what extent are the algorithms black boxed by those ‘on the ground’? Does awareness of the probabilistic character of the forecast drop out of the picture in the social situations in which actors are intervening on the basis of these forecasts? How much implicit authority derives from the fact that ‘the algorithm says so’, even if those designing the underlying system would stress that the forecasts are probabilistic? How does this vary between different groups? It’s easy to imagine securely embedded professionals (e.g. doctors) treating these forecasts with care, not least of all because many already do so as a routine part of working life, but what if algorithmic authority is a corollary to deliberate deskilling? What if interventions are made by those who are precariously employed? As much as I dislike the show in question, it’s hard not to think of this when reflecting on these issues:


These are empirical questions which are hugely important for the study of algorithmic authority. I’m planning to start looking for ethnographic and ethnomethodological literature which can shed light on them, even if not directly addressed to questions of algorithms. Any reading suggestions are, as ever, much appreciated. 

I just had my card stopped by HSBC for the second time in a month and the seventh or eighth (I’ve genuinely lost count) time this year. As with previous occurrences, I spent twenty minutes on hold and went through a tedious security check process to confirm that my last ten transactions were indeed my transactions. I don’t want to leave my bank. I’m both lazy and very busy. It’s a massive hassle, the prospect of which fills me with no enthusiasm. But their fraud detection system is absurd. I’m told repeatedly that they have identified irregular transactions – in fact I am told this regularly, and they are (roughly) the same transactions in each instance.

One upside to this frustration is that it’s left me keen to read about data analysis techniques for fraud detection (the brief Wikipedia page is interesting) in order to understand why HSBC’s are so idiotic. What seems IOTTMC (intuitively obvious to the most casual observer) is that routine transactions shouldn’t be flagged as problematic: it’s desirable to build a profile of routine activity based on the available data in order to detect transactions that deviate from that routine. As far as I can tell, HSBC instead uses a series of red flag events in a rather mechanical fashion, presumably opting for a lowest common denominator approach because it’s cheaper in the long run.
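To make the contrast concrete, here’s a toy sketch in Python of the profile-based approach I have in mind – entirely hypothetical, and obviously nothing like a real bank’s system: build a per-customer baseline of routine activity, then flag only transactions that deviate from it (an unfamiliar merchant combined with an amount far outside typical spend), rather than applying fixed red-flag rules to everyone.

```python
from collections import Counter
from statistics import mean, stdev

def build_profile(history):
    """Summarise a customer's routine: merchants seen and typical spend."""
    amounts = [t["amount"] for t in history]
    return {
        "merchants": Counter(t["merchant"] for t in history),
        "mean": mean(amounts),
        "stdev": stdev(amounts),
    }

def is_suspicious(txn, profile, z_threshold=3.0):
    """Flag only deviations from the customer's own routine: an unseen
    merchant AND an amount far (in z-score terms) from their usual spend."""
    novel_merchant = profile["merchants"][txn["merchant"]] == 0
    z = abs(txn["amount"] - profile["mean"]) / (profile["stdev"] or 1.0)
    return novel_merchant and z > z_threshold

# A routine of small recurring payments (e.g. a monthly Mubi subscription
# and clustered iTunes micro-payments).
history = [
    {"merchant": "Mubi", "amount": 9.99},
    {"merchant": "Mubi", "amount": 9.99},
    {"merchant": "iTunes", "amount": 1.29},
    {"merchant": "iTunes", "amount": 4.99},
    {"merchant": "Grocer", "amount": 42.50},
    {"merchant": "Grocer", "amount": 38.10},
]
profile = build_profile(history)

# A repeat Mubi payment matches the routine and is not flagged...
print(is_suspicious({"merchant": "Mubi", "amount": 9.99}, profile))        # False
# ...while a large payment to a never-seen merchant is.
print(is_suspicious({"merchant": "UnknownCo", "amount": 2000.0}, profile))  # True
```

On a baseline like this, a subscription paid every month for months would, by definition, be part of the routine and never trip the alarm.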

They really dislike the film service Mubi for some reason (despite the fact I’ve been a subscriber for months) and generally find international micro-payments suspicious, all the more so when they are clustered. I can see why this might be a good idea on paper but in practice it makes life exceptionally irritating if you’re someone who uses iTunes (etc) a lot. What’s even more irritating is my inability to get any comment from HSBC beyond “I’m sorry, sir, I realise that must be frustrating”. So I’m leaving HSBC, entirely because of their idiotic fraud detection system, though thanking the bank for helping sensitise me to a causal power of algorithms that I might otherwise have failed to recognise: inciting people to take action by routinely pissing them off.