Tagged: algorithms

  • Mark 7:31 pm on September 24, 2019 Permalink | Reply
    Tags: algorithms, isomorphism, platform intermediaries

    Viral populism: what happens when isomorphism through algorithm hits politics? 

    This is an admirably prescient post from 2014 by BuzzFeed editor Ben Smith about the viral populism which social media has facilitated. It brings a new dimension to political life which eludes the familiar expectations of pundits:

    At some point in the next two years, the pollsters and ad makers who steer American presidential campaigns will be stumped: The nightly tracking polls are showing a dramatic swing in the opinions of the electorate, but neither of two typical factors — huge news or a major advertising buy — can explain it. They will, eventually, realize that the viral, mass conversation about politics on Facebook and other platforms has finally emerged as a third force in the core business of politics, mass persuasion.

    The incentive structure which Buzzfeed, Upworthy, Breitbart and the many other platform intermediaries have adapted themselves to is one which politicians now confront as well:

    What is beginning to dawn on campaigns is that persuasion works differently when it relies on sharing. It is a political truism that people are most likely to believe what their friends and neighbors tell them, a truth that explains everything from sophisticated and earnest door-knocking efforts to malign email-forward whispering campaigns. And the social conversation favors things that generations of politicians have been trained to avoid: spontaneity, surprise, authenticity, humor, raw edge, the occasional human stumble. (Joe Biden!) As mobile becomes increasingly central to the social web, I suspect that more voters in 2016 will be persuaded by a video in their Facebook mobile browsers than by any other medium.

    The terrifying prospect this suggests is that the process we have seen in the media, what Caplan and boyd call isomorphism through algorithm, will be seen in politics as well. With each viral success story, aspirant politicians will be inclined to mimic its strategies immediately. This doesn’t mean that they will become identical to each other, but rather that their strategies will be orientated around the same touchstones: authenticity, spontaneity, emotionality. The closing paragraph immediately made me think of Rory Stewart’s viral success in the Tory leadership contest as a striking example of how unradical this political populism might prove in practice:

    A few modern politicians appear to have a real feel for the raw emotion and, sometimes, (apparent) spontaneity that people will want to share. Elizabeth Warren’s blunt and casual economic 2011 tirade and Ted Cruz’s theatrical confrontations (and even his own low-production-value cell phone videos) are the beginnings of that viral populism for which the social web has opened a real space.

     
  • Mark 9:29 am on April 26, 2019 Permalink | Reply
    Tags: algorithms, computer science

    The birth of machinology 

    My notes on Rahwan, I. et al. (2019) Machine Behaviour. Nature, 568, 477–486

    The proliferation of intelligent machines, ranging from machine learning systems through to their embodiment in robotics, raises the question of how their behaviour should be studied and understood. In this agenda setting paper, the team of authors suggest this now requires the deliberate formation of a new field, bringing together many parts which have already taken shape. From 477:

    This field overlaps with, but is distinct from, computer science and robotics. It treats machine behaviour empirically. This is akin to how ethology and behavioural ecology study animal behaviour by integrating physiology and biochemistry—intrinsic properties—with the study of ecology and evolution—properties shaped by the environment. Animal and human behaviours cannot be fully understood without the study of the contexts in which behaviours occur. Machine behaviour similarly cannot be fully understood without the integrated study of algorithms and the social environments in which algorithms operate

    These AI agents, virtual and embodied, tend to be studied by their creators with a narrow focus on whether they are performing their intended tasks. However, a broader approach is necessary as they “increasingly integrate into our society and are already involved in a variety of activities, such as credit scoring, algorithmic trading, local policing, parole decisions, driving, online dating and drone warfare” (477). This would respond to growing concerns over the lack of predictability surrounding their consequences, the loss of human oversight and the potential harms arising from once human-led tasks being increasingly taken over by AI agents. They cite three main motivations for this:

    1. The ubiquity of algorithms has reached an unprecedented point, operating across fields as diverse as news, credit, entertainment, travel, security and dating. They are increasingly likely to play a role in raising the young, caring for the old and coordinating collective behaviour.
    2. The complexity and opacity of algorithms are increasing rapidly, leading to black boxes in which the mechanism linking inputs and outputs is uncertain, even where the initial code was well understood, because the training data and training process have uncertain results. This is compounded by the proprietary character of the work underlying them. Their feedback loops with diverse environments add another layer of complexity to this process.
    3. Their capacity to have beneficial or detrimental effects on humanity necessitates scrutiny because “with the power to nudge human behaviours in positive or intended ways comes the risk that human behaviours may be nudged in costly or unintended ways” (478).

    This is a necessarily interdisciplinary field which is currently in a nascent period of integration, operating in an ad hoc way. There is a skills gap on both sides among the groups with a stake in this, described on 478:

    These scientists may be expert mathematicians and engineers; however, they are typically not trained behaviourists. They rarely receive formal instruction on experimental methodology, population-based statistics and sampling paradigms, or observational causal inference, let alone neuroscience, collective behaviour or social theory. Conversely, although behavioural scientists are more likely to possess training in these scientific methods, they are less likely to possess the expertise required to proficiently evaluate the underlying quality and appropriateness of AI techniques for a given problem domain or to mathematically describe the properties of particular algorithms.

    But their work will necessarily move beyond optimisation of AI agents against benchmarks, necessitating “a broader set of indicators, much as social scientists explore a wide range of human behaviours in the realm of social, political or economic interactions” (479). It’s notable that no room is left here for interpretation of AI agents, human responses to them and the relation between the two, despite the potential value a broader methodological repertoire could offer. The fact that they later cite a paper from 2018 to note that lab-based research on human-machine interaction has begun speaks volumes, even on the most charitable reading that they’re aware of HCI but see it as a different endeavour. From 479:

    As such, scholars of machine behaviour spend considerable effort in defining measures of micro and macro outcomes to answer broad questions such as how these algorithms behave in different environments and whether human interactions with algorithms alter societal outcomes. Randomized experiments, observational inference and population-based descriptive statistics—methods that are often used in quantitative behavioural sciences—must be central to the study of machine behaviour. Incorporating scholars from outside of the disciplines that traditionally produce intelligent machines can provide knowledge of important methodological tools, scientific approaches, alternative conceptual frameworks and perspectives on the economic, social and political phenomena that machines will increasingly influence.

    The theoretical approach they suggest is a taxonomic one, drawing on a parallel with ethology. As they put it on 480, “Machines have mechanisms that produce behaviour, undergo development that integrates environmental information into behaviour, produce functional consequences that cause specific machines to become more or less common in specific environments and embody evolutionary histories through which past environments and human decisions continue to influence machine behaviour”.

    They see mechanisms for generating behaviour as emerging from the interplay between an algorithm and its environment, even if a particular algorithm is embedded in an interlocking structure to constitute a complex AI agent. Machinology (my term, not theirs) would ask how particular behaviours emerged from this interplay, whether from human engineer choices, training, interaction or feedback from the environment. These behaviours serve a function or otherwise for human stakeholders, leading to a selective force which might make some more common, e.g. if useful behaviours are reproduced in further AI agents. They reference institutions as sources of incentives which shape the behaviour of human stakeholders. Institutional incentives can cause machine behaviours to spread which are socially pathological, for instance systems to maximise user engagement on social media which lead to the proliferation of ‘fake news’ while ensuring the success of the platform itself.

    These are two parallel levels of explanation, one explaining ontogeny (how the entity has the characteristics it does) through the identification of causal mechanisms leading to the development of behaviour, the other explaining phylogeny (why the entity has developed these characteristics rather than others) through the analysis of adaptive value. The transmission of these characteristics is varied and open, though mediated through institutions like intellectual property laws and regulatory regimes. As they put it, “machines may exhibit very different evolutionary trajectories, as they are not bound by the mechanisms of organic evolution” (481).

    They suggest three levels of analysis: individual machines, collective machines and groups of machines embedded in an environment. The first is currently undertaken by computer scientists and engineers, often looking at an individual machine solely in terms of properties arising from its source code and design. These studies could be conducted in a within-machine or between-machine manner, looking at variation in how one machine behaves across contexts or at how the same behaviours vary between machines. The second looks at “the interactive and system-wide behaviours of collections of machine agents” (482), as can be seen in approaches such as multi-agent systems and computational game theory. Machines using simple algorithms for local interactions can aggregate into complex behaviours at a collective level, studied as aggregation dynamics, but also into the forms of social learning which might take place when humans and institutions provide a potent source of environmental feedback. Thirdly, machines shape human behaviour and vice versa, necessitating study of these hybrid and embedded realities. These dynamics are likely to operate at the same time, in complex and hybridising ways.

    They end with a consideration of some of the methodological, theoretical and logistical obstacles to machinology, the most urgent of which are the uncertainty over legal penalties for reverse engineering algorithms and violating terms of service, and the risks involved in anthropomorphising machines (with their potentially alien intelligences) in a way which distracts from the crucial human operations involved in providing the training data.

     
  • Mark 10:34 am on March 30, 2019 Permalink | Reply
    Tags: algorithms

    Organisational sociology and algorithms 

    I’m saving this here to come back to because I’m very interested in this theme.

    Call for Workshop Participation
    Algorithms on the Shop Floor: Data-driven Technologies in Organizational Context

    Deadline for applications: April 19, 2019
    Workshop date: June 14, 2019 in NYC at Data & Society <http://datasociety.net/>
    Application link: http://datasociety.net/algorithms-on-the-shop-floor
    For questions, email events@datasociety.net

    On June 14, 2019, Data & Society will host a workshop in NYC on the intersection of technology and organizational theory and practice. The workshop arises from an increasing need to understand how automated, algorithmic, AI, or otherwise data-driven technologies are being integrated into organizational contexts and processes.

    The workshop will convene researchers who study how new technologies are introduced, incorporated, resisted or maintained within organized groups, and the changes this integration brings. Such changes might occur in processes (workflows, tasks, “re-skilling,” “changed” skills, augmentation), in structures (roles, jurisdictions, authority), or in other key sociological issues (such as power, culture, diversity, expertise, risk, rationality, legitimacy, and solidarity). In a world where new technologies are being integrated into organizations of all sizes and types, how can we make sense of what gets lost, what gets gained, and what gets changed? Many of these questions are long-standing themes in organizational studies and ethnographies examining the social complexities of working on the machine shop floor, to which the title of our workshop alludes. Still, how do such integrations provoke new shifts in power relations and social values?

    The range of field sites and research questions appropriate for this event is wide. The only requirements for participation are that: 1) you must be a researcher (with or without an academic affiliation); 2) your research questions must address a dimension of socio-technical practice in the context of a formalized organization.

    Relevant topics for this workshop might include:
    How do formations of power, hierarchy, and discretionary decision-making change when automated and AI technologies are introduced?
    How are issues of diversity and equity brought into and reconstituted when new technologies are introduced?
    How does the integration of new technologies into organizations intersect with issues of access, inclusion, and disability?
    What are sites of unintended use, resistance, or deviance with respect to technology in organizations?
    How are new forms of expertise, skill, and training emerging to meet demands of using new technologies in the workplace?
    How are new or existing labor organizations confronting the perceived threat of AI?
    What are organizational formations or organizational processes that build on digital technologies to advance equity and social justice?
    What lessons does the history of organizational theory and practice hold for contemporary dynamics?
    How are bureaucratic forms of control (such as auditing or impact policy) integrated into the development of technology?

    These examples are by no means exhaustive, but intended to provide a flavor of the kind of relevant research questions. We are especially interested in strange outliers and unexpected studies.

    Key Dates

    • Application Deadline: April 19, 2019
    • Selection Decisions: May 1, 2019
    • Full Paper Deadline: May 28, 2019
    • Workshop: June 14, 2019

    Participation Requirements
    The structure of the Data & Society Workshop series is designed to maximize scholarly thinking about the evolving and societally important issues surrounding data-driven technologies. Participants will be asked to read three full papers in advance of the event and prepare comments for intensive discussion. Some participants will be asked to be discussants of papers, where they will lead the conversation and engage the room. Authors will not present their work, but rather participate in critical discussion with the assembled group about the paper, with explicit intent of making the work stronger and more interdisciplinary.

    All participants are required to read three papers in advance of the event and come ready to offer constructively critical feedback. We want researchers to constructively spar with and challenge one another to strengthen ourselves across the board. This is not an event for passive attendance, but an opportunity to engage each other substantively.

    This event is first and foremost an opportunity to collectively think and help construct a field. Although this event is designed to bring together 30-40 researchers, only 12 papers will be workshopped. Yet, everyone who attends is expected to be an active participant and contribute to rich conversations. We believe that it is through active engagement with other scholars around research that new insights can emerge. In other words, this event is designed to be the kind of intense intellectual engagement that made you fall in love with being a researcher in the first place.
    Format

    The day will be organized into three time slots, each 75 minutes long. One paper will be workshopped in each session. Multiple sessions will run in parallel so there will be a total of ~12 papers, but each participant will only be responsible for reading and engaging with 3. Within each group, a discussant will open with a critique of the paper before inviting participants to share their feedback. (If you participate in this event, you may be asked to be a discussant on one paper.) All are expected to share feedback, with author response towards the end of the session.

    Logistics

    The event will take place on June 14, 2019, and will run from 8:45am to 6pm. Paper sessions will run until 4:15pm; afterwards, there will be a reception for all participants.
    All meals will be covered during the event. Unfortunately, we have limited funding to support travel for this workshop; however, we’re happy to provide a formal invitation for participation/“speaking” to anyone who may need it to secure their own funding.
    Application Process (Deadline: April 19)

    For this event, we are looking to bring together researchers from diverse disciplines studying technology in organizations. This can include management, organization studies, communications, information studies, computer-supported cooperative work, computer-human interaction, science and technology studies, ethics, labor, law, policy, anthropology, and design research. As a result, attendees should expect to engage with scholars who are outside of their field of study. We ask that attendees think of the Data & Society Workshop series as an opportunity to engage with a broader cross-disciplinary field, and to strengthen both relationships and research through participation in the workshop.

    Because the paper submission date is only a few weeks after the application deadline, you should only apply as an author if you have a paper that you’re actively writing right now and will be ready to share a draft with others by May 28, 2019. If you aren’t already working on this paper, you probably aren’t in a good position to workshop it at this event. Appropriate papers may be a work-in-progress book chapter or a journal article. (Full-length books are a bit too much for this event, so if you’re writing a book, think about the chapter that you most want to get feedback on.)

    To apply as an author, please submit the following:

    • Name, affiliation, title, email address, discipline.
    • Big research question you’re seeking to answer with your research.
    • Paper title + 100-250 word abstract.
    • The current half-baked, thick-outline, total mess of the paper.**

    ** We are asking for the disaster of a paper to understand where you are with the piece now, and the arguments you intend to make, so that we can appropriately match you to a discussant. We won’t share this version with anyone (we promise).
    Please note: All co-authors for papers must apply separately. If your co-author doesn’t apply, we will assume that s/he is not interested in attending the workshop. It will be hard to add additional participants later, so make sure your co-authors apply if they want to attend.

    To apply as a participant/discussant, please submit the following:

    • Name, affiliation, title, email address, discipline.
    • Big research question you’re seeking to answer with your research.
    • 100-250 word description of your research.

    Application link: http://datasociety.net/algorithms-on-the-shop-floor
    For questions, email events@datasociety.net

     
  • Mark 12:46 pm on March 10, 2019 Permalink | Reply
    Tags: algorithms, clicks, meso-social, news media, organisational theory

    Algorithms and institutional isomorphism 

    My notes on Caplan, R., & Boyd, D. (2018). Isomorphism through algorithms: Institutional dependencies in the case of Facebook. Big Data & Society, 5(1), 2053951718757253.

    Are data-driven technologies leading organisations to take on shared characteristics? This is the fascinating question addressed in this paper by Robyn Caplan and danah boyd, which they begin with the example of news media. The popularity of social media platforms as intermediaries has forced many news media producers to change their operations, increasingly producing with a view to popularity on these platforms. As they put it, “these platforms have upended the organizational practices of news-producing platforms, altering how both the newsroom and individual journalists operate” (2). They use the concept of isomorphism to understand how “algorithms structure disparate businesses and aims into an organizational field, leading them to change their goals and adopt new practices” (2). This is a process of homogenisation, as organisations reconstruct themselves into a field orientated around the assumptions embedded into the mediating platform. The ensuing ambiguity has regulatory consequences, as social media platforms are not straightforward media actors but nor are they mere intermediaries. By theorising algorithmic mediation as akin to bureaucratisation, it becomes easier to identify the precise character of the role of platforms within it. It also makes clear the continuities with earlier isomorphic processes, for instance as corporate software platforms introduced common features to organisations.

    The roots of this connection are deep. They argue that “algorithms that serve to pre-process, categorize, and classify individuals and organizations should be viewed as extensions of bureaucratic tools such as forms, that have been associated with the state in the past” (3). Software like Lotus 1-2-3 and Microsoft Office restructured business activity through the affordances it offered to digitalise bureaucratic processes, and algorithmic technologies should be seen as a further extension of this process. The neutrality which animated the promise of bureaucracy is also often expressed in the belief that algorithmic judgement will negate the role of subjectivity and bias in decision-making processes. This is obscured by the familiar black box of the algorithm but also by the mythology of its uniqueness, seeing it as something distinct from previous organisational processes. However, if we see algorithms as organisational phenomena then the problem comes to look quite different, simultaneously more straightforward but also more challenging because the problems will likely spiral outwards across dependent organisations.

    They use DiMaggio and Powell’s concept of isomorphism, which considers how a common environment can lead otherwise different units of a population facing that environment to come to resemble one another. For organisations this occurs through one organisation becoming dependent on another, with the expected degree of resemblance tracking the degree of that dependence. For instance, in the case of Facebook’s newsfeed, the concept of what is ‘relevant’ has been redefined by the vast size of the audience whose access is mediated through this mechanism. The dependence of the news media on that mechanism means they come to reproduce its characteristics, increasingly operating with a view towards metrics like clicks, likes and shares. The early winners in the Facebook ecosystem were publishers like Buzzfeed and Upworthy who “subsumed their own organizational practices to the logic of Facebook’s algorithms” (5). But Facebook’s attempts to modulate this mechanism in order to produce what they deemed better quality results inevitably lead the actors dependent upon it to make adaptive changes in response to these modulations. Mimesis thrives in this environment, as they explain on pg 6-7:

    “Changes stemming from coercive forces, especially when frequent, lead to an environment of uncertainty that prompts dependent organizations to learn from other dependent organizations that have successfully conformed to the structuring mechanisms. This process of “mimesis,” or imitating models for success, is another process DiMaggio and Powell (1983: 151) argue will induce similarity across an organizational field. In this sense, the dominant organization’s incentives or goals become embedded across an industry through the borrowing of practices that lead to success over the network. In the case of Facebook, this was seen in the adoption of data-driven metrics and analytics into newsrooms, as well as the growth of a new set of intermediaries that were fed directly by the Facebook API, whose role it was to analyze and communicate Facebook metrics back to publishers”

    A further ecosystem of intermediaries thrives under these circumstances, as new players emerge who help the firms concerned address their common problems. These responses to uncertainty are driven by a concern to “demonstrate to others that they are working to change their practices to be in-line with those of the dominant organization” (7) as well as by increasing possibilities for success. The discussion of professionalisation is really important for my interests. The roles themselves changed as a result of isomorphism, with normative pressure to enact new functions and perform new skills which contribute to the success of the organisation. This is my concern about the institutionalisation of social media within higher education. There’s a lot here which I’m going to need to go back to and I think it’s crucial for my developing project on the digital university.

     
  • Mark 10:26 am on February 20, 2017 Permalink | Reply
    Tags: algorithms

    Call for AoIR Tartu panel participants on Algorithmic Agency 

    This looks like a very interesting panel:

    We are looking for a few additional people who might be interested in contributing to an AoIR panel exploring critical questions and issues surrounding algorithmic agency, power and publics.

    Researchers and media commentators alike are seemingly fascinated with the magic-like and opaque properties of algorithms. Algorithms are touted as responsible for, or implicated in, a range of diverse outcomes and opportunities – from the mundane to the transformative – for individuals, corporations and communities.

    Questions around how to critically frame and understand algorithmic agency in contemporary life and where and how to situate questions about power and accountability are raised. This panel is interested in addressing and reframing some of these issues, including the challenges in locating agency in the first place, the politics of making agential claims, and the possible social, political and ethical implications of algorithmic agency (however defined) within and towards publics.

    If you are interested in participating please send a 1200 word abstract following the AoIR template to Michele (m.willson@curtin.edu.au) and Taina (wfg568@hum.ku.dk) by the 25th February outlining your specific contribution to a discussion of algorithmic agency, power and publics. Short queries about the suitability of topic can be sent to either one of us before that date.

    Those contributors whose topic fits the panel mix will be contacted  by late 27th Feb. (given the impending deadlines, it is a tight turnaround). Depending on the range and mix of submissions,  we may also  explore a possible special issue publication. Please only submit should your attendance at the conference be a likely outcome.

    Apologies for this very short notice, we tried sending this message out a week ago but it somehow got stuck along the way.

     
  • Mark 10:48 pm on February 1, 2017 Permalink | Reply
    Tags: algorithms, bankers

    Cultural representations of finance  

    From One Market Under God, by Thomas Frank, loc 2230:

    For all the revulsion expressed by books like Liar’s Poker and Barbarians at the Gate, the dominant note was starstruck wonderment at these “masters of the universe,” at their millions and their manses, at their Gulfstream jets and Mercedes cars, at the high quality of the sex and luxuries they enjoyed. Occasional digressions to consider those shafted by the pros served only to heighten this sense, to establish just how satisfying it was to bring misfortune to some dope on the phone. The more monstrous the manipulation the merrier.

    This was written almost two decades ago but the trend has only intensified since then. As someone who has spent a lot of time reading these books, I’ve become curious as to what exactly the appeal is. They’re the kind of thing I inevitably buy when bored in airport and train station bookshops, before devouring them in a couple of journeys and feeling vaguely guilty afterwards.

    I wonder if these representations of financiers make finance itself more tractable. Narratives about individuals give shape to diffuse systems which influence all aspects of our lives in spite of their distance and abstraction. What has struck me as odd is the cultural prominence of the banker-hedonist at precisely the time when such figures are in decline. Could there be something comforting about the wolf of Wall Street as we enter an era of algorithmic trading and flash crashes?

     
  • Mark 11:23 am on August 17, 2016 Permalink | Reply
    Tags: algorithms

    The Dispositions of the Metricised 

    In our discussion of metrics systems, it’s easy to treat subjectivity as a cipher, regarding people as passively moulded by algorithms or blindly governed by the incentives that operate through the institutionalisation of the metrics. My objection to the former is not the claim that people are shaped by metrics, but rather the assumption that this process is basically passive. My interest is in how metrics come to matter to us. How are people shaped over time? How do their biographically accumulating dispositions and concerns influence the actions they take over time? How do these feed back into the metrics system and the organisations within which they are institutionalised?

    The fictional portrayals that are starting to emerge of this – novels like Super Sad True Love Story, The Circle and Whiskey Tango Foxtrot, films like Nerve – often struggle to represent this engaged subjectivity because the imperatives of effective storytelling militate against it. What we really need is a novel or film that explores metricisation through the internal monologue of what I imagine would turn out to be an unreliable narrator.

     
  • Mark 11:58 am on June 12, 2016 Permalink | Reply
    Tags: algorithms, , , pricing, , ,   

    Algorithmic pricing predates online retail 

    From Misbehaving, by Richard Thaler, pg 134. Social norms hindered it in this instance. Why could the same not be true of online retail?

    The CEO of Coca-Cola also discovered the hard way that violating the norms of fairness can backfire. Douglas Ivester, aged fifty-two, appeared to be on his way to the job of chairman when he abruptly resigned after a push from several board members including legendary investor Warren Buffett. Although several actions contributed to his downfall, one speech in Brazil attracted the most attention. At a press conference, Mr. Ivester was asked about tests Coke was running with vending machines that could change price dynamically. He replied: “Coca-Cola is a product whose utility varies from moment to moment. In a final summer championship, when people meet in a stadium to have fun, the utility of a cold Coca-Cola is very high. So it is fair that it should be more expensive. The machine will simply make this process automatic.” As the Wall Street Journal stated in a story about his downfall, Mr. Ivester seemed to have a “tin ear.” An editorial cartoon captured the feelings of the general public perfectly with an image of a customer walking away from a Coke vending machine with a can in his hand, looking back to see an arm reaching out of the machine and picking his pocket.
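    The logic Ivester described can be sketched in a few lines. This is purely illustrative: the actual rules inside Coke’s test machines were never disclosed, and the temperature thresholds and multipliers below are invented for the example.

    ```python
    def vending_price(base_price: float, temperature_c: float) -> float:
        """Hypothetical dynamic pricing rule: raise the price of a cold
        drink as ambient temperature (a proxy for Ivester's 'utility of
        the moment') rises. Thresholds and multipliers are invented."""
        if temperature_c >= 30:      # hot stadium afternoon
            return round(base_price * 1.5, 2)
        if temperature_c >= 20:      # warm day
            return round(base_price * 1.2, 2)
        return base_price            # no surcharge otherwise

    assert vending_price(1.00, 35) == 1.50
    assert vending_price(1.00, 15) == 1.00
    ```

    Even this toy version makes the fairness problem visible: the surcharge binds exactly when the customer is hottest and thirstiest, which is precisely what the editorial cartoon was objecting to.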

     
    • Dave Ashelman 1:19 pm on June 12, 2016 Permalink

      I’m not sure where “Algorithms” comes in with this example. There was no “algorithm.” In this example, it was simply the axiomatic, 19th Century throwback to utility maximization. While there is a formula: Marginal Utility (x) divided by Price of (x) = Marginal Utility of (y) divided by Price of (y), that’s not an algorithm per se – it does not establish a set of mathematical “rules.” Instead, it relies purely on axioms: that which does not need to be proven. Marginal Utility establishes assumptions – not “rules.”

      Mr. Ivester was simply following the (algebraic) utility maximization formula that had (probably) been taught to him in business school, which had been taught since at least the Marginal Revolution.

      This may be an ontological difference, however, in my work in Economic Sociology, I find that the “play book” of economics (especially as a tool of legitimization by neoliberalism) is important for Sociologists to parse. While Sociology (rightly) looks at social location & social forces, Homo Economicus has no social location, and is never impacted by the social. Is Homo Economicus real? Of course not – but it is the protagonist in the economics & neoliberal “play book” that has an extreme impact in everyday life.

    • anacanhoto 2:52 pm on June 12, 2016 Permalink

      Interesting question.

      Maybe because this case of price discrimination was very visible, and online is less so? For instance, even if I know that most people on a plane paid a price different to the one I paid, I have no way of knowing if I paid more or less than them.

      I think it also depends on how much control you think you have. So, in the case of the plane tickets, I know that if I had booked earlier, I too could have got a lower price. But I chose not to. Which is why the ‘pink tax’ is so infuriating (in my view, of course).

    • Mark 1:12 pm on June 13, 2016 Permalink

      That’s interesting. I guess the key question is: if social norms can trump technological possibility, what factors make it likely that this will or won’t happen in each case? The opacity of online shopping is definitely the key one, I think. Really interesting point about control, will think further.

    • Mark 1:14 pm on June 13, 2016 Permalink

      Surely the algorithms are the rules which govern the variable pricing? I’d like to read more about how the machine actually worked.

      V interesting thanks & now agree, am enjoying Thaler’s book on history of behavioural economics more than I anticipated. Interesting overlaps with sociological critiques of classical economics.

  • Mark 9:49 am on January 13, 2016 Permalink
    Tags: algorithms, , , , , , ,   

    Are social media algorithms killing online activism? 

    Via John Brissenden

     
  • Mark 9:45 am on January 13, 2016 Permalink
    Tags: , algorithms, , , ,   

    Amongst Algorithms: Technical Fieldwork between Software Studies and Digital Methods 


     
  • Mark 1:52 pm on January 10, 2016 Permalink
    Tags: algorithms, , , ,   

    the rise of human centred data science at facebook 

    An absolutely fascinating account of developments in the newsfeed algorithm at Facebook since its introduction:

    Adam Mosseri, Facebook’s 32-year-old director of product for news feed, is Alison’s less technical counterpart—a “fuzzie” rather than a “techie,” in Silicon Valley parlance. He traffics in problems and generalities, where Alison deals in solutions and specifics. He’s the news feed’s resident philosopher.

    The push to humanize the news feed’s inputs and outputs began under Mosseri’s predecessor, Will Cathcart. (I wrote about several of those innovations here.) Cathcart started by gathering more subtle forms of behavioral data: not just whether someone clicked, but how long he spent reading a story once he clicked on it; not just whether he liked it, but whether he liked it before or after reading. For instance: Liking a post before you’ve read it, Facebook learned, corresponds much more weakly to your actual sentiment than liking it afterward.

    After taking the reins in late 2013, Mosseri’s big initiative was to set up what Facebook calls its “feed quality panel.” It began in summer 2014 as a group of several hundred people in Knoxville whom the company paid to come in to an office every day and provide continual, detailed feedback on what they saw in their news feeds. (Their location was, Facebook says, a “historical accident” that grew out of a pilot project in which the company partnered with an unnamed third-party subcontractor.) Mosseri and his team didn’t just study their behavior. They also asked them questions to try to get at why they liked or didn’t like a given post, how much they liked it, and what they would have preferred to see instead. “They actually write a little paragraph about every story in their news feed,” notes Greg Marra, product manager for the news feed ranking team. (This is the group that’s becoming Facebook’s equivalent of Nielsen families.)

    “The question was, ‘What might we be missing?’ ” Mosseri says. “‘Do we have any blind spots?’” For instance, he adds, “We know there are some things you see in your feed that you loved and you were excited about, but you didn’t actually interact with.” Without a way to measure that, the algorithm would devalue such posts in favor of others that lend themselves more naturally to likes and clicks. But what signal could Facebook use to capture that information?

    http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.single.html
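    The distinction Cathcart’s team drew – a like before reading versus a like after reading – amounts to weighting the same surface behaviour very differently. A minimal sketch of that idea, assuming invented weights (Facebook’s actual model and feature names are proprietary and far more complex):

    ```python
    def engagement_score(clicked: bool, seconds_read: float,
                         liked: bool, liked_after_reading: bool) -> float:
        """Toy scoring of one story's behavioural signals.
        Weights are illustrative, not Facebook's."""
        score = 0.0
        if clicked:
            score += 1.0
            score += min(seconds_read, 60) / 60  # dwell time, capped at a minute
        if liked:
            # a like after reading is treated as a much stronger
            # signal of actual sentiment than a reflex like
            score += 2.0 if liked_after_reading else 0.5
        return score

    # A story read for 30 seconds and liked afterwards outranks a reflex like
    assert engagement_score(True, 30, True, True) > engagement_score(True, 0, True, False)
    ```

    The “feed quality panel” described above is then the human check on whatever such a scoring function misses – the posts people loved but never interacted with.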

     
  • Mark 8:44 am on December 8, 2015 Permalink
    Tags: , algorithms, , practical reasoning, situated judgement, ,   

    algorithms, situated judgements and imposed patterns 

    An interesting case discussed on pg 85 of Unforbidden Pleasures, by Adam Phillips:

    We may live in the aftermath of the myth of the Fall, and the even longer aftermath of the myth of Oedipus, but the first traffic lights were invented in the United States after the First World War. The traditional mutual accommodation travellers had been making to each other on their bikes and cars and carts was replaced by a set of lights. ‘Its purpose,’ the anthropologist James C. Scott writes, ‘was to prevent accidents by imposing an engineered scheme of coordination.’ Partly through pressure of numbers and the proliferation of vehicles, and partly through the new scientific and bureaucratic fantasies of efficiency and productivity, familiar forms of cooperation gave way to a new, technologically implemented, set of rules. People’s practical judgement was delegated to a red light. They had known when to stop, but now they were being told when to stop.

    The institutionalisation of algorithmic evaluation should be seen as continuous with this much longer standing process of using technology to impose patterns upon, often entirely substituting for, situated judgement. It’s a new means towards a long standing end, rather than something radically discontinuous with what has come before.

    In fact recognising this poses an apparent challenge for the argument I’ve been developing about the increasing ‘cognitive load’ of digital capitalism. Surely systems which support or substitute for reflexivity actually decrease the burden upon subjects? My argument will be that the experienced necessity for these systems develops in relation to their unintended consequences towards ratcheting up the burden in other ways. But it’s currently rather light on detail.

     
  • Mark 11:30 am on November 8, 2015 Permalink
    Tags: , algorithms, , , , , , ,   

    the micro-sociology of algorithmic authority  

    From InfoGlut, by Mark Andrejevic, loc 601:

    The fictional portrayals envision a contradictory world in which individual actions can be predicted with certainty and effectively thwarted. They weave oracular fantasies about perfect foresight. Predictive analytics, by contrast, posits a world in which probabilities can be measured and resources allocated accordingly. Because forecasts are probabilistic, they never attain the type of certitude that would, for example, justify arresting someone for a crime he or she has not yet committed. Rather, they distribute probabilities across populations and scenarios.

    The most pressing question this raises for me concerns the micro-sociology of algorithmic authority. To what extent are the algorithms black boxed by those ‘on the ground’? Does awareness of the probabilistic character of the forecast drop out of the picture in the social situations in which actors are intervening on the basis of these forecasts? How much implicit authority derives from the fact that ‘the algorithm says so’, even if those designing the underlying system would stress that the forecasts are probabilistic? How does this vary between different groups? It’s easy to imagine securely embedded professionals (e.g. doctors) treating these forecasts with care, not least of all because many already do so as a routine part of working life, but what if algorithmic authority is a corollary to deliberate deskilling? What if interventions are made by those who are precariously employed? As much as I dislike the show in question, it’s hard not to think of this when reflecting on these issues:


    These are empirical questions which are hugely important for the study of algorithmic authority. I’m planning to start looking for ethnographic and ethnomethodological literature which can shed light on them, even if not directly addressed to questions of algorithms. Any reading suggestions are, as ever, much appreciated. 

     
  • Mark 3:54 pm on April 10, 2015 Permalink | Reply
    Tags: algorithms, ,   

    Videocast of the Algorithms and Accountability conference (cc @DALupton @SusanJHalford) 

    I’m so glad these have been produced. I really wanted to go to this conference but couldn’t bring myself to leave the Digital Sociology mini-conference at ESS15.

     
  • Mark 12:16 pm on October 18, 2014 Permalink | Reply
    Tags: algorithms, hsbc, ,   

    The causal power of (stupid) algorithms: why I’m leaving HSBC 

    I just had my card stopped by HSBC for the second time in a month and the seventh or eighth (I’ve genuinely lost count) time this year. As with previous occurrences, I spend twenty minutes on hold and go through a tedious security check process to confirm that my last ten transactions were indeed my transactions. I don’t want to leave my bank. I’m both lazy and very busy. It’s a massive hassle, the prospect of which fills me with no enthusiasm. But their fraud detection system is absurd. I’m told repeatedly that they have identified irregular transactions – in fact I am told this regularly and they are (roughly) the same transactions in each instance.

    One upside to this frustration is that it’s left me keen to read about data analysis techniques for fraud detection (the brief Wikipedia page is interesting) in order to understand why HSBC’s are so idiotic. What seems IOTTMC (intuitively obvious to the most casual observer) is that routine transactions shouldn’t be flagged as problematic: it’s desirable to build a profile of routine activity based on the available data in order to detect transactions that deviate from that routine. As far as I can tell, HSBC instead uses a series of red flag events in a rather mechanical fashion, presumably opting for a lowest common denominator approach because it’s cheaper in the long run.
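    The contrast between the two approaches can be sketched as follows. Both rules are invented for illustration – HSBC’s actual system is of course opaque – but the structural difference is the point: the static rule never learns, while the profiled rule stops flagging whatever becomes routine.

    ```python
    from statistics import mean, stdev

    def flag_static(amount: float, is_international: bool) -> bool:
        """Red-flag style: fixed rules, blind to the customer's history.
        (Hypothetical rule: small international payments look suspicious.)"""
        return is_international and amount < 10

    def flag_profiled(amount: float, is_international: bool,
                      history: list[tuple[float, bool]]) -> bool:
        """Profile style: only flag what deviates from this customer's routine."""
        similar = [amt for amt, intl in history if intl == is_international]
        if len(similar) < 5:                        # too little data: fall back
            return flag_static(amount, is_international)
        mu, sigma = mean(similar), stdev(similar)
        return abs(amount - mu) > 3 * (sigma or 1)  # flag only outliers

    # A regular small international subscription payment: the static rule
    # flags it every month, while a profile learns that it is routine.
    history = [(5.99, True)] * 12
    assert flag_static(5.99, True) is True
    assert flag_profiled(5.99, True, history) is False
    ```

    On this toy model, my monthly Mubi payment trips the static rule indefinitely, whereas even the crudest per-customer profile would stop flagging it after a handful of months.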

    They really dislike the film service Mubi for some reason (despite the fact I’ve been a subscriber for months) and generally find international micro-payments suspicious, all the more so when they are clustered. I can see why this might be a good idea on paper but in practice it makes life exceptionally irritating if you’re someone who uses iTunes (etc) a lot. What’s even more irritating is my inability to get any comment from HSBC beyond “I’m sorry, sir, I realise that must be frustrating”. So I’m leaving HSBC, entirely because of their idiotic fraud detection system, though thanking the bank for helping sensitise me to a causal power of algorithms that I might otherwise have failed to recognise: inciting people to take action by routinely pissing them off.

     