From InfoGlut, by Mark Andrejevic, loc 601:
The fictional portrayals envision a contradictory world in which individual actions can be predicted with certainty and effectively thwarted. They weave oracular fantasies about perfect foresight. Predictive analytics, by contrast, posits a world in which probabilities can be measured and resources allocated accordingly. Because forecasts are probabilistic, they never attain the type of certitude that would, for example, justify arresting someone for a crime he or she has not yet committed. Rather, they distribute probabilities across populations and scenarios.
The most pressing question this raises for me concerns the micro-sociology of algorithmic authority. To what extent are the algorithms black-boxed by those ‘on the ground’? Does awareness of the probabilistic character of the forecast drop out of the picture in the social situations in which actors are intervening on the basis of these forecasts? How much implicit authority derives from the fact that ‘the algorithm says so’, even if those designing the underlying system would stress that the forecasts are probabilistic? How does this vary between different groups? It’s easy to imagine securely embedded professionals (e.g. doctors) treating these forecasts with care, not least because many already do so as a routine part of working life, but what if algorithmic authority is a corollary of deliberate deskilling? What if interventions are made by those who are precariously employed? As much as I dislike the show in question, it’s hard not to think of it when reflecting on these issues.
These are empirical questions that are hugely important for the study of algorithmic authority. I’m planning to start looking for ethnographic and ethnomethodological literature that can shed light on them, even if it is not directly addressed to questions of algorithms. Any reading suggestions are, as ever, much appreciated.