the irreducibility of interpretation in socio-technical systems

A really interesting example is discussed in The New Ruthless Economy, by Simon Head (loc 1398-1407). CasePoint was a system that aimed to remove expertise from the customer-facing service function by turning the call centre operator into an ‘information conduit’. But inevitably interpretation was necessary, and the assumption that it could be avoided produced all sorts of unintended consequences:

CasePoint’s designers had programmed the system to recognize a vocabulary of machine error that they and their management colleagues used, and that they assumed the customer would use as well. But customers frequently used “unauthorized” words of their own. When this happened, it was the inescapable task of the agent to try to establish which of the designer’s authorized words best corresponded to the customer’s rogue word. But agents were not trained to carry out this kind of interrogation and had to do the translations on a hit-or-miss basis. When agents got the translation wrong, CasePoint would take off on an irrelevant line of questioning and end up making a faulty diagnosis.

A second and even less realistic assumption of the software designers was that customers would provide a description of a faulty machine’s symptoms which, in its logical structure, would conform naturally to the structure the system was programmed to understand. But customers often came up with rambling, incoherent descriptions that CasePoint could not possibly digest, and agents then had to make sense of the customer’s raw utterances. The transcripts show agents trying to perform this editorial task but finding themselves severely handicapped by a lack of training, a lack of confidence, and the pressure of time. More often than not, agents would abandon the struggle and arrange for a technician to visit the customer’s offices, the very outcome the expert system was designed to head off.

A third error of the software engineers was their assumption that a digestible account of a machine’s symptoms could be put together at the very outset of a conversation between agent and customer. But customers had the inconvenient habit of failing to provide all the relevant information up front, with vital bits of information instead dribbling out when a conversation was already under way. But for a new description of a problem to be considered by CasePoint, the system had to be reset and questioning started anew. There were strong pressures on the agent not to do this. Coming up with a new problem description took time, and agents were always under pressure to fulfill their quota of calls per day. With more than one problem description now on offer, agents also had to choose between them, again something the agents were not trained to do.
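The first failure mode Head describes, a closed vocabulary with no handling of synonyms, is easy to see in miniature. The sketch below is purely hypothetical (the phrases, diagnoses, and function names are invented, not CasePoint's actual design); it only illustrates why an exact-match vocabulary pushes all the interpretive work onto the human agent:

```python
# Hypothetical closed-vocabulary diagnostic table, in the spirit of
# the passage above. Only "authorized" phrasings produce a diagnosis.
AUTHORIZED = {
    "paper jam": "check feed rollers",
    "streaky output": "replace toner cartridge",
    "blank pages": "reseat imaging drum",
}

def diagnose(customer_phrase: str) -> str:
    """Return a diagnosis only for exact matches to the authorized vocabulary."""
    key = customer_phrase.strip().lower()
    if key in AUTHORIZED:
        return AUTHORIZED[key]
    # The system has no notion of synonyms or paraphrase: any "rogue"
    # wording falls through to the agent, who must translate it to an
    # authorized term on a hit-or-miss basis.
    return "UNRECOGNIZED: agent must map to an authorized term"

print(diagnose("paper jam"))                    # authorized wording works
print(diagnose("it keeps chewing the paper"))   # rogue wording falls through
```

A wrong guess by the agent at this translation step is exactly what sent CasePoint "off on an irrelevant line of questioning": the system treats the chosen authorized term as ground truth, with no way to revisit the interpretation.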