

Generative AI and the legal principle that computer records must be presumed to be accurate

Thanks to Richard Sandford for drawing my attention to this legal principle in the UK. Horizon was an early sign of how ill-placed this assumption is with regard to ever more opaque computer systems. It is yet another way in which the legal system is not prepared for generative AI becoming ubiquitous.

The first legal fault is a rule of evidence: computer records must be presumed to be accurate unless shown to be otherwise.

Note that this is a presumption, not an absolute rule.

(And technically speaking the rule goes to admissibility of records as evidence, rather than the weight to be placed on them, but the practical effect is that once admitted the evidence is accepted for what it says.)

If computer records can be shown to be inaccurate, the presumption is rebutted.

This rule originates from before the computer age: the days of mechanical instruments such as watches and then, more recently, speed cameras and breathalysers. The courts would presume such instruments were accurate and would allow prosecutions to rely on the evidence of such devices.

Such a rule of evidence makes sort-of-sense with such elementary contraptions, but it was (and is) woefully unrealistic with computer-generated data.

Indeed, software systems of any complexity are unlikely to be bug-free: the best one can hope for is that the bugs are not disruptive in any non-trivial way.

https://davidallengreen.com/2024/01/how-the-legal-system-made-it-so-easy-for-the-post-office-to-destroy-the-lives-of-the-sub-postmasters-and-sub-postmistresses-and-how-the-legal-system-then-made-it-so-hard-for-them-to-obtain-justice/
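To make the point about silently inaccurate records a little more concrete, here is a minimal Python sketch. It is my own illustration rather than anything drawn from Green's post or from how Horizon actually failed: it simply shows how even a trivially simple program, summing three 10p credits, can produce a stored figure that disagrees with the arithmetic it claims to represent.

```python
from decimal import Decimal

# Three 10p credits recorded as binary floating-point numbers
transactions = [0.10, 0.10, 0.10]

total = sum(transactions)
print(total)           # 0.30000000000000004 -- not the 0.30 a ledger would show
print(total == 0.30)   # False: the stored "record" disagrees with the expected sum

# The same sum done with exact decimal arithmetic behaves as a bookkeeper expects
exact_total = sum(Decimal("0.10") for _ in range(3))
print(exact_total)                      # 0.30
print(exact_total == Decimal("0.30"))   # True
```

If a discrepancy this small can arise from three additions, the scope for silent error in a sprawling accounting system, let alone a generative AI model whose outputs are not deterministic records at all, makes a blanket presumption of accuracy look very fragile indeed.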