Thanks to Richard Sandford for drawing my attention to this legal principle in the UK. Horizon was an early sign of how badly misplaced this assumption is with regard to ever more opaque computer systems. Yet another way in which the legal system is not prepared for generative AI becoming ubiquitous.
The first legal fault is a rule of evidence: computer records must be presumed to be accurate unless shown to be otherwise.
Note that this is a presumption, not an absolute rule.
(And technically speaking the rule goes to admissibility of records as evidence, rather than the weight to be placed on them, but the practical effect is that once admitted the evidence is accepted for what it says.)
If computer records can be shown to be inaccurate, the presumption is rebutted.
This rule originates from before the computer age: the days of mechanical instruments such as watches and then, more recently, speed cameras and breathalysers. The courts would presume such instruments were accurate and would allow prosecutions to rely on the evidence of such devices.
Such a rule of evidence makes sort-of sense for such elementary contraptions, but it was (and is) woefully unrealistic for computer-generated data.
Indeed, software systems of any complexity are unlikely to be bug-free: the best one can hope for is that the bugs are not disruptive in any non-trivial way.
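As a deliberately tiny sketch of my own (not code from Horizon) of how this happens in practice, consider a ledger tally in Python where every individual line looks correct, yet the closing balance quietly disagrees with the paper total:

```python
from decimal import Decimal

def close_of_day_balance(transactions):
    """Sum a day's transactions the naive way, in binary floating point."""
    balance = 0.0
    for amount in transactions:
        balance += amount
    return balance

# One hundred receipts of 10p each should come to exactly 10.00 pounds...
receipts = [0.10] * 100
balance = close_of_day_balance(receipts)

print(balance)           # prints something like 9.99999999999998 -- not 10.0
print(balance == 10.00)  # False: the computer record and the till "disagree"

# The standard fix is to keep money in exact decimal (or integer pence):
exact = sum(Decimal("0.10") for _ in range(100))
print(exact == Decimal("10.00"))  # True
```

Horizon's actual defects were far more varied than this, but the shape of the failure is the same: a system that produces confident, authoritative-looking figures that are simply wrong, with nothing on the face of the record to show it.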
https://davidallengreen.com/2024/01/how-the-legal-system-made-it-so-easy-for-the-post-office-to-destroy-the-lives-of-the-sub-postmasters-and-sub-postmistresses-and-how-the-legal-system-then-made-it-so-hard-for-them-to-obtain-justice/
