Asking the right questions is crucial when computer evidence is disputed

Lee Castleton recalls to the last penny the shortfall that flashed on to his post office terminal on New Year’s Eve 2003: £1,103.68. A week later another loss appeared, this time £4,230. Then another and another. By March, the sub-postmaster was £25,000 short. “I kind of knew from the second loss that this wasn’t a mistake on my part,” Castleton says.

With no means of interrogating the Post Office’s back-office systems, he called and emailed the IT helpline and his managers 91 times. Yet all he received were instructions to repeat checks he had already performed dozens of times; after some bland reassurances, the higher-ups stopped replying altogether.

An engineering technician, then briefly a stockbroker, Castleton had bought a post office in the seaside town of Bridlington, in northern England, hoping to provide a lifestyle his young family would enjoy. Instead, a High Court judgment bankrupted him by ordering him to pay the Post Office the £321,000 it had spent suing him for an illusory debt. Bankruptcy put paid to any return to stockbroking, so he made do as a jobbing engineer, sometimes sleeping in his car, in a hand-to-mouth struggle to meet the mortgage payments on the family’s flat above their now-defunct post office.

How a bug-ridden IT system led the state-owned UK Post Office to prosecute more than 700 sub-postmasters for thefts they did not commit, and to ruin others, is now the subject of a public inquiry.

The episode adds to a gathering global backlash over the human harms that automated processes can cause. In the US, a group of White House science advisers are calling for a “bill of rights” to guard against injustices wrought by artificial intelligence.

Much of this centres on how AI-powered algorithms can amplify societal prejudices, such as female jobseekers being sidelined in male-dominated fields, or black citizens, profiled by AI tools for their risk of reoffending, receiving stiffer prison sentences from judges. Yet digital injustice is not confined to AI, and neither is it a new phenomenon. In 2011, the UK government apologised to relatives of two Royal Air Force pilots blamed for the fatal 1994 crash of a Chinook helicopter, which campaigners argued may have been caused by faulty software.

All of which raises the question of how truth is established in disputes that pit the word of people against the reliability of computers.

“When patients are harmed, staff often get blamed,” writes Harold Thimbleby, professor emeritus of computer science at Swansea University. People forget the other suspect in the room: flawed technology, or some hidden man-machine interaction. His new book, Fix IT, describes such a case. During a routine investigation, a hospital discovered a mismatch between measurements automatically uploaded to a database from clinical devices and nurses’ paper notes. Assuming that computers do not lie, the hospital accused its nurses of falsifying patient records, and several stood trial. Three weeks in, however, the trial collapsed when an IT support engineer from the device’s supplier revealed, under cross-examination, that he had “tidied up” the poorly maintained database, deleting records.

In the Post Office scandal, a mix of faulty software, inadequate disclosure and mendacity, aided and abetted by a legal presumption that computers work reliably, ruined hundreds of lives, says Paul Marshall, a barrister who acted pro bono for three convicted post office workers in the Court of Appeal. For more than a decade, judges and juries trusted the word of witnesses for the Post Office that its Horizon accounting system, supplied by IT specialist Fujitsu, was reliable, and inferred that sub-postmasters and sub-postmistresses must have stolen the money it recorded as missing. Yet in 2019, the disclosure of error records known to affect Horizon, which had existed all along, led a more inquiring judge to conclude the system was “not remotely robust”.

The presumption of computer reliability puts the onus on anyone contesting digital evidence to show the computer is untrustworthy. That can be “hugely difficult” when the accused lack IT knowledge and access to the systems, says Martyn Thomas, an emeritus professor of IT at Gresham College. Computers can also misbehave while seeming to work perfectly. That was the “Catch-22” that ensnared the post office workers, says Marshall. “They had no basis for questioning what the computer spat out, because they didn’t know how it worked or its propensity to fail, and the Post Office wasn’t telling.”

Asking the right questions is also important when email evidence is disputed. In 2018, Peter Duffy, a consultant urologist, won a constructive dismissal claim against University Hospitals of Morecambe Bay NHS Foundation Trust. He then published a book alleging failings regarding the death of a patient, prompting NHS England and NHS Improvement to commission an external investigation.

The 2020/21 investigation revealed two emails, allegedly sent by Duffy in 2014 as the patient deteriorated. Duffy says the emails were fabricated. Yet once they entered the record, he found himself implicated in the patient’s poor care.

In a statement, Aaron Cummins, chief executive of UHMBT, said that “two separate independent, external reviews” for the investigation “found no evidence the emails in question were tampered with and no evidence they were not sent from Mr Duffy’s NHS hospital email account”.

Yet during Duffy’s 2018 employment tribunal, a judge ordered the trust to search for all correspondence concerning the patient’s death. None of the digital searches the trust conducted produced the disputed emails. Nor did the emails appear in the information gathered by two internal NHS inquiries into the patient’s death, or in responses to freedom of information requests made by the deceased patient’s family and by Duffy himself.

“How can assessing an organisation’s cyber security today authenticate emails supposedly sent six years earlier, yet neither acknowledged nor actioned by the recipients, and at odds with contemporary clinical notes and the bereaved family’s recollections?” asks Duffy.

Without commenting on Duffy’s particular case, Thimbleby says that when digital searches have been performed and a court has been informed there are no more emails to be found, “you cannot presume authenticity”. There has to be strong evidence the emails existed, “such as backups”.

From banking apps to algorithms that inform hiring choices, computer-controlled systems have entered our everyday lives in countless small ways since the first Post Office prosecutions. Yet while technology’s reach has advanced, the same cannot be said of the law’s ability to cope with its failures. “You can become a lawyer knowing nothing about electronic evidence, though it forms part of almost every single court case,” says Stephen Mason, co-editor of Electronic Evidence and Electronic Signatures, a lawyers’ textbook.

“That really matters,” says Marshall, citing the jailing of sub-postmistress Seema Misra for the alleged theft of money the Post Office’s Horizon system showed as missing. “On four separate occasions before three different judges,” Marshall says, Misra asked for disclosure of Horizon’s error records, and was refused. A decade later, those error records led to the quashing of her conviction.

In a 2020 paper, submitted to the Ministry of Justice, Marshall and several co-authors recommend revisiting the legal presumption of computer reliability. Starting from the premise that all computer software contains bugs, they wrestle with how to prevent instances of injustice without clogging up courtrooms with hopeful try-ons, such as motorists demanding software investigations of speed cameras.

The paper recommends that organisations relying on computer-generated evidence be required, as standard procedure, to disclose their systems’ error logs and known vulnerabilities. For well-run operations that should be straightforward, says Thomas at Gresham College; otherwise “the burden should fall on [the organisations] to show it wasn’t the computer that caused things to go wrong”.

While corporations often hire IT consultants to give expert opinion in court cases, individuals can rarely afford experts’ fees. To reduce the inequity, Thimbleby at Swansea University suggests setting up independent IT panels to advise judges on when digital evidence can reasonably be relied on, and when it cannot. “In the world I’m envisaging, people would be able to say ‘this is clearly an IT issue and we have a right to call on the IT panel’ and the panel would take an informed view,” he says.

Had such a system been in place when the Post Office brought its actions, the Castletons could be living a very different life. Now a factory engineer working night shifts, rather than a businessman, Castleton says he butted against a corporation that wouldn’t bend. “I felt I was drowning and nobody was doing anything to save me. I was just insignificant.”