
Lords say police use of AI must not undermine human rights and rule of law

A House of Lords report has highlighted concerns that police use of advanced technologies such as artificial intelligence (AI) could undermine people’s human rights and the rule of law. The House of Lords Justice and Home Affairs Committee has warned that the proliferation of AI tools in the justice system without proper oversight, particularly by the police, could have serious implications for human rights and civil liberties.

In its report, ‘Technology rules? The advent of new technology in the justice system’, published today, the committee notes the pace at which these technologies are being developed, largely unseen by the public. It warns that without sufficient safeguards, supervision and caution, advanced technologies used in the justice system in England and Wales could undermine a range of human rights, risk the fairness of trials and damage the rule of law.

They note that facial recognition is the best-known example, but other technologies are in use and more are being introduced. The committee felt that development was “moving fast” and that controls had “not kept up”. In a statement, the committee said it “acknowledges the benefits: preventing crime, increasing efficiency, and generating new insights that feed into the criminal justice system.

“However, it is concerning that there is no mandatory training for the users of AI technologies, such as facial recognition, particularly given their potential impact on people’s lives. Meanwhile, users can be deferential (‘the computer must be right’) rather than critical. The committee is clear that ultimately decisions should always be made by humans.

“There are risks of exacerbating discrimination. The report highlights serious concerns about the dangers of human bias contained in original data being reflected, and further embedded, in algorithmic outcomes. The committee heard about dubious selling practices and claims made as to products’ effectiveness which are often untested and unproven.

“The committee calls for the establishment of a mandatory register of algorithms used in relevant tools. Without a register it is virtually impossible to find out where and how specific algorithms are used, or for Parliament, the media, academia, and, importantly, those subject to their use, to scrutinise and challenge them.”

The report goes on to highlight that most public bodies “lack the expertise and resources to carry out evaluations, and procurement guidelines do not address their needs”.

The committee has recommended that a national body be established to set strict scientific validity and quality standards and to certify new technological solutions against them. It also recommended that no tool be introduced without first receiving certification, with police forces then free to procure the technological solutions of their choice from among those ‘kitemarked’.

With more than 30 public bodies, initiatives and programmes playing a role in the governance of new technologies in the application of the law, they claimed it was not possible to work out who was responsible for what, and that the system “needs urgent streamlining”. The committee argues that reforms to governance should be supported by a strong legal framework.

Facial recognition technology in use in Leicester Square

They found that without coordination between Government departments, roles were “unclear, functions overlap, joint working was patchy and where ultimate responsibility lies cannot be identified.”

The committee has also called for a duty of candour on the police to ensure full transparency. They added: “AI can have huge impacts on people’s lives, particularly those in marginalised communities. Without transparency, there can be no scrutiny and no accountability when things go wrong.”

Baroness Hamwee, Chair of the Justice and Home Affairs Committee, said: “What would it be like to be convicted and imprisoned on the basis of AI which you don’t understand and which you can’t challenge? Without proper safeguards, advanced technologies may affect human rights, undermine the fairness of trials, worsen inequalities and weaken the rule of law. The tools available must be fit for purpose, and not be used unchecked.

“We had a strong impression that these new tools are being used without questioning whether they always produce a justified outcome. Is ‘the computer’ always right? It was different technology, but look at what happened to hundreds of Post Office managers. Government must take control. Legislation to establish clear principles would provide a basis for more detailed regulation. A ‘kitemark’ to certify quality and a register of algorithms used in relevant tools would give confidence to everyone – users and citizens.

“We welcome the advantages AI can bring to our justice system, but not if there is no adequate oversight. Humans must be the ultimate decision-makers, knowing how to question the tools they are using and how to challenge their outcome.”
