The Daily Telegraph

By Charles Hymas, Home Affairs Editor
https://www.telegraph.co.uk/authors/charles-hymas/


When you enter Paul Wilks’ supermarket in Aylesbury, a facial recognition camera by the door snaps your image and then checks it against a “watchlist” of people previously caught shoplifting or abusing staff.

If you are on it, managers receive a “ping” alert on their mobile phones from Facewatch, the private firm that holds the watchlists of suspects, and you will either be asked to leave or be monitored if you decide to walk around the store.

This is not some Big Brother vision of the future but Budgens in Buckinghamshire in Britain 2019.

It is also stark evidence of the way that Artificial Intelligence (AI) technology is spreading without regulation, potentially intruding on our personal privacy.

For Mr Wilks, it has been a success. Since he introduced it at his 3,000 square foot store a year ago, he says shoplifting has fallen from ten incidents a week to two. The thousands of pounds he has saved have more than paid for the technology.

“As well as stopping people, it’s a deterrent. Shoplifters know about it,” says Mr Wilks, who has a prominent poster warning customers they will be subject to facial recognition. “As retailers, we have to find ways to counteract what is going on.”

As the retail sector loses £700 million a year to theft, Facewatch gives store owners a “self-help” solution to the reluctance of police to investigate petty shoplifting.

It is the brainchild of businessman Simon Gordon, whose own London wine bar, Gordon’s, was plagued by pickpockets. Using AI technology provided by Intel, a US multinational, he has bold ambitions to have 5,000 high-resolution facial recognition cameras in operation across the UK by 2022.

His firm is close to a deal with a major UK supermarket chain and already has cameras being used or trialled in 100 stores, garages and other retail outlets.

The lenses are mounted by entry doors to capture a full, clear facial image, which is sent to a computer that extracts biometric information and compares it to faces in a database of suspects.

Facewatch says there must be a 99 per cent match before an alert is sent to store staff, and, in consultation with the Information Commissioner, Elizabeth Denham, it has introduced privacy safeguards including the immediate automatic deletion of images of “innocent” faces.
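As a rough illustration of how such a system can work in principle – Facewatch’s actual implementation is not public, so the embedding model, the interpretation of the 99 per cent threshold and all names below are assumptions for the example – a watchlist check boils down to comparing a numerical summary of the new face against stored ones and alerting only above a similarity threshold:

```python
# Illustrative sketch only, not Facewatch's actual system: a generic
# watchlist check that compares a face "embedding" (a numerical summary
# of a face) against stored embeddings and alerts above a threshold.
import numpy as np

MATCH_THRESHOLD = 0.99  # echoes the "99 per cent match" figure quoted above


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embeddings, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def check_against_watchlist(face, watchlist):
    """Return the matching entry ID, or None if the face is not on the list.

    A None result is where the "innocent" image would be deleted
    immediately, mirroring the safeguard described in the article.
    """
    for entry_id, stored in watchlist.items():
        if cosine_similarity(face, stored) >= MATCH_THRESHOLD:
            return entry_id  # this is where staff would get a "ping" alert
    return None


# Random vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
watchlist = {"suspect-001": rng.normal(size=128)}
shopper = rng.normal(size=128)
print(check_against_watchlist(shopper, watchlist))  # almost certainly None
```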

“When CCTV first came out 25 or 30 years ago, people thought it was the end of the world, Big Brother and 1984,” says Stuart Greenfield, a Facewatch executive. “Now there are six million cameras in London. People either think they are not working or are there to stop terrorists. No-one really worries about it. Facial recognition is the same. Facebook, Instagram and the airports are using it. It is here to stay but it has to be regulated. Everything needs to be controlled because every technology can be used negatively.”

And there’s the rub. MPs, experts and watchdogs, such as the Information Commissioner, Elizabeth Denham, and the biometrics commissioner, Paul Wiles, are concerned that facial recognition technology is becoming established, if not widespread, with little public debate or regulatory scrutiny. They point to critical questions yet to be resolved.

When should facial recognition surveillance be used, in what circumstances and under what conditions? And should consent be required before it is deployed?

Judges in a test case against its use by South Wales Police ruled taking a biometric image of a face is as intrusive as a fingerprint or DNA swab. More significantly, unlike with fingerprints or a swab, people have no choice about whether, where or when their biometric image is snapped.

South Wales Police are thought to have scanned more than 500,000 faces since first deploying facial recognition cameras during the Champions League Final at Cardiff’s Millennium Stadium in June 2017. The Met Police and Leicestershire Police have scanned thousands more in their “trials.”

The test case in South Wales was brought by Ed Bridges, a former LibDem councillor, who noticed the cameras when he went out to buy a lunchtime sandwich. He said taking “biometric information without his consent” was a breach of his privacy when he was acting lawfully in a public place.

The judges, however, ruled use of the technology was “proportionate.” They said it was deployed in an “open and transparent” way, for a limited time to identify particular individuals of “justifiable interest” to the police and with publicity campaigns to alert the public in advance.

However, Mr Wiles, the biometrics commissioner, is not convinced that this test case alone should be taken as sufficient justification for a roll-out of police use of facial recognition. “I am not disagreeing with the South Wales Police judgement. What South Wales Police did was lawful,” he says.

“Some uses of Automated Face Recognition in public places when highly targeted – for example scanning the faces of people going into a football match against watchlists of those known to cause trouble in football matches in the past – that is arguably in the public interest.

“However, scanning everyone walking down the street against a watchlist of people you would like to arrest seems to be a bit more difficult because it gets near mass surveillance. I don’t think in this country we have ever really wanted to see police using mass surveillance. That’s what the Chinese are doing with facial recognition. There are some lines between legitimate use to protect people who have committed crimes against a rather different use. It is not for me to say where the line is. Nor should it be the police who say where it is. That’s the debate we are not having. I feel it is frustrating that ministers are not talking about it. And before we ask Parliament to decide, we need to have a public debate.”

Cases have already emerged where Mr Wiles’s line appears to have been crossed. Last year, the Trafford Centre in Manchester had to stop using live facial recognition cameras after the Surveillance Camera Commissioner intervened. Up to 15 million people were scanned during the operation.

At Sheffield’s Meadowhall shopping centre, some two million people are thought to have been scanned in secret police trials last year, according to campaign group Big Brother Watch.

The privately owned King’s Cross estate in London has also had to switch off its facial recognition cameras after their use became public. It later emerged the Met Police had shared images of suspects with the property firm without anyone’s consent and apparently without the knowledge of senior officers or the mayor’s office.

Liverpool’s World Museum scanned visitors with facial recognition cameras during its exhibition, “China’s First Emperor and the Terracotta Warriors” in 2018, while Millennium Point conference centre in Birmingham – a scene of demonstrations by trade unionists, football fans and anti-racism campaigners – has deployed it “at the request of law enforcement.”

The Daily Telegraph has revealed that Waltham Forest council in London has trialled facial recognition cameras on its streets without residents’ consent, and that AI and facial expression technology is being used for the first time in job interviews to identify the best UK candidates.

As its use widens, one key issue is its reliability and accuracy. The technology’s algorithms are improving at matching faces and perform well when there is a high-quality image – as at UK passport control – but are less effective with CCTV images, which do not always give a clear view.

The US National Institute of Standards and Technology, which assesses the ability of algorithms to match a new image to a face in a database, estimates that accuracy improved 20-fold between 2014 and 2018.

However, it is by no means infallible. In South Wales, police started in 2017 at a Champions League match with a “true positive” rate – where the system got an accurate match with a “suspect” on its database – of just three per cent. This rose to 46 per cent when it was deployed at the Six Nations rugby last year.

Across all events where it was deployed, the facial recognition system generated 2,900 possible matches of suspects, of which only 144 were confirmed “true positives” by operators; 2,755 were “false positives,” according to the analysis by Cardiff University. Eighteen people were arrested.

The researchers found performance fell as light faded and was less accurate if faces were obscured by clothing, glasses or jewellery. They concluded it should be viewed as “assisted” rather than “automated” facial recognition, as the decision on whether there was a match was a police officer’s.

Professor Peter Fussey, from Essex University, who reviewed the Met Police trials of the technology, said only eight of the 42 “matches” that they saw thrown up by the technology were accurate.

Sixteen were instantly rejected in the van as it was clear they were the “wrong ethnicity, wrong sex or ten years younger,” he said. Four were then lost in the crowd, which left 22 suspects who were approached by a police officer in the street and asked to show their ID or be fingerprinted on a mobile device.

Of these, 14 were inaccurate and just eight were correct.

In the febrile world of facial recognition, how you measure success is a source of debate. It could be argued that accuracy is well above 90 per cent, given the cameras scan thousands of faces to pick out the 42. Or you can measure it according to the ratio of “false positives” to accurate matches.
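To make the arithmetic behind those two framings concrete, here is a minimal sketch using the Met figures quoted above (42 alerts, 8 confirmed correct). The total of 10,000 faces scanned is an assumed round number for illustration, not a reported figure, and the calculation ignores false negatives – wanted faces the system failed to flag:

```python
# Two framings of the same Met trial figures (42 alerts, 8 correct).
# faces_scanned is an assumed illustrative total, not a reported figure,
# and false negatives (wanted faces the system missed) are ignored here.
faces_scanned = 10_000
alerts = 42
true_positives = 8
false_positives = alerts - true_positives  # 34 people wrongly flagged

# Framing 1: accuracy across everyone scanned. Almost every passer-by is
# correctly left alone, so the headline figure looks very high.
overall_accuracy = (faces_scanned - false_positives) / faces_scanned
print(f"Accuracy over all faces scanned: {overall_accuracy:.1%}")  # 99.7%

# Framing 2: how often an alert is actually right.
precision = true_positives / alerts
print(f"Share of alerts that were correct: {precision:.1%}")  # 19.0%
```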

On human rights grounds, Professor Fussey said his concern was consent and legitimacy. In Berlin and Nice, trials have been conducted with volunteers who signed up to act as “members of the public” to test the facial recognition technology.

By contrast, in the Met Police trial in Romford, he saw one young man targeted after being seen to avoid the facial recognition cameras, which were signposted as being in operation. “He was stopped and searched and had a small quantity of cannabis on him and was arrested,” he said.

He may have acted suspiciously by trying to avoid the camera but he was not a suspect on any “watch list,” nor had he consented to take part in the “trial.”

“For me, one of the big issues is around the definition of ‘wanted’,” said Professor Fussey. “There is ‘wanted by the courts’ where someone has absconded and there is judicial oversight. Then there is ‘wanted by the police’ which is wanted for questioning or wanted on the basis of suspicion.”

South Wales Police was careful to prepare four watchlists: red, those who posed a risk to public safety; amber, offenders with previous convictions; green, those whose presence did not pose any risk; and blue, police officers’ faces used to test the system.

However, human rights campaigners cite as an example police databases that hold the images of 21 million innocent people who have been arrested or in custody but never convicted.

It has been ruled such databases are illegal but so great is the task of processing and deleting them that progress on doing so has stalled.

So concerned is the Commons Science and Technology Committee that, in its recent report on facial recognition, it called for a moratorium on the technology’s use until rules on its deployment are agreed.

Professor Wiles sums it up:

“There are some uses that are not in the public interest. What that raises is who should make that decision about those uses. The one thing I am clear about is that the people who want to use facial recognition technology should not be the people who make that decision. It ought to be decided by a body that represents the public interest and the most obvious one is Parliament.

“There should be governance backed by legislation. Parliament should decide, yes, this is in the public interest provided these conditions are met. We have done that with DNA. Parliament said it’s in the public interest that the police can derive profiles of individuals from DNA but it’s not in the public interest that police could keep the samples. You can tell a lot more about a person from a sample.

“It’s important because if you get it wrong, the police will lose public trust and in Britain, we have a historic tradition of policing by consent.”

Ms Denham, the Information Commissioner, is to publish the results of her investigation into facial recognition, which she says should only be deployed where there is “demonstrable evidence” that it is “necessary, proportionate and effective.”

She has demanded that police and other organisations using facial recognition ensure safeguards are in place, including assessments of how it will affect people before each deployment and a clear public policy on why it is being used.

“There remain significant privacy and data protection issues that must be addressed and I remain deeply concerned about the rollout of this technology,” she says.