By Charles Hymas

Home Affairs Editor (The Daily Telegraph)

https://www.telegraph.co.uk/authors/charles-hymas/

When you enter Paul Wilks’ supermarket in Aylesbury, a facial recognition camera by the door snaps your image and then checks it against a “watchlist” of people previously caught shoplifting or abusing staff.

If you are on it, managers receive a “ping” alert on their mobile phones from Facewatch, the private firm that holds the watchlists of suspects, and you will be asked to leave – or monitored if you decide to walk around the store.

This is not some Big Brother vision of the future but Budgens in Buckinghamshire in Britain 2019.

It is also stark evidence of the way that Artificial Intelligence (AI) technology is spreading without regulation, potentially intruding on our personal privacy.

For Mr Wilks, it has been a success. Since he introduced it at his 3,000 square foot store a year ago, he says shoplifting has fallen from ten to two incidents a week. The thousands of pounds he has saved have more than paid for the technology.

“As well as stopping people, it’s a deterrent. Shoplifters know about it,” says Mr Wilks, who has a prominent poster warning customers they will be subject to facial recognition. “As retailers, we have to find ways to counteract what is going on.”

With the retail sector losing £700 million a year to theft, Facewatch gives store owners a “self-help” solution to the reluctance of police to investigate petty shoplifting.

It is the brainchild of businessman Simon Gordon, whose own London wine bar Gordons was plagued by pickpockets. Using AI technology provided by Intel, a US multinational, he has bold ambitions to have 5,000 high-resolution facial recognition cameras in operation across the UK by 2022.

His firm is close to a deal with a major UK supermarket chain and already has cameras being used or trialled in 100 stores, garages and other retail outlets.

The lenses are mounted by entry doors to catch a full, clean facial image, which is sent to a computer that extracts biometric information and compares it to faces in a database of suspects.

Facewatch says there must be a 99 per cent match before the alert is sent to store staff and, in consultation with the Information Commissioner, Elizabeth Denham, it has introduced privacy safeguards, including the immediate automatic deletion of images of “innocent” faces.
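Mechanically, a watchlist system of this kind reduces to a short loop: turn the captured face into a numeric “embedding”, compare it against the stored embeddings of known suspects, and raise an alert only above a confidence threshold, discarding everything else. The sketch below illustrates that flow in Python; the function names, the cosine-similarity comparison and the 0.99 threshold (standing in for the reported “99 per cent match”) are illustrative assumptions, not Facewatch’s actual implementation.

```python
import numpy as np

MATCH_THRESHOLD = 0.99  # stand-in for the reported "99 per cent match" bar


def embed(face_image):
    """Placeholder for a face-embedding model (typically a neural network)
    that converts a cropped face image into a fixed-length feature vector."""
    raise NotImplementedError("swap in a real embedding model")


def check_face(face_image, watchlist):
    """Compare one captured face against a watchlist of known suspects.

    `watchlist` maps a subject ID to a unit-length reference embedding.
    Returns the matched subject ID, or None -- in which case the caller
    deletes the image at once, mirroring the automatic deletion of
    "innocent" faces described above.
    """
    query = embed(face_image)
    query = query / np.linalg.norm(query)  # normalise for cosine similarity

    best_id, best_score = None, -1.0
    for subject_id, reference in watchlist.items():
        score = float(np.dot(query, reference))  # cosine similarity
        if score > best_score:
            best_id, best_score = subject_id, score

    if best_score >= MATCH_THRESHOLD:
        return best_id  # store managers get the "ping" alert for this ID
    return None         # below threshold: image discarded immediately
```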

“When CCTV first came out 25 or 30 years ago, people thought it was the end of the world, Big Brother and 1984,” says Stuart Greenfield, a Facewatch executive. “Now there are six million cameras in London. People either think they are not working or are there to stop terrorists. No-one really worries about it. Facial recognition is the same. Facebook, Instagram and the airports are using it. It is here to stay but it has to be regulated. Everything needs to be controlled because every technology can be used negatively.”

And there’s the rub. MPs, experts and watchdogs – including the Information Commissioner, Elizabeth Denham, and the biometrics commissioner, Paul Wiles – are concerned that facial recognition technology is becoming established, if not widespread, with little public debate or regulatory scrutiny. They point to critical questions yet to be resolved.

When should facial recognition surveillance be used, in what circumstances and under what conditions? And should consent be required before it is deployed?

Judges in a test case against its use by South Wales Police ruled taking a biometric image of a face is as intrusive as a fingerprint or DNA swab. More significantly, unlike with fingerprints or a swab, people have no choice about whether, where or when their biometric image is snapped.

South Wales Police are thought to have scanned more than 500,000 faces since first deploying facial recognition cameras during the Champions League Final at Cardiff’s Millennium Stadium in June 2017. The Met Police and Leicestershire Police have scanned thousands more in their “trials”.

The test case in South Wales was brought by Ed Bridges, a former LibDem councillor, who noticed the cameras when he went out to buy a lunchtime sandwich. He said taking “biometric information without his consent” was a breach of his privacy when he was acting lawfully in a public place.

The judges, however, ruled use of the technology was “proportionate.” They said it was deployed in an “open and transparent” way, for a limited time to identify particular individuals of “justifiable interest” to the police and with publicity campaigns to alert the public in advance.

However, Mr Wiles, the biometrics commissioner, is not convinced that this test case alone should be taken as sufficient justification for a roll-out of police use of facial recognition. “I am not disagreeing with the South Wales Police judgement. What South Wales Police did was lawful,” he says.

“Some uses of Automated Face Recognition in public places when highly targeted – for example scanning the faces of people going into a football match against watchlists of those known to cause trouble in football matches in the past – that is arguably in the public interest.

“However, scanning everyone walking down the street against a watchlist of people you would like to arrest seems to be a bit more difficult because it gets near mass surveillance. I don’t think in this country we have ever really wanted to see police using mass surveillance. That’s what the Chinese are doing with facial recognition.

“There are some lines between legitimate use to protect people from those who have committed crimes and a rather different use. It is not for me to say where the line is. Nor should it be the police who say where it is. That’s the debate we are not having. I feel it is frustrating that ministers are not talking about it. And before we ask Parliament to decide, we need to have a public debate.”

Cases have already emerged where Mr Wiles’s line appears to have been crossed. Last year, the Trafford Centre in Manchester had to stop using live facial recognition cameras after the Surveillance Camera Commissioner intervened. Up to 15 million people were scanned during the operation.

At Sheffield’s Meadowhall shopping centre, some two million people are thought to have been scanned in secret police trials last year, according to campaign group Big Brother Watch.

The privately owned King’s Cross estate in London has also had to switch off its facial recognition cameras after their use became public. It later emerged that the Met Police had shared images of suspects with the property firm without consent – and apparently without the knowledge of senior officers or the mayor’s office.

Liverpool’s World Museum scanned visitors with facial recognition cameras during its exhibition, “China’s First Emperor and the Terracotta Warriors” in 2018, while Millennium Point conference centre in Birmingham – a scene of demonstrations by trade unionists, football fans and anti-racism campaigners – has deployed it “at the request of law enforcement.”

The Daily Telegraph has revealed that Waltham Forest council in London has trialled facial recognition cameras on its streets without residents’ consent, and that AI-driven facial expression technology is being used for the first time in job interviews to identify the best UK candidates.

As its use widens, one key issue is its reliability and accuracy. The technology’s algorithms are improving and match faces well when there is a high-quality image – as at UK passport control – but are less effective with CCTV images, which do not always give a clear view.

The US National Institute of Standards and Technology, which assesses the ability of algorithms to match a new image to a face in a database, estimates that performance improved 20-fold between 2014 and 2018.

However, it is by no means infallible. In South Wales, police started in 2017 at a Champions League match with a “true positive” rate – where it got an accurate match with a “suspect” on its database – of just three per cent. This rose to 46 per cent when deployed at the Six Nations rugby last year.

Across all events where it was deployed, the facial recognition system generated 2,900 possible matches of suspects, of which only 144 were confirmed as “true positives” by operators; 2,755 were “false positives”, according to the analysis by Cardiff University. Eighteen people were arrested.

The researchers found performance fell as light faded and was less accurate if faces were obscured by clothing, glasses or jewellery. They concluded it should be viewed as “assisted” rather than “automated” facial recognition, as the decision on whether there was a match was a police officer’s.

Professor Peter Fussey, from Essex University, who reviewed the Met Police trials of the technology, said only eight of the 42 “matches” the reviewers saw thrown up by the technology were accurate.

Sixteen were instantly rejected in the van as it was clear they were the “wrong ethnicity, wrong sex or ten years younger,” he said. Four were then lost in the crowd, which left 22 suspects who were approached by a police officer in the street and asked to show their ID or be fingerprinted on a mobile device.

Of these, 14 were inaccurate and just eight were correct.

In the febrile world of facial recognition, how you measure success is a source of debate. It could be argued the system is more than 90 per cent accurate, given the cameras scan thousands of faces to pick out just 42 candidates. Or you can measure success by the ratio of “false positives” to accurate matches.
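The two framings give wildly different numbers even on the same data. A back-of-the-envelope calculation using the South Wales figures quoted above (roughly 2,900 alerts, 144 confirmed, against an estimated 500,000 faces scanned) shows how “over 90 per cent accuracy” and a roughly one-in-twenty hit rate can both be true at once:

```python
# Two ways to score the same deployment, using the South Wales figures
# reported above. These are rough, illustrative numbers, not an official
# evaluation, and the first framing ignores any missed suspects
# (false negatives) among the people who were never flagged.

faces_scanned = 500_000        # estimated faces scanned overall
alerts = 2_900                 # possible matches flagged by the system
true_positives = 144           # alerts confirmed correct by operators
false_positives = alerts - true_positives  # ~2,755 as reported

# Framing 1: "accuracy" over everyone scanned. Nearly every passer-by is
# correctly left alone, so this figure is always close to 100 per cent.
accuracy = (faces_scanned - false_positives) / faces_scanned
print(f"Accuracy over all scans: {accuracy:.1%}")   # ~99.4%

# Framing 2: precision of the alerts. Of the people actually flagged,
# how many were genuine matches?
precision = true_positives / alerts
print(f"Precision of alerts:     {precision:.1%}")  # ~5.0%
```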

On human rights grounds, Professor Fussey said his concern was consent and legitimacy. In Berlin and Nice, trials have been conducted with volunteers who signed up to act as “members of the public” to test the facial recognition technology.

By contrast, in the Met police trial in Romford, he saw one young man targeted after being seen to avoid the facial recognition cameras which were signposted as in operation. “He was stopped and searched and had a small quantity of cannabis on him and was arrested,” he said.

He may have acted suspiciously by trying to avoid the camera but he was not a suspect on any “watch list,” nor had he consented to take part in the “trial.”

“For me, one of the big issues is around the definition of ‘wanted’,” said Professor Fussey. “There is ‘wanted by the courts’ where someone has absconded and there is judicial oversight. Then there is ‘wanted by the police’ which is wanted for questioning or wanted on the basis of suspicion.”

South Wales Police were careful to prepare four watchlists: red – those who posed a risk to public safety; amber – offenders with previous convictions; green – those whose presence did not pose any risk; and blue – police officers’ faces, used to test the system.

However, human rights campaigners point to police databases that hold some 21 million custody images, including those of innocent people who were arrested or held in custody but never convicted.

The courts have ruled that retaining such images is unlawful, but so great is the task of reviewing and deleting them that progress has stalled.

So concerned is the Commons Science and Technology Committee that, in its recent report on facial recognition, it called for a moratorium on the technology’s use until rules on its deployment are agreed.

Professor Wiles sums it up:

“There are some uses that are not in the public interest. What that raises is who should make that decision about those uses. The one thing I am clear about is that the people who want to use facial recognition technology should not be the people who make that decision. It ought to be decided by a body that represents the public interest and the most obvious one is Parliament.

“There should be governance backed by legislation. Parliament should decide, yes, this is in the public interest provided these conditions are met. We have done that with DNA. Parliament said it’s in the public interest that the police can derive profiles of individuals from DNA but it’s not in the public interest that police could keep the samples. You can tell a lot more about a person from a sample.

“It’s important because if you get it wrong, the police will lose public trust and in Britain, we have a historic tradition of policing by consent.”

Ms Denham, the Information Commissioner, is to publish the results of her investigation into facial recognition, which she says should only be deployed where there is “demonstrable evidence” that it is “necessary, proportionate and effective.”

She has demanded that police and other organisations using facial recognition technology ensure safeguards are in place, including assessments of how it will affect people before each deployment and a clear public policy on why it is being used.

“There remain significant privacy and data protection issues that must be addressed and I remain deeply concerned about the rollout of this technology,” she says.

27 August 2019

Facial recognition technology burst into the headlines this month following an exposé in the Financial Times about its use in London’s King’s Cross.

The Information Commissioner’s Office has launched an investigation into the use of the technology, which scanned pedestrians’ faces across the 67-acre site comprising King’s Cross and St Pancras stations and nearby shopping areas, without their knowledge.

It is the latest controversy to embroil the technology. Manchester’s Trafford Centre was ordered to stop using it by the Surveillance Camera Commissioner, who is appointed by the Home Secretary.

Information commissioner Elizabeth Denham said after details of the King’s Cross scheme emerged that she was “deeply concerned about the growing use of facial recognition technology in public spaces”.

“Scanning people’s faces as they lawfully go about their daily lives in order to identify them is a potential threat to privacy that should concern us all,” she maintained.

“That is especially the case if it is done without people’s knowledge or understanding. My office and the judiciary are both independently considering the legal issues and whether the current framework has kept pace with emerging technologies and people’s expectations about how their most sensitive personal data is used.”

The European Commission is also understood to be planning new regulation that will give EU citizens explicit rights over the use of their facial recognition data as part of an update of artificial intelligence laws.

What’s it for?

So what does that mean for retailers that are either already deploying or are considering a roll-out of facial recognition technology in their stores?

Given the level of concern and scrutiny from regulators and public alike about how such technology is used, can retailers deploy it in a way that adds value to their business and without risking alienating customers?

Innovation agency Somo’s senior vice-president of product management Tim Johnson says: “There’s a very wide range of things [facial recognition] could potentially be used for. It is a very significant technology and a really seamless process that provides a strong form of identification, so it is undeniably big news.

“But at the moment it is a big muddle in terms of what it is for, whether it is useful or too risky and in what ways. We’ll look back on where we are now as an early stage of this technology.”

One area where facial recognition technology has been piloted by retailers is in-store to crack down on shoplifting and staff harassment.

According to the British Retail Consortium (BRC), customer theft cost UK retailers £700m last year, up 31% year on year, while 70% of retail staff surveyed described the police response to retail crime as poor or very poor.

Against that backdrop, retailers such as Budgens have rolled out tech from facial recognition provider Facewatch to stores across the South and Southeast, after a trial in an Aylesbury shop resulted in a 25% crime reduction.

Facewatch marketing and communications director Stuart Greenfield explains that clear signage is displayed throughout any store where the platform’s technology is used, and any data is held in Facewatch’s own cloud platform, not by the retailers.

“The only information held is on those who are known to have already committed a crime in the store previously; anyone whose face is scanned by the system and does not correspond to our existing watchlist is deleted immediately,” says Greenfield.

He believes it is the “combination of marketing, in-store signage and the system itself” which acts as a deterrent to shoplifting and staff harassment in stores where Facewatch’s technology is used.

Shopping centre operator Westfield has teamed up with digital signage firm Quividi, whose technology analyses the faces of passers-by to estimate their age, gender and mood and determine which adverts are displayed, as a means of driving customer engagement and sales. Shoe specialist Aldo and jeweller Pandora also work with Quividi overseas.

Quividi chief marketing officer Denis Gaumondie argues that the platform’s technology is not facial recognition – rather it is facial analysis, because it does not store any data on passersby and would therefore not recognise a repeat customer, or link their data to purchases.

He adds that it is the responsibility of Quividi’s retail partners to inform shoppers that the technology is in use.

Hot potato

However, DWF partner Ben McLeod, who specialises in commercial and technology law, says even using facial recognition or analysis technology in-store as described above could land retailers in hot water.

“There is a general prohibition on processing special category data [which may, for instance, include racial or ethnic origin] unless a specific exception applies,” he points out. “Many of the exceptions relate to the public interest which doesn’t really apply to retailers, particularly where the primary purpose for the use of the technology is marketing or to prevent stock loss.”

“Processing is possible where the data subject [the customer] has given explicit consent, but in practice, this will be difficult to demonstrate, as merely alerting customers to the use of facial recognition technology will not suffice.”

“Given that the basis on which the police are using surveillance technology is also currently subject to legal challenge, retailers are advised to tread carefully,” he cautions.

Opting in

Facial recognition has also been tried out by the Co-op to verify purchases of age-restricted products such as alcohol at self-service checkouts. Customers judged to be over 30 were allowed to complete the purchase without the need for verification by a member of staff.

Johnson believes such uses of facial recognition technology – including verification of a whole basket of purchases using biometric data – would be welcomed by many customers because, as in the Co-op trial, they would require shoppers’ specific consent.

“People are comfortable with using facial identification on their own device [such as Apple’s Face ID], so using it as a means of verifying purchases in-store feels like a logical next step. It would speed up the check-out experience.”
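In code, a Co-op-style age gate is just a conservative decision rule on top of an age estimate: only shoppers judged comfortably over the legal age skip the staff check, and everyone else falls back to a human. The sketch below is a hypothetical illustration; the `estimate_age` model is a placeholder, and the threshold of 30 (a buffer above the legal age of 18) follows the trial as described, not the Co-op’s actual system.

```python
AUTO_APPROVE_AGE = 30  # buffer well above the legal age of 18, as in the trial


def estimate_age(face_image):
    """Placeholder for an age-estimation model returning age in years."""
    raise NotImplementedError("swap in a real age-estimation model")


def age_restricted_checkout(face_image):
    """Decide how an age-restricted self-checkout purchase proceeds.

    Only shoppers judged comfortably over age complete the purchase
    unaided; borderline or younger-looking customers are referred to
    staff, so an uncertain estimate never approves a sale on its own.
    """
    if estimate_age(face_image) >= AUTO_APPROVE_AGE:
        return "approved"          # no staff verification needed
    return "refer_to_staff"        # a human checks ID, as today
```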

Capgemini principal consultant Bhavesh Unadkat also points to the roll-out of Amazon Go stores in the US, which verify shoppers’ purchases and link them to their Amazon account using biometric data including facial recognition technology.

He explains that shoppers who download the Amazon Go app and then go into one of the checkout-free stores understand what technology is being used, and how it is benefiting them by providing an efficient shopping experience. The trade-off is clear and there is an “opt-in” to use the technology.

“I don’t think [retailers] can ask customers to opt out of facial recognition technology being used in-store, or just alert them to it being there,” he says.

“They need to ask shoppers to opt in and sell them the benefits they would get, such as a cashless checkout, more rewards, personalised offers to your mobile as you enter the store. Don’t go down the route of assuming people will never opt in and not communicating effectively, because if you get it wrong then the trust is broken.

“Right now we are making a mess of [facial recognition technology] because people are already paranoid about sharing information online and now feel like they are being victimised in a bricks-and-mortar environment as well.”

McLeod concurs with that view.

He says: “Amazon Go is the kind of thing where people are making a choice upfront by downloading the app. That is different from walking into a shopping centre or having the technology foisted upon you in a way that isn’t transparent.

“It becomes far more pervasive in that setting, but the more fundamental issue is there isn’t a strong legal grounding for the use of the technology.”

Right side of the law

Greenfield emphasises that Facewatch is working with the ICO to ensure its technology remains compliant with current and incoming regulations.

“We are pushing like mad for legislation as quickly as possible,” he says. “We want to do everything that is good for the technology because the reality is we cannot put the genie back in the bottle; [facial recognition] is out there and it will be used by someone, so we should have legislation to ensure it is used properly.”

Johnson advises retailers to collaborate closely with engaged suppliers and legislators, and tread carefully when deploying facial recognition technology, but does not believe that current controversies should deter retailers from using it for good.

He says: “I absolutely think [retailers] should still be exploring it. The current environment should make them fully aware of the risks, but it isn’t going away and the potential rewards are large, from crime prevention to age verification and flagging relevant products to customers.

“We’ll hopefully see a period of innovation which shows people what [facial recognition] is useful for.”

As the world’s first legal challenge to the controversial technology concluded, two leading judges dismissed the case brought by human rights campaign group Liberty on behalf of Ed Bridges, a Cardiff resident whose face was scanned by South Wales Police during a trial of facial recognition.

Lord Justice Haddon-Cave, sitting with Mr Justice Swift, concluded that South Wales Police’s use of live facial recognition “met the requirements of the Human Rights Act”.

In a three-day hearing in May, Mr Bridges’ lawyers had argued that South Wales Police violated his human right to privacy by capturing and processing an image taken of him in public.

The judges also ruled that existing data protection law offered sufficient safeguards for members of the public whose faces were scanned by facial recognition cameras, and that South Wales Police had considered the implications.

Liberty lawyer Megan Goulding said: “This disappointing judgment does not reflect the very serious threat that facial recognition poses to our rights and freedoms.

“Facial recognition is a highly intrusive surveillance technology that allows the police to monitor and track us all.

“It is time that the government recognised the danger this dystopian technology presents to our democratic values and banned its use. Facial recognition has no place on our streets.”

A spokesperson for the Information Commissioner’s Office said: “We will be reviewing the judgment carefully.

“We welcome the court’s finding that the police use of Live Facial Recognition (LFR) systems involves the processing of sensitive personal data of members of the public, requiring compliance with the Data Protection Act 2018.

“Any police forces or private organisations using these systems should be aware that existing data protection law and guidance still apply.”

Sajid Javid, the Home Secretary, has backed trials of face recognition by the Metropolitan Police. The trials will be used to test AFR (automatic facial recognition) to help in the fight against child abuse.

Speaking at the launch of new computer technology aimed at helping police fight online child abuse, Mr Javid said it was right for forces “to be on top of the latest technology”.

He added:

“I back the police in looking at technology and trialling it and… different types of facial recognition technology is being trialled especially by the Met at the moment and I think it’s right they look at that.”

“I actually believe facial recognition technology, properly overseen, properly thought about, properly circumscribed, is something that our public would expect us to be doing”