Discrimination by design: The ramifications of Facial Recognition Technology

Facial recognition is all around us. Many of us instinctively log in to our iPhones by glancing at the camera rather than trying to remember a PIN, and some of us are accustomed to walking through airport security without having to get our passports out. But the sinister ramifications of this technology are becoming more and more apparent. Police deployment of facial recognition is rapidly increasing, and we know it fits within a broader web of heightened state surveillance.

It is already well-known that London is home to more CCTV cameras per person than any city outside of China. If that is not harrowing enough, new technology is being used to intrude even further.

Automatic Facial Recognition (AFR) technology allows the user not just to record passers-by, but also to analyse their faces by capturing their biometric data. Capturing this sensitive information is the equivalent of taking something as identifiable as a fingerprint.

UK police have been quietly rolling out this technology for almost five years. South Wales Police were recently found to have breached human rights when they used AFR in Cardiff. Meanwhile, the Metropolitan Police have been using a system called NEC NeoFace to record and analyse the faces of members of the public at events such as Notting Hill Carnival since 2016. There is now a real worry that they are using the same technology at protests, in an effort both to surveil and to suppress.

The particular concern in respect of the use of facial recognition at the Black Lives Matter protests is that this technology is widely acknowledged to be discriminatory by design. Research by Black academics Joy Buolamwini, Deb Raji and Timnit Gebru found that some facial recognition algorithms misclassified Black women nearly 35 per cent of the time, while nearly always correctly identifying white men. Just last year, in the UK, documents from the police, Home Office and university researchers showed that whilst the police are aware that ethnicity can affect search accuracy within facial recognition systems, they have failed on several occasions to test for this.

The implications of this are severe. Firstly, the deployment and targeting of this technology is often racist, given that the police tend to use facial recognition against communities who are already disproportionately monitored by the state. Secondly, the technology itself is inherently racist, given that it is more likely to misidentify Black, brown and other racialised groups. Therefore, whilst its use at the Black Lives Matter protests perhaps comes as no surprise, it is crucial we all arm ourselves with a better understanding of how this technology works, how we can challenge it and why we need to support calls for a total ban.

What is AFR?

How does AFR work?

The Metropolitan Police use NEC's NeoFace Live Recognition (South Wales Police were using AFR Locate). Both technologies work in the same basic way.

They start with a “watchlist”. This is a database of images which the police already hold.

They then analyse these images and pick out particular “biometrics”. These are basically measurements of facial features, such as the width of a person’s eyes, or the height of their forehead.

Then, when they bring an AFR camera out in public, the camera scans the crowd and takes the biometric data of every single person who walks past. Each time it finds a face, it isolates it and analyses its biometrics (the same measurements as before). When enough of those biometrics match the biometrics of a face on the watchlist, the face is flagged to the police as an ‘alert’.
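To make those steps concrete, here is a minimal sketch of the matching stage in Python. This is not the NEC NeoFace or AFR Locate implementation: the vectors, the similarity measure and the alert threshold are all illustrative assumptions, and real systems extract their biometric measurements with proprietary face-detection and feature-extraction software.

```python
import numpy as np

# Minimal sketch of watchlist matching. The vectors, similarity measure and
# threshold below are illustrative assumptions, not NeoFace or AFR Locate parameters.

ALERT_THRESHOLD = 0.99  # assumed similarity cut-off for raising an alert


def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two biometric vectors (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def scan_crowd(crowd_biometrics: dict, watchlist: dict) -> list:
    """Compare every face seen by the camera against every watchlist entry
    and raise an 'alert' for each pair whose similarity clears the threshold."""
    alerts = []
    for face_id, probe in crowd_biometrics.items():
        for person_id, enrolled in watchlist.items():
            score = similarity(probe, enrolled)
            if score >= ALERT_THRESHOLD:
                alerts.append((face_id, person_id, score))
    return alerts


# Toy example: a two-person watchlist and three passers-by.
watchlist = {"wanted_A": np.array([0.61, 0.25, 0.74]),
             "wanted_B": np.array([0.10, 0.90, 0.40])}
crowd = {"passerby_1": np.array([0.60, 0.26, 0.75]),  # close to wanted_A -> alert
         "passerby_2": np.array([0.85, 0.10, 0.52]),  # matches no one
         "passerby_3": np.array([0.12, 0.88, 0.41])}  # close to wanted_B -> alert

print(scan_crowd(crowd, watchlist))
```

The key point is that every single passer-by is measured and compared, whether or not they are of any interest to the police; the threshold only decides who triggers an alert, not whose biometrics are captured.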

What is the difference between ‘live’ and non-‘live’ AFR?

The recent change in police technology is the move to live AFR. This technology allows the cameras that take an image to analyse it and compare it to the watchlist instantly, giving the police an alert while the individual is still right in front of them.

It is worth noting that non-live AFR, known as retrospective AFR, has existed for quite some time. That technology allows the police to compare pictures on a watchlist with images after they have been captured, rather than in real time.

Although most of the current debate focuses on live AFR, many of the same issues arise with retrospective AFR.

What pictures are on the ‘watchlist’?

The faces on the ‘watchlist’ are one of the major concerns. Clearview AI got in trouble in America for scraping images from social media to build its own image library, and the NYPD were accused of using Instagram photos when they tracked BLM protester Derrick Ingram. This means that someone using this technology could identify you in a crowd simply because you had a profile picture on Facebook. As far as we know, AFR Locate and NEC NeoFace do not use social media images.

However, the police do have wide discretion over whose faces they can include.

You would expect the ‘watchlist’ to include known criminals who the police were actively looking for. However, when South Wales Police were using AFR Locate, their watchlist included “persons wanted on suspicion for an offence, wanted on warrant, vulnerable persons and other persons where intelligence is required”. This last category is really worrying, because we do not know how the police decide that intelligence is required, or where they get those people’s images from.

So, essentially, there is no clear guidance on whose pictures captured images will be compared to. The existing policies give the police wide discretion over whom they can choose. We also know that they can, in theory, source images from anywhere on the internet.

How accurate is AFR?

The police have continued to claim that AFR is very accurate. As we have highlighted above, though, this is far from the truth.

NEC NeoFace claim that their accuracy rate is 99.2% in a perfect environment (e.g. an airport passenger gate when the individual stands still and stares at the camera), and 85.5% in more difficult environments (e.g. a crowd in an indoor stadium).

However, when false positives are considered, an analysis of the Met Police’s NeoFace technology shows that it is inaccurate 96% of the time. For example, when the Met trialled the technology at Notting Hill Carnival in 2017 it flagged up 96 people to be stopped, only one of whom turned out to be a correct identification. In February 2020 they also trialled it at Oxford Circus: it flagged up 8 people, 6 of whom were actively stopped in the street and questioned, and in the end they found 1 correct identification.

It is not the case that AFR allows the police to accurately pick the correct, and only the correct, people out of a crowd. Innocent people regularly seem to be caught up in the net. On the current figures, NeoFace catches far more innocent people than it makes correct identifications.
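The gap between a headline accuracy figure and a stream of wrong alerts is largely a base-rate effect: almost everyone who walks past the camera is not on the watchlist, so even a small per-face error rate generates far more false alerts than true ones. The short sketch below, in Python, illustrates this with made-up numbers (the crowd size, watchlist size and error rates are illustrative assumptions, not figures published by the Met or NEC) and then repeats the arithmetic for the Notting Hill trial quoted above.

```python
# Illustrative base-rate calculation. The crowd size, watchlist size and error
# rates are assumptions for illustration, not figures from the Met or NEC.

crowd_size = 10_000          # faces scanned during a deployment (assumed)
people_on_watchlist = 5      # of those, genuinely on the watchlist (assumed)
true_match_rate = 0.85       # chance a watchlisted face is correctly flagged (assumed)
false_match_rate = 0.001     # chance an ordinary passer-by is wrongly flagged (assumed)

true_alerts = people_on_watchlist * true_match_rate
false_alerts = (crowd_size - people_on_watchlist) * false_match_rate
share_wrong = false_alerts / (true_alerts + false_alerts)

print(f"True alerts:  {true_alerts:.1f}")                     # 4.2
print(f"False alerts: {false_alerts:.1f}")                    # 10.0
print(f"Share of alerts that are wrong: {share_wrong:.0%}")   # about 70%

# For comparison, the Notting Hill Carnival 2017 trial described above:
# 96 people flagged, 1 correct identification.
print(f"Notting Hill 2017 share of alerts that were wrong: {(96 - 1) / 96:.0%}")  # 99%
```

Even if the per-face error rates improve, the alerts will continue to be dominated by false matches whenever genuine watchlist hits are rare in the crowd, which is exactly the situation at a carnival or a protest.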

Nonetheless, even if the technology becomes more accurate over time, a number of concerning human rights issues remain (see below).

What are the human rights issues with AFR?

Right to Privacy
The law

From a legal perspective, the right to privacy is protected by the Human Rights Act 1998 and Article 8 of the European Convention on Human Rights. This states that all people have the right to a private life. It is a qualified right, which means any infringement on someone’s privacy must be necessary and in line with Article 8(2). AFR unlawfully infringes our right to privacy because it uses very personal data. Your biometric measurements are like a fingerprint and can enable access to any information connected to you. This could enable intrusive monitoring of our actions on an unprecedented scale.

In practice

The real-life application of this is clear. Complete surveillance is the hallmark of a totalitarian state. In Kashgar, a city in China, an extensive surveillance system is used to keep the Uighur population in check. It is difficult for people to oppose a leader when everything they do and say is monitored.

But even for those who do not feel they have anything to fear from the authorities, having a private life is important. For one thing, our behaviour changes when we are being watched; there is a legitimate comfort in privacy. Related to this is the fear of where that data might go. Hackers regularly raid apparently secure databases held by large companies and the state. What if that information also included pictures of you and your kids at the park? The sale of data is already its own industry (including state authorities selling data to private companies). Once your data is captured and recorded, it could very easily end up in the wrong hands.

Protection against discrimination
The law

The Human Rights Act, the European Convention on Human Rights, and the Equality Act 2010 all protect citizens from discrimination. This means that no one, including the state, should treat you differently simply because of your race, age, gender, or any of a number of other ‘protected characteristics’. However, there is overwhelming evidence that current forms of AFR show racial and gender bias. Repeated studies have shown that AFR systems are far worse at distinguishing the faces of Black people and of women than they are at distinguishing the faces of white men. This means that a Black woman who passes an AFR camera is not treated the same as a white man.

In practice

The effect of this is that Black people and women are far more likely to be caught in the net of false positives. Entirely innocent Black people and women are therefore more likely to be stopped and questioned by the police even when they are not connected to any crime. There have already been high-profile cases, including a Black man in America who was arrested in front of his family and questioned at the police station for a crime he never committed, all because a facial recognition system mixed him up with another Black man. This simply reflects existing human biases. In a country in which Black people are already more likely to be stopped and searched, more likely to be convicted and liable to receive harsher punishments for the same crimes, an additional racially oppressive system must be challenged.

Right to protest
The law

The right to protest is protected in Articles 10 and 11 of the European Convention on Human Rights and the Human Rights Act. It allows people to express, explore and debate political and social ideals, and is a vital part of any democratic system. However, it relies on people feeling free to do so. AFR threatens this right, because it creates the potential for everyone in the crowd to be personally identified and their part in the protest tied to their personal life.

In practice

Not everyone who goes on a protest is expected to live and die for that cause. When Extinction Rebellion protesters were arrested, there were many more who protested and believed in the cause but were not willing or able to go that far. The use of AFR at protests threatens to force people into that position. When your face can be instantly scanned and recognised, and all your information called up alongside it, people can be intimidated out of getting involved.

Are the police allowed to use AFR?

Legislation

Facial recognition is a relatively new technology and governments around the world have been slow to catch up. That means that the police are currently left to run pretty much unchecked.

Although there is no legislation specifically governing the use of facial recognition technology, there is other legislation that should help us challenge its deployment.

Firstly, the relevant human rights legislation discussed above clearly shows how the use of AFR infringes fundamental civil liberties. The police must justify its use and demonstrate that any interference with our privacy rights is proportionate and necessary. As the recent case brought by Liberty and Ed Bridges showed, they have not done this.

Secondly, because this technology involves analysing and recording our biometric data, it falls under the Data Protection Act 2018, which restricts what the police can and cannot do with personal data. Whether the police are complying with this legislation in respect of data collected through facial recognition deployments remains to be seen.

Whilst knowing what laws can currently protect us against this technology is important, legal regulation is not the solution. This technology is so inherently intrusive and discriminatory that we should, at every stage, be calling for a total ban on its use.

Policies

Each police force which wants to use AFR is required to develop their own policy. This should give a transparent explanation as to how they will use the technology and what limits they are putting in place. Please look up your local police force and investigate their policies.

In London, the Met policy confirms that they must do the following whenever they use live facial recognition technology:

  • place posters and signs in and around the area to make people aware the technology is being used,
  • provide information leaflets to give to the public,
  • make officers available to talk to members of the public to help explain what’s happening and how LFR works,
  • tell people online where they are going to use LFR before any deployment, and
  • publish the results of each deployment on the Met website.

Therefore, if you are wondering whether AFR is being used, there should be signs in place and, if there are not, you are allowed to ask officers questions if you feel safe and comfortable doing so. They have to tell you whether it is in use and should have leaflets explaining your rights. You also have the right to contact the police and ask for all of your data to be deleted if you think they have taken your picture without lawful cause.

Can they force me to have my face scanned?

It is not illegal to avoid AFR or to cover your face from it. The police can ask you to remove face coverings, but you can ask to do so once you are outside the AFR camera’s area of surveillance.

The police also cannot force you to have your face scanned, so if you are asked to look into an AFR camera you can refuse.

What should I do if I object to the use of AFR?

If you believe that your picture has been taken by the police, you can ask them to delete it by emailing SARenquiries@met.police.uk. They will be required to do so unless they can claim they need the picture for lawful purposes, which they will have to justify.

If you are incorrectly stopped due to facial recognition technology, you can bring a complaint to the police force involved and to the Independent Office for Police Conduct. You can also bring a civil claim against the police.
