An exclusive interview with New York Times reporter Kashmir Hill, author of a new book on the implications of facial recognition software.

It’s all about digital privacy. How much of our personal information is out there? How much is being used by big corporations and governments, and how much of that is being used without our consent – especially when it comes to facial recognition technology?

One aspect of this debate is particularly intense: the use of facial recognition – pictures of you and me, in other words. Should law enforcement agencies be able to track known bad guys as they move through an airport or a city, plotting their next big heist or act of terrorism? Or track and arrest a paedophile with a hitherto clean arrest record? Maybe the algorithms could be used to develop a pair of spectacles that would help me remember someone’s name at a dinner party or a business meeting? How bad could that be?

One American company stands out in this field. It’s called Clearview AI. Over the past six or seven years it has become the controversial leader in the business, widely used by police departments in several countries but banned in others. It’s also the subject of a new book by New York Times reporter Kashmir Hill: Your Face Belongs to Us. She spoke to Acumen from New York.

Kashmir Hill: Clearview AI is a very small startup based in New York City and it’s a ragtag band of individuals – not your typical tech entrepreneurs. But they decided to scrape billions of photos from the public web, including social media sites like Facebook, Instagram, LinkedIn, without anyone’s consent. They now have thirty billion faces in their database. Originally, they were a product in search of a customer and eventually they landed on police departments. Clearview AI started offering this very powerful tool to police departments around the United States and around the world. You take a photo of somebody, upload it to Clearview AI, and it shows you all the other places on the internet where the face appears. This means that you can find out a person’s name, their social media profiles, who they might know, where they might live, and even find photos of them that they might not know are on the internet.
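What Hill describes is, at its core, a reverse image search keyed on faces: every scraped photo is converted into a numerical fingerprint (an embedding), and a query photo is matched against the database by similarity. The sketch below shows the general principle only; `embed_face` is a hypothetical stand-in for a trained neural network, and Clearview’s actual pipeline is proprietary.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained face-embedding network that
    maps a face photo to a unit-length vector, such that photos of the
    same person land close together in vector space."""
    raise NotImplementedError("replace with a real embedding model")

def search_faces(query: np.ndarray, database: np.ndarray, urls: list[str],
                 top_k: int = 10, threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return source URLs of the database faces most similar to the query.

    database: (n_faces, dim) array of unit-normalised embeddings
    urls:     source URL for each database row
    """
    # On unit vectors, cosine similarity is just a dot product.
    scores = database @ query
    best = np.argsort(scores)[::-1][:top_k]
    # Discard weak matches: below the threshold, a hit is more likely
    # a lookalike than the same person.
    return [(urls[i], float(scores[i])) for i in best if scores[i] >= threshold]
```

At the scale Hill describes, tens of billions of faces, the brute-force comparison above would in practice be replaced by an approximate nearest-neighbour index, but the principle is the same: one photo in, every visually matching photo and its source page out.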

Chris Gibbons: What are the negatives associated with this technology? Why are so many people – many of whom you’ve interviewed for your book – so vehemently against its use?

KH: There are a lot of different criticisms of Clearview AI. One is simply the way they collected these photos: they went out onto the internet and gathered photos of people, without their consent, from all over the place. Several privacy regulators around the world have said that they didn’t have the right to do that, that people should have control over whether they’re put in some company’s database or not. That’s a big objection.

And then there’s the question of whether we want law enforcement to have this power, to be able to put a name to a stranger’s face. It’s a powerful tool and it can be very effective for solving crimes, but it can also be used as a weapon of control, to suppress dissent. I’ve talked to police officers who have solved really heinous crimes this way: crimes against children, murders, sexual assaults.

At the same time, we’ve seen this kind of tool used in China, for example, to automatically give tickets to jaywalkers, to identify protesters, even to control how much toilet paper people can get in a public restroom.

CG: But the argument that you just mentioned – that a police department has in fact used this technology to track down a paedophile – surely that holds water? Could it even be used to stop terrorists plotting the next 9/11?

KH: That’s been the dream of the government for decades. Here in the United States, they funded tools to try to develop facial recognition way back in the early 1960s, when computers were huge. The dream was: can we identify everyone? Can we stop any potential threat? It could potentially be wielded that way. Facial recognition technology has come a long way, especially since those days in the 1960s, but it still has flaws; it’s not a hundred percent perfect. One question is whether you get misidentifications, whether it might flag a doppelganger. And if you are going to have this tool in the hands of law enforcement, do you want them going through a database of thirty billion faces? I mean, thirty billion is more than the number of people on the planet, so there are a lot of people in Clearview’s database and they’re in there many times. I myself have more than a hundred different photos that come up in Clearview’s database. Do we want them searching that database of everyone, all these people all over the world, every time a crime is committed, given the possibility that it could go wrong, that it could misidentify someone?
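Hill’s point about scale can be made concrete with some back-of-the-envelope arithmetic: even a vanishingly small false-match rate, multiplied across thirty billion entries, produces lookalike hits on essentially every search. The rate below is an illustrative assumption, not a measured accuracy figure for Clearview or any vendor.

```python
# Back-of-the-envelope: expected lookalike hits per search.
# The false-match rate is an illustrative assumption, not a vendor figure.
database_size = 30_000_000_000  # thirty billion faces (Hill's figure)
false_match_rate = 1e-9         # chance an unrelated face clears the threshold

expected_false_hits = database_size * false_match_rate
print(f"Expected false matches per search: {expected_false_hits:.0f}")
# Even at one in a billion, a thirty-billion-face database returns
# roughly 30 lookalikes per query, which is why human review of
# candidate matches matters so much.
```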

CG: The irony, of course, is that for every person demanding privacy and an end to this technology, there are at least a hundred others busily posting as many pictures of themselves and their mates on Facebook, Instagram, TikTok and so on – as many and as often as they can. I want privacy, but I also want to be all over social media – isn’t that something of a contradiction?

KH: Yes, and that is the difficult thing about privacy. It is hard to predict, when we’re putting information out into the world, how it might be used against us as technology improves. Most people who were posting their photos to the internet over the last two or three decades weren’t thinking: OK, one day a tool is going to come along that is going to reorganise the internet by face, and essentially I’m giving it my face. I think that, to a certain extent, we got ourselves into this mess, but it was hard for most people to predict that something like this was going to come along.

CG: In most big cities in the West, certainly in China as you’ve described, and increasingly here in South Africa, there are passive surveillance cameras on every street corner. Are we kidding ourselves to think we haven’t already lost our privacy?

KH: It’s certainly possible to go in that direction: we can roll out facial recognition technology on all those cameras and track everyone. And we’re seeing that happen in some places. Moscow has rolled out facial recognition technology that way, as China has. But we can also choose not to do that if we feel it is too chilling to imagine that you can’t move through the world unobserved. And we have constrained surveillance cameras in the past. Surveillance cameras only capture our images; they are looking at us, not listening to us. And here in the United States, and I hope it’s the same in South Africa, we have passed laws to prevent secret recording, because we as a society believe in conversational privacy and we don’t think we should have listening devices around us all of the time. You can look to that as an example from the past where we have constrained what the technology does in order to protect our privacy. We can make the same choice with facial recognition technology if we choose.

CG: Is that the middle ground, where a Clearview AI could continue to help the police in genuine criminal cases but without compromising privacy? Without turning into “Big Brother is watching you”?

KH: This is one of the big questions in security and privacy: do you watch everyone all the time to make sure that no one does anything wrong, or do you use these surveillance tools only after something has happened, to catch those responsible? That is the moment we’re in right now: how do we want to do this? Do we want to live in a panopticon in the name of safety and security, or do we want some degree of anonymity and privacy, so we can live in a healthy society?

CG: What advice would you offer to people who are concerned about their privacy and facial recognition technology – specifically as Clearview AI uses it?

KH: Think about what you’re posting publicly. The images you’re putting out there on the public internet are increasingly going to be scraped and collected by companies, including Clearview AI, so I think there are benefits to posting privately, to not putting something on the public internet unless you must and there’s a clear benefit to it. While Clearview AI is restricted to law enforcement, we are now seeing the development of public face search engines with names like PimEyes and FaceCheck.id. They’re following the same model as Clearview AI: scrape the internet, make people’s faces searchable. So if somebody is worried about this, they might want to go to those search engines and see what’s out there about themselves. Some of these search engines do allow people to opt out, so if you don’t like what you find, you can request that they remove it from their search results.

CG: So if I discover myself on Clearview AI, would they remove me if I asked?

KH: You wouldn’t be able to search Clearview AI because it’s restricted to law enforcement and they are not honouring access and deletion requests from people outside the United States.

South Africa and racial bias in facial recognition

Kashmir Hill explained to me that facial recognition technology started rolling out commercially in 2000.

"Especially after 2001, after September 11, there was a big push. I talked to one of the vendors from back then about how clunky it was, how it really didn’t work well. He told me that they had done a pilot project in South Africa that they had abandoned. They found that the face recognition technology at that time just did not work on people with darker skin tones.”

Had that problem with darker skin tones been sorted out, I asked Hill.

“It was a problem for a very long time. The technology, especially from Western companies, just worked best on white faces. Over time it has improved. The facial recognition vendors got so much criticism that they looked at how to improve the software. The solution is to train the algorithms on a more diverse collection of faces. They now do that; they make sure they have diverse training data, and it really has improved a lot. In testing, these programmes don’t show as much bias as they used to. It tends to be minimal now for the good algorithms.”
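The fix Hill describes, training on a more diverse collection of faces, comes down to how the training set is assembled. Below is a minimal sketch of one common approach, balanced sampling across demographic groups; the `group_of` function is a hypothetical labeller, and real training pipelines are considerably more involved.

```python
import random
from collections import defaultdict

def balanced_sample(faces: list, group_of, per_group: int) -> list:
    """Draw an equal number of training faces from each demographic group,
    so no single group dominates what the model learns."""
    by_group = defaultdict(list)
    for face in faces:
        by_group[group_of(face)].append(face)  # group_of: hypothetical labeller
    sample = []
    for members in by_group.values():
        sample.extend(random.sample(members, min(per_group, len(members))))
    return sample
```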
