PimEyes: facial recognition for consumers

easyDNS is pleased to sponsor Jesse Hirsh's "Future Fibre / Future Tools" segments of his new email list, Metaviews

Can facial recognition be banned or has that ship sailed?


Every once in a while I wonder what we're missing amidst this pandemic, as all eyes are focused on the wild news cycle driven by the coronavirus, politics, protests, celebrities, or some lady we'll momentarily refer to as Karen. The ongoing proliferation and accessibility of facial recognition technology is one of those things.

On the one hand there's encouraging news, such as the recent withdrawal of Clearview AI from operating in Canada due to investigations into its product and service.

Clearview AI has advised Canadian privacy protection authorities that, in response to their joint investigation, it will cease offering its facial recognition services in Canada.

This step includes the indefinite suspension of Clearview AI’s contract with the RCMP, which was its last remaining client in Canada.

The investigation of Clearview by privacy protection authorities for Canada, Alberta, British Columbia and Quebec remains open. The authorities still plan to issue findings in this matter given the importance of the issue for the privacy rights of Canadians.

An ongoing issue under investigation by the authorities is the deletion of the personal information of Canadians that Clearview has already collected as well as the cessation of Clearview’s collection of Canadians’ personal information.

On the other hand, just like whack-a-mole, the retreat of one service brings the emergence of others.

PimEyes is the new facial recognition service that is relatively accessible and easy to use. There was a bit of reporting on this Polish site in early June, but not much since.

PimEyes, a Polish facial recognition website, is a free tool that allows anyone to upload a photo of a person’s face and find more images of that person from publicly accessible websites like Tumblr, YouTube, WordPress blogs, and news outlets.

In essence, it’s not so different from the service provided by Clearview AI, which is currently being used by police and law enforcement agencies around the world. PimEyes’ facial recognition engine doesn’t seem as powerful as Clearview AI’s app is supposed to be. And unlike Clearview AI, it does not scrape most social media sites.
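Neither PimEyes nor Clearview AI has published how its engine works, but facial recognition search services of this kind generally follow the same pattern: a neural network converts each face into a numeric embedding vector, and a search compares the query face's embedding against an index of embeddings scraped from the web. The sketch below is a generic, toy illustration of that matching step, assuming cosine similarity and three-dimensional vectors for readability (real systems use embeddings with hundreds of dimensions and approximate nearest-neighbor indexes); the URLs and numbers are invented.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_matches(query, index, threshold=0.9):
    """Return URLs whose stored face embedding is close enough to the query."""
    return [url for url, emb in index.items()
            if cosine_similarity(query, emb) >= threshold]

# Toy index: URL -> pre-computed face embedding. A real service would
# populate this by crawling publicly accessible sites and running each
# detected face through the embedding network.
index = {
    "https://example.com/a.jpg": [0.90, 0.10, 0.00],
    "https://example.com/b.jpg": [0.10, 0.90, 0.10],
    "https://example.com/c.jpg": [0.88, 0.12, 0.02],
}

query = [0.90, 0.10, 0.01]  # embedding of the uploaded photo
print(find_matches(query, index))
```

The privacy implication falls out of the mechanics: once the index exists, matching any new face against it is a cheap vector comparison, which is why these services scale so easily.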

PimEyes markets its service as a tool to protect privacy and prevent the misuse of images. But there's no guarantee that users will upload only their own faces, making it equally powerful for anyone trying to stalk someone else. The company did not respond to a request for comment.

PimEyes monetizes facial recognition by charging for a premium tier, which allows users to see which websites are hosting images of their faces and gives them the ability to set alerts for when new images are uploaded. The PimEyes premium tiers also allow up to 25 saved alerts, meaning one person could be alerted to newly uploaded images of up to 25 people across the internet. PimEyes has also opened up its service for developers to search its database, with pricing for up to 100 million searches per month.
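PimEyes has not published documentation for the developer access described above, so the endpoint, parameters, and authentication scheme in this sketch are entirely hypothetical, included only to show what a metered face-search integration typically looks like: a per-key bearer token (how usage would be counted against a monthly search quota) and a simple image-plus-limit query.

```python
import json
from urllib.parse import urlencode

# Hypothetical API base: not a real PimEyes endpoint.
API_BASE = "https://api.example-face-search.test/v1"

def build_search_request(image_url, api_key, max_results=10):
    """Assemble the URL and headers for a hypothetical face-search call.

    The api_key identifies the developer account, which is how a
    provider would meter searches against a monthly quota.
    """
    query = urlencode({"image": image_url, "limit": max_results})
    return {
        "url": f"{API_BASE}/search?{query}",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Accept": "application/json",
        },
    }

request = build_search_request("https://example.com/face.jpg", "demo-key")
print(json.dumps(request, indent=2))
```

The point of the sketch is how low the barrier is: anyone with an API key could automate searches, which is exactly the accessibility concern raised below.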

Polish website PimEyes was set up in 2017 as a hobby project, and commercialised last year. It currently has 6,000 users signed up.

The website allows people to upload any picture for free and it will then find matching images from around the web, drawing on publicly accessible sites such as Tumblr, news outlets and blogs.

Silkie Carlo, director of Big Brother Watch, told the BBC: “To see this powerful surveillance tech marketed to individuals is chilling. It’s ripe for stalking and puts women and children at unprecedented risk.”

In response, PimEyes said: “Our privacy policy prevents people from using our tool for this case. Every tool could be used in the wrong way.”

Since PimEyes is based in the EU, it is subject to the GDPR, the General Data Protection Regulation that is considered one of the strongest privacy and data protection frameworks in the world. However this may be a good example of how easy it is for a company to slip through the proverbial GDPR cracks.

In this case, all PimEyes needs is a privacy policy.

I suspect most people understand the paradox of privacy statements and terms of service agreements that are filled with legal language designed to be incomprehensible. PimEyes elevates this to a new level: most people will focus on the service or application, and far less on how it works or what laws govern it.

We may also be entering a phase of the Internet where specialized search engines are far more effective and yield better results due to their particular focus, in this case on faces.

This specialization can also spawn new applications, like finding people who look like you rather than just pictures of yourself. Not only might we want to find a stunt double, but what about an alibi, or a friend to show up to the virtual work meeting when you have a conflict?

All jokes aside, the ongoing proliferation and accessibility of facial recognition technology poses considerable risks to already vulnerable elements of our population. PimEyes raises obvious concerns in the context of stalking, but it also helps to normalize the technology and incentivizes people to use it as if it were no big deal. This flies in the face of concerted efforts to see the technology either restricted in use or banned outright.

Activists in Detroit have been waiting a long time for July 24. Since the city’s contract with DataWorks began in 2017, community members have been pushing to stop the software company’s facial recognition services from expanding in their neighborhoods.

On that day, Detroit’s $1.2 million contract with DataWorks is set to expire — unless the City Council votes to renew the deal for another two years for an additional $219,934.50.

After years of privacy concerns, issues with the technology's racial bias, evidence showing that surveillance doesn't reduce crime in Detroit, and a damning wrongful arrest from a facial recognition mismatch, activists want to make sure that the technology is kicked out of the city for good.

When the renewal came up for a vote on June 16, the City Council had been expected to vote in favor of extending the contract. It’s since delayed the vote due to public outcry, and community organizers say they’re doing everything they can to swing the vote against facial recognition.

Part of the concern with facial recognition in particular, but automated technology and AI in general, is that the regulatory environment, or the rules for using this tech, are not yet in place, nor are they clear.

In this context, the companies themselves desire rules, so they can understand what is kosher and what is not. The political struggle rests in determining what those rules are. The trick is that rules are not a denial: while many want an outright ban, rules may provide the industry a means to deploy the tech while being seen as responsible.

When Amazon and Microsoft announced they were putting a pause on providing facial recognition to police departments, the move also came with a request to Congress: Pass regulations to ensure ethical use of the technology.

The Facial Recognition and Biometric Technology Moratorium Act is the first facial recognition legislation introduced since that request came, and the companies calling for regulations have remained silent on the proposal.

The bill, introduced June 25 by Sens. Ed Markey and Jeff Merkley and Reps. Ayanna Pressley and Pramila Jayapal, would put an indefinite ban on facial recognition use by police, until Congress passes a law to lift the moratorium. Civil rights advocates consider it an effective stop to technology that research shows has racial bias and that can have dire consequences when used by law enforcement.

The contention in this negotiation generally revolves around the role of bias and error in these automated systems. Companies assert that bias can be mitigated and resolved, whereas many researchers are skeptical as to whether that is possible.

This newsletter is of the position that bias is inherent in the subjective experience of human beings, and therefore bias is inherent in the data and systems we create. The larger question is what kind of bias is desired.

Or alternatively, in this instance: what kind of bias is PimEyes currently using and fostering through the growth and deployment of its service?

Without algorithmic transparency and effective oversight, we may never know. Similarly these sorts of stories often remain below the radar of public awareness, especially when we’re distracted by an otherwise wild and crazy world.

All of this reinforces the need for a kind of watchdog, or observatory, that keeps track of these issues, and helps ensure the public is informed. Although that may be a topic for another day.

Would you use PimEyes? Have you used similar services? Are concerns about the availability of this tech overblown? Are there legit uses we’re not yet considering or contemplating?
