Clearview AI, the notorious facial recognition tool that even its own investors have called "dystopian," is reportedly being used to identify Russian soldiers in Ukraine. Law enforcement use of Clearview, which its creator says can "instantly identify someone just from a photo," has been banned in many jurisdictions over privacy concerns, racial bias in facial recognition systems, and other flaws in the tool.

"We have no data to suggest this tool is accurate," Clare Garvie of Georgetown University's Center on Privacy and Technology wrote in a report on government use of facial recognition.

Despite the lack of evidence supporting these technologies, their proliferation continues, thanks in part to backing from powerful officials. In Ukraine, for example, Clearview creator Hoan Ton-That made contact with the Ukrainian government by passing a letter through Lee Wolosky, a former Biden administration lawyer.

"I remember seeing videos of captured Russian soldiers and Russia claiming they were actors," Ton-That told The New York Times. "I thought if Ukrainians could use Clearview, they could get more information to verify their identities."

In his letter to Ukrainian officials, Ton-That said that Clearview could be used to identify spies and the deceased. He added that Clearview's 20 billion-face database includes images pulled from "Russian social sites such as VKontakte."

So far, more than 200 Ukrainian officials across five government agencies have used Clearview to conduct more than 5,000 searches, identifying prisoners of war, foreign travelers, and dead Russian soldiers.

The Ukrainian government has begun a campaign to inform the families of dead Russian soldiers about their loss in an effort to "dispel the myth of a 'special operation' in which there are 'no conscripts' and 'no one dies'," Ukrainian vice prime minister Mykhailo Fedorov said in a Telegram post.

Critics of the technology say that the use of facial recognition in the Ukrainian conflict could give companies a way to expand without regulatory oversight, and that mistakes made in a war zone can have even more dire consequences. Already, facial recognition is being used relatively freely by journalists and officials to spread individuals' identities across the internet.

"War zones are often used as testing grounds not just for weapons but surveillance tools that are later deployed on civilian populations or used for law enforcement or crowd control purposes," said Evan Greer, deputy director of the digital rights group Fight for the Future. "Companies like Clearview are eager to exploit the humanitarian crisis in Ukraine to normalize the use of their harmful and invasive software."

Greer added that this sort of privacy-invading technology is already a favorite way to crack down on protests against authoritarian regimes like the one in Russia.

"Expanding the use of facial recognition doesn't hurt authoritarians like Putin - it helps them," Greer said.

"It is one thing for a company to build a face recognition system designed to help individuals find their celebrity doppelgänger or painting lookalike for entertainment purposes," Garvie wrote in the same report. "It's quite another to use these techniques to identify criminal suspects, who may be deprived of their liberty and ultimately prosecuted based on the match."