Clearview AI's practice of scraping the web for photos and applying facial recognition, giving police and others an unprecedented ability to peek into our lives, has already stirred controversy. Now the company's CEO wants to make Clearview's surveillance tools even more powerful using artificial intelligence, which could also make them more dangerous and error-prone.
Clearview has collected billions of images from websites including Facebook, Instagram, and Twitter, and uses AI to identify a specific person in an image. Police and government agents use the company's face database to help identify suspects by linking them to their online profiles.
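The article does not describe Clearview's internals, but face-identification systems of this kind typically convert each face into a numerical embedding and then search a database for the most similar stored vector. A minimal sketch of that matching step, using made-up toy vectors rather than output from a real face-recognition network:

```python
import numpy as np

def cosine_similarity(query, database):
    # Cosine similarity between one query embedding and each row
    # of a database matrix of stored embeddings.
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    return db @ q

def best_match(query, database, threshold=0.9):
    # Return the index of the closest stored embedding, or None if
    # no candidate clears the similarity threshold.
    scores = cosine_similarity(query, database)
    idx = int(np.argmax(scores))
    return idx if scores[idx] >= threshold else None

# Toy "database" of three 4-dimensional face embeddings (illustrative
# only; real systems use high-dimensional vectors from a trained model).
db = np.array([
    [0.1, 0.9, 0.2, 0.1],
    [0.8, 0.1, 0.1, 0.3],
    [0.2, 0.2, 0.9, 0.1],
])
query = np.array([0.82, 0.12, 0.08, 0.28])  # closest to row 1
print(best_match(query, db))  # → 1
```

A larger database, as the next paragraph notes, raises the chance that some stored vector falls close to any given query, which is why scale matters so much to a tool like this.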
The company’s co-founder and chief executive, Hoan Ton-That, told Wired that Clearview has now collected more than 10 billion images from across the web, three times more than previously reported.
Ton-That says a larger pool of images means users, often law enforcement, are more likely to find a match when searching for someone. He also claims the larger data set makes the company’s tool more accurate.
Clearview combines web-crawling techniques with advances in machine learning that have improved facial recognition, creating an incredibly powerful tool that disregards personal privacy.
Ton-That demonstrated the technology through a smartphone app by taking a picture of a reporter. The app returned dozens of images from numerous US and international websites, each showing the correct person, in photos captured over more than a decade. The appeal of such a tool is obvious, but so is the potential for misuse.
Clearview’s actions have sparked public outrage and a broader debate over expectations of privacy in the age of smartphones, social media, and AI. Critics say the company is violating personal privacy. The ACLU has sued Clearview in Illinois under a law that restricts the collection of biometric data, and the company faces class-action lawsuits in New York and California. Facebook and Twitter have demanded that Clearview stop scraping their sites.
The pushback didn’t stop Ton-That. He says he believes most people accept or support the idea of using facial recognition to solve crimes. “Those who are worried about this are very vocal, and that’s a good thing, because I think over time we can address more of their concerns,” he says.
Some of Clearview’s new technologies could spark further controversy. Ton-That says the company is developing new ways to find a person with “deblur” and “mask removal” tools. The first takes a blurred image and sharpens it using machine learning to envision what a clear picture would look like; the second tries to reconstruct the covered part of a person’s face using a machine learning model that fills in the missing details of an image with a best guess based on statistical patterns found in other images.
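The article does not reveal how Clearview’s “mask removal” model works; production systems use learned generative models far more sophisticated than anything shown here. As a toy illustration of the underlying idea of guessing hidden pixels from surrounding statistics, the sketch below fills a masked region with the average of its already-known neighbours (the function name and the naive averaging rule are illustrative assumptions, not Clearview’s method):

```python
import numpy as np

def naive_inpaint(image, mask, iterations=50):
    # Fill masked pixels with the mean of their known 3x3 neighbours,
    # sweeping repeatedly so the fill grows inward from the hole's edge.
    # This is only a statistical "best guess" from surrounding pixels.
    img = image.astype(float).copy()
    known = ~mask
    for _ in range(iterations):
        if known.all():
            break
        new_known = known.copy()
        for y, x in zip(*np.where(~known)):
            ys = slice(max(y - 1, 0), y + 2)
            xs = slice(max(x - 1, 0), x + 2)
            patch_known = known[ys, xs]
            if patch_known.any():
                img[y, x] = img[ys, xs][patch_known].mean()
                new_known[y, x] = True
        known = new_known
    return img

# A flat grey image with a masked 2x2 hole: the fill recovers ~0.5.
img = np.full((6, 6), 0.5)
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 2:4] = True
img[mask] = 0.0  # destroy the masked pixels
out = naive_inpaint(img, mask)
print(np.allclose(out, 0.5))  # → True
```

The key point, and the source of the risks described next, is that any such reconstruction is invented from statistics rather than observed: the output is plausible, not necessarily true.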
These capabilities could make Clearview’s technology more attractive, but also more problematic. It is not yet clear how accurately the new techniques work, but experts say they could increase the risk of misidentifying a person and amplify the biases already underlying the system.
“I would expect the accuracy to be very poor, and beyond accuracy, without careful control over the data set and training process I would expect an excess of unintentional bias to occur,” said Aleksander Madry, a professor at MIT who specializes in machine learning. Without proper care, for example, the method may increase the likelihood that people with certain characteristics are misidentified.
Even if the technology works as promised, Madry said, the ethics of unmasking people are problematic. “Think of people who wore masks to take part in peaceful demonstrations or who blurred their photos to protect their privacy,” he said.
Ton-That says tests have found that the new tools improve the accuracy of Clearview’s results. “Any enhanced images should be noted as such, and extra care should be taken when evaluating results from an enhanced image,” he says.