Clearview AI, the firm behind a contentious face recognition system that scans social networking sites for photos of people to add to its database, is about to receive a patent for its technique. On Saturday, the company stated that it had received a notice of allowance from the US Patent and Trademark Office, indicating that Clearview’s application will be granted once the company pays administrative fees.
Politico first reported the notice on Saturday, saying that critics are concerned that obtaining the patent may hasten the development of similar technologies before lawmakers have had time to grapple with them.
Clearview’s AI system, employed by law enforcement agencies such as the FBI and the Department of Homeland Security, has been criticized for scraping social networking sites and capturing pictures of people without their consent to feed its database of billions of images. According to the company, the images it collects are publicly available and should be considered fair game. However, Facebook, Twitter, and others have issued cease-and-desist letters in response to the practice. Officials in Australia, the United Kingdom, and Canada have also accused the company of violating data privacy regulations.
According to CEO Hoan Ton-That, the company’s system is intended to identify criminal suspects rather than serve as a surveillance tool, and Clearview is “dedicated to the appropriate use” of its technology, which includes working with policymakers on face recognition regulations. “We do not intend to produce a consumer-grade version of Clearview AI,” the company said in a statement to CMT on Saturday. Critics contend that apps or other consumer versions of such technology could allow a bystander to take your picture with a smartphone and then access sensitive information about you.
According to Politico, Clearview AI’s patent application contains language that suggests uses other than police identifying suspects.
“In many instances, it may be desirable for an individual to learn more about a person they meet, such as through business, dating, or another relationship,” the patent application states, adding that traditional methods such as asking questions, conducting internet searches, or conducting background checks can fall short. “As a result, a better method and system for obtaining information about a person and selectively providing the information based on specified criteria is urgently needed.”
Facial recognition systems have been criticized for their inaccuracy, which has resulted in false arrests and other problems. The systems have struggled in particular with recognizing people of color and women. Privacy advocates also worry that the technology could chill dissent, for example by monitoring political demonstrations and marches. Law enforcement officials, on the other hand, say the systems have been used to solve crimes ranging from theft to child sexual exploitation to murder.
Clearview informed Politico that the company is unaware of any cases in which its technology has resulted in a wrongful arrest. In addition, the outlet noted that the Commerce Department’s National Institute of Standards and Technology judged Clearview’s technology to be highly accurate in a recent audit. “As a person of mixed ethnicity,” Ton-That has stated, “accuracy is extremely essential to me.”
Lawmakers are still debating how best to govern facial recognition. A few states and some towns in the United States have rules in place, but no federal law yet governs the technology, even though the systems are widely used and a growing number of US agencies rely on them. In June, the Government Accountability Office reported that 20 US agencies were using facial recognition systems but that many of them lacked critical data on those systems.
At the time, the GAO stated that “thirteen federal agencies are unaware of what non-government systems using face recognition technology are used by personnel,” and that “as a result, these agencies have not thoroughly considered the possible hazards of employing these tools, such as privacy and accuracy problems.”