A confidential document from the Chinese company Huawei, recently obtained by researchers, revealed that Huawei has tested face recognition software that systematically identifies Uyghurs and automatically sends "Uighur alerts" to government agencies. The technology has raised international concern about authoritarian regimes using advanced technology to violate human rights.
According to the Washington Post, a confidential Huawei document obtained by IPVM, a Pennsylvania-based video surveillance research company, revealed that Huawei worked with the Chinese face recognition startup Megvii in 2018 to test an artificial intelligence camera system that could scan faces and estimate the age, gender and ethnicity of the people it identified. The test report specifies that if the system detects a Uyghur face, it will automatically send a "Uighur alert" to the Chinese Communist authorities. After IPVM requested comment, Huawei removed the test report from its website and denied that its facial recognition technology targets specific ethnic groups. In addition, China's Foreign Ministry, responding on Dec. 11 last year to CNBC, the U.S. media outlet that reported on the technology, called IPVM's findings "slanderous."
China's public security authorities have been using big data collection, face recognition and other technologies to deploy a nationwide "SkyNet" surveillance system. Meanwhile, in the Xinjiang Uyghur Autonomous Region, biometric identification technology has been widely deployed to track and monitor the activities of Uyghur Muslims.
Authoritarian Regimes Misuse High Technology to Threaten World Democracy
Timothy R. Heath, a senior fellow at the RAND Corporation, a U.S. think tank, is concerned about the misuse of facial recognition technology: "The use of facial recognition technology to spy on individuals raises serious human rights and privacy concerns, and the technology could easily be misused by authoritarian governments. The technology could also misidentify individuals, exposing them to harm from unfounded government suspicion."
Xie Tian, a business professor at the University of South Carolina, similarly believes that the Chinese Communist authorities' use of artificial intelligence to consolidate centralized rule can cause grave human rights violations: "This is the most typical example of a monster created by technology that in turn harms human beings. In other words, all the Chinese government has to do is turn on a camera to know where anyone, Han or Uyghur, is and what they are doing, which turns the country into a completely closed police state."
Heath believes that the success of facial recognition technology in China would allow it to promote its big data surveillance model to other countries, threatening democracy worldwide: "China is likely to sell this technology to many other countries, especially authoritarian regimes, because the technology would help their governments more easily identify, locate and suppress critics of their governments."
However, Heath said the big data surveillance model based on facial recognition would not necessarily succeed in other countries: "For it to be effective, other countries would have to develop large databases, as China has done, and that could be a challenge for some of the poorer authoritarian regimes."
China has already exported and promoted its big data surveillance model to African countries to help centralized governments maintain their rule, Xie Tian said. To preserve democracy around the world, he argued, Western countries should therefore protect their own high technology and intellectual property: "The Chinese Communist Party will not use technology benignly, and the best approach is for Western countries to keep technology out of the hands of centralized governments and out of the hands of the Chinese Communist Party. The Trump administration has already imposed sanctions against Huawei and others, and it is not too late: these technologies go through update cycles, and if they are curbed at the next update, the U.S. has a chance to prevent technology outflows and theft."
Human rights group calls for tighter controls on high-tech exports to China
Merel Koning, a technology and human rights policy officer at the international human rights group Amnesty International, told the station that countries should impose a blanket ban on the use of facial recognition technology for ethnic identification: "Race recognition is a dangerous technology that opens the door to automated racial discrimination. In addition, states should ensure strong human rights safeguards for the development and deployment of AI, and crucially, adequate oversight and human rights impact assessments prior to deployment."
Amnesty International has previously called on Western companies, particularly the European firms that lead in biometric surveillance technology, to stop selling surveillance-related technology to China altogether, and has urged countries to establish norms for the export of such technology.
According to the BBC, IPVM also found technology specifically aimed at identifying Uyghurs in patents filed by Chinese artificial intelligence companies such as SenseTime and Megvii. Meanwhile, Baidu and Alibaba also mentioned race recognition in their face recognition patents, but did not specify that the patents targeted Uyghurs.