“Now if you go to a sales office to buy a house, you’d better bring a helmet! Why? Because the camera ‘knows’ you!”
A while ago, a piece of news caused a sensation: at a property sales office in Jinan, Shandong Province, a house hunter wore a motorcycle helmet to view homes so as not to give away his personal information.
Investigations by several media outlets, including Southern Metropolis Daily and Beijing Daily, showed that the sales office used surveillance cameras with facial recognition to determine whether a customer had come “on his own” or had been brought in by an agent. Different sales channels carried different pricing strategies, and whether a face was recognized as a “direct” customer could mean a difference of hundreds of thousands of yuan. The same technology, it turned out, had already been deployed in quite a few places.
To protect his personal information, a house hunter wears a helmet to view homes
All of this happens without the customer’s knowledge. You are watched in plain sight, yet kept in the dark.
The immediate reaction of many people on hearing this was “scary”, even “creepy”: as if countless invisible eyes were watching, your identity and your every move exposed to an omniscient watcher.
More questions follow: Where is the facial information stored, and what other personal data sits alongside it? Is it being used for profit? Is there a risk of leakage? Is facial recognition technology being misused?
In 2018, in the exhibition area of an artificial intelligence conference, I watched my face being “recognized” on at least 20 screens. Beyond smart cities and security monitoring, developers have put “face swiping” into virtual boarding passes, scenic-spot tickets, and unstaffed supermarkets; it is, one could say, everywhere. After you clear airport security, the cameras throughout the terminal know where you are and when to send a “friendly reminder” telling you to board your flight and not wander off. The one that made me most uncomfortable was a monitoring system developed by one company: a person’s face, together with the last four digits of their ID number, was displayed nakedly on the monitoring screen.
My identity, along with every move, is exposed to an omniscient and omnipotent watcher
Where does this sense of fear come from? Which parts of the process call for special vigilance? Will we, like the frog in slowly warming water, get used to surveillance technology all around us, or is there a bottom line of human dignity and privacy that truly cannot be crossed?
The hidden worries of “smooth” face swiping
In fact, as far as the technology itself goes, facial recognition has been in development for a long time, and its technical principle is not especially complicated. Like other image recognition, it extracts features from an image, then determines whether those features match ones in a database, and how likely the two are to be the same face. In recent years, thanks to the rapid progress of machine learning, machines’ ability to extract and recognize facial features has advanced quickly: by 2016, Google’s image-recognition models trained on ImageNet had surpassed humans in accuracy and speed.
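To make that matching step concrete, here is a minimal sketch in Python. The embed() stub stands in for a real feature extractor (in practice, a neural network trained on face images); the embedding size, the threshold, and the toy “photos” are illustrative assumptions, not any vendor’s actual pipeline.

```python
import numpy as np

EMBEDDING_DIM = 128  # a typical embedding size; an illustrative assumption

def embed(face_image: np.ndarray) -> np.ndarray:
    """Stand-in for a learned feature extractor that maps a face image to a
    fixed-length feature vector. Identical images yield identical vectors,
    which is enough to illustrate the matching step."""
    rng = np.random.default_rng(abs(hash(face_image.tobytes())) % (2**32))
    v = rng.standard_normal(EMBEDDING_DIM)
    return v / np.linalg.norm(v)  # unit length, so a dot product is cosine similarity

def same_face(a: np.ndarray, b: np.ndarray, threshold: float = 0.6) -> bool:
    """Two faces 'match' when their feature vectors are similar enough."""
    return float(embed(a) @ embed(b)) >= threshold

photo_1 = np.zeros((64, 64))   # two toy "photos" of the same person
photo_2 = np.zeros((64, 64))
stranger = np.ones((64, 64))   # a toy "photo" of someone else
print(same_face(photo_1, photo_2))   # True: identical features
print(same_face(photo_1, stranger))  # almost surely False: unrelated vectors
```

Real systems differ in the extractor and the threshold, but the skeleton is the same: feature vector in, similarity score out, a decision made against a threshold.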
This is still far from real “artificial intelligence”. But because the technology is applied so widely, embedded in countless scenarios, and runs so smoothly, it gives off an illusion of “intelligence”.
“Smoothness” is one of the main selling points of facial recognition. In the eyes of those who champion the technology, it imitates human-to-human interaction as closely as possible: everything can be done with a “look”. After all, recognizing a face with our own eyes is an ability we are born with. You pick up your iPhone and it unlocks automatically; you walk up to your home and the door opens by itself; you enter schools, offices, and other secured places without stopping to show an ID…
But this smooth interaction is, seen another way, “no interaction”. Everything happens seamlessly and silently, which means that much of the time I never give informed consent. To use my fingerprint I have to press it against the lock, and that “press” is me “talking” to the machine: I want the door opened now. But I merely “appear” in front of the camera, and my face has already given out all the signals, whether I want it to or not.
I merely “appear” in front of the camera, and my face has already given out all the signals, whether I want it to or not.
This is why facial recognition can be so “invisible”: it is a technology that can be deployed without “informed consent”. It appears, it is applied, and you do not know. That absence of informed consent also makes privacy extremely easy to violate. How many cameras around me are tracking me, recording me? Have I unknowingly “lost” my face? These questions are unsettling to think about.
How does your face “reconstruct” you?
But the deeper fear comes from the logic behind facial recognition: how facial information is processed, how it is matched, and how it is used.
We have many biometric features that can identify us: fingerprints, irises, facial features, even genes. All of them can be turned into data. They can be recorded, copied, and disseminated, scattered across the Internet: our fingerprints sit in the public-security system, our voices are recorded in WeChat, our faces are in the photos we upload to social networks. Kelly Gates, a professor at the University of California, San Diego (UCSD) who studies the sociology of technology, calls this “disembodied identity”.
Our identity has “left” our bodies, and we no longer even have control over it.
The key question for facial recognition, though, is how this unique biometric data gets pieced together with the other fragments of our identity, across more technologies, larger databases, and broader systems, into a complete “self”.
Using facial features as an encrypted credential to unlock my phone with a “swipe” of my face is one thing; decoding a face into “my identity” and letting it circulate through a larger system (say, matching surveillance footage against the public-security database) is another. Both are “face swiping”, but they are very different operations, as the sketch below shows.
Our identity has “left” our body, and we no longer even have control over it.
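The difference between those two modes can be made concrete in code. Below is a small sketch under the same assumptions as the earlier example (unit vectors standing in for a face model’s output; the names and thresholds are made up): 1:1 verification compares a probe against a single template that can stay on the device, while 1:N identification searches the probe against everyone enrolled in a central gallery.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# Pre-computed embeddings stand in for the output of a face model.
owner = unit(rng.standard_normal(128))  # template stored locally on the phone
gallery = {f"citizen_{i:04d}": unit(rng.standard_normal(128)) for i in range(1000)}
gallery["owner"] = owner                # the same person, enrolled in a central database

def verify(probe: np.ndarray, template: np.ndarray, threshold: float = 0.6) -> bool:
    """1:1 verification (e.g. phone unlock): answers only 'is this the
    enrolled person?' -- the template never has to leave the device."""
    return float(probe @ template) >= threshold

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """1:N identification (e.g. surveillance matched against a central
    database): answers 'who, of everyone enrolled, is this?'"""
    name, ref = max(gallery.items(), key=lambda kv: float(probe @ kv[1]))
    return name if float(probe @ ref) >= threshold else None

print(verify(owner, owner))      # True  -- a yes/no answer, nothing more
print(identify(owner, gallery))  # 'owner' -- a name, i.e. an identity
```

The shape of the answer is the point: verification returns a yes or no, identification returns a name, and only the latter requires a database of everyone.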
If your face is used only to open the door of your own house, then it means nothing beyond reconstructing “you are the owner of this house”.
But when this data leaves your small world for the “cloud”, where larger databases intersect and corroborate one another, your identity has nowhere left to hide: who you are, where you live, what position you hold in society, what you bought yesterday, your “health code”, your ID number… Even if critical items such as the ID number are unlikely to leak, with enough of the other data, “rebuilding” a you is not very difficult; a toy illustration follows below.
There is not much solid protection between us and the abyss.
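Here is the toy illustration promised above, with entirely made-up records and field names: no single table below knows both your face and your home address, but two joins on shared fields reconstruct the link.

```python
import pandas as pd

# A face-recognition log from a camera: which face was seen where.
camera_log = pd.DataFrame({
    "face_id": ["f001", "f002"],
    "seen_at": ["sales office", "sales office"],
})

# A loyalty program that enrolled the same faces and holds phone numbers.
loyalty = pd.DataFrame({
    "face_id": ["f001", "f002"],
    "phone":   ["138****0001", "139****0002"],
})

# A delivery database keyed by phone number.
delivery = pd.DataFrame({
    "phone":        ["138****0001"],
    "home_address": ["No. 1, Example Road"],
})

# Two joins later, a face in a camera feed maps to a home address.
profile = camera_log.merge(loyalty, on="face_id").merge(delivery, on="phone")
print(profile[["face_id", "seen_at", "phone", "home_address"]])
```

Each dataset looks harmless on its own; the risk comes from the joins.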
Profit makes technology run faster and faster
Anton Alterman, a technology ethicist at Long Island University in New York, points out in a paper that the privacy risks of biometrics lie in several areas:
1) whether a technology can accurately identify the true social identity of a particular individual;
2) whether the information stays local, or flows through and is used in a broader network;
3) whether using the technology requires the user’s cooperation;
4) whether the technology’s developers have an incentive to protect privacy.
Today’s facial recognition technology is straining all four areas. Facial data can precisely identify individuals; data interoperability creates a risk of misuse; and use can occur without even informed consent. The last point may be the key to a solution: where would the incentive to protect data privacy come from? What benefits would it bring, and to whom? Who profits, and who gets a say?
At that AI conference, I got a real sense of the momentum on the production side and of customers’ purchasing power. As an emerging industry, artificial intelligence attracts heavy investment, and investors need to see the technology put to use and generating returns. So facial recognition is packaged into all kinds of “solutions” and sold to commercial customers as complete packages. It is those customers who can best afford the “solutions”, and their purchasing power largely determines the direction in which facial recognition develops.
A phone that unlocks by face, or a smart lock that opens at a glance, is not that attractive to us ordinary users: a fingerprint unlocks the phone just as well, a passcode opens the door, and it hardly matters whether any of it is “cloud”-connected or “smart”. But for the operators of public places and managers at every level, comprehensive and effective monitoring, precise identification, and even the collection and monetization of data bring enormous benefits, benefits that matter more to them than “privacy protection”. This is a powerful force driving the technology to develop ever further in favor of operators, managers, and paying customers rather than actual users.
Facial recognition technology is increasingly skewed toward operators, managers, and customers, rather than actual users
In many applications of facial recognition, the user’s voice is limited, sometimes nonexistent. Do we have the right to “opt out”? Are our wishes respected? Can we judge where a technology is appropriate and where it goes too far? In an ideal market we could vote with our feet; in the actual market, the power of capital combined with the opacity of information leaves users powerless and voiceless.
Ubiquitous cameras make society look more and more like Foucault’s “Panopticon”: surveillance is everywhere, seeing everyone and identifying everyone, while the watched cannot even tell where the surveillance sits or where it comes from. This is not how a healthy technology should evolve.
The “subtext” of technology
Once technology is put into society, it is no longer just a “tool”. How a technology is used, and the data and identities it constructs, set the tone for the relationships between people, and between people and institutions, giving social meaning to the “data”. Being “identified” at a sales office means the relationship between us and the seller becomes that of the swindler and the mark; being “identified” in a company hallway for spending ten minutes too long in the toilet means our relationship with the company has become that of the capitalist and the laborer from whom every minute of work must be squeezed.
But must our relationships with one another really be like this? Do we have no other choice? In a commercial transaction, the relationship should be one of striking a deal through dialogue between equals; in wage labor, one of mutual trust, labor given and rewarded in return. The misuse of facial recognition and surveillance technology has distorted these relationships and trampled fairness and trust underfoot.
The misuse of facial recognition and surveillance technology has trampled fairness and trust underfoot
Privacy is part of human dignity. It stands for a person’s ability and right to govern his or her own space free from interference; protecting it is also how we best respect human freedom and creativity, instead of treating a person as a cog to be monitored and manipulated at every moment. However technology develops, the bottom line is this: a safe and convenient society should not come at the expense of human dignity.
That is why the “subtext” behind a technology may matter more than the technology itself. Asking why a technology should exist, and what relationship it sets up between technology and people, is the first step toward making it less “scary”.
In crowded public places such as airports and train stations, for example, we do give up part of our privacy in exchange for convenience and security; achieving the same ends another way might be difficult or uneconomical, and the trade might even avert a catastrophe. Before 9/11, security cameras had in fact captured the terrorists’ faces, but failed to recognize them.
We give up part of our privacy in exchange for convenience and security
But a residential neighborhood, a school, a hot-pot restaurant: each scenario is different, and each has its own “subtext”. Why use the technology here? What is the data for? Where is it stored? Is there really no alternative? What does the person being monitored and identified gain? Is there knowledge and consent? These are all questions to ask before the technology is applied.
Hopefully, our future society will still allow us to “face” all of this openly.
References
[1] Gates, K. A. (2011). Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance (Vol. 2). NYU Press.
[2] Alterman, A. (2003). “A piece of yourself”: Ethical issues in biometric identification. Ethics and Information Technology, 5(3), 139–150.
[3] Van Oenen, G. (2006). A machine that would go of itself: Interpassivity and its impact on political life. Theory & Event, 9(2).
[4] Introna, L. D. (2005). Disclosive ethics and information technology: Disclosing facial recognition systems. Ethics and Information Technology, 7(2), 75–86.