Can algorithms show bias and discriminate?

Artificial intelligence (AI) not only has potential benefits, but can also pose a threat to individuals.

An excellent example is the recent events in the United States, which we are currently witnessing. A wave of protests swept through the country after the murder of George Floyd, who was brutally detained by a police officer in Minneapolis. The officer handcuffed him and pinned his neck to the ground, ignoring Floyd's pleas that he could not breathe. As a result, George Floyd died. The dramatic event was recorded and published on social media, sparking citizen protests. His death was part of a series of killings of unarmed African Americans during police interventions. Over time, the protests spread throughout the United States and intensified.

At the end of June, the ACLU (American Civil Liberties Union) filed a complaint against the Detroit Police after a black man, Robert Williams, was arrested for an alleged theft. Williams was held in a cell overnight for no reason. The arrest was wrongful, based on a flawed algorithm that identified him as a criminal suspect: a blurry CCTV image had been matched by a facial recognition algorithm to the photo on Williams' driver's license.

Facial recognition is a biometric technology that automatically matches a person's face, captured in an image or video frame, against databases of recorded faces.
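To make that mechanism concrete, below is a minimal sketch of such one-to-many matching in Python. It assumes the open-source face_recognition library; the file names, the two-person "database" and the 0.6 distance threshold are purely illustrative and do not come from any system mentioned in this article.

```python
import face_recognition

# Illustrative database: encode faces from known reference photos
# (the file names here are hypothetical placeholders).
known_files = ["person_a.jpg", "person_b.jpg"]
known_encodings = []
for path in known_files:
    image = face_recognition.load_image_file(path)
    # face_encodings() returns one 128-dimensional vector per detected face
    known_encodings.append(face_recognition.face_encodings(image)[0])

# Probe image, e.g. a frame taken from CCTV footage
probe = face_recognition.load_image_file("cctv_frame.jpg")
probe_encoding = face_recognition.face_encodings(probe)[0]

# Compare the probe against every recorded face: a smaller distance means
# the faces are more similar.
distances = face_recognition.face_distance(known_encodings, probe_encoding)
for path, distance in zip(known_files, distances):
    # 0.6 is the library's default decision threshold; choosing where to
    # draw this line is exactly where false matches can occur.
    verdict = "MATCH" if distance < 0.6 else "no match"
    print(f"{path}: distance={distance:.2f} -> {verdict}")
```

Even in this toy example, the verdict hinges on a single distance threshold, and it is in choosing such thresholds and training the underlying models that the accuracy disparities discussed below arise.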

These events have prompted reflection on how the misuse of technologies such as facial recognition can affect society. Detroit's Chief of Police, James Craig, admitted that AI facial recognition does not work correctly in the vast majority of cases. According to research conducted by the Georgetown Law Center, one in four American police departments has access to facial recognition technology, and almost half of all adult Americans are in a police database.

Another question has been raised: whether technologies such as facial recognition will reinforce racial bias. Activists began to point out that this technology could lead to unfair law enforcement.

The problem became even more relevant when MIT (Massachusetts Institute of Technology) announced that it had removed a dataset created in 2008 called 80 Million Tiny Images. It was designed for training machine learning systems to detect objects. It turned out that some of its labels were misogynistic and racist, and artificial intelligence systems were later built on such data. MIT had not been aware of these offensive labels; it apologized, asked that the dataset no longer be used, and requested that any copies be deleted.

This is not the only example of using data in ways that can lead to human rights violations.

Amazon, IBM and Microsoft have withdrawn from selling facial recognition technology to law enforcement authorities as long as there are no adequate laws protecting against misuse. Such laws do not exist, and they should have existed BEFORE anyone decided to deploy technology that directly affects our lives. The companies have demanded that the U.S. Congress introduce appropriate human rights-based regulations and initiate a public dialogue on how these technologies can affect society. The fact that these three technology giants have made this decision does not mean the technology cannot be purchased from other vendors, and there are many who offer it, including Clearview AI, Cognitec and NEC. According to Gartner's 2019 report, there are more than 80 vendors worldwide offering face recognition or face verification technology.

It is well known that current AI algorithms can exhibit significant racial bias. Research conducted at MIT and Stanford University indicates that facial recognition is more accurate when applied to white people. In 2018, the ACLU showed that Rekognition, the algorithm Amazon shared with U.S. government agencies and the Orlando police, was inaccurate and falsely matched members of the U.S. Congress with photos of people who had been arrested.

In the United States, more and more cities (Boston, San Francisco, Oakland) are deciding to ban facial recognition technology on the grounds that it violates human rights. In our European backyard, information leaked to the media that at the beginning of 2020 the European Commission was considering a five-year ban on the use of facial recognition technology in public places; the ban would not have applied to security projects or to research and development. The EU has officially withdrawn from these plans, and EU member states can use this technology. Some countries, such as Germany and France, already use it widely: the former has introduced automatic face recognition at 134 railway stations and 14 airports.

These are not the only problems that can be raised in relation to this technology. The article "A Deep Neural Network Model to Predict Criminality Using Image Processing" claims that AI will be able to predict whether a person will become a criminal solely on the basis of automated facial analysis. In response, more than 1,000 researchers and experts, including AI specialists from Microsoft, Google and Facebook, signed an open letter opposing the use of this technology to predict crime. In the letter, the signatories describe the shortcomings of today's AI technology that make it dangerous to use for predicting crimes. One of the main concerns is the aforementioned racial bias of the algorithms: current facial recognition systems are more accurate at detecting white people and often misidentify people with a different skin colour.

Let's take a look at China, where facial recognition is becoming common practice. There, this kind of technology is already available almost everywhere. It is part of a social credit system that rewards or punishes certain behaviours. This vision appeared in the popular Netflix series Black Mirror and is now becoming a reality.

The facial recognition system is intended to cover every citizen in China. Quality of life will depend on a person's score: a low score can mean slower Internet access or even a ban on travel. In some cases, when you call a person who is in debt, you will hear a warning that you are contacting a debtor. The score is made up of many factors, such as social ones (acquaintance with unwanted persons), financial ones (payment arrears) and political ones. The result is constant surveillance.

China has gone a step further and has also tackled the problem of recognizing people wearing masks. The Beijing-based company Hanwang Technology (Hanvon) has developed a system capable of recognizing people whose faces are covered.

Facial recognition technology is not only used to unlock smartphones or tag photos on Facebook. It is a powerful tool in the hands of states and can pose a huge threat to democratic society by enabling mass surveillance. The algorithms are also less accurate at identifying people whose skin colour is not white, which raises additional concerns. Technology should help people, not spread injustice or pose a threat. It should be legally regulated as soon as possible in order to prevent violations of privacy and further violations of human rights.

Agata Konieczna
