Aug. 23, 2023

Law professor explores racial bias implications in facial recognition technology

Gideon Christian receives funding from Office of the Privacy Commissioner of Canada
Facial recognition technology illustration. iStock

We live in an era marked by rapid technological advancements that promise to make our lives easier and more efficient. Artificial intelligence (AI), algorithms and facial recognition have immense potential to positively transform our lives; in fact, they are used daily by millions around the world.  

But these technologies are still in development, and many currently operate unchecked and with standardization gaps. It's within those gaps that technology can inflict the most harm. That's why Faculty of Law assistant professor Dr. Gideon Christian focuses his expertise on the intersection of AI and the law.

Christian was recently awarded a $50,000 Office of the Privacy Commissioner Contributions Program grant for a research project, titled Mitigating Race, Gender and Privacy Impacts of AI Facial Recognition Technology, that will identify the complex issues surrounding private-sector development and deployment of AI-based facial recognition technology in Canada.

The effects of racially biased AI  

“There is this false notion that technology unlike humans is not biased. That’s not accurate,” says Christian, PhD. “Technology has been shown (to) have the capacity to replicate human bias. In some facial recognition technology, there is over 99 per cent accuracy rate in recognizing white male faces. But, unfortunately, when it comes to recognizing faces of colour, especially the faces of Black women, the technology seems to manifest its highest error rate, which is about 35 per cent.”  

This is an unacceptable error rate, with damaging effects, Christian says. For people of colour, he says, “Facial recognition technology can wrongly match your face with that of some other person who might have committed a crime. All you see is the police knocking on the door, arresting you for a crime you never committed.”

Christian cites cases in the U.S. when Black men were misidentified by facial recognition software, arrested and detained. The headline of a May 2023 article in Scientific American minces no words: "Police Facial Recognition Software Can't Tell Black People Apart."  

But facial recognition technology misuse is not limited to the U.S., Christian notes, adding: “We know this technology is being used by various police departments in Canada. We can attribute the absence of similar cases to what you have in the U.S. based on the fact that this technology is secretly used by Canadian police. So, records may not exist, or if they do, they may not be publicized.  

“What we have seen in Canada are cases (of) Black women, immigrants who have successfully made refugee claims, having their refugee status stripped on the basis that facial recognition technology matched their face to some other person. Hence, the government argues, they made claims using false identities. Mind you, these are Black women — the same demographic group where this technology has its worst error rate.” 

Gideon Christian

Facial recognition can be trained to be biased 

Facial recognition technology is as pervasive as it is invisible, and its effects on people's lives hide behind impenetrable lines of code, says Christian. "Racial bias is not new," he notes. "What is new is how these biases are manifesting in artificial intelligence technology. And this technology — and this particular problem with this technology, if unchecked — has the capacity to overturn all the progress we achieved as a result of the civil rights movement."

That is why this is such an important area of research. Facial recognition technology is not used only by law enforcement; its applications are much broader, extending into health care, education and finance — and, as the technology is still relatively new, the long-term consequences are as yet unknown.

It’s not that the technology is inherently biased, Christian says; it’s how it’s developed that can embed bias. If the data used to train the AI technology itself is biased, such as training facial recognition technologies using mostly white male faces and not faces of colour, he argues, the technology itself will inevitably make biased decisions.  
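The mechanism Christian describes — skewed training data producing skewed error rates — can be seen even in a toy model. The sketch below is purely illustrative (all group names, numbers and the one-dimensional "feature" are assumptions for the demo, not real facial recognition parameters): a classifier trained to minimize overall error on data dominated by one group ends up concentrating its mistakes on the under-represented group.

```python
import random

random.seed(42)

# Synthetic 1-D "face feature" for two demographic groups.
# Group A is heavily over-represented in training, mimicking a
# dataset built mostly from one demographic.
def draw(mean, n):
    return [random.gauss(mean, 1.0) for _ in range(n)]

train_a = draw(0.0, 1000)  # majority group: 1000 examples
train_b = draw(2.0, 50)    # under-represented group: 50 examples

# "Train" the simplest possible classifier: pick the threshold that
# minimizes total misclassifications on the skewed training set
# (x < t -> predict group A, x >= t -> predict group B).
def training_errors(t):
    return sum(x >= t for x in train_a) + sum(x < t for x in train_b)

threshold = min((t / 100 for t in range(-200, 500)), key=training_errors)

# Evaluate on a *balanced* test set. Because the majority group
# dominated training, the threshold drifted toward group B, which
# now bears most of the error.
test_a, test_b = draw(0.0, 2000), draw(2.0, 2000)
err_a = sum(x >= threshold for x in test_a) / len(test_a)
err_b = sum(x < threshold for x in test_b) / len(test_b)

print(f"threshold={threshold:.2f}  error A={err_a:.1%}  error B={err_b:.1%}")
```

Nothing in the model "intends" bias — the disparity falls out of the data composition alone, which is exactly Christian's point about how bias gets embedded during development.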

The importance of addressing racial bias in technology  

“The majority of us want to live in a society that is free of racial bias and discrimination,” says Christian. “That is the essence of my research, specifically in the area of AI. I don’t think we want a situation where racial bias, which we have struggled so hard to address, is now subtly being perpetuated by artificial intelligence technology.  

“My research takes a microscopic view of these technologies with the purpose of identifying elements of racism so it can be stripped from this technology. If we are able to do that, AI will have the most transformative and positive impact on our lives, irrespective of our race or gender.”  

And, with his new grant from the federal government and in collaboration with the Alberta Civil Liberties Research Centre, Christian will develop a framework to address the effects of AI facial recognition technology to ensure that the technology is a force for good to all.  
