Scholar explores impact of bias in facial-recognition software

It was her own invisibility that spurred Joy Buolamwini to become the many things she is: Rhodes Scholar, Fulbright Fellow, computer scientist and digital activist, degree-laden scholar with credentials from both sides of the pond, and founder of the Algorithmic Justice League. With her creation of the latter, one must also add: thorn in the side of any company refusing to acknowledge bias in its facial-recognition software.

Buolamwini spoke at the second of the three Provost Lecture Series talks slated for this academic year — a series that began with Eddie S. Glaude Jr. lecturing in October on “The Magician’s Serpent: Race and Tragedy of American Democracy.” The series is designed to bring the larger Emory community together to think about the big questions and immediate challenges we face and to collaborate on innovative solutions.

As Provost Dwight A. McBride delivered the general introduction Feb. 7 to a considerable crowd in White Hall, he mentioned some of the major challenges of artificial intelligence (AI), including the failure of the chatbot Tay, which Microsoft released in March 2016 to reply to users on Twitter.

Describing Tay’s debut, McBride said, “Within a few hours, Tay began utilizing racist, sexist and otherwise offensive language with a facility that stunned her developers. Within 16 hours, Tay had become a full-blown Nazi, ranting about Hitler, and was promptly shut down.”

For McBride, the insufficiencies of Tay point not to the need for better engineering but rather to the need for “a humanities background,” and he went on to quote the computer scientist Doug Rose: “There was no one in the room who could help her [Tay] understand right from wrong. … A specialist in rhetoric would have been able to categorize words that are hateful or had a strong connection to larger ideas. A philosopher would have been able to give her some framework for ethics and her larger responsibilities to society. Yet these seats were empty.”

By contrast, there were almost no empty seats for Buolamwini’s lecture because, as the provost noted, even though there are moments when these matters “may seem like the specters of a far-off techno-dystopia,” someone like Joy Buolamwini and the title of her lecture, “Dangers of the Coded Gaze,” remind us that “they are very real and very much a part of our present.”

Greatness made visible

So, under what circumstance could someone of Buolamwini’s attainments ever be invisible? As a master’s student at MIT, Buolamwini discovered that her own face read as male in much facial-analysis software — if her face was detected at all. And, no, it was not a personal slight; it turns out that many other darker-skinned women also read as male. If you were what Buolamwini terms a “pale male,” though, most systems could categorize you with a high rate of accuracy.

As Deboleena Roy, chair of Emory’s Department of Women’s, Gender, and Sexuality Studies and faculty in Neuroscience and Behavioral Biology, said in introducing Buolamwini, “Rather than sit back and see the creation of new technologies that work to reinforce dominant and discriminatory gender and racial norms, she instead is using her expertise as a scientist to envision a more socially just technological future. Applying her awareness of gender and race issues, she has dedicated her research and her career to creating more inclusive code and more inclusive coding practices.” 

Her bona fides are flawless: beyond an undergraduate computer science degree from Georgia Tech, Buolamwini went on as a Rhodes Scholar to earn a master’s from the University of Oxford and a second master’s from the MIT Media Lab, where she is now pursuing her PhD. But, as Buolamwini recognized, a flair for artistry, for messaging, would be required to increase her chances of achieving justice in what she calls “the face space.”

That is why she became “a poet of code on a mission to show compassion through computation.” Those words, she admits, beget a lot of initial head-scratching from others, but spend just five minutes in her company and they start to sink in.

It helps, of course, that she has completed enormously successful projects like her 2016 TED Talk, which has received more than one million views, and her short film, “The Coded Gaze,” which debuted at the Museum of Fine Arts, Boston, that same year and features the simple but loaded lines, “Machines view the world through a coded gaze; they digest pixels from a camera in dictated ways.”

In 2018, with the support of the Ford Foundation, she also created the first spoken-word visual poem, “AI, Ain’t I a Woman?,” focused on the failures of AI in reading iconic women of color such as Oprah Winfrey, Michelle Obama and Serena Williams. As the short film captures the mistakes AI makes in reading the women’s faces, Buolamwini asks, “Can machines ever see my queens as I view them? Can machines ever see our grandmothers as we knew them?”

The need for inclusive coding

Believing that “who codes matters, how we code matters, and why we code matters,” Buolamwini developed what she calls InCoding, a process that takes a hard look at the design, development and deployment of coded systems.

She also established the Algorithmic Justice League, a consortium dedicated to eliminating bias within machine learning. The league’s activities fall into three categories: highlighting existing bias, building tools to help practitioners and researchers, and advising policymakers.

Buolamwini walked audience members through her Gender Shades project and Actionable Auditing, which examined how well the AI services of Microsoft, IBM, Face++, Amazon and Kairos determine the gender of a face.

As she discovered, when the results are broken down by gender, all of the systems work better on male faces than on female faces; broken down by skin type, they work better on lighter skin than on darker skin. For women of color, error rates ran as high as 47 percent, which Buolamwini noted is “barely better than chance.”
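The kind of disaggregated audit Buolamwini describes can be illustrated with a short sketch. The Python below is a hypothetical example — the records, field names and subgroups are invented for illustration and are not drawn from the Gender Shades data — showing how computing error rates separately for each gender and skin-type subgroup exposes gaps that a single aggregate accuracy number would hide.

```python
# Hypothetical sketch of a disaggregated accuracy audit, in the spirit of
# Gender Shades: compute error rates per (gender, skin type) subgroup.
# The records and field names here are invented for illustration.
from collections import defaultdict

# Each record: the true label, the classifier's prediction, and subgroup attributes.
results = [
    {"true_gender": "female", "predicted_gender": "male",   "skin_type": "darker"},
    {"true_gender": "female", "predicted_gender": "female", "skin_type": "lighter"},
    {"true_gender": "male",   "predicted_gender": "male",   "skin_type": "darker"},
    {"true_gender": "male",   "predicted_gender": "male",   "skin_type": "lighter"},
]

totals = defaultdict(int)   # number of faces seen per subgroup
errors = defaultdict(int)   # number of misclassifications per subgroup

for r in results:
    group = (r["true_gender"], r["skin_type"])
    totals[group] += 1
    if r["predicted_gender"] != r["true_gender"]:
        errors[group] += 1

# An aggregate accuracy figure can look strong even when one subgroup
# fares far worse; the breakdown below makes such gaps visible.
for group, n in sorted(totals.items()):
    rate = errors[group] / n
    print(f"{group[0]:>6} / {group[1]:<7} faces: {n:3d}  error rate: {rate:.0%}")
```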

The motives for a company seeking to improve its systems are complex — everything from wanting to eliminate bias to simply wanting more, and more lucrative, contracts. Of the three companies in the initial Gender Shades study, she credits IBM as the most responsive. Yet IBM has reportedly worked with the New York City Police Department on facial-analysis tools that could result in racial profiling.

“Even accurate systems,” says Buolamwini, “can create a surveillance state.” The Department of Defense, for instance, is asking whether AI systems can be used at borders to determine the honesty of those trying to cross.

Buolamwini believes that companies cannot be left to self-regulate. We might shiver for a moment realizing that Facebook has been collecting our biometric data for a very long time as we cheerfully upload evidence of our lives to its servers. Amazon’s AI technology, she asserts, was shown to be faulty in her study but is nonetheless being used in “high-impact ways.” She favors legislation, something the European Union is pursuing; in this country, only a few big companies, including Microsoft, support it.

In China, there are more than one million CCTV cameras trained on its citizens. Ultimately, it will become, says Buolamwini, “a question of our norms and what we want in our society with regard to privacy.”

Buolamwini noted that her program at Georgia Tech is one of the few computer science programs in the country that include an ethics class. She had the audience laughing when she recalled that the course often came in the last semester of students’ senior year, when some minds are elsewhere.

Audience members appeared grateful for the ethical perspective of one of that program’s graduates — the Algorithmic Justice Leaguer who tirelessly, in her words, “creates media that makes daughters of diasporas dream and sons of privilege pause.”

