
In Conversation: Dr. Jaleesa Trapp on Rewriting AI’s Biased Code
Behind every complex AI and ML system lies a question: Who does this technology serve, and who does it fail?
For Dr. Jaleesa Trapp, Senior Education Advisor at the Algorithmic Justice League (AJL), the answer often lies in the damage caused by biased AI code, which can lead to real-world harm for marginalized communities.
Dr. Trapp is helping drive a global movement for algorithmic justice alongside Dr. Joy Buolamwini, AJL’s founder and the MIT Media Lab researcher who exposed racial and gender bias in AI through her Gender Shades project. Their work is central to the Netflix documentary Coded Bias, which examines the far-reaching consequences of biased AI code, particularly in facial recognition systems.
In this conversation, Dr. Trapp takes us inside AJL’s campaigns, from challenging airport surveillance through the Freedom Flyers initiative to inspiring the next generation of ethical tech creators.
How did you get started in tech and education?
Dr. Trapp: Before coming to grad school, I was a high school computer science teacher. I also ran an after-school program called The Clubhouse, part of a larger network, where I worked with young people and helped them learn to use technology as a tool.
One of the things I really emphasized as an educator was helping young people become ethical creators of technology.
How did that passion lead you to meet Dr. Joy Buolamwini at the MIT Media Lab?
Dr. Trapp: Dr. Joy and I met when we were both students at the Media Lab. Given my background in teaching and my focus on equity and ethics, working with Dr. Joy was a natural transition.
How did you become involved with the Algorithmic Justice League (AJL)? Can you give us a deeper insight into the mission and range of work AJL focuses on?
Dr. Trapp: Joining AJL felt like the perfect next step. The work aligned closely with my values as an educator. AJL’s mission to highlight the real-world harms of AI, especially to marginalized communities, spoke to me.
AJL works on a wide range of projects, but the biggest thing we do is highlight the harms AI can cause in people’s lives. One of our key areas of focus is facial recognition technology.
Can you tell us about one of AJL’s most recent initiatives and the impact it aims to have?
Dr. Trapp: One of our latest initiatives is the Freedom Flyers campaign. It’s centered around facial recognition technology being used in airports. We’re raising awareness that travelers have rights regarding biometric data collection.
Many people don’t even realize they can opt out of having their facial data included in these surveillance systems. We’re making sure people know they don’t have to passively accept this kind of tracking.
The Porcha Woodruff case received a lot of media attention, but do you think the public fully grasps how dangerous AI misidentification can be?
Dr. Trapp: Honestly, no. Surveillance has become so normalized that most people are not actively looking for harms. It perpetuates discrimination and invades our privacy. Porcha was eight months pregnant when she was misidentified by a facial recognition system and jailed. She even started having contractions while in custody. This is a real-world consequence of AI bias.
Many people still don’t see how prevalent and harmful this can be. Misidentification put her and her unborn child at risk. And this is in a country where Black maternal mortality rates are two to three times those of white women, depending on where you are in the US.
How do these misidentifications happen?
Dr. Trapp: It happens because those building these algorithms aren’t testing them on a diverse enough population. The datasets are often limited, so the systems perform well on lighter-skinned men but fail on darker-skinned women.
Dr. Joy’s work with the Gender Shades project has exposed this issue powerfully.
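To make that failure mode concrete, here is a minimal sketch, in Python with hypothetical data, of the kind of disaggregated evaluation the Gender Shades project popularized: instead of reporting one aggregate accuracy number, performance is broken out by skin type and gender, which is what exposes the gaps Dr. Trapp describes. This is an illustrative sketch, not AJL’s actual audit code or benchmark.

```python
# Sketch of a disaggregated ("intersectional") accuracy audit in the
# spirit of the Gender Shades methodology. The records below are
# hypothetical; a real audit would use a benchmark balanced across
# skin type and gender, such as the Pilot Parliaments Benchmark.
from collections import defaultdict

# Each record: (subgroup, whether the model's prediction was correct)
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned female", True), ("lighter-skinned female", False),
    ("darker-skinned male", True), ("darker-skinned male", False),
    ("darker-skinned female", False), ("darker-skinned female", False),
]

totals = defaultdict(lambda: [0, 0])  # subgroup -> [correct, total]
for subgroup, correct in results:
    totals[subgroup][0] += int(correct)
    totals[subgroup][1] += 1

# A single aggregate accuracy hides the disparity; per-subgroup
# accuracy makes the failure mode visible.
overall = sum(c for c, _ in totals.values()) / sum(t for _, t in totals.values())
print(f"overall accuracy: {overall:.0%}")
for subgroup, (correct, total) in sorted(totals.items()):
    print(f"{subgroup}: {correct}/{total} = {correct / total:.0%}")
```

With this toy data, the system reports 50% accuracy overall while scoring 100% on lighter-skinned men and 0% on darker-skinned women, which is exactly the pattern an aggregate metric would conceal.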
How does AJL create awareness beyond the Freedom Flyers campaign?
Dr. Trapp: Well, one of the big things is the Coded Bias documentary, which was released five years ago. To celebrate the anniversary, the film team has launched an international tour of screenings, with speakers from the cast leading audience Q&As and reflecting on what has changed since the premiere.
Directed by award-winning filmmaker Shalini Kantayya, the film shows the origin of the Algorithmic Justice League and highlights the expertise of leading AI researchers of our time, including AJL’s founder Dr. Joy Buolamwini, Timnit Gebru, and Deborah Raji, whose work revealed the harmful biases embedded in AI systems. In the five years since the film’s release, the use of artificial intelligence by private, public, and government organizations has expanded far beyond current protections. It has never been more important to stay informed about what is at stake when it comes to AI.
While we’re based in the U.S., our work is global. The documentary itself covers places like South Africa, and recently, Dr. Joy presented the film in Paris. I was in Taiwan a few weeks ago, continuing to engage with communities worldwide.
Have tech companies acknowledged these issues and responded to AJL’s research?
Dr. Trapp: Yes, AJL’s research has definitely reached them. In the documentary, you can see how IBM actually responded by correcting their systems. Others, like Amazon, were less receptive. But overall, I think the work has helped create a broader awakening—not just among companies but the general public.
Have you encountered people who have shared personal stories of how this technology affected them?
Dr. Trapp: Absolutely. I once had a student who initially dismissed concerns about data collection, saying, “The government has all our data anyway.” Years later, that same student went on to focus on AI bias because they realized how much harm it can cause. I also have a friend who is a parent. Her child’s school uses facial recognition for parent access, and as a darker-skinned Black woman, she’s had issues. She knows it’s her right to opt out of using this surveillance system, so she calls the school and asks for administrative staff to open the door instead.
How accessible is the opt-out process at airports and similar spaces?
Dr. Trapp: Technically, it’s your right to opt out. But some airports make it difficult. Once, I was told I’d have to go to the back of the line just to opt out. That’s not true; they just didn’t want to do the extra work. It’s as simple as saying, “I’d like to opt out,” or “Please use the traditional method to verify my identity.” The issue is that most people don’t even know they have that right because surveillance has become so normalized.
What do you think is missing from the broader conversation about AI and justice?
Dr. Trapp: I think it’s important to emphasize that this movement was sparked by a Black woman—Dr. Joy—and it took her experiencing the failure of these systems firsthand to shine a light on it. Historically, voices like hers haven’t been heard. So, making sure women, especially women of color, are leading these conversations is critical to creating ethical technology.
It’s about education and organizing. Whether that’s partnering with AJL or building local initiatives, people need to know their rights and push for change. Dr. Joy has testified before Congress, and her work helped lead to San Francisco’s ban on police use of facial recognition.
It starts with learning and sharing—whether that’s through films like Coded Bias or grassroots advocacy.
Do you think changing how we educate the next generation can shift this trend?
Dr. Trapp: Absolutely. If we integrate these conversations into education early on, young people will enter the tech space already thinking about ethics and the real-world consequences of AI. It shouldn’t be something they have to revisit later; it should be embedded in their learning from the beginning, so they’re prepared to build responsibly and avoid the mistakes we’re seeing today.
Discover more voices shaping Responsible AI.
