It feels like AI has come out of nowhere and kind of taken over, right? It's everywhere we look, from Google suggesting AI answers (sometimes with laughably ridiculous results) to virtual assistants helping us with online shopping.
But for people inside the tech world like Dr Joy Buolamwini, this explosion of AI into our everyday lives has been a long time coming. Back in 2015, when Apple first released the Apple Watch, and Instagram was only just taking off as a popular platform, Dr Joy was in her dorm room at the Massachusetts Institute of Technology (MIT), already grappling with some of the problems that AI can present. “I didn’t realise how long it’s been!” she laughs as we chat over Zoom. “The field that I loved didn’t love me back.”
While working on a project for her Master's degree, Dr Joy found that facial-recognition technology couldn't see her dark-skinned face. Determined to make progress, she grabbed a white Halloween mask from her desk and put it on. Incredibly, the software now recognised her.
This was the moment that changed everything for Dr Joy. Here was an accomplished woman in STEM who had to wear a Halloween mask so that facial recognition technology could see her. “Coding in whiteface was the last thing I expected to do when I came to MIT,” she writes in her book Unmasking AI: My Mission To Protect What Is Human In A World of Machines. “But – for better or for worse – I had encountered what I now call ‘the coded gaze’.”
Sure, AI isn't all about race and gender. But when it comes to the life-changing decisions that AI is used for – like being profiled by police or trying to get a job – we need to make sure these artificial intelligence tools aren't biased. And what Dr Joy discovered is that they are. In fact, artificial intelligence doesn't just mirror biases; it reinforces and amplifies them.
“I often think about the use of AI when it comes to the criminal justice system,” Dr Joy tells GLAMOUR. “In the UK, the Metropolitan Police did audits that showed disparities in terms of who were being flagged as suspects.”
In May 2024, a British woman was erroneously identified as a shoplifter by facial-recognition technology in a Home Bargains store. Her bag was searched by security, she was led out of the shop and told that she was now banned from all stores. All of this happened even though the woman, who chose to remain anonymous, had never stolen a thing.
The company that runs the facial-recognition software, Facewatch, wrote to the woman to acknowledge its error, but the damage had already been done, and she feared her life would never be the same again.
For women in particular, one of the most insidious and dangerous applications of AI is the use of deepfake technology. What’s incredibly striking is that a 2023 report found that 98% of deepfake videos online are deepfake pornography, and 99% of those videos target women. “Part of the conversation around AI bias and AI harm is that no one is immune,” Dr Joy tells GLAMOUR. “Anyone with a basic internet connection and some time can produce these photo-realistic images. It’s deeply concerning how accessible these tools are.”
The same 2023 report found that it takes less than 25 minutes (so not even half an episode of Bridgerton) and costs absolutely nothing to create a 60-second sexually explicit deepfake video of anyone using just one clear face image. “Now that deepfakes are so cheap and easy to produce, there need to be real consequences and deterrents,” says Dr Joy. “I would call on tech companies to do better and there need to be legal consequences.”
But although it can feel like AI is an unstoppable monster, people like Dr Joy are working to ensure it is used for good. During our chat, it’s clear how passionate and excited she is about the future of AI and her confidence in our collective power to positively shape it. “I want to remind people we have agency. We get to shape the future,” says Dr Joy. “What I want to see is technology that works well for all of us, not just the privileged few. We have a voice and we have a choice.”
From a young girl fascinated by robots to a leading voice in AI ethics, Dr Joy's journey is both powerful and inspiring. She went from a student at MIT discovering that facial-recognition technology was discriminating against her as a dark-skinned woman, to developing pioneering research that highlights this bias and makes people in power pay attention. Her ground-breaking "Gender Shades" research uncovered gender and skin-type bias in commercial AI products, and got companies like Google and Microsoft not only to pay attention, but to take action.
In 2016, Dr Joy started the Algorithmic Justice League (AJL) to fight for fair AI. The AJL is all about making sure AI works for everyone by using research, experiments and recommendations to galvanise action among policy-makers in both government and commercial businesses. Dr Joy is showing us how tech can be a force for good, and how everyone can have a say in it.
“At the AJL, we have a harm reporting platform for people who have been subjected to deepfakes or know others who have,” explains Dr Joy. “Or if you see the release of a tool that is really problematic, or even if you’re on the inside of a company and are seeing something that you think should be reported.”
Dr Joy explains how everyone has the power to influence and guide AI technology. “You might think you don’t have a tech background or the language to understand, but you don’t need those things,” she says. “So long as you have a curious mind and a caring heart, that’s all you need to be part of this movement for algorithmic justice.”
“I want to encourage people to understand that their stories matter. You’re not alone.”
Dr Joy Buolamwini’s best-selling book “Unmasking AI: My Mission To Protect What Is Human In A World of Machines” is available in paperback from her website. You can watch her documentary “Coded Bias” on Netflix.



