It’s important to continuously look at the impact exponential technologies have on society, and this International Women’s Day we want to highlight a few of the many women working to mitigate the negative impact of algorithmic bias.
When we look at the current growth and adoption of artificial intelligence, we see that it follows a predictable exponential curve. Right now we are witnessing the unfettered expansion of a technology with the potential to change humanity in ways not yet imagined. We need to think deeply about how we are building these systems today and the negative implications they may unknowingly create in the future.
Right now, somewhere, an algorithm is being unintentionally coded with negative bias. This unfortunate reality has serious implications for the world of AI and, if left unchecked, will only continue to send us into a downward spiral.
Common types of negative bias found in algorithms include racial bias, gender bias, age bias, and socioeconomic bias.
If you’re surprised by the above, it’s possible you’ve been on the positive end of this bias. So why should you care, and what can we do about it? When looking at the unintended side effects of Big Tech, many are quick to ask, “How could they have predicted these issues?” In 2021, we are well aware of the power technology has to change society, both positively and negatively. We now understand the need to preemptively build ethics into technological advancements while those processes are still being created, rather than having to live with the outcomes 5, 10, or 20 years down the line.
This International Women’s Day, we want to highlight a few of the outstanding women actively working to raise awareness and aid in mitigating the negative impacts of algorithmic bias.
Joy Buolamwini is a digital activist based at the MIT Media Lab and the founder of the Algorithmic Justice League, which aims to create a world with more ethical and inclusive technology. You may recognize her from her TED Featured Talk on algorithmic bias, which has been viewed over one million times. Her MIT thesis methodology uncovered large racial and gender bias in AI services from companies like Microsoft, IBM, and Amazon, and she has championed the need for algorithmic justice at institutions like the World Economic Forum and the United Nations.
Ruha Benjamin is a professor of African American studies at Princeton University, founding director of the Ida B. Wells Just Data Lab, author of People’s Science and Race After Technology, and editor of Captivating Technology. Her article “Assessing Risk, Automating Racism,” published in Science, breaks down how racial bias in cost data leads an algorithm to underestimate the health care needs of Black patients, a mechanism made concrete in the sketch following these profiles.
Marietje Schaake is the international policy director at Stanford University’s Cyber Policy Center and international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. She was also named president of the CyberPeace Institute.
Timnit Gebru is a widely respected AI ethicist and computer scientist who works on algorithmic bias and data mining. She is an advocate for diversity in technology and co-founder of Black in AI, a community of Black researchers working in artificial intelligence.
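To make the mechanism in Benjamin’s article concrete, here is a minimal, hypothetical simulation, with assumed numbers rather than the study’s actual code or data, of how an algorithm that ranks patients by predicted cost, used as a proxy for health need, can under-prioritize a group that incurs lower costs at the same level of illness.

```python
# Hypothetical illustration (assumed numbers, not the study's code or data):
# when an algorithm ranks patients by cost as a proxy for health need, a group
# that incurs lower cost at the same level of illness is under-prioritized.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True health need is drawn identically for both groups.
group = rng.integers(0, 2, size=n)            # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.5, size=n)

# Observed cost tracks need, but group B incurs ~30% lower cost at the same
# level of need (e.g., unequal access to care), plus noise.
access_gap = np.where(group == 1, 0.7, 1.0)
cost = need * access_gap * 1000 + rng.normal(0, 200, size=n)

# The "algorithm": flag the top 20% of patients by cost for extra care.
threshold = np.quantile(cost, 0.80)
flagged = cost >= threshold

for g, label in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(f"{label}: mean true need = {need[mask].mean():.2f}, "
          f"flagged for extra care = {flagged[mask].mean():.1%}")
```

Note that no demographic variable enters the ranking at all; the disparity comes entirely from the choice of cost as the proxy label, which is why auditing outcomes by group matters.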
These women are not doing this work alone, and we acknowledge everyone, of all genders and around the globe, working on this growing issue. Each quarter we release a research paper with a complementary technical asset for Singularity Enterprise Members. This quarter’s paper focuses on rooting out algorithmic bias and outlines why addressing it is vital from both a business and a societal lens. Members also received access to part of the code from our Algorithmic Bias workshop, which lets them experience firsthand the impact that algorithmic bias has on our society.
Looking to dive deeper into this topic? Check out the following resources: