These days, technology advances so quickly that fictional technologies from old movies have become things we all consider pretty mundane. With ever-growing intricacy and constant updates, minor bugs here and there are inevitable. After all, we're only human. But what happens when AI technology has much more than just technical bugs? What if AI technology possesses racial and gender biases in its algorithms? To put it plainly, tech giants were found to have such biases in a recent study conducted by MIT. The team investigated these companies through the Gender Shades algorithmic audit, the first algorithmic audit to study the gender (focusing on binary gender) and skin-color biases in these companies' facial recognition technologies. As the AI tech industry currently has no regulations, the purpose of these algorithm audits is "to gauge user awareness of algorithmic bias or evaluate the impact of bias on user behavior and outcomes" (1). The study's major finding was that the companies' facial recognition technology identified white males most accurately and black females most poorly. These results, along with the companies' responses, showed that algorithm audits are effective in pressuring companies to take a stance against algorithmic bias.
Why does this matter?
In the near future, many companies plan to use facial recognition software in hiring screenings, loan lending, and other decision-based services. If AI technology is not closely regulated for such social biases, discrimination against certain groups of people will likely continue. We could become a more technologically advanced society yet remain socially underdeveloped, facing rampant discrimination against people who have already been historically marginalized. This raises the question: what will our society look like for minorities further into the future?
What can we do?
To address such biases as a community, we must begin by spreading awareness of social biases and building AI literacy. Through our STN Skill Development Program, we can introduce AI into our learning programs for inner-city and rural youth alike through fun activities and games. To make AI less intimidating, we can hold interactive Q&A sessions on the basics of AI and, in addition, facilitate a space where community members can openly share their concerns with the creators of AI technology. Using our STN Data Analysis, we can teach members of the community how to read and interpret data and draw conclusions from it. Lastly, we can encourage our community to give feedback and suggestions on existing apps and programs, especially when obvious social biases are present. With all of these initiatives, we will develop a community that is not only AI literate but also aware of social biases and equipped with the know-how to take proper action when bias arises.