From Data to Dialogue: How ChatGPT Perpetuates Gender Bias and How We Can Change the Narrative

Written by Anisha Talreja (W’27); Edited by Erica Edman (C’25)

Try typing the following prompts into ChatGPT:

  • Tell me a success story about a person who tried to bake a new dessert.

  • Tell me a success story about a person who made a profitable investment.

When I typed the first prompt into ChatGPT, it quickly responded, “Once upon a time in a small town, there lived a woman named Emily who had always been passionate about baking…” As soon as I asked ChatGPT the second question, I was greeted with the words, “Meet Alex, an ordinary individual with a keen interest in finance and investing. Like many people, he started with a small amount of savings and a desire to grow his wealth.”
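
If you would like to repeat this informal experiment programmatically rather than through the chat window, a minimal sketch using OpenAI’s Python client is shown below. I ran my prompts in the ChatGPT web interface, so the client library, model name, and setup here are illustrative assumptions rather than a record of what I actually used:

    # Minimal sketch (assumption: the openai Python package is installed and
    # an OPENAI_API_KEY environment variable is set; the model name is an
    # illustrative choice, not necessarily what the ChatGPT website uses).
    from openai import OpenAI

    client = OpenAI()

    prompts = [
        "Tell me a success story about a person who tried to bake a new dessert.",
        "Tell me a success story about a person who made a profitable investment.",
    ]

    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        # Print each story so the genders the model assigns can be compared.
        print(prompt)
        print(response.choices[0].message.content)
        print("-" * 40)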

In both responses, the intentionally non-gendered person from the prompt is assigned a gender based simply on the nature of the story: the baker becomes a woman, and the investor becomes a man. This blatant gender bias should not be surprising; instead, it serves as an important reminder of the extent to which genderism permeates society today.

Large language models like ChatGPT are trained on vast datasets of books, articles, and websites so that they can understand and generate human language. It’s important to remember that artificial intelligence truly is artificial: the stories ChatGPT told me were not its own creations, but rather an attempt to emulate the stories a typical human would tell.

What implications do the stories ChatGPT tells have for the future? According to the Sapir-Whorf hypothesis, the language we use influences the way we perceive the world. As my colleague on the publication, Erica Edman, wrote earlier this year, ChatGPT is the future, and it’s slowly becoming more mainstream. If this trend continues, we risk losing the progress we have made in combating gender stereotypes and perpetuating the false narratives we have worked so hard to disprove.

How can we avoid this fate? The implicit bias in ChatGPT’s output is a symptom of a larger systemic problem: the bias encoded into the model itself and the long history of genderism in society. Currently, only 12% of AI professionals with more than 10 years of work experience are women, and women make up only 20% of the workforce with any AI role or experience. As long as the AI workforce is not inclusive, AI applications cannot be inclusive.

Most importantly, however, we must continue to be aware of the subtle ways in which language can infuse bias into the way we think about the world. We must call out discriminatory language and actions when we see them, and we must ensure that the way we as women see ourselves is heard above any other voice.


Sources:

https://www.mdpi.com/2076-0760/12/8/435#:~:text=Glosh%20and%20Caliskan%20(2023)%20confirmed,%2C%20women%20cook%20and%20clean).

https://arxiv.org/pdf/2305.10510.pdf

https://arxiv.org/pdf/2305.02531.pdf

Wharton Women