
Are chatbots sexist?

By Josephine Young · 18 December 2019 · 7 min read

Josie Young asks how our conversations with bots might reinforce stereotypes on a micro-scale.

Josie Young has been nominated for ‘Young Leader of the Year’ at the Women in IT Awards Series taking place on the 29th January 2020 in London. Here we explore one of Josie’s research projects that led to her nomination…

In 2017, I set out to understand what a feminist chatbot could look like. I read up on Siri, Alexa, Cortana and Google Assistant – and learned how creepy their design can be, how often they are harassed by users, and how heavily they are based on stereotypes. The design of these personal assistant voicebots reinforces the idea that women exist only to serve others (a recent UNESCO report agrees). On the other hand, chatbots that provide more analytical or ‘serious’ services like financial or legal advice are often represented as men.

These chatbots speak to thousands and thousands of people a day, giving them a greater reach than a single human customer service worker could ever dream of. Therefore, when the design and personalities of these bots are based on stereotypes, they reinforce those stereotypes on a micro-scale in each conversation they have – and at enormous volume in aggregate.

When building chatbots, it’s important to think critically about the design choices we make. That’s why I developed the Feminist Chatbot Design Process (FCDP) – it aims to support the teams that design and deploy chatbots to think more carefully about the potential implications of their designs. Not only does the FCDP help with identifying bias in chatbot design, it also encourages teams to be more innovative and creative. This technology is genuinely exciting, and we shouldn’t let outdated stereotypes hold it back.

I was fortunate enough to work with the Feminist Internet in 2018 on their ‘Building a Feminist Alexa’ programme at the University of the Arts London (UAL). We sat down and adapted the FCDP into a three-day workshop series for UAL students to design and create early prototypes of feminist Alexa skills. It was a brilliant experience to see what the students came up with. The prototypes ranged from a voicebot that answered embarrassing questions about puberty with a compassionate and non-judgemental tone, to a voicebot that helped students prepare for life after university (and had a brilliant comeback to foul language from its user!).

The ‘Building a Feminist Alexa’ programme demonstrated that we can use feminist and ethical tools like the FCDP to come up with more innovative and creative uses for emerging technology. Approaching the design and development from a social impact perspective pushed the boundaries of what a voicebot could be, rather than hindering it.

Designing Feminist Chatbots

This exploratory research drew on a Technofeminist framework (Wajcman, 2004)† to examine the ‘mutually shaping’ relationship between Artificial Intelligence-based chatbots, gender stereotypes and gender power dynamics. While not all bots and chatbots are given a gender, when represented as women the chatbot design is generally based on gender stereotypes, and then reinforces those stereotypes in society. The idea that technology design is ‘neutral’ is therefore a myth, and one that only serves to limit our imagination of how the technology could be designed. The aim of the research project was to examine this dynamic and create an intervention to disrupt the relationship between chatbots and entrenched gender power dynamics.

To develop a feminist chatbot intervention, I conducted expert interviews with professionals working in Artificial Intelligence (AI) in London (United Kingdom) and explored contemporary feminist techniques for designing technology. Issues raised by the experts included the need to be aware of biased training data, the importance of raising ethical concerns at the start of a chatbot project, and the value of an intervention that bridges the gap between AI technicians and social analysts. Based on this fieldwork, I created a new intervention for designing feminist chatbots, which I tested at a half-day Hackathon focused on social justice and young people’s leadership.

The intervention is called the Feminist Chatbot Design Process (FCDP), which is a series of reflective questions incorporating feminist interaction design characteristics, ethical AI principles and research on de-biasing data.‡ The FCDP encourages design and development teams to follow the reflective questions at the conceptual design phase, and can be used by all team members (technical and non-technical). The outcome of the FCDP is that teams produce a chatbot design which is sensitive to feminist critiques of technology and AI, and grow their own awareness of the relationship between gender power relations and technology.

Overall, the intervention had a clear impact on the conceptual design process, prompting the team to consider the ways in which their design might perpetuate gender bias or inequality. One of the Hackathon participants commented, “the emphasis on bias, ethics and feminism will definitely stick with me as I explore the world of AI further”. The intervention’s questions on the purpose and ecology of the chatbot proved particularly effective, whereas the questions on the gender of the chatbot required further refinement.

Next Steps

We have an opportunity to design Artificial Intelligence-based technology in a way that challenges gender stereotypes and transforms gender-based power dynamics. Based on the Hackathon findings, I have iterated the FCDP (below) and would like to test the process in a mainstream context. I encourage you to have a read of the FCDP, share it with your colleagues and use it with your team. If you have any feedback or queries about this research, or about how to apply the FCDP to your project, don’t hesitate to get in touch.

Feminist Chatbot Design Process: Questions for the Conceptual Design Phase

The aim of the following questions is to deepen how you think about the values you will be embedding in your chatbot during the conceptual design phase. These questions aim to make your chatbot better by ensuring it doesn’t knowingly or unknowingly perpetuate gender inequality. These are based on Shaowen Bardzell’s Feminist Human-Computer Interaction (2010) and the IEEE’s Ethically Aligned Design (2016).

Strategic Level

  • Purpose & Ecology
    • What is the purpose of the chatbot?
      • Does the chatbot service meet a meaningful human need or address an injustice?
      • Does the chatbot aim to augment human capabilities, or to scale a service so that it is more democratically available?
    • What ecosystem does the chatbot sit in?
      • Technology does not sit in isolation from political, social, economic, cultural, technological, legal and environmental issues. Do you have a good understanding of the ecosystem your product will be part of?
      • Do you understand both the opportunities and challenges within this ecosystem, and the different stakeholders who form it?
      • Does your chatbot conflict with anything in the ecosystem?
  • Data
    • How will you treat data throughout the development of your chatbot?
      • Do you know how to apply the latest techniques to de-bias the data you use to train your chatbot, and do you do this routinely? (A minimal sketch of one such check follows this list.)
      • Are you able to draw from a diverse set of training data, so that everything your chatbot learns isn’t just from one source?
      • How will you treat the privacy of your users’ data and empower them to understand how you plan to use their data? What role does their data play in your revenue model?
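
To make the de-biasing question above concrete, here is a minimal sketch of the kind of routine audit a team could run over its training utterances. The word lists, function name and 80% threshold are illustrative assumptions for this article, not part of the FCDP; real de-biasing work would use far richer lexicons and proper statistical tests.

from collections import Counter

# Hypothetical word lists – real audits would use far richer lexicons
# (and embedding-based bias tests) than this toy example.
FEMININE_TERMS = {"she", "her", "hers", "woman", "women", "assistant"}
MASCULINE_TERMS = {"he", "him", "his", "man", "men", "expert"}

def audit_gender_balance(utterances):
    """Count gendered terms across a corpus of training utterances
    and warn when one set of terms heavily outweighs the other."""
    counts = Counter()
    for text in utterances:
        for token in text.lower().split():
            token = token.strip(".,!?")
            if token in FEMININE_TERMS:
                counts["feminine"] += 1
            elif token in MASCULINE_TERMS:
                counts["masculine"] += 1
    total = sum(counts.values()) or 1
    for label, n in counts.items():
        print(f"{label}: {n} ({n / total:.0%})")
    # Arbitrary 80% threshold, purely for illustration.
    if counts and max(counts.values()) / total > 0.8:
        print("Warning: corpus is heavily skewed – review your sources.")

# Toy corpus: two stereotyped utterances and one neutral one.
corpus = [
    "She is the assistant who books your meetings.",
    "He is the expert who reviews your contract.",
    "The adviser answered the question clearly.",
]
audit_gender_balance(corpus)

Running a check like this before every training run turns ‘do you do this routinely?’ into a question you can answer with evidence rather than intuition.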

Team Level

  • Team / Reflexivity (not a thinker from nowhere)
    • Has the team reflected on their values and position in society?
      • Has the team reflected on the ways in which their values and position in society (e.g. as a white, left-wing, young, Australian woman) mean that they are more likely to choose one option over another, or that they hold a specific, not universal, perspective on the world?
      • How do your values and position in society sit in relation to the ecosystem and stakeholders your chatbot seeks to engage?

We all come from places and experiences that have shaped our thinking and perspectives, and we tend to unconsciously embed these perspectives in the things that we make. The risk of not reflecting on these questions is that your chatbot may reinforce negative stereotypes about particular groups of people, which could be harmful to your users.

User Level

  • Marginal user & participation
    • Rather than design a chatbot for ‘universal usability’ – aka a single, universal user – can you identify a ‘marginal’ user who would benefit from your chatbot?
      • What are their specific needs, and what specific barriers and pain points do they face? What are their specific strengths and viewpoints?
      • How do your own values and position in society compare to that of your marginal user?
      • What are you putting in place to ensure you aren’t imposing your values and expectations on your marginal user?
      • What are the different participatory methods you have available to you so that your marginal user can co-create or have direct input into the development of your chatbot?

Design Level

  • Representation of the chatbot
    • How are you planning to depict or represent your chatbot to your users?
      • Have you thought about the ways in which your users will likely form social attachments to your chatbot, and will tend to connect with it socially and emotionally as if it were human?
      • Are you planning on assigning a gender to your chatbot? Why? In what ways might this reinforce or challenge gender stereotypes? In what ways might this prompt your user to behave unethically or in a prejudiced way?
      • Have you considered a genderless chatbot? What possibilities does a genderless chatbot open up for your design?
  • Self-disclosure
    • In the design of the chatbot, are there any assumptions about how your user will engage or act towards the chatbot?
    • For example, will the chatbot learn from the user’s behaviour in order to predict future behaviour – and if so – are you assuming the chatbot will always get the prediction right?
    • Have you considered mechanisms or features which would make these assumptions visible to your user, and empower them to change these assumptions if they wanted to?
    • For example, allowing a user to change the preferences set by the chatbot, or providing an opt-out feature during certain interactions. (A sketch of such a mechanism follows this list.)
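
To illustrate the self-disclosure questions above, here is a minimal sketch of a preference store that keeps the chatbot’s inferred assumptions visible to the user and lets them correct or opt out of each one. The class and method names are hypothetical, invented for this article rather than taken from the FCDP or any particular framework.

from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    """Assumptions the chatbot has inferred about a user, kept visible
    and editable so the user stays in control of them."""
    inferred: dict = field(default_factory=dict)
    overrides: dict = field(default_factory=dict)
    opted_out: set = field(default_factory=set)

    def infer(self, key, value):
        # Record what the bot has assumed, rather than acting on it silently.
        if key not in self.opted_out:
            self.inferred[key] = value

    def disclose(self):
        # Show the user everything the bot currently believes about them;
        # explicit user corrections take precedence over inferences.
        return {**self.inferred, **self.overrides}

    def override(self, key, value):
        # A user's correction always wins over an inferred value.
        self.overrides[key] = value

    def opt_out(self, key):
        # Stop inferring this preference and forget any stored value.
        self.opted_out.add(key)
        self.inferred.pop(key, None)
        self.overrides.pop(key, None)

prefs = UserPreferences()
prefs.infer("preferred_tone", "informal")   # learned from behaviour
print(prefs.disclose())                     # {'preferred_tone': 'informal'}
prefs.override("preferred_tone", "formal")  # the user corrects the assumption
prefs.opt_out("location")                   # the user opts out of an inference
print(prefs.disclose())                     # {'preferred_tone': 'formal'}

The design choice this sketch embodies is that the bot never acts on an assumption it cannot show to, and have corrected by, the user.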


Please feel free to get in touch if you are interested in learning more about this or if you have any thoughts or comments: josephine.young@methods.co.uk.

† Wajcman, J. (2004) Technofeminism. Cambridge: Polity.

‡ Incorporating principles from Bardzell, S. (2010) ‘Feminist HCI: taking stock and outlining an agenda for design’, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, April, ACM, pp. 1301-1310; and IEEE (2016) Ethically Aligned Design: A Vision for Prioritizing Wellbeing with Artificial Intelligence and Autonomous Systems, Version 1. Available at: http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html