Unmasking AI’s Detrimental Effects on the Trans Community


Photo by Delia Giandeini on Unsplash

Discussions around the risks of AI often gravitate towards the hypothetical dangers of artificial general intelligence (AGI) and doomsday scenarios. Robots are not going to take over the world. Yet, the current level of AI does pose tangible risks, particularly to the trans and gender non-conforming community, who have already been harmed by this technology.

We will outline the dangers to this community with a focus on:

  • Automatic gender recognition
  • Limitations of medical models
  • The amplification of transphobic content on social media

While the trans community feels the immediate consequences, these dangers affect us all. They spread hate and limit the richness of diversity, constraining our collective capacity to fully express ourselves. We must understand how our roles as tech professionals can support trans people and create a stronger society.

We are at a point that we can deploy AI at scale, only because we have a significant amount of data and computational power. The worry is that AI is not meeting the ethical challenges.

— Alex Hanna

Face filters

We’ll ease into the dangers with an example that, on the surface, may not seem serious. If you are on social media, you know what a face filter is. They use machine learning to warp your face, make you look old, or even turn your dog into a Disney character. Most would agree these are just harmless fun. When it comes to gender, things can be more complicated, although the negative consequences should not be overstated.

I’m only an ally and can’t speak for trans people. It does seem that, even amongst the trans community, the consequences of gender-swapping filters are debatable. They can allow you to explore your gender identity and expression. However, they can also reinforce gender stereotypes and exclude non-binary people. Some have even used the filters as a tool to mock the transitioning process.

Exploring gender and different genders than the one you were assigned is a good thing, I encourage it. You may learn new things about yourself that surprise you, and you may find yourself kinder to trans people.

— Charlie Knight

When discussing this type of tech, a distinction should be drawn between the applications that allow you to choose a gender and those that attempt to predict it. For example, see the first video in the Pixar filter compilation. The algorithm struggles when users do not have traditional male or female characteristics.

https://www.youtube.com/watch?v=-xAzPEvxQz0

This reveals the issue with these types of applications — the underlying tech is based on the assumption that you can predict someone’s gender identity. This is pseudo-science. Carrying the assumption over to other applications can have significant consequences.

Automatic gender recognition (AGR)

AGR or gender recognition software is a branch of machine learning that attempts to predict a person’s gender. This is done by analysing facial characteristics, body shape, clothing, voice patterns, or behavioural traits. Yet, gender is complex and cannot be fully captured by these aspects. This is especially true when it comes to trans people.

A study of four AGR systems, summarised in Figure 1, showed they misgender trans women 12.7% and trans men 29.5% of the time on average. This is compared to error rates of 1.7% for cis women and 2.4% for cis men [1]. These systems also completely ignore other gender groups.

Figure 1: Accuracy of AGR systems (source: M. K. Scheuerman et al. [1])
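
To make the numbers in Figure 1 concrete, here is a minimal sketch of the kind of disaggregated error analysis behind them: computing misgendering rates separately for each gender group rather than one overall accuracy score. The data below is invented purely for illustration; in the actual study, predictions came from commercial AGR services and labels were self-reported [1].

```python
from collections import defaultdict

# Invented (self-identified group, AGR prediction) pairs, for illustration only.
results = [
    ("trans woman", "male"), ("trans woman", "female"), ("trans woman", "female"),
    ("trans man", "female"), ("trans man", "male"), ("trans man", "male"),
    ("cis woman", "female"), ("cis woman", "female"),
    ("cis man", "male"), ("cis man", "male"),
]

# The label an AGR system must return to avoid misgendering each group.
# Note that a non-binary person has no correct entry here at all: a binary
# AGR system cannot classify them correctly, only ignore them.
correct_label = {
    "trans woman": "female", "trans man": "male",
    "cis woman": "female", "cis man": "male",
}

rates = defaultdict(lambda: [0, 0])  # group -> [misgendered count, total]
for group, predicted in results:
    rates[group][0] += predicted != correct_label[group]
    rates[group][1] += 1

for group, (wrong, total) in rates.items():
    print(f"{group}: misgendered {wrong / total:.1%} of the time")
```

A single aggregate accuracy score would hide exactly the disparity the study found; disaggregating by group is what exposes it.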

It is disrespectful to misgender trans people. It can also have serious mental health effects. Consistently being referred to as the gender you do not identify as can be both exhausting and demoralising. Now imagine a world where this is automated and baked into our everyday systems.

You don’t have to think too hard: these systems have already been deployed.

The harm caused by these types of systems is well known—so much so that the EU has been urged to ban them.

Healthcare models that depend on gender

AGR involves machine learning where gender is the target variable. Issues also arise when we include gender as a model feature, specifically when we do not distinguish between sex (assigned at birth) and gender (socially constructed roles and identity). This issue is prevalent in healthcare models.

In healthcare, sex and gender are often confounded, so much so that the term sex–gender-based medicine has been proposed [2]. In fact, little data has been collected that treats trans and other gender groups as a category. The result is models trained using a single binary feature (male/female), with sex assigned at birth acting as the proxy for both sex and gender [3].

False assumptions that sex and gender are binary, static, and concordant are deeply embedded in the medical system.

— Kendra Albert, Maggie Delano

The problem is that there are many diagnoses and treatments where the interaction between sex and gender is important [4]. This is true for HIV prevention, reproductive health, hormone replacement therapy, and mental health. By collapsing sex and gender into one variable, we ignore trans people in our medical systems. The result is poorer care in comparison to their cisgender counterparts.
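
As a hypothetical sketch, disaggregating these variables could look something like the following. None of these field names come from any real clinical system; the point is simply that a single binary column cannot carry all of this information.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientFeatures:
    """Illustrative model inputs that a single male/female column collapses."""
    sex_assigned_at_birth: str            # e.g. "female", "male", "intersex"
    gender_identity: str                  # e.g. "woman", "man", "non-binary"
    on_hormone_therapy: bool              # hormones shift many reference ranges
    organs_present: Optional[set] = None  # relevant for screening decisions

# A trans man on testosterone: a single binary feature misrepresents him
# for some models (e.g. cervical screening) no matter which value it takes.
patient = PatientFeatures(
    sex_assigned_at_birth="female",
    gender_identity="man",
    on_hormone_therapy=True,
    organs_present={"cervix", "uterus"},
)
```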

The amplification of transphobic content on social media

Until now, we’ve focused on more direct impacts: through entrenching gender expectations and degrading model performance, AI can lead to negative experiences for trans people. AI can also have a less direct impact, by influencing others’ opinions of trans people.

Social media recommendation algorithms have one job — to keep you on the platform. Unfortunately, anger, particularly towards a group that you don’t belong to, is effective at driving engagement [5]. There are also concerns that these algorithms reinforce preexisting beliefs [6], by only recommending content similar to what you have engaged with in the past.
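
A toy simulation makes this feedback loop explicit. The ranking rule and all the numbers below are invented, and real recommenders are far more complex, but the incentive structure is the same: rank by predicted engagement, and let engagement update the prediction.

```python
import random

random.seed(0)

# Invented baseline engagement rates: outrage content engages more [5].
base_engagement = {"neutral": 0.05, "outrage": 0.15}
# The ranker's learned per-user affinity, updated from past engagement [6].
affinity = {"neutral": 1.0, "outrage": 1.0}

def recommend(eps=0.1):
    """Rank by predicted engagement, with occasional random exploration."""
    if random.random() < eps:
        return random.choice(list(base_engagement))
    return max(base_engagement, key=lambda c: base_engagement[c] * affinity[c])

for _ in range(500):
    shown = recommend()
    if random.random() < base_engagement[shown]:  # did the user engage?
        affinity[shown] *= 1.2  # engagement feeds straight back into ranking

print(affinity)  # the feed locks onto the higher-engagement category
```

Nothing in this loop measures accuracy, fairness, or harm. The only signal is engagement, which is why outrage-driven content wins by default.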

Gender is central to modern societal norms and expectations. The existence of trans people can challenge these. For some, this is met with fear, anger, and an unwillingness to accept scientific facts. These are conditions ripe for increased engagement and creating transphobic echo chambers.

We’ve seen this on Facebook, where users get a biased and inaccurate understanding of the issues that impact trans people. As seen in Figure 2, posts about trans issues on right-leaning pages earned nearly twice as many interactions as those on other pages. The majority of these interactions were on posts made by anti-trans websites.

Figure 2: Facebook interactions on all trans-related posts by page ideology, October 2020 through September 2021 (chart by author; data source: Media Matters)

Facebook is not the only platform with a problem. After interacting with transphobic content, TikTok leads you down a rabbit hole of extremism, hatred, and violence. My experience of being recommended transphobic content on YouTube Shorts is what motivated me to write this article, an experience shared by others.

The content on these platforms seeks to push the false narrative that being trans is an ideology or a mental illness. It is not. It also tries to divert and trivialise the debate away from basic human rights and towards sports, bathrooms, and pronouns. The most insidious seeks to reframe the pursuit of equality as an attack on children.

The trans community poses no risk to children. Yet this content poses a significant risk to trans people, including trans children. In 2023, 79 anti-trans bills were passed in the US, and social media is believed to have contributed to these policy changes. The transphobic content also drives harmful social change.

82% of transgender individuals have considered suicide and 40% have attempted it. The most significant contributing factor is interpersonal microaggressions: brief, commonplace daily insults or slights [7]. This is the same behaviour that anti-trans content normalises and promotes.

Interpersonal microaggressions made a unique, statistically significant contribution to lifetime suicide attempts

— Ashley Austin et al.

Based on these consequences, social media platforms are morally obligated to curtail this content or, at the very least, label it as false and unscientific. We should all reject transphobia. As workers in tech, we should also use our unique positions of influence. We have the power to push back against these trends and to shape the very systems that harm trans people.

We can start by educating ourselves about what it means to be trans. We can push for inclusive training data and more diverse teams. We should also advocate for regulation aimed at increased transparency, explainability, and human oversight of AI systems. In doing so, we should not allow ourselves to be distracted by hypothetical doom scenarios but focus on the immediate risks of AI.

All Medium partnership funds from this article will be donated to TGEU. See the videos below if you want to learn about what it means to be transgender or how to be a better ally. Happy Pride Month 🙂

What Does Transgender Mean?

The Neuroscience of Being Transgender

Trans 101 — Being a Trans Ally
