Emergent Biases, Part 2

The most popular post on this site is my 2016 piece “Emergent Biases,” where I talk about Tay, Microsoft’s AI chatbot that went quite sour as she learned neo-Nazism and misogyny from Twitter users.

Because of that post’s popularity, I will be doing a brief series on biases in computer systems, based on Batya Friedman and Helen Nissenbaum’s “Bias in Computer Systems.”  They define three types of bias: pre-existing biases, technical biases, and emergent biases.  Naturally, this mini-series starts with emergent biases.

Friedman and Nissenbaum define emergent bias as bias that arises “in a context of use” (336).  Writing in 1996, they observed that these biases emerge after a system is complete, once its user population has changed in some significant way.

Using the original definition that Friedman and Nissenbaum studied, we can find some clear examples of emergent bias.  One very obvious example isn’t even related to a computer system.  The Americans with Disabilities Act (ADA) passed in 1990 and provided specifications for accessible buildings.  Part of building accessibility included ramps for individuals who cannot use stairs easily.  In 1990, though, wheelchairs were much smaller than they are today (2017).  So, if builders follow the specifications recommended by the ADA without going larger, many wheelchair users will be unable to use the ramps.  This is an example of a system whose user population changed (they’re using larger chairs), thus creating a bias in the system.

In 2017 (as I’m writing this post), emergent biases can also appear when people use a system in certain ways.  I wrote about Tay in my previous emergent biases post, but there are so many other instances of emergent bias.  (She just happened to be quite an interesting one.)  I used to play video games a lot more than I do now that I have a full-time job.  The internet at home was too slow to enjoy online gaming, so I never did that, but I hear it would have been an “interesting” experience.
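To make the Tay dynamic concrete, here is a toy Python sketch of my own, not anything like Tay’s actual implementation: a bot that “learns” by parroting whatever its users say most often.  The design itself is neutral; any hostility in its output emerges entirely from how people use it.

```python
from collections import Counter

class ParrotBot:
    """Toy bot that replays the phrase it has heard most often.

    A deliberately simplified illustration of emergent bias, not
    Tay's actual implementation.
    """

    def __init__(self):
        self.heard = Counter()

    def listen(self, message):
        # Every user message becomes "training data," with no filtering.
        self.heard[message] += 1

    def speak(self):
        if not self.heard:
            return "Hello!"
        # The bot repeats its most common input, so a coordinated group
        # of users can steer exactly what it says.
        phrase, _ = self.heard.most_common(1)[0]
        return phrase

bot = ParrotBot()
for msg in ["hello friend", "hostile slogan", "hostile slogan"]:
    bot.listen(msg)
print(bot.speak())  # "hostile slogan": the bias came from use, not design
```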

Let’s look at Twitter.  I love using Twitter; you may have even discovered this blog via my Twitter account.  So, I think it’s interesting to study how a system I personally like and use is biased.

Twitter in and of itself is just a platform where people can post little snippets of whatever is on their mind.  Seems pretty benign.  But people can use the software to inflict plenty of damage.  For example, many people of color and women receive offensive, upsetting, or threatening tweets, sometimes en masse.  Think of Leslie Jones’s departure from Twitter after users sent her pornography, hateful memes, and racist remarks.  Twitter, as a system, isn’t inherently hostile to people of color or women (or women of color, like Leslie Jones), but its users can make it incredibly hostile to them.

That’s why policies and terms of use, and strong enforcement of them, are incredibly important in Web 2.0 systems (like Twitter).  Users can transform a system that is not inherently biased against a certain group into an incredibly hostile environment for that group.  That’s an emergent bias.
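As a rough illustration of what “strong enforcement” might look like in code, here is a minimal, hypothetical sketch.  The banned patterns, strike counts, and threshold are stand-ins of my own invention, not any real platform’s policy engine.

```python
# Toy moderation rule: hide a violating post and suspend repeat offenders.
# The patterns and threshold are hypothetical stand-ins, not Twitter's policy.
BANNED_PATTERNS = {"racist remark", "violent threat"}

strikes = {}  # user -> number of violations so far

def enforce(user, tweet, max_strikes=3):
    """Return the action the platform takes for a single tweet."""
    if any(pattern in tweet.lower() for pattern in BANNED_PATTERNS):
        strikes[user] = strikes.get(user, 0) + 1
        if strikes[user] >= max_strikes:
            return "suspend account"
        return "hide tweet"
    return "allow"

print(enforce("troll", "a racist remark"))   # hide tweet (strike 1)
print(enforce("troll", "a racist remark"))   # hide tweet (strike 2)
print(enforce("troll", "a violent threat"))  # suspend account (strike 3)
```

Real enforcement is far messier than keyword matching, of course; the point is simply that a policy with no enforcement mechanism does nothing to counter the bias its users create.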
