See my second post on emergent biases here.
Librarians are (or should be) always thinking of ways to make information and information systems more accessible to more people. That can be quite a challenge. (Databases, anyone?)
Information systems and technologies always have biases. A technology may be biased toward sighted users; Google Calendar, for example, has come under fire for not being accessible to people who are visually impaired. Or the creators may have biases of their own, which surface in the technology as pre-existing biases. For example, many dating sites offer only two gender options (male/female), even though gender identity and expression are far more varied than that.
Then there are emergent biases. These biases do not exist (per se) in the technology straight “out of the box.” Instead, they emerge from the interaction between the technology and its users. The smarter, more interactive, and more self-sustaining a technology becomes, the more common these emergent biases are.
Last week, a technology with quite the emergent bias came into the media spotlight. Microsoft created an artificial intelligence (AI) that would communicate with human users via Twitter. The AI, named Tay, was meant to mimic a teenage girl. She came pre-programmed with “teen girl speak” and would learn ideas and ways of communicating by interacting with human users.
Well, as you probably expected, Tay became a Neo-Nazi sex doll.
Microsoft didn’t program Tay to be a white supremacist or a sex object. (Or at least not directly.) But they did program her to learn from human users. And when those users are trolls, misogynists, and white supremacists, Tay would (and did) quickly learn to mimic her human companions.
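The core failure mode is easy to sketch in code. The toy bot below (a deliberately naive illustration, not Tay's actual architecture; the class and method names are my own invention) stores user phrases verbatim and echoes them back. With no moderation step between "hear" and "learn," the bot's output is only ever as good as its worst users:

```python
import random

class EchoBot:
    """Toy chatbot that 'learns' by storing user phrases verbatim.

    A hypothetical sketch to illustrate emergent bias: with no input
    filtering, the system inherits whatever its users feed it.
    """

    def __init__(self):
        self.learned_phrases = []

    def learn(self, phrase):
        # No moderation step: everything users say becomes "training data."
        self.learned_phrases.append(phrase)

    def respond(self):
        # The bot can only echo what it has absorbed.
        if not self.learned_phrases:
            return "..."
        return random.choice(self.learned_phrases)

bot = EchoBot()
for message in ["hello!", "cats are great", "some hateful slogan"]:
    bot.learn(message)

# Every stored phrase, hostile or not, is now a possible response.
print(bot.learned_phrases)
```

The bias here isn't in the code as shipped; it emerges from the interaction. Swap in well-meaning users and the bot is harmless; swap in trolls and it becomes a megaphone for them.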
This case of bad parenting for a teenage AI produced a technology that creates a hostile environment for a (very large) community of users. It’s a shining example of emergent bias. When making smart technology, help it be smart enough not to acquire biases.