Twitter is investigating whether the automated tool that selects which part of an image to preview in tweets is racially biased against Black people.
For several years, Twitter has used machine learning to find the most “interesting” part of photos and crop accordingly for better image previews. The upshot is that as you scroll through Twitter, you’ll likely see photo previews focused on faces rather than, say, necks or foreheads.
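The article doesn't describe Twitter's actual model, but the general idea of saliency-based cropping can be sketched: score every pixel for "interestingness," then choose the crop window with the highest total score. This is a minimal, hypothetical illustration (the `crop_to_salient` function and the toy saliency map are assumptions, not Twitter's implementation):

```python
import numpy as np

def crop_to_salient(saliency, crop_h, crop_w):
    """Slide a crop window over a 2D saliency map and return the
    (top, left) offset whose window has the highest total saliency."""
    H, W = saliency.shape
    best, best_pos = -np.inf, (0, 0)
    for top in range(H - crop_h + 1):
        for left in range(W - crop_w + 1):
            score = saliency[top:top + crop_h, left:left + crop_w].sum()
            if score > best:
                best, best_pos = score, (top, left)
    return best_pos

# Toy saliency map: a bright 2x2 "face" region in the lower-right corner.
sal = np.zeros((6, 6))
sal[4:6, 4:6] = 1.0
print(crop_to_salient(sal, 3, 3))  # → (3, 3), the window covering the face
```

The key point for the bias question is that everything hinges on how the saliency scores are produced; if the scoring model systematically rates some faces as less salient, the crop will systematically exclude them.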
Questions about whether Twitter's photo previews might be racially biased sprang up after a tweet by PhD student Colin Madland about Zoom erasing a Black man's face when he used a virtual background.
— Colin Madland (@colinmadland) September 19, 2020
This tweet prompted other Twitter users to test out the automated cropping, including developer Tony Arcieri.
Arcieri conducted a small test using two images, each containing photos of Barack Obama and Mitch McConnell separated by a wide stretch of white space, essentially forcing the algorithm to pick just one face for the preview.
The positions of Obama and McConnell were swapped between the two images, but in both cases the preview zeroed in on McConnell's face.
Trying a horrible experiment…Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama? pic.twitter.com/bR1GRyCkia
— Tony “Abolish (Pol)ICE” Arcieri 🦀 (@bascule) September 19, 2020
Twitter replied to Arcieri's tweet, saying it had tested the cropping algorithm for bias before building it into the platform, but indicated it would investigate the matter more deeply.
“We tested for bias before shipping the model & didn’t find evidence of racial or gender bias in our testing. But it’s clear that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, & will open source it so others can review and replicate,” Twitter said.
Individual Twitter engineers also weighed in to say they’d take a closer look at the algorithm.
Zehan Wang, engineering lead at Twitter's machine learning research division Cortex, replied on Madland's original thread, "We'll look into this," adding that the algorithm Twitter currently uses was deployed in 2017 and does not rely on face detection.
Twitter's chief design officer Dantley Davis suggested the algorithm could be picking up on factors other than skin color.
Here’s another example of what I’ve experimented with. It’s not a scientific test as it’s an isolated example, but it points to some variables that we need to look into. Both men now have the same suits and I covered their hands. We’re still investigating the NN. pic.twitter.com/06BhFgDkyA
— Dantley 🔥✊🏾💙 (@dantley) September 20, 2020
CTO Parag Agrawal added that Twitter's systems need "continuous improvement."
This is a very important question. To address it, we did analysis on our model when we shipped it, but needs continuous improvement. Love this public, open, and rigorous test — and eager to learn from this. https://t.co/E8Y71qSLXa
— Parag Agrawal (@paraga) September 20, 2020
Algorithmic bias is an issue that extends far beyond how Twitter crops photos.
Machine learning algorithms like the one used by Twitter rely on vast data sets. If these data sets are weighted in favor of a particular race, gender, or anything else, the resultant algorithm can then reflect that bias.
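One way this plays out can be shown with a deliberately simplified sketch (the 90/10 split and the constant-prediction "model" are hypothetical, chosen only to illustrate the mechanism): a model that maximizes overall accuracy on a skewed dataset can look good in aggregate while failing entirely on the underrepresented group.

```python
# Skewed dataset: label 1 for the majority group (90%), 0 for the minority (10%).
data = [1] * 90 + [0] * 10

# The accuracy-maximizing constant prediction is simply the majority label.
prediction = max(set(data), key=data.count)

overall_acc = sum(label == prediction for label in data) / len(data)
minority_acc = sum(label == prediction for label in data if label == 0) / 10

print(prediction, overall_acc, minority_acc)  # → 1 0.9 0.0
```

The aggregate number (90% accuracy) hides a 0% success rate on the minority group, which is why testing a model only on overall metrics, as opposed to per-group metrics, can miss exactly the kind of bias Twitter's users were probing for.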