
People Are Pointing Out That Twitter’s Photo Preview Tool Seems To Be Oddly Racist

People have discovered that Twitter's auto-crop tool consistently crops out Black faces to focus on white ones.



The popular ‘Open For A Surprise’ meme was built on the predictable way Twitter cropped images in timeline previews.

But when Twitter shifted their image preview algorithm to “smart auto-cropping” based on “saliency”, replacing the previous system of face detection and centred crops, the meme quickly died.

Twitter explained that the program would now focus on what a person is most “likely to look at when freely viewing the image”, rather than the centre of the photo or the faces in it. Naturally, what counts as salient differs from person to person, and for Twitter’s algorithm that judgement apparently comes down to skin colour.
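For readers curious what saliency-based cropping actually looks like, here is a minimal sketch. It is emphatically not Twitter’s model (the production system was reportedly a neural network trained on eye-tracking data); it uses OpenCV’s classical spectral-residual saliency detector purely to illustrate the general approach of cropping around the most eye-catching region. The filenames and crop sizes are placeholders.

```python
import cv2
import numpy as np

def saliency_crop(image, crop_w, crop_h):
    """Crop a crop_h x crop_w window centred on the most salient pixel."""
    # Build a saliency map: higher values mark regions the detector
    # predicts will draw the eye (requires opencv-contrib-python).
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")

    # Locate the single most salient point in the map.
    _, _, _, (x, y) = cv2.minMaxLoc(saliency_map)

    # Centre the crop window on that point, clamped to the image bounds.
    h, w = image.shape[:2]
    left = int(np.clip(x - crop_w // 2, 0, max(w - crop_w, 0)))
    top = int(np.clip(y - crop_h // 2, 0, max(h - crop_h, 0)))
    return image[top:top + crop_h, left:left + crop_w]

# Hypothetical usage: squeeze a tall photo into a wide preview thumbnail.
img = cv2.imread("tall_photo.jpg")  # placeholder filename
preview = saliency_crop(img, crop_w=img.shape[1], crop_h=img.shape[1] // 2)
cv2.imwrite("preview.jpg", preview)
```

Note that nothing in a pipeline like this inspects skin tone directly; any bias comes from what the saliency model has learned, or been designed, to treat as eye-catching, which is part of why such behaviour can slip past pre-launch testing.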

The discovery was made on Saturday when user Colin Madland took to Twitter to complain about Zoom’s face-detection algorithm “erasing Black faces”. While explaining that his Black colleague’s head kept disappearing whenever he used a virtual background on Zoom, Madland discovered that Twitter had much the same problem.

When Madland shared screenshots of his Zoom calls, Twitter’s own auto-crop tool focused the preview image on him and not on his colleague, even after he flipped the image to check whether the algorithm simply preferred the left or right side of an image. It did not.


In response, people began testing the theory themselves, uploading oddly-sized photos to see how Twitter would crop them in preview. In case after case, with photos identical in all but skin colour, Twitter’s auto-crop algorithm favoured white faces over Black ones.

In one example using Mitch McConnell and Barack Obama, no matter how the image was arranged, Twitter’s auto-crop tool chose to focus on the white politician over the former president.

To make sure the test was fair, the images were rescaled to the same size, the two men’s ties were swapped, and the photos were edited to include more Obamas than McConnells. Yet the white man was almost always favoured, unless the image colours were inverted.

A number of people continued the test using stock images of Black and white people, to see whether the auto-crop feature would ever focus on the darker-skinned face in the set.

Some even used cartoon examples to see if the bias extended to Lenny and Carl from The Simpsons. Shockingly, every crop of the image ended up focusing on the lighter, yellow-skinned character, Lenny. The same happened with side-by-side images of black and white dogs.

As the pattern of previews favouring white faces over Black faces became clear, people began to suggest that Twitter’s algorithm had some form of racial bias built into it.

In response to the criticism, Twitter spokesperson Liz Kelley acknowledged the issue, while explaining that the company had found no evidence of racial or gender bias in its pre-launch testing.

“Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do,” Kelley told Junkee. “We’ll open source our work so others can review and replicate.”

Twitter’s chief technology officer, Parag Agrawal, also supported these claims and explained that while the model was analysed before it shipped, “[the program] needs continuous improvement”.

While the program was tested before launch, it may simply be a case of algorithmic bias, where a system reflects the biases of the people and data behind it. Twitter’s chief design officer, Dantley Davis, offered some possible explanations for why the crop tool favoured white faces, such as facial hair affecting image contrast or the use of a brighter background, but he asserted that it was “100% [Twitter’s] fault” and that “the next step is fixing it”.

Even though the way Twitter crops images isn’t the most pressing issue in the world, this is far from the first time technology has shown racial bias. Studies by the National Institute of Standards and Technology have found that facial recognition software performs worse on non-white faces, with false identifications 10 to 100 times more common for Black and Asian faces than for white ones.

Yet these programs, which demonstrably misidentify people of colour at far higher rates, are used by police forces to identify suspects and make arrests. Beyond Twitter photos, algorithmic bias built into new technology clearly has the potential to do real harm.