Twitter apologises for ‘racist’ image-cropping algorithm

Twitter has apologised for a “racist” image-cropping algorithm after users discovered that the feature was automatically favouring white faces over black ones.

The company said it had tested the feature for bias before shipping it, but now accepts that the testing did not go far enough.

Twitter has long cropped images automatically, to prevent them from taking up too much space on the main feed and to allow multiple pictures to be shown in the same tweet. The company uses several algorithmic tools to focus on the most important parts of a picture, aiming to ensure that faces and text remain visible in the cropped version.
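
Twitter has described the feature as a neural network trained to predict which parts of an image people are most likely to look at (“saliency”). As a loose, hypothetical illustration of face-aware cropping in general, not Twitter’s model, the sketch below centres a fixed-size crop on the largest face found by OpenCV’s bundled Haar detector; the detector choice, crop dimensions and centre-crop fallback are all assumptions:

```python
# A hypothetical face-aware cropper, for illustration only: Twitter's
# real system is a trained saliency model, not a Haar-cascade heuristic.
import cv2

def smart_crop(path, out_w=600, out_h=335):
    img = cv2.imread(path)
    h, w = img.shape[:2]
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # Centre the crop window on the largest detected face.
        x, y, fw, fh = max(faces, key=lambda f: f[2] * f[3])
        cx, cy = x + fw // 2, y + fh // 2
    else:
        cx, cy = w // 2, h // 2  # no face found: fall back to a centre crop
    left = min(max(cx - out_w // 2, 0), max(w - out_w, 0))
    top = min(max(cy - out_h // 2, 0), max(h - out_h, 0))
    return img[top:top + out_h, left:left + out_w]
```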

But users began to spot flaws in the feature over the weekend. The first to flag the problem was PhD student Colin Madland, who came across it while documenting a different racial bias in the video-conferencing software Zoom.

When Madland, who is white, posted an image of himself and a black colleague who had been erased from a Zoom call after Zoom’s algorithm failed to recognise his face, Twitter automatically cropped the image to show only Madland.


Colin Madland (@colinmadland), 19 September 2020: pic.twitter.com/1VlHDPGVtS

Others followed up with more targeted experiments, including entrepreneur Tony Arcieri, who discovered that the algorithm would consistently crop an image of US senator Mitch McConnell and Barack Obama to hide the former president.


Tony “Abolish (Pol)ICE” Arcieri (@bascule), 19 September 2020: “Trying a horrible experiment… Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama?” pic.twitter.com/bR1GRyCkia
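
The structure of these tests is easy to reproduce: place two portraits at opposite ends of a tall canvas, post both orderings so that neither face is favoured by position, and see which one the preview keeps. Below is a minimal sketch using Pillow; the file names and dimensions are hypothetical stand-ins:

```python
# Builds the two tall test images used in position-swapped crop tests.
# File names are hypothetical stand-ins for any two portraits.
from PIL import Image

def make_pair(top_path, bottom_path, width=500, gap=600):
    top = Image.open(top_path).resize((width, width))
    bottom = Image.open(bottom_path).resize((width, width))
    canvas = Image.new("RGB", (width, width * 2 + gap), "white")
    canvas.paste(top, (0, 0))
    canvas.paste(bottom, (0, width + gap))
    return canvas

# Post both orderings: an unbiased cropper should keep each face
# about half the time across the swapped layouts.
make_pair("portrait_a.jpg", "portrait_b.jpg").save("pair_ab.jpg")
make_pair("portrait_b.jpg", "portrait_a.jpg").save("pair_ba.jpg")
```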

Similar outcomes were found for stock-photo models, the Simpsons characters Lenny and Carl, and even golden and black labradors.


Jef Caine (@JefCaine), 19 September 2020: “Testing this to see if it’s real.” pic.twitter.com/rINjaNvXaj

Jordan Simonovski (@_jsimonovski), 20 September 2020: “I wonder if Twitter does this to fictional characters too. Lenny Carl” pic.twitter.com/fmJMWkkYEf

– M A R K – (@MarkEMarkAU), 20 September 2020: “I tried it with dogs. Let’s see.” pic.twitter.com/xktmrNPtid

In a statement, a Twitter spokesperson admitted the company had work to do. “Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate.”
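
Twitter has not yet published that analysis, but one simple way such a review could quantify the effect users reported is to treat each position-swapped trial as a coin flip and test the observed preference rate against chance. A sketch using SciPy’s binomial test; the tallies are invented, not measured:

```python
# Illustrative only: the tallies below are invented, not measured data.
from scipy.stats import binomtest

kept_a, trials = 52, 60  # times the crop kept face A across paired trials
result = binomtest(kept_a, trials, p=0.5, alternative="two-sided")
print(f"preference rate {kept_a / trials:.0%}, p = {result.pvalue:.3g}")
```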

Twitter is by no means the first technology firm to struggle to explain apparent racial bias in its algorithms. In 2018, it was revealed that Google had simply banned its Photos service from ever labelling anything as a gorilla, chimpanzee or monkey, after the company came under fire in 2015 for repeatedly mislabelling images of black people with those terms.
