
Facebook AI puts 'primates' label on video of Black men, in what firm calls 'unacceptable' error

The news comes as social networks grapple with racist content and critics attack automated systems over bias.

Edward Moyer, Senior Editor

Facebook users who recently watched a video featuring Black men were served an automated prompt that asked if they'd like to "keep seeing videos about Primates," a mistake the social network called "unacceptable" in statements to news outlets.

The video, posted last June by UK tabloid The Daily Mail, shows Black men in disputes with white police officers and civilians, The New York Times reported late Friday. Facebook apologized for the prompt and said it has disabled the AI-powered recommendation feature and is investigating to ensure the problem doesn't recur.

"As we have said, while we have made improvements to our A.I., we know it's not perfect, and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations," Facebook said in a statement sent to media organizations.

The news comes as Facebook and other social media companies continue to struggle with concerns that they aren't adequately tackling racism on their sites. Earlier this year, Facebook, Twitter and others were criticized for failing to stop anti-Asian hate on their platforms amid the coronavirus pandemic. The sites were also censured in July for not shutting down racist abuse directed at players after England's loss in the Euro 2020 final.

The botched Facebook prompt is another example of problems at the intersection of AI and race. Facial recognition systems, including those used by law enforcement, are known to have trouble accurately identifying people of color, putting them at risk of being wrongly accused of crimes.

Related: Why tech made racial injustice worse, and how to fix it

More broadly, the error underscores concerns about whether bias is coded into the technologies that play key roles in our lives, such as the algorithms used to screen applicants for home loans.

In July of last year, Facebook said it was creating teams to examine potential racial bias in the algorithms and products used by the social network and by Facebook-owned Instagram.

Other tech giants have grappled with gaffes by their automated systems. In 2015, Google apologized when an algorithm in its Photo app mistakenly labeled Black people "gorillas."

Watch this: From Jim Crow to 'Jim Code': How tech exacerbated racial injustice