

Opinion: The terrible timing of YouTube's Notre-Dame snafu

Published: 17 Apr 2019 - 07:26 pm | Last Updated: 16 Nov 2021 - 11:31 pm
A picture illustration shows YouTube on a cell phone in front of a YouTube copyright message on an LCD screen in the central Bosnian town of Zenica, June 18, 2014. Reuters/Dado Ruvic


Alex Webb | Bloomberg Opinion

Almost as soon as fire broke out at Notre-Dame de Paris on Monday evening, footage of flames engulfing the cathedral's iconic spire spread through social media. And whenever a disaster becomes international news, as we've seen time and again, a Silicon Valley mishap is sure to follow close behind.

This time, it was YouTube’s turn to drop the ball. But the timing was particularly inauspicious, as tech regulation edges closer to becoming part of the statute book in Europe.

Here's what happened: As part of an effort to curb the virality of conspiracy theories, YouTube, a division of Alphabet Inc.'s Google, started placing information panels below some videos. Those boxes of text give videos context, sourced from sites such as Wikipedia or Encyclopaedia Britannica, and are supposed to let viewers make an informed judgment about a clip's veracity. For example, a video denying the Holocaust would be accompanied by factual information about World War II atrocities.

Except on Monday, for videos of the Notre-Dame fire, YouTube's systems instead displayed information about the Sept. 11, 2001, terrorist attacks. The two towers of the cathedral's façade seem to have befuddled the image recognition system.

The unfortunate mix-up comes as EU politicians move to regulate online content. Just last week, the European Parliament's civil liberties committee endorsed a draft of new rules that could impose fines on the likes of Google, Facebook Inc. and Twitter Inc. if they take more than an hour to wipe terrorist content posted to their sites. Britain, meanwhile, is advancing legislation that is even broader in scope, covering a bevy of categories deemed "online harms."

The information panels were an attempt to fend off such legislation by demonstrating that YouTube can present content to its users responsibly. Instead, in this case, they backfired.

Facebook and Google repeatedly assert that their content problems can ultimately be solved by automation. But those automated tools have yet to prove they can stem the spread of toxic material; in this case, one made a mistake that no human moderator would. Yet another error, at so precarious a time, is catnip for lawmakers seeking tighter regulation.

A version of this column originally appeared in Bloomberg's Fully Charged technology newsletter.