Why the tech companies’ live filters fail in terrorist attacks

The Christchurch and Halle attacks have parallels: the attackers streamed their deeds live – and the platforms failed to stop the videos.

The Halle attacker filmed his act – and anyone could watch live.

Once again an attacker has used livestreaming to broadcast his act as it happened, and once again the filter mechanisms of the major internet companies have failed. In the case of Halle, the Amazon subsidiary Twitch had to admit that the video was not only streamed live but that a copy of it only went offline half an hour after the broadcast had ended. The material was blocked only after users pointed out the bloody content via the network’s reporting feature. No automatic filter intervened; even the live broadcast was not stopped.

In Halle, two people were shot dead near the synagogue on Wednesday during a service for the Jewish holiday of Yom Kippur. The perpetrator had previously tried to force his way into the synagogue, but failed. The suspect, Stephan B., was arrested in the early afternoon. The attack is believed to have been right-wing extremist and anti-Semitic.

The incident has parallels with the attack in Christchurch, New Zealand. On March 15, a right-wing extremist from Australia stormed two mosques there and opened fire on the worshippers. 51 people were killed and 50 others injured. The attacker filmed large parts of the act with a helmet camera and streamed the footage live to the internet. Facebook was initially unable to stop the live broadcast in time, and afterwards the platform had great difficulty preventing copies from being uploaded. In the first few days alone, Facebook had to delete more than 1.5 million versions of the video footage from its platforms. Nevertheless, the video keeps reappearing on the social network – in closed groups, as a livestream or in the form of animated screenshots.

Problem: Videos contain more information than photos

Speaking to WELT AM SONNTAG a few weeks ago, Facebook Chief Technology Officer Mike Schroepfer described the three issues that make it difficult for tech companies to filter such content: “A video contains much more information than a photograph. The content of a photo is much easier for a computer to understand than that of a video. In addition, the Christchurch video was completely new in this form – that was one reason why so many people shared it, and at the same time it made the video harder to detect. So far we simply do not have the training data to train artificial intelligence on it.”

Even if a live video is not detected immediately, Facebook, Amazon or Youtube can subsequently set up specific filters for the material in question to prevent copies from being uploaded. But even that is not as easy as it sounds, Schroepfer said: “The basic problem is that we are filtering adversarially against the creativity of a few users. They deceive our systems by altering the video. They discuss in chats how to outsmart us. We change the filter, then they adapt again – a constant cat-and-mouse game.”
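How such a re-upload filter can be tricked is easiest to see with a toy example. The sketch below is purely illustrative and assumes nothing about the platforms’ real systems, whose fingerprinting techniques the article does not describe: it computes a naive 64-bit “difference hash” of a single grayscale frame and shows that a simple edit such as mirroring the picture changes the fingerprint completely, so an exact-match filter keyed to the original no longer recognises the copy.

```python
import numpy as np

def dhash(frame: np.ndarray, size: int = 8) -> int:
    """Naive 64-bit difference hash of a 2D grayscale frame."""
    h, w = frame.shape
    # Nearest-neighbour downscale to size x (size + 1) pixels.
    rows = (np.arange(size) * h) // size
    cols = (np.arange(size + 1) * w) // (size + 1)
    small = frame[np.ix_(rows, cols)]
    # One bit per neighbouring pixel pair: is the right pixel brighter than the left?
    bits = (small[:, 1:] > small[:, :-1]).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of bits in which two hashes differ."""
    return bin(a ^ b).count("1")

# A synthetic 720p frame with a simple brightness gradient stands in for real footage.
y = np.linspace(0, 255, 720)[:, None]
x = np.linspace(0, 255, 1280)[None, :]
frame = (x + y) / 2

mirrored = np.fliplr(frame)  # horizontally mirroring the picture, one simple evasion edit

print("bits changed by mirroring:", hamming(dhash(frame), dhash(mirrored)), "of 64")
# An exact-match filter keyed to the original hash would no longer flag the mirrored copy.
```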

For example, users shift image content, distort the footage or re-cut the video to confuse the automatic filters. For Youtube, Facebook and Co. it is almost impossible to control the flood of uploads. On the one hand, they have to find as many copies as possible on their platforms as quickly as possible. If Facebook were to miss just one percent of the roughly 1.5 million copies, for example, the bloody video would still be online 15,000 times.
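The arithmetic behind that figure is straightforward. A quick calculation with the article’s numbers shows how even very high detection rates leave thousands of copies online:

```python
# Back-of-envelope arithmetic with the article's figures: roughly 1.5 million
# re-uploads of the Christchurch video appeared in the first days alone.
uploads = 1_500_000

for detection_rate in (0.90, 0.99, 0.999):
    missed = uploads * (1 - detection_rate)
    print(f"detection rate {detection_rate:.1%}: about {missed:,.0f} copies remain online")

# detection rate 90.0%: about 150,000 copies remain online
# detection rate 99.0%: about 15,000 copies remain online
# detection rate 99.9%: about 1,500 copies remain online
```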

Users upload too much material too fast

A search algorithm for image or video recognition with a hit rate of 99 percent would still be a technical breakthrough, but it does not currently exist. On the other hand, a targeted search with the right term is enough to find a newly uploaded video, because the architecture of the networks is designed for exactly that: users should be able to find new content as quickly as possible and share it easily – censorship is not part of the program. It is precisely this design that is currently working against the companies.

Third, the companies have a time problem. Classic upload filters such as Youtube’s “Content-ID” system, which has been in use for years, are optimized to prevent copyright infringements, for example by finding illegal copies of Hollywood films or music videos. Every minute, users around the world upload about 300 hours of material to Youtube. The system has long been unable to scan all of this material in real time.
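A back-of-envelope calculation with the article’s figure makes the scale concrete; the numbers below are rough estimates derived from that figure, not published capacity data.

```python
# Back-of-envelope only, using the article's figure of about 300 hours of video
# uploaded to Youtube every minute.
hours_uploaded_per_minute = 300

uploaded_per_hour = hours_uploaded_per_minute * 60   # 18,000 hours of footage per hour
uploaded_per_day = uploaded_per_hour * 24            # 432,000 hours of footage per day

print(f"per hour: {uploaded_per_hour:,} h of new video, per day: {uploaded_per_day:,} h")
# Scanning everything in real time would mean analysing video roughly
# 18,000 times faster than it plays back.
```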

If the algorithm is in doubt, for example because a news channel is using a censored and edited version of the attack video, or because a user is employing tricks to deceive it, human moderators have to review the material. And that can take hours: in extreme situations such as Christchurch or Halle, far too long. On Facebook alone, users have broadcast more than 3.5 billion streams since the launch of its livestreaming service in April 2018. Livestreams are the Achilles heel of every platform: too numerous to be monitored by human moderators, and too dynamic for an algorithm to manage.

Corporations are in a bind

No computer can currently distinguish between an attack, a news documentary and a first-person shooter stream from a gamer. Humans still have to review the content and decide what needs to be deleted and what does not. And there is a fourth problem: the platforms are reluctant to take on the role of a censor that deletes live content before it has even been published.

“Optically, from the computer’s point of view a terrorist attack looks very similar to a computer game or target practice at a shooting range,” Schroepfer said in conversation with WELT AM SONNTAG. “In addition, thank God, there is only very little training data we can use to teach the computer the difference. We are now using police training material to get better. We could simply tighten the filters, but then we would also block a lot of content that is completely fine. That is a fundamental problem of content moderation by artificial intelligence.”
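The trade-off Schroepfer describes can be illustrated with a deliberately simplified sketch. Nothing here reflects Facebook’s actual classifier or thresholds; the names and scores are invented for the example and only show that any single cut-off either misses the attack footage or blocks harmless content along with it.

```python
# Generic illustration only: neither Facebook's classifier nor its thresholds are
# public, and these scores are invented for the example.
violence_scores = {
    "first-person shooter stream": 0.81,  # hypothetical classifier scores
    "shooting-range practice":     0.77,
    "news report on the attack":   0.74,
    "attack livestream":           0.86,
}

for threshold in (0.90, 0.80, 0.70):
    blocked = [name for name, score in violence_scores.items() if score >= threshold]
    print(f"threshold {threshold:.2f}: blocked -> {blocked}")

# threshold 0.90: nothing blocked, the attack footage slips through
# threshold 0.80: the attack is caught, but so is the harmless shooter stream
# threshold 0.70: everything is blocked, including the news report
```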

Facebook, Twitch and Youtube are already in the sights of politicians – and caught in a bind: on the one hand they are supposed to remove hate speech and extremist content, on the other they are constantly suspected of blocking and deleting legitimate content. But users themselves can do something, too: by not pressing play, by turning off the automatic playback of videos, and by reporting problematic content to the moderators of Youtube, Twitch and Facebook – again and again.

This article first appeared on Welt.de.
