Terror Scanning Database For Social Media Raises More Questions than Answers



By Sarah Jeong | MOTHERBOARD

On Monday, Facebook, Microsoft, Twitter, and YouTube announced a new partnership to create a “shared industry database” that identifies “content that promotes terrorism.” Each company will use the database to find “violent terrorist imagery or terrorist recruitment videos or images” on their platforms, and remove the content according to their own policies.

The exact technology involved isn’t new. The newly announced partnership is likely modeled after what companies already do with child pornography. But the application of this technology to “terrorist content” raises many questions. Who is going to decide whether something promotes terrorism or not? Is a technology that fights child porn appropriate for addressing this particular problem? And most troubling of all—is there even a problem to be solved? Four tech companies may have just signed onto developing a more robust censorship and surveillance system based on a narrative of online radicalization that isn’t well-supported by empirical evidence.

How the Tech Industry Built a System for Detecting Child Porn

Many companies—for example, Verizon, which runs an online backup service for customers’ files—use a database maintained by the National Center for Missing and Exploited Children (NCMEC) to find child pornography. If they find a match, service providers notify the NCMEC Cyber Tipline, which then passes on that information to law enforcement.
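At its core, this kind of matching works by comparing a fingerprint (hash) of an uploaded file against a database of fingerprints of known prohibited material. Below is a minimal sketch of that idea using an exact cryptographic hash; the `KNOWN_HASHES` set and function names are hypothetical, and real systems such as Microsoft's PhotoDNA use *perceptual* hashes designed to survive resizing and re-encoding, which a cryptographic hash cannot do.

```python
import hashlib

# Hypothetical database of hashes of known prohibited files.
# In production this would be a large, centrally maintained list
# (e.g., hashes distributed via NCMEC), not a hard-coded set.
KNOWN_HASHES = {
    # SHA-256 of the placeholder bytes b"foo", standing in for a real entry.
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def file_hash(data: bytes) -> str:
    """Return the SHA-256 digest of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def check_upload(data: bytes) -> bool:
    """True if the upload matches a known hash and should be flagged."""
    return file_hash(data) in KNOWN_HASHES
```

A flagged match is what would trigger the downstream step the article describes: a report to the NCMEC CyberTipline rather than any automated action against the user.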
