Inside Wikipedia’s Attempt to Use Artificial Intelligence to Combat Harassment


Image: Motherboard. A range of tactics will be needed to solve Wikipedia’s problem with harassment.

By Sarah Smellie | MOTHERBOARD

Despite its noble goals, Wikipedia is notorious for harassment among its editors. Now, research from tech incubator Jigsaw and the Wikimedia Foundation is looking at how artificial intelligence can help stop the trolls.

The research project, called Detox, began last year and used machine learning methods to flag comments that contain personal attacks. The researchers looked at 14 years of Wikipedia comments for patterns in abusive behaviour. Detox is part of Jigsaw’s Conversation AI project, which aims to build open-source AI tools for web forums and social media platforms to use in the fight against online harassment.
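To give a rough sense of how this kind of system works, here is a minimal, illustrative sketch of a comment classifier built from character n-grams and logistic regression. The Detox team’s actual models were trained on crowd-annotated Wikipedia talk-page comments; the specific features, model, and example data below are assumptions for illustration, not the project’s published setup.

```python
# Minimal sketch of a personal-attack classifier in the spirit of Detox.
# The toy comments and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled comments: 1 = personal attack, 0 = not an attack.
comments = [
    "You are an idiot and your edits are garbage",
    "Thanks for fixing the citation formatting",
    "Nobody wants you here, go away",
    "I disagree; the source actually says otherwise",
]
labels = [1, 0, 1, 0]

# Character n-grams are robust to misspellings and obfuscation ("id1ot").
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LogisticRegression(),
)
model.fit(comments, labels)

# The model outputs a probability that a new comment is a personal attack,
# which moderators or bots could use as a review threshold.
print(model.predict_proba(["your work is worthless"])[0][1])
```

A probability score rather than a hard yes/no is what lets such a tool be compared against human raters, as described below.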

The algorithm could determine the probability of a given comment being a personal attack as reliably as a team of three human moderators

A paper published last week by the Detox team on the arXiv preprint server offers the first look at how Wikimedia is using AI to study harassment on the platform. It suggests that abusive comments aren’t the work of any specific group of trolls, and that diverse tactics will be needed to combat them on Wikipedia.
