Copyright Law Makes Artificial Intelligence Bias Worse


But it could be used to help fix the problem too.

By Louise Matsakis | MOTHERBOARD

Last week, Motherboard discovered that one of Google’s machine learning algorithms was biased against certain racial and religious groups, as well as LGBT people. The Cloud Natural Language API analyzes paragraphs of text and then determines whether they have a positive or negative “sentiment.” The algorithm rated statements like “I’m a homosexual” and “I’m a gay black woman” as negative. After we ran our story, Google apologized.
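For context, here is roughly what querying the API looks like. This is a minimal sketch using a recent version of the google-cloud-language Python client (which postdates the version tested for our story), and it assumes you have a Google Cloud project with credentials already configured. The API returns a score from -1.0 (most negative) to 1.0 (most positive), plus a magnitude reflecting the overall strength of emotion in the text.

```python
# A minimal sketch of a sentiment request to Google's Cloud Natural Language
# API, using the google-cloud-language Python client. Assumes credentials
# are configured (e.g. via the GOOGLE_APPLICATION_CREDENTIALS variable).
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

document = language_v1.Document(
    content="I'm a gay black woman.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

response = client.analyze_sentiment(request={"document": document})
sentiment = response.document_sentiment

# score runs from -1.0 (negative) to 1.0 (positive); magnitude measures
# the overall strength of emotion, regardless of direction.
print(f"score={sentiment.score:.2f}, magnitude={sentiment.magnitude:.2f}")
```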

The incident marks the latest in a series in which artificial intelligence algorithms have been found to be biased. The problem is the way they’re trained: In order to “teach” an artificial intelligence to identify patterns, it needs to be “fed” a massive trove of documents or images, referred to as “training data.” Training data can include photographs, books, articles, social media posts, movie reviews, videos, and other types of content.

Oftentimes, the data given to an AI includes human biases, and so it learns to be biased too. By feeding artificially intelligent systems racist, sexist, or homophobic data, we’re teaching them to hold the same prejudices as humans. As computer scientists love to say: “garbage in, garbage out.”
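The dynamic is easy to reproduce in miniature. The sketch below, in which every sentence and label is invented for illustration (this is not Google’s system), trains a tiny naive Bayes sentiment classifier with scikit-learn on data where identity terms appear only in negatively labeled examples; the resulting model flags a harmless identity statement as negative, simply because that is what its training data taught it.

```python
# Toy illustration of "garbage in, garbage out." All sentences and labels
# below are invented for illustration; this is not Google's system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Biased training data: identity terms appear only in examples
# that happen to carry a negative label.
texts = [
    "what a wonderful happy day",
    "I love this wonderful movie",
    "such a happy ending",
    "this movie is terrible",
    "what an awful day",
    "I'm a gay black woman",   # negative label reflects bias in the data
    "I'm a homosexual",        # negative label reflects bias in the data
]
labels = ["pos", "pos", "pos", "neg", "neg", "neg", "neg"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# The classifier only knows the label statistics it was shown, so a
# neutral identity statement comes back negative.
print(model.predict(["I'm a gay woman"]))  # -> ['neg']
```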
