If we’re ever going to create general AI, we need to teach it to think like us.
By Daniel Oberhaus | MOTHERBOARD
Last year, for the first time, an artificial intelligence called AlphaGo beat the reigning human champion in a game of Go. This victory was both unprecedented and unexpected, given the immense complexity of the Chinese board game. While AlphaGo's victory was certainly impressive, this artificial intelligence, which has since beaten a number of other Go champions, is still considered "narrow" AI—that is, a type of artificial intelligence that can only outperform a human in a very limited domain of tasks.
So even though it might be able to kick your ass at one of the most complicated board games in existence, you wouldn’t exactly want to depend on AlphaGo for even the most mundane daily tasks, like making you a cup of tea or scheduling a tuneup for your car.
In contrast, the AI often depicted in science fiction is called "general" artificial intelligence, which means that it has the same level and diversity of intelligence as a human. While we already have artificial intelligences that can do everything from diagnosing diseases to driving our Ubers, figuring out how to integrate all these narrow AIs into a general AI has proven challenging.