By Yoon Sung-won
The artificial intelligence (AI) AlphaGo, which has marked three victories and one defeat against Korean Go player Lee Se-dol in a five-game series, has convincingly demonstrated its thinking prowess to humans.
This has stoked public fears that machines with superior intelligence might subjugate humans, as they do in many dystopian sci-fi films.
What's most worrisome is that such intelligent machines might fall into the wrong hands.
Amid such concerns, experts say the world should start preparing for the future by discussing the ethics of AI use.
“Like all other powerful new technologies, AI should be used with ethical responsibility,” said Demis Hassabis, CEO of DeepMind, Google's AI development subsidiary.
The CEO made the remark during a lecture at the Korea Advanced Institute of Science and Technology (KAIST) on Friday.
“Though human-level AI may still be decades away, we need to start discussing it now.”
Established in 2010, DeepMind was acquired by Google for $400 million in 2014. The London-based company has about 200 researchers working on AI development, the most of any single research body in the sector.
Pohang University of Science and Technology (POSTECH) President Kim Doh-yeon expects that advanced AI will change the landscape of the labor market.
According to a report released during the Davos Forum in Switzerland last month, more than 7.1 million jobs will disappear in 15 emerging economies over the next five years due to the introduction of advanced AI in the workplace.
Bank tellers, realtors, sports referees and factory machine operators are considered jobs that machines can easily replace.
“We may face significant difficulties and disputes if we insist on keeping the existing social system,” Kim said. “We need to reduce working hours, and we need to prepare for that change and improve the system.”
Ryan Calo, a legal expert at the University of Washington, said, “The current legal system cannot effectively respond to problems in the upcoming era of robots and AI. We urgently need to establish a legal system for it.”
The Korea Information Society Development Institute said in a recent report on AI regulation issues, “We need proper regulations to address legal responsibility for crimes committed by AI robots, build social agreement on robots' autonomy, and control possible AI errors and malfunctions.”
But the researchers also said people need not rush to fear that machines will “conquer” them, because AI can be controlled and can contribute to the quality of living.
“Responses like those in sci-fi novels do not help scientific development,” Hassabis said. “We consider AI a tool that accelerates the automation of work that is difficult or boring for humans.”
The CEO said that, unlike what is called artificial general intelligence, current AI, dubbed “narrow AI,” is designed for limited purposes. This means existing AI cannot create anything outside the boundaries and rules set by human operators.
British theoretical physicist Stephen Hawking and Tesla CEO Elon Musk have warned of the negative aspects of “strong AI,” which can operate for general purposes and autonomously learn from unlabeled data. Computer engineer Ray Kurzweil said last year that strong AI could evolve by creating even stronger AI, in a phenomenon called the “singularity.” He said such a phenomenon may become reality by 2030.
But Hassabis said, “AI that can replace humans still has a long way to go.”
Stressing that DeepMind aims to build a general-purpose AI, Hassabis said AI will act as an assistant for humans.
“We will need to work to use AI in a way that helps the development of human beings,” he said. “Diverse scientific sectors, including genetics, climatology, medicine, energy, macroeconomics and physics, will benefit from the use of AI.”
The POSTECH president also said, “Though many jobs will be replaced by AI, that will not necessarily cause unhappiness.”