In the movie "The Terminator," the AI system Skynet is depicted as a ruthless entity bent on annihilating humanity. The late Dr. Stephen Hawking warned that "the development of full artificial intelligence could spell the end of the human race."
Meanwhile, AI is already being used in industries such as manufacturing and logistics, boosting productivity. It is also emerging as a potential solution to global crises in energy, the environment, disease control and other areas. This raises the question: Is AI a looming threat, or an opportunity for humanity?
On Nov. 1, the AI Safety Summit convenes at Bletchley Park in Buckinghamshire, United Kingdom. Leaders and ministers from more than 20 countries are gathering to discuss the risks of AI misuse and to outline the roles of companies, countries and the global community in ensuring that AI is trustworthy, safe and a contributor to human welfare.
The advent of generative AI has driven the global discussion on establishing a normative framework for AI and securing AI safety. With language comprehension and creative abilities comparable to those of humans, AI is fast becoming as ubiquitous in our daily lives, society and the economy as the internet and smartphones.
In a world where AI is becoming commonplace, securing its trustworthiness and safety is paramount to fully harnessing its benefits. Generative AI, however, has sparked social controversy by producing convincing but false answers and enabling the proliferation of deepfake images of celebrities. If AI is used with malicious intent to distort facts or produce illicit content, the foundations of social order and democracy will be put at risk, potentially stunting the healthy development of AI.
Last September, the Korean government hosted an event titled "Korea's Great Leap in Hyper-scale AI," with President Yoon Suk Yeol in attendance. At the event, Korean companies announced plans to expand globally with their own hyper-scale AI services and committed to voluntarily preparing safety measures. In support of these private sector initiatives, the Ministry of Science and ICT unveiled a plan last October to ensure the ethics and trustworthiness of AI.
Firstly, the plan calls for establishing specific guidelines for fields such as recruitment and generative AI-based services. These guidelines will encourage businesses to voluntarily comply with AI ethics and promote private sector efforts to verify and certify AI trustworthiness.
Secondly, the plan focuses on promoting the development of technologies that can address plausible but false answers and biases, which have been recognized as limitations of generative AI.
Thirdly, the government plans to review measures for industry cooperation and institutionalization to implement watermarks on AI-generated output. Additionally, a system will be established to ensure trustworthiness by providing explanations for high-risk AI applications.
Lastly, the plan aims to promote the establishment of international standards for AI trustworthiness guidelines and strengthen global cooperation in order to align Korea's AI policies and systems with those of leading countries.
AI products and services lacking safety assurances cannot be competitive in domestic and global markets. Therefore, the Korean government's policy efforts to secure the ethics and trustworthiness of AI will serve as a strong foundation for AI services developed by Korean companies to expand into the global market, rather than as burdensome regulations.
To leverage AI as an opportunity rather than a threat, it is imperative for the private sector, government, and the global community to collaborate as one global team. This collaborative approach will allow us to proactively respond to AI-related risks and implement measures for safe and responsible AI use.
Korea played a leading role in shaping the OECD AI recommendations in 2019, and received a top grade for two consecutive years, most recently in the 2023 Artificial Intelligence and Democratic Values Index.
Last September, the Korean government introduced the "Digital Bill of Rights," which lays out fundamental principles for a new digital order, with the goal of positioning Korea as a global rule-setter in the era of deepening digitalization. At the AI Safety Summit, Korea is proposing ways to advance global cooperation on AI, including joint research initiatives and the establishment of a global AI governance organization.
Building upon these capabilities and achievements, the Ministry of Science and ICT will stay committed to its role as the ministry responsible for digital policies, so that Korea can lead the way in establishing a global AI governance framework and serve as a model nation in the digital era.
Lee Jong-ho is the minister of Science and ICT.