Bridge gap between policy, theory to deal with questions on AI

By Hwang Jee-seon

The human imagination, which projects our strongest desires and worst fears, has long been preoccupied with the idea of artificial intelligence (AI). Films such as "The Matrix," "A.I." and "Her" portray the dilemmas humankind faces when creations of our making surpass our expectations and limitations. Today, AI is no longer confined to the pages of books or movie screens.

It is a reality, albeit one we are unsure how to deal with. Prominent voices such as Elon Musk, Stephen Hawking and Bill Gates have all cautioned against the potential dangers of AI. Indeed, the rapid development of AI in recent years has forced humanity to question whether our use of AI as a mere tool is approaching the limits of its viability.

As AI systems continue to develop, the mystery remains: what is inside this Pandora's Box? To find the answer, we must first ask ourselves three questions.

The first question is: what impact will AI have on our markets, and how will we distribute the gains?

The rise of AI as a leading sector of the 21st century has pushed countries to compete in developing AI technologies. The push toward AI-powered business models has been further accelerated by the global recession caused by the COVID-19 pandemic.

AI has the power to fundamentally restructure our labor systems, a fact that has often raised concerns about humans becoming obsolete. Moreover, the occupations taken over by AI are concentrated in sectors such as service delivery and manufacturing, widening the wage gap between skilled and unskilled workers.

These dangers have the potential to offset the benefits projected from the use of AI, such as a reduction in the cost of medical treatment. How the economic gains made possible by AI technology are distributed also remains an issue. History has shown that increased productivity enlarges the economic pie, but the gains often remain concentrated at the top.

The perennial question of how to strike a balance between economic growth and distribution will be crucial in determining whether AI-led innovation can remain sustainable.

In addition to the cost to our markets, we must also ask ourselves about the challenges AI may pose to society as a whole. How will we ensure that problems embedded in our society, such as discrimination, are not exacerbated by ever more efficient AI systems?

There have been many instances of AI technology amplifying human biases related to race, ethnicity, gender and age. For example, according to ProPublica, a criminal justice algorithm deployed in Florida mislabeled African-American defendants as "high risk" at a far higher rate than it mislabeled white defendants.

The datasets used by AI are imperfect and susceptible to human error. Thus, scrupulous oversight mechanisms that account for this risk will be necessary if AI is to be integrated smoothly into areas of society where justice is at stake.

The development of AI is changing not just our markets and the way we live our lives, but the very way we perceive our existence. Thus, the final question we must ask ourselves is: which moral principles must guide us in our use of AI?

One of the main impediments to the use of AI in autonomous driving systems has been the ethical challenge posed by the famous trolley dilemma. When a vehicle must choose between staying on course and killing five people or swerving and killing one, efficiency may not be the best guiding principle.

In addition to the ethical concerns sparked by the actual use of AI, the collection of data used to train AI also remains a problem, with many raising alarms about AI as a breach of privacy and security. For example, Google and its sister firm DeepMind, which develops artificial general intelligence (AGI) technology, have faced legal action for obtaining and processing patient health records without consent. Any agreed code of ethics for developing and deploying AI technology must take into account the interests of all stakeholders involved, not just those of corporations.

We have thus far explored the implications of AI use along economic, social and ethical dimensions. To deal with these questions, the gap between policy and theory must be bridged.

The fundamental changes brought about by AI require flexible and adaptable regulatory systems, ones that involve not just government but actors in the private sector and civil society as well.

Furthermore, AI is not just a matter of technology and business, but of justice and human rights as well. Thus, multidisciplinary research that explores the opportunities and threats arising from AI's embeddedness in our lives will be crucial in deciding our next steps forward.

Hwang Jee-seon is a student at Seoul National University.



