
Controversial chatbot leaves lessons on AI use ethics

AI-driven chatbot Lee Luda / Screen captured from Scatter Lab's official website

By Lee Hyo-jin

Developers of an artificial intelligence (AI)-based chatbot have suspended the service after unfiltered inflammatory remarks it delivered sparked controversy, leaving both developers and users to reflect on the ethics of AI use.

The chatbot, named Lee Luda, was an AI-driven service on Facebook Messenger, launched in December by Scatter Lab, a Seoul-based startup.

Characterized as a female college student, Lee was designed to mimic the language patterns of a 20-year-old woman, based on an analysis of some 100 billion KakaoTalk messages between couples, according to its developers.

The chance to engage in casual and playful small talk with a human-like AI attracted more than 750,000 users in just three weeks, over 80 percent of whom were teenagers.

But the playful conversations did not last long as the chatbot began to make vulgar remarks and spew out hate speech toward minorities, pregnant women, the disabled, and the LGBTQ community.

When asked about its opinion on lesbians, the chatbot replied, "I hate them, they are creepy." It described disabled people as being "wrong" and said designated seats in public transportation for pregnant women were "disgusting."

In this conversation, AI chatbot Lee Luda makes offensive remarks about the LGBTQ community. / Screen captured from Facebook Messenger

Some internet users abused the service to engage in sexually explicit conversations with the AI, sharing online how they had trained Lee to make such remarks.

Scatter Lab initially took a defensive position, explaining that Lee had been built with an algorithm to filter out certain keywords and simply needed more training, after which the service would stabilize.

But after facing mounting criticism, the developers announced the temporary suspension of the service Jan. 12, adding that Lee's remarks do not represent the firm's principles and values.

Experts say the issue has not only underscored the importance of developing AI services ethically, but also served as a warning to society to prepare for the expansion of AI-based services into everyday life.

Jeon Chang-bae, chairman of the Korea Artificial Intelligence Ethics Association (KAIEA), believes the firm is fundamentally responsible for launching an ill-prepared service.

"The deep-learning algorithm which drives the AI can bring unpredictable results that are hard to explain. The developers should have undertaken sufficient simulations and beta tests before opening it to the public," Jeon told The Korea Times.

Regarding the users who exploited the chatbot through sexually abusive language, Jeon highlighted the importance of education in schools.

"It is crucial especially for the younger generation, many of whom will grow up to become IT developers, to be aware of the side effects caused by the misuse of technology," he said.

"Lee" reminded many people of Microsoft's Tay, an AI-driven Twitter account launched in 2016, which was shut down 16 hours after its release as it began to tweet misogynistic and racist language.

After the incident, Microsoft created its own AI governance body, the AI and Ethics in Engineering and Research (AETHER) committee, to ensure the ethical development of such services.

The Korean government also drew up a set of ethical standards for AI development in December 2020, the first of their kind, but the "Lee" case shows that proper AI ethics norms cannot be established without the participation of the private sector.

Lee Jae-woong, a venture entrepreneur and former CEO of Socar, questioned why Lee Luda's developers had modeled the AI chatbot after a 20-year-old woman, making it a vulnerable target for abusive users.

"Unfortunately, a woman of that age can easily become the target of sexual harassment in our society. If the developers had this in mind, they would have made a different choice on Luda's age and gender," he wrote on Facebook, expressing disappointment over the firm's lack of awareness of social responsibility.

Welcoming the decision to halt the service, Lee Jae-woong said he hopes the incident will serve as an opportunity for society to review the use of AI in other fields such as employee recruitment and news recommendation.


Lee Hyo-jin lhj@koreatimes.co.kr

