The ethical issues of AI: fighting the right battles


By Nicolas Bouverot

Will artificial intelligence replace human beings? Could it turn against its creators? Does it represent a danger to the human race?

These are just some of the questions that have been stirring up public debate and the media since the mass deployment of generative AI tools and the sensationalist statements of a few public figures. However, as interesting as the speculation is from a philosophical point of view, most experts agree that it is somewhat premature.

It is true that artificial intelligence has enormous potential. It is a technology that is going to enable a broad range of tasks to be automated, new services to be created and, ultimately, economies to be more efficient. Generative AI marks a new stage in this underlying trend, and we are only beginning to explore its many applications.

However, we must not lose sight of the fact that, despite their remarkable performance, AI systems are essentially machines: nothing more than algorithms built into processors that are able to assimilate large amounts of data.

We have been told that these new tools will be able to pass the Turing test. It's probably true, but the test ― which was previously thought to be able to draw the line between human intelligence and artificial intelligence ― has long since ceased to carry any real weight.

These machines are incapable of human intelligence in the fullest sense of the term (i.e., involving sensitivity, adaptation to context and empathy), of reflexivity and of consciousness ― and probably will be for a long time to come. One cannot help thinking that those who imagine these tools will soon have those characteristics are over-influenced by science fiction and mythical figures such as Prometheus or the golem, which have always held a certain fascination for us.

If we take a more prosaic view, we realize that the ethical questions raised by the increasing importance of artificial intelligence are nothing new and that the advent of ChatGPT and other tools has simply made them more pressing.

Aside from the subject of employment, these questions touch, on one hand, on the discrimination created or amplified by AI and the training data it uses, and, on the other, on the propagation of misinformation, either deliberately or as a result of "AI hallucinations." However, these two topics have long been a concern for algorithm researchers, lawmakers and businesses in the field, who have already begun to implement technical and legal solutions to counteract the risks.


Let's first look at the technical solutions. Ethical principles are being incorporated into the development of AI tools. For example, establishing guidelines can ensure that systems are transparent and explainable, giving consumers and users visibility into how decisions are reached.

Organizations should also aim to minimize bias, notably regarding gender and physical appearance, in the design of their algorithms, which can be done through the choice of training data and the makeup of the design teams. At Thales, we have been committed for some time now to not building "black boxes" when we design artificial intelligence systems.
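
To make the idea of a bias check concrete, here is a minimal sketch of the kind of audit an organization might run on a classifier's outputs, measuring the gap in positive-prediction rates across demographic groups (a standard "demographic parity" check). The data, field names and the 0.05 tolerance are illustrative assumptions for this sketch, not a description of Thales' internal tooling.

    # Minimal bias-audit sketch: checks demographic parity of a binary
    # classifier's outputs across groups. Field names and the 0.05
    # tolerance below are illustrative assumptions only.
    from collections import defaultdict

    def demographic_parity_gap(predictions, groups):
        """Return the largest difference in positive-prediction rates
        between any two groups (0.0 means perfectly equal rates),
        plus the per-group rates themselves."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += int(pred == 1)
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Example: outputs of a hypothetical screening model.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"]
    gap, rates = demographic_parity_gap(preds, groups)
    print(rates)        # positive-prediction rate per group
    if gap > 0.05:      # illustrative tolerance, not a legal standard
        print(f"Warning: parity gap {gap:.2f} exceeds tolerance")

In practice such a check would be one of several metrics (alongside, say, equalized error rates) run before deployment and after each retraining, precisely so that the system does not become a "black box."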

Secondly, the legal solutions. The European Union (EU) has unquestionably taken the lead. The European Commission and European Parliament have been working for over two years on a draft regulation aimed at limiting by law the most dangerous uses of artificial intelligence.

Asian countries are expected to follow suit and develop their own frameworks for governing AI applications. As the landscape continues to evolve, responsible use of AI remains a priority for governments. Following the EU's lead, companies in Asia are closely monitoring these developments to understand the potential impact of such regulations.

A key example of this is Singapore's Infocomm Media Development Authority (IMDA) developing "AI Verify," an AI governance testing framework and software toolkit that helps organizations measure AI systems against standardized tests. As AI testing technologies mature, the AI Verify Foundation works to ensure that proper testing tools are available for the responsible use of AI.

It is also through education and genuine societal change that we will succeed in guarding against the risks inherent in misusing AI. Together, we must break away from the culture of immediacy that has flourished with the advent of digital technology, and which is likely to be exacerbated by the massive spread of these new tools.

As we know, generative AI enables highly viral ― but not necessarily trustworthy ― content to be produced very easily. There is a risk that it will amplify the widely recognized shortcomings in how social media works, notably in its promotion of questionable and divisive content, and the way it provokes instant reaction and confrontation.

Furthermore, these systems, by accustoming us to getting answers that are "ready to use", without having to search, authenticate or cross-reference sources, make us intellectually lazy. They risk aggravating the situation by weakening our critical thinking.

Whilst it would therefore be unreasonable to begin raising the red flag on an existential danger for the human race, we do need to sound a wake-up call. We must look for ways to put an end to this harmful propensity for immediacy that has been contaminating democracy and creating a breeding ground for conspiracy theories for almost two decades.

Taking the time to contextualize and assess how trustworthy content is, and having a constructive dialogue rather than reacting immediately are the building blocks of a healthy digital life.

We need to ensure that teaching these habits ― in both theory and practice ― is an absolute priority in education systems around the world.

In Singapore, AI Singapore's outreach program LearnAI aims to expand AI skills and literacy beyond specialists. By educating students, such programs can help build the workforce of the future.

As a global leader in technology, we are committed to exploring collaborative opportunities with AI so as to get the most out of the technology. This is especially crucial for digital identity security: Thales is embedding AI in biometric card authentication and applying AI algorithms and machine learning to ID fraud prevention, allowing us to offer governments and enterprises high-technology solutions for secure ID management.

If we address this challenge, we will finally be able to leverage the tremendous potential that this technology has to advance science, medicine, productivity and education.

Nicolas Bouverot is vice president of Thales in Asia.



