Calm down: AI isn't magic, just software

By Robert D. Atkinson

It's easy to get caught up in artificial intelligence (AI) hype. We are bombarded with claims that AI is a transformative technology: more powerful than a locomotive, able to leap tall buildings in a single bound. In other words, AI is the "Superman" of all technologies, doing things no other technology could ever do.

So, as the narrative goes, we should be afraid, very afraid. After all, AI will destroy most jobs, create a surveillance state, generate Terminator-like autonomous weapons, make current levels of inequality look like child's play and generate a panoply of other dire consequences.

If policy makers don't step in now to harness the "demon of AI," as Elon Musk called it, it will be too late. We will all be living in a real-world "Squid Game."

To which, I must say: nonsense. Stop listening to the hype, either the techno-utopian hype that says the "AI revolution" will be greater than the First Industrial Revolution or the techno-dystopians who say all is lost. The reality is that AI is just computer code. As leading AI scientist Pedro Domingos states:

"… a lot of the talk that we hear as if AGI (artificial general intelligence) is just around the corner…really doesn't understand the history of AI and doesn't appreciate just how hard the problem is… Even if AGI was around the corner, there's still no reason to panic. We can have AI systems that are as intelligent as humans are; in fact, far more, and not have to fear them. People fear AI because when they hear 'intelligence' they project onto the machine all these human qualities like emotions and consciousness and the will to power and whatnot, and they think AI will outcompete us as a species. That ain't how it works."

Or as retired Lt. Gen. Jack Shanahan, the first director of the U.S. Defense Department's Joint Artificial Intelligence Center, stated, "People talk about [AI] in very unrealistic ways…First of all, I take the discussion of artificial general intelligence off the table. And I give that to the researchers who look 50 to a hundred years down the road."

The problem with AI magical thinking is that it leads to distorted policy conversations and harmful policies. The nonsensical notion that AI will destroy a large share of jobs and lead to a new lumpen-proletariat has no basis in reality.

Technologies that boost productivity have always led to lower prices, which in turn meant more demand and job creation. Even if AI could significantly boost labor productivity in a host of occupations (a dubious proposition), the result would be not fewer jobs, but higher incomes.

Moreover, the debate should not be about whether AI creates more jobs than it eliminates; almost no technologies in the past created more jobs than they eliminated. What they did was create more income, and that income led to more jobs as existing industries and occupations expanded.

The idea that AI will lead to significantly increased income inequality is even more farfetched, for it implies that a handful of mega-corporations and their founders will make never-before-seen profit rates so large as to suck up all the wealth benefits from AI. The only way that happens is if the laws of economics and competition are repealed.

What about the rise of the AI surveillance state? This is certainly something to worry about if one lives in an authoritarian nation without core human rights, like China. But just because China uses AI in ways that violate human rights does not mean that AI will be used to surveil people in democracies. The latter have laws and regulations that protect people from government surveillance, whether through AI or other technologies.

Finally, what about "killer robots"? As Robert Marks, professor of Electrical and Computer Engineering at Baylor University, notes, "we need to separate science fiction from science fact. Artificial intelligence will never be sentient. It will never be creative. It will never understand." Moreover, any global ban on AI weapons can never be adequately monitored and enforced (how do you monitor computer code in a drone?), so any ban will mean that adversaries will have AI weapons, while allies will not.

All of this means that a good rule of thumb is that, controlling for other variables, there is an inverse relationship between a country's AI fear and its AI progress. Narratives of fear generate policies of precaution. Narratives of excitement generate policies of promotion and competitiveness. The choice should be an easy one for Korea to make if it wants to be an AI-enabled economy and society.


Robert D. Atkinson (@RobAtkinsonITIF) is president of the Information Technology and Innovation Foundation (ITIF), an independent, nonpartisan research and educational institute focusing on the intersection of technological innovation and public policy.
