Artificial Intelligence: A Potentially Ominous Technology

Jan 21st, 2023 | By | Category: Culture & Worldview, Featured Issues

The mission of Issues in Perspective is to provide thoughtful, historical and biblically-centered perspectives on current ethical and cultural issues.

Years ago my son gave me an article from the magazine Wired that envisioned the future of human technology, which it predicted would be organized around genetics, robotics and nanotechnology.  The article was indeed prophetic for the 21st century.  Artificial intelligence (AI) is a synthesis of both robotics and nanotechnology.  It is a remarkable advance in human understanding and mastery of the world of technology.  As a Christian, I can also view this advance as an example of God’s common grace, enabling humanity to exercise dominion over His world.  Whether scientists acknowledge it or not, it is a dimension of being in God’s image: representing our Creator as His theocratic stewards.  But as with all technological manifestations of dominion authority, there are positive and negative aspects of this authority.  I am interested in exploring several profoundly important concerns about AI.

Columnist David Ignatius recently summarized the concerns Henry Kissinger has about the militarization of AI.  I want to summarize several of his salient observations and concerns:

  • “Henry Kissinger spent much of his career thinking about the dangers of nuclear weapons. But at 99, the former secretary of state says he has become ‘obsessed’ with a very modern concern — how to limit the potential destructive capabilities of artificial intelligence, whose powers could be far more devastating than even the biggest bomb.”  Kissinger described AI as the new frontier of arms control during a forum at Washington National Cathedral on Nov. 16.  If leading powers don’t find ways to limit AI’s reach, he said, “it is simply a mad race for some catastrophe.”  The warning from Kissinger, one of the world’s most prominent statesmen and strategists, is a sign of the growing global concern about the power of “thinking machines” as they interact with global business, finance and warfare. He spoke by video connection at a cathedral forum titled “Man, Machine, and God.”
  • Kissinger’s concerns about AI were echoed by two other panelists: Eric Schmidt, former chief executive of Google and chairman of the congressionally appointed National Security Commission on Artificial Intelligence, which issued its report last year; and Anne Neuberger, the Biden administration’s deputy national security adviser for cyber and emerging technology.
  • Kissinger cautioned that AI systems could transform warfare just as they have chess or other games of strategy — because they are capable of making moves that no human would consider but that have devastatingly effective consequences. “What I’m talking about is that in exploring legitimate questions that we ask them, they come to conclusions that would not necessarily be the same as we — and we will have to live in their world,” Kissinger said.
  • “We are surrounded by many machines whose real thinking we may not know,” he continued. “How do you build restraints into machines? Even today we have fighter planes that can fight . . . air battles without any human intervention. But these are just the beginnings of this process. It is the elaboration 50 years down the road that will be mind-boggling.”
  • Kissinger called on the leaders of the United States and China, the world’s tech giants, to begin an urgent dialogue about how to apply ethical limits and standards for AI.  Such a conversation might begin, he said, with President Biden telling Chinese President Xi Jinping: “We both have a lot of problems to discuss, but there’s one overriding problem — namely that you and I uniquely in history can destroy the world by our decisions on this [AI-driven warfare], and it is impossible to achieve a unilateral advantage in this. So, we therefore should start with principle number one that we will not fight a high-tech war against each other.”  U.S. and Chinese leaders might start a high-tech security dialogue, Kissinger suggested, with an agreement to “create at first relatively small institutions whose job it will be to inform [national leaders] about the dangers, and which might be in touch with each other on how to ameliorate” risks. China has long resisted nuclear arms control negotiations of the sort that Kissinger conducted with the Soviet Union during his years as national security adviser and secretary of state.  U.S. officials say the Chinese won’t discuss limiting nuclear weapons until they have achieved parity with the United States and Russia, whose weapons have been capped by a series of agreements starting with the 1972 SALT treaty, negotiated by Kissinger.
  • Earlier Kissinger had written:  “Philosophically, intellectually — in every way — human society is unprepared for the rise of artificial intelligence.”  Kissinger told the cathedral audience that for all the destructiveness of nuclear weapons, “they don’t have this [AI] capacity of starting themselves on the basis of their perception, their own perception, of danger or of picking targets.”  Asked whether he was optimistic about the ability of humanity to limit the destructive capabilities of AI when it’s applied to warfare, Kissinger answered: “I retain my optimism in the sense that if we don’t solve it, it’ll literally destroy us. … We have no choice.”

In addition, columnist German Lopez demonstrates how AI is a growing facet of social media:  “Social media’s newest star is a robot: a program called ChatGPT that tries to answer questions like a person . . . The upside of artificial intelligence is that it might be able to accomplish tasks faster and more efficiently than any person can. The possibilities are up to the imagination: self-driving and even self-repairing cars, risk-free surgeries, instant personalized therapy bots and more. The technology is not there yet. But it has advanced in recent years through what is called machine learning, in which bots comb through data to learn how to perform tasks. In ChatGPT’s case, it read a lot. And, with some guidance from its creators, it learned how to write coherently — or, at least, statistically predict what good writing should look like.  Another example comes from a different program, Consensus. This bot combs through up to millions of scientific papers to find the most relevant for a given search and share their major findings. A task that would take a journalist like me days or weeks is done in a couple of minutes.”
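Lopez’s phrase “statistically predict what good writing should look like” can be made concrete with a toy sketch. The snippet below is only an illustration of the underlying idea, not how ChatGPT actually works: it counts which word follows which in a tiny sample text and then “predicts” the most frequent follower. Real systems use neural networks trained on vast amounts of text, but the core principle — predicting the next word from statistical patterns in prior text — is the same. The corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# Tiny sample text standing in for the "lot" of reading ChatGPT did.
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which words follow it and how often.
follower_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    follower_counts[current][following] += 1

def predict_next(word):
    """Return the word most often observed after `word` in the corpus."""
    return follower_counts[word].most_common(1)[0][0]

# "cat" follows "the" twice, "mat" only once, so "cat" is predicted.
print(predict_next("the"))
```

Scaled up from ten words to hundreds of billions, and from simple counts to learned statistical models, this is the kind of prediction Lopez describes — which is also why such systems can produce fluent text without any human-like understanding behind it.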

But there are some potential concerns about AI and its connection to social media.

  • For one, such a level of automation could take people’s jobs. This concern has emerged with automated technology before. But there is a difference between a machine that can help put together car parts and a robot that can think better than humans. If A.I. reaches the heights that some researchers hope, it will be able to do almost anything people can, but better.
  • Some experts point to existential risks. One survey asked machine-learning researchers about the potential effects of A.I. Nearly half said there was a 10 percent or greater chance that the outcome would be “extremely bad (e.g., human extinction).” These are people saying that their life’s work could destroy humanity.
  • The risk is real, experts caution. “We might fail to train A.I. systems to do what we want,” said Ajeya Cotra, an A.I. research analyst at Open Philanthropy. “We might accidentally train them to pursue ends that are in conflict with humans’.”  Take one hypothetical example, from Kelsey Piper at Vox: “A program is asked to estimate a number. It figures out that the best way to do this is to use more of the world’s computing power. The program then realizes that human beings are already using that computing power. So it destroys all humans to be able to estimate its number unhindered.” The problem, as A.I. researchers acknowledge, is that no one fully understands how this technology works, making it difficult to control for all possible behaviors and risks.

Years ago, theologian Carl Henry offered a helpful guideline when it comes to the relationship between ethics and technology.  If a technology offers solutions to what needs to be repaired, this is a reasonably positive dimension of that technology—and it is ethically sound.  However, if the technology seeks to manipulate and control, one must be wary and cautious in adopting this technology.  Given the depravity of the human race, AI will give humans significant power to control and manipulate.  Kissinger and Lopez issue some worthwhile cautions about the use of AI.  I am not optimistic that humanity will heed these cautions and concerns.

See David Ignatius, “Why artificial intelligence is now a primary concern for Henry Kissinger” in the Washington Post (24 November 2022) and German Lopez, “ChatGPT gives an early glimpse at what artificial intelligence could become” in the New York Times (8 December 2022).