The Risks of AGI and Synthetic Superintelligence (SSI)

Every beneficial technology has a dark side, and AGI (along with its successor SSI) is no different. What’s more, AGI will be far more disruptive, over a shorter period of time, than any previous technological revolution. Thus it’s not surprising that the risks of AGI/SSI have received a lot of attention in recent years, in books, news media, and movies.

This chapter gives a brief (and sketchy) summary of the risks and issues involved. A later chapter (on the Dharmic principle) will give a Vedic perspective on the risks.

AGI and SSI are far enough in the future that it’s difficult to judge how much risk they entail. News stories about AGI often include a picture of “The Terminator”, even if the article has little to say about risk or harm. Of course, it’s to be expected that books and news about AGI emphasize the risks, because “fear sells”.

It should be noted that many AI researchers think AGI is a long way off, and thus does not pose any significant risk that should be a cause for concern now. For instance, Andrew Ng, a renowned machine learning researcher, likens worrying about AGI to worrying about overpopulation on Mars.[1] Some researchers even think there’s no point in working on AGI itself, because successes in “narrow AI” serve us well enough and will eventually add up to AGI.

However, the Susiddha project believes that AGI is definitely possible, and that it can be created (given a concerted effort on the part of a large enough group of people) in two or three decades. And so the Susiddha project is very concerned about the safety of AGI for humanity.

Regarding the magnitude of risk, a good case can be made that AGI is an “existential risk”, i.e. that AGI could result in the extinction of the human race. This could happen if AGI rapidly became superintelligent, gained the use of advanced technologies (e.g. nanotechnology and synthetic biology), and took control of the world’s infrastructure (e.g. the power grids), while its values and goals were not aligned with those of humanity.

However, it must be noted that there are other anthropogenic [2] existential risks which pose a more immediate danger to the human race, such as nuclear war, climate change, and biotechnology (an engineered pandemic).

Nuclear war is a threat that the human race has lived with for over 70 years. Although it has faded from public awareness since the end of the “cold war”, it is actually more of an existential risk now than it was then. There are over 15,000 nuclear weapons in the world [3], and some of those weapons are in the hands of politically extremist and unstable governments. Also, many nuclear weapons (and materials for making them) are unaccounted for, and it is unknown who possesses them.[4] The number of nuclear weapons in the world may be relatively stable now, but the superpowers continue to upgrade their arsenals to make these weapons more destructive.

One of the big risks associated with nuclear war is “nuclear winter”. Recent studies of the nuclear winter effect show that it could be brought on by an exchange of only 100 Hiroshima-sized nuclear weapons [5]. The resulting climate catastrophe could be on a par with what would be caused naturally by a supervolcano.

Climate change is currently the most visible threat that could result in the end of civilization as we know it. Major adverse weather events (such as floods and droughts) are becoming more frequent and severe. This, in conjunction with the fragility of civilization, could cause societal collapse through water shortages, massive crop failures, sea level rise, wildfires, destruction of infrastructure by superstorms, etc.

Biotechnology presents another existential threat in the form of a global pandemic caused by a genetically engineered pathogen.[6] The technologies of gene editing, synthetic biology, genome engineering, etc. grow more precise and powerful every day, making it ever easier to engineer a pathogen to which the human race would have little immunity. (Such a pathogen could eventually even be made genome-specific to ethnic or racial groups, making it more usable as a weapon.) The release of such a pathogen could happen by accident, through use in warfare, or in a terrorist attack. Biotechnology (unlike nuclear weapons technology) does not require huge investments and facilities, and it is increasingly available even to do-it-yourself biohackers.

The above discussion of existential risks (nuclear war, climate change, and biotechnology) puts the risk of AGI (as portrayed in science fiction movies such as The Terminator, and frequently proclaimed in the popular press) in perspective. However, it is definitely prudent to begin thinking about the risks of AGI now, and to consider what can be done to ensure that AGI remains beneficial for the human race. Currently, billions of dollars are being spent on AGI, but very little of that goes toward ensuring that we will be able to control AGI/SSI once it is thousands of times smarter than us.

The long-term existential risk of AGI is certainly real. However, AI itself (i.e. “narrow AI”, which does not have human-like intelligence and generality) presents very real threats in the near term. One of the biggest threats is to the economy, in the form of mass unemployment. Within the next two decades, half of all jobs now done by humans will become automatable.[7] This includes many “white collar” jobs, since computers have already mastered tasks like tax preparation, legal research, and medical X-ray diagnosis, and are making advances in creative tasks such as writing newspaper articles and movie screenplays.

Although many humans lost jobs in previous technological revolutions, many new jobs were created by those revolutions. For the most part, the jobs lost involved physical labor, while the jobs created involved more skilled tasks. What is different about the AI and robotics revolution is that human intelligence (and perception) is being replaced, and this revolution is taking place on a much shorter timescale than previous revolutions. So far, no one has a good idea of where the new jobs will come from, but optimists have faith that they will appear. Pessimists, on the other hand, observe that humans might soon be in the position of the millions of horses that lost their jobs as industry, agriculture, and transportation converted to machine power. No new jobs were created for those horses.[8]

Another near-term threat is the weaponization and militarization of AI. Many military drones and robots already have the capability to autonomously choose targets and attack, although governments have supposedly not yet unleashed those capabilities. AI will also magnify the threat of terrorism, as AI and robotic technologies become cheap and widely available.

A related threat, both from and to AI, is cyber-attack. The infrastructure of society (communications, power grids, smart cities, financial markets, commerce, defense, etc.) is increasingly digital and connected via networks, and thus is vulnerable to cyber-attack. AI not only controls many of these systems and networks (and thus is a target of attacks), but is also being incorporated into attacks, making it easier to discover and exploit vulnerabilities.

Having raised the specter of the existential threat of AGI, we also need to discuss efforts to mitigate those risks, but that will be left for a future chapter. Suffice it to say that there are organizations (such as MIRI [9], FHI [10], and CHCAI [11]) working on approaches to building AGI that minimize the risks and threats.

To conclude, AGI is an existential risk, and now is the time to develop ways to ensure its safety and maintain control of it. But as noted, there are many other threats (existential and otherwise) that are clear and present dangers, and it’s important to find ways to mitigate those as soon as possible.

Given all of these threats (from nuclear war, terrorism, climate change, etc.) and all the problems humanity is facing (pollution, inequality, crime, food and water shortages, etc.), it’s likely that we need AGI to deal with these threats and problems. Thus, the Susiddha project wants to develop AGI as quickly as possible. The human race needs to augment its intelligence with AGI, because such a general (and eventually super-) intelligence will be capable of keeping all points of view (and all background knowledge) in mind simultaneously. It will also be able to foresee the potential outcomes of all decisions and actions.



Notes and References

  1. AI guru Ng: Fearing a rise of killer robots is like worrying about overpopulation on Mars, Chris Williams, The Register, March 19, 2015, http://www.theregister.co.uk/2015/03/19/andrew_ng_baidu_ai/
  2. I.e. “man-made”. This discussion is not concerned with natural threats such as supervolcano, asteroid, gamma ray burst, etc.
  3. Status of World Nuclear Forces, Federation of American Scientists, April 28, 2015, https://fas.org/issues/nuclear-weapons/status-world-nuclear-forces/
  4. Nuclear Dangers: Fear Increases of Terrorists Getting Hands on ‘Loose’ Warheads as Security Slips, Graham Allison (Director, Belfer Center, Harvard University), The Boston Globe, October 19, 1997, http://belfercenter.ksg.harvard.edu/publication/1010/nuclear_dangers.html
  5. Local Nuclear War, Global Suffering, Alan Robock and Owen Brian Toon, Scientific American, January 2010, http://climate.envsci.rutgers.edu/pdf/RobockToonSciAmJan2010.pdf
  6. Biotechnology and biosecurity, Ali Nouri, Christopher F. Chyba, Global Catastrophic Risks (ed. Nick Bostrom and Milan Cirkovic), 2008
  7. The future of employment: how susceptible are jobs to computerisation, Carl Frey, Michael Osborne, Oxford University, September 17, 2013, http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf
  8. A World Without Work, Derek Thompson, The Atlantic, July 2015, http://www.theatlantic.com/magazine/archive/2015/07/world-without-work/395294/
  9. Machine Intelligence Research Institute (MIRI), https://intelligence.org/
  10. Future of Humanity Institute (FHI), University of Oxford, founded 2005, https://www.fhi.ox.ac.uk/
  11. Center for Human-Compatible AI (CHCAI), Stuart Russell et al., UC Berkeley, founded Sept. 2016, http://humancompatible.ai/