Artificial General Intelligence (AGI)

Artificial General Intelligence (AGI) is a subfield of Artificial Intelligence (AI) that seeks to create computational systems possessing the kind of general intelligence that humans have. The term “general” indicates that such an agent can learn and generalize, and is not limited to specific tasks (such as playing chess, vacuuming, or driving a car), as is the case with “narrow AI”. Thus, an AGI agent can think and act intelligently in many different domains, and can transfer knowledge between them.

AGI was the original goal of the founders of AI (such as Turing, Simon, and McCarthy). However, AGI has proven much more difficult than anticipated, and thus the field of AI has largely focused on narrow applications. Fortunately, some researchers have persisted, and AGI has continued to advance. Lately, it has begun to attract much public and commercial attention.

Over the last decade, large companies have invested billions of dollars in the development of AGI. Companies such as IBM, Google, Facebook, Apple, and Microsoft are all working on AGI and/or building the advanced AI and machine learning (ML) that enables it.

For instance, “deep learning” (a form of ML) has received enormous attention in the past few years because of its successes in computer vision, speech recognition, natural language processing, and other areas. Although these feats can be considered “narrow AI”, they are increasingly leading to more general algorithms that can learn in much the same way that humans do. They are also yielding the separate modules of an AGI system, much as the human brain has separate “modules” for vision, speech, etc. that are integrated and used in a “general” fashion by the mind.

It would also be remiss not to mention such AI successes as IBM’s Watson (which beat the champions of the “Jeopardy!” quiz show and is now being applied to medical diagnosis), and DeepMind’s systems, which have learned to play Go and video games better than any human.

In the last few years, many books have appeared on the topic of AGI, including “How to Create a Mind”[1], “Smarter Than Us”[2], “Superintelligence”[3], and “Our Final Invention”[4].

Besides the commercial development of AGI, open-source projects have appeared that are specifically aimed at creating AGI, such as OpenCog [5], Numenta NuPIC [6], and OpenAI [7]. In addition, companies such as Google and Facebook have open-sourced much of their AI and ML software.

As the power of AI has increased, so has recognition of the dangers that AGI poses to society and the human race. The popular press has picked up on statements by Stephen Hawking, Elon Musk, and Bill Gates about the risks of AGI. This risk will be discussed in a later chapter on existential risk.

AGI has made enough progress in the last decade that it now seems all but inevitable, even though experts differ widely in their estimates of when a computer with human-level intelligence will actually appear. And of course, it is only “inevitable” barring some catastrophe that destroys human civilization.

One other matter to touch on regarding AGI is the issue of “artificial consciousness”, i.e. whether such an advanced and complex AI system can become conscious. The answer could well be “yes”, and this will be addressed in a subsequent chapter on consciousness in AGI systems.

In the next chapter, we describe how Susiddha AI will implement AGI.

Notes and References

  1. How to Create a Mind: The Secret of Human Thought Revealed, Ray Kurzweil, Viking, 2012.
  2. Smarter Than Us: The Rise of Machine Intelligence, Stuart Armstrong, MIRI, 2014.
  3. Superintelligence: Paths, Dangers, Strategies, Nick Bostrom, Oxford University Press, 2014.
  4. Our Final Invention: Artificial Intelligence and the End of the Human Era, James Barrat, 2013.
  5. OpenCog, Ben Goertzel et al.
  6. NuPIC (Numenta Platform for Intelligent Computing), Jeff Hawkins and team.
  7. OpenAI, founded December 2015.