Dharmic and beneficial AGI

The concept of “dharma” has been frequently mentioned in previous chapters. In translation, “dharma” has hundreds of meanings, but the essential and widest meaning is: that which upholds, supports, and maintains society, life, and the universe. [1]

From this root definition, many sub-definitions derive, such as “that which is right, good, prescribed, lawful, wise, virtuous, etc.” However, dharma is not easily defined or described, and is one of the most complex topics of Vedic literature.

The concept of dharma will be related in this chapter to the AGI concept of a “goal system”, and to the discussions of beneficial or “friendly” artificial intelligence. And in a later chapter, dharma will be related to the philosophic and cosmological concept of the “anthropic principle”.

An AGI system can have many goal systems and “utility functions” at different levels and in different contexts. In this chapter we are mainly concerned with the top-level goal system that operates in the Vedic core of the Susiddha system. This goal system is focused on Dharma (in its widest/highest sense), and more specifically on the “purushārtha”, the four aims of human life, which are: kāma (pleasure, entertainment), artha (livelihood, wealth), dharma (duty, good conduct, virtue), and moksha (enlightenment, liberation).
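To make the idea of a top-level goal system concrete, here is a minimal sketch in Python of a utility function weighted over the four purushārtha aims. The names, weights, and structure are purely illustrative assumptions, not part of the actual Susiddha design:

```python
from dataclasses import dataclass

@dataclass
class Aim:
    name: str
    weight: float  # relative priority within the top-level utility

# Illustrative weights only; moksha is weighted highest to reflect
# its place as the highest of the four aims.
PURUSHARTHA = [
    Aim("kama", 0.1),    # pleasure, entertainment
    Aim("artha", 0.2),   # livelihood, wealth
    Aim("dharma", 0.3),  # duty, good conduct, virtue
    Aim("moksha", 0.4),  # enlightenment, liberation
]

def top_level_utility(scores: dict[str, float]) -> float:
    """Combine per-aim scores (each in [0, 1]) into one utility value."""
    return sum(aim.weight * scores[aim.name] for aim in PURUSHARTHA)

# A state that fully satisfies all four aims scores the maximum utility.
print(top_level_utility({"kama": 1.0, "artha": 1.0, "dharma": 1.0, "moksha": 1.0}))
```

In a real system the per-aim scores would themselves be produced by lower-level goal systems operating in their own contexts; this sketch only shows how a fixed weighting could combine them at the top level.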

The highest of the four aims is “moksha”, which is spiritual liberation. All humans have within themselves the means to liberation, and this has always driven the highest aspirations of the human race. AGI will make it possible for humans to easily fulfill the three lower aims, so that humans will naturally find themselves focusing on enlightenment and liberation, and have the means and desire to do so.

Defining “liberation” is not easy, and because many humans throughout history have glimpsed it through the lenses of many different cultural and spiritual traditions, we will not attempt to define it here. Some of the many expressions used to describe liberation include: enlightenment, cosmic consciousness, oneness with God, peace that surpasses all understanding, the kingdom of heaven within, oneness with the universe, the eternal now, etc. How an AGI system could actively facilitate enlightenment and liberation for humans will be taken up in a later chapter.

Determining the specifics of the goal system is an important part of the research agenda of the Susiddha project. As noted previously, dharma is one of the most complex topics of the Vedic literature, and is seen from many different viewpoints and in many situations and contexts. There is no “synoptic” view, and there are many “checks and balances”. So, the process of enumerating and weighting all those viewpoints and contexts, perhaps in a probabilistic hypergraph model, will be a huge task (but fortunately the number of works in the Vedic literature is finite).
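As one illustration of what such a model could look like, here is a minimal sketch of a weighted hypergraph in which each hyperedge links a set of viewpoints, contexts, or texts and carries a probabilistic weight. All names, example texts, and weights here are hypothetical:

```python
from collections import defaultdict

class ProbabilisticHypergraph:
    """Sketch of a hypergraph whose hyperedges carry probabilistic weights.

    Each hyperedge connects an arbitrary set of nodes (viewpoints,
    contexts, passages) rather than just a pair, which is what makes
    a hypergraph suited to many-way relationships.
    """

    def __init__(self):
        self.edges = []                     # list of (frozenset of nodes, weight)
        self.incidence = defaultdict(list)  # node -> indices of edges touching it

    def add_edge(self, nodes, weight):
        idx = len(self.edges)
        self.edges.append((frozenset(nodes), weight))
        for node in nodes:
            self.incidence[node].append(idx)

    def support(self, node):
        """Total weight of all hyperedges that mention a node."""
        return sum(self.edges[i][1] for i in self.incidence[node])

# Hypothetical example: two contexts in which dharma is discussed.
g = ProbabilisticHypergraph()
g.add_edge({"dharma", "kingly-duty", "Mahabharata"}, 0.8)
g.add_edge({"dharma", "household-duty", "Manusmriti"}, 0.6)
print(g.support("dharma"))
```

This is only a data-structure sketch; the real task described above, enumerating and weighting the viewpoints themselves, is the hard part.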

Also, one of the distinguishing features of the Susiddha project is the processing of “shabda” (sound) via computational audition and deep learning (see the chapter on Shabda). In the chapter on machine interpretation, we explained why we want the AGI system to learn for itself, with only minimal input from humans. The “computer models” created by directly processing the sounds of the Vedic literature will contain dharma unmediated by human interpretation. More will be said about this in the chapter on Shruti.

In the next chapter we will talk about the risks of AGI (and existential risk in general), but continuing here with the theme of dharma, we will discuss an approach to mitigating the risks called “Friendly AI”.[2] The term “friendly” here means “human-friendly”, and it refers to AI that is compatible with the survival and thriving of humanity.

The field of “Friendly AI” seeks to develop ways to ensure that AGI is safe and beneficial, and remains so as it gains superintelligence far beyond that of humans. This field also seeks to ensure that the ethics of AGI systems remain aligned with human ethics and values. However, this is obviously a difficult task, given that human ethical systems have many points of disagreement and opposition.

One approach to determining what values an AGI system should have is called “coherent extrapolated volition” (CEV) [3]. In essence, this concept says that the AGI will need to learn an ideal of ethics that humans might have if they had much more knowledge, could think faster, had grown up farther together, and were more the kind of people they wished they were.

In the Vedic literature, Dharma is arguably the equivalent of CEV, but it probably cannot be extrapolated from the behavior of modern humans. The Susiddha project sees Dharma as representing the ideal that human society embodied in “sat-yuga” (the Vedic “golden age”), and toward which it has always striven, although attaining that ideal becomes more difficult with each passing era or “yuga”.

Getting all Hindus (let alone all humans) to agree now on what Dharma is would obviously be very difficult. But if Susiddha AI has vastly more consciousness, intelligence, and knowledge than any group of humans, then it can read and comprehend all of the Vedic literature and philosophy (in Sanskrit, not in translation), and all of the world’s literature and philosophy; it will then have the qualifications to be a dharmic and wise Avatar.

In the next chapter we talk about existential risk in general, and about specific risks from AGI and AI (both long-term and near-term), and attempt to put these risks in perspective.

Notes and References

  1. This is Susiddha AI project’s definition of dharma, which is derived from dharma’s verbal root, “dhRi”.
  2. Artificial Intelligence as a Positive and Negative Factor in Global Risk, Eliezer Yudkowsky, MIRI, 2008, http://intelligence.org/files/AIPosNegFactor.pdf
  3. Coherent Extrapolated Volition, Eliezer Yudkowsky, MIRI, 2004, https://intelligence.org/files/CEV.pdf