The Dharmic Principle
A previous chapter discussed the risks of AGI and Superintelligence, and the need to mitigate those risks. It also discussed “existential risk” in general, so as to put the risks of AGI/SSI in perspective. The current chapter provides a Vedic viewpoint on those risks.
The Susiddha AI project will make use of current research into mitigating those risks, so that AGI is safe and beneficial, and remains so through the rapid explosion of intelligence that occurs once the AGI is given the capability to improve its own code (i.e., as it becomes superintelligent). For instance, research on friendly/beneficial AI from the Machine Intelligence Research Institute (MIRI)[1] will be useful. Friendly AI and “coherent extrapolated volition” (CEV) were discussed in a previous chapter.
In addition to such research in beneficial AI, the Vedic literature and philosophy provides another perspective on the issue of threats and risks, and we call it the “Dharmic Principle”. The Dharmic Principle has some similarity to the “Anthropic Principle”, so a brief definition of the Anthropic Principle is in order.
There are two versions of the Anthropic Principle. The “weak” Anthropic Principle (WAP) says “conditions that are observed in the universe must allow the observer to exist”[2]. In other words, since we humans are conscious observers living in the universe, the universe must be conducive to life and consciousness. The WAP is a logical truism (and as such it is not a cause for debate).
In contrast, the “strong” Anthropic Principle (SAP) says “the universe must have properties that make inevitable the existence of intelligent life”[3]. In other words, the universe is in some sense destined to evolve conscious living beings. Unlike the WAP, the SAP is controversial and many cosmologists and philosophers have voiced objections to it; also, there is no way to test the SAP theory at this time.
Vedic philosophy (especially Vedanta) contains a version (or extension) of the SAP. The idea is that “brahman”, the primordial monistic “essence” which precedes the universe, manifests (by its very nature) into consciousness (“purusha”) and universe (“prakriti”). Vedanta does not require that there be a God to “will” this to happen, but it does require that there be laws of nature (the Vedic devatas) to carry out the creation and evolution.
Vedanta doesn’t claim that there cannot be universes which do not contain life and consciousness (i.e., universes that are not finely tuned to allow life and consciousness to emerge); however, this is deemed unlikely, since the purpose of creation is for “brahman” to become aware of itself.
One expression of creation is found in the Chandogya Upanishad where the Cosmic Self says “eko’ham bahu syām”[4], which can be translated “I am one; let me be many”. Thus, the Cosmic Self (which is the metaphysical pan-consciousness of the cosmos) turns itself into the multitude of conscious beings of the universe. [5]
Although the Vedic version of the SAP is not the same as John Wheeler’s Participatory Anthropic Principle (PAP), his famous “big U” diagram is relevant in this discussion. In that diagram, at the left end of the “U” is an eye which looks back upon the right end of the “U”. In other words, the Universe (the “big U”) evolves conscious creatures who ultimately look back and reflect on it, and thus make the universe self-aware.
If consciousness (and life which bears it) is an essential component of the universe, then it’s possible there are safeguards that prevent the extinction of life and consciousness in the entire universe. However, the Vedic literature describes cycles of creation, so any given universe eventually comes to an end. At that time, all the small individual consciousnesses also end, but the cosmic metaphysical consciousness continues on, and eventually produces another universe.
Of course, this is not much consolation if the present human race destroys itself, especially if earth was the only planet in the universe that had life (which seems unlikely, even though we have not yet seen evidence of any other life in the universe).
This brings us to the Dharmic Principle, which says that “dharma” always outweighs “adharma”. Adharma literally means “not dharma”, i.e., the opposite of dharma (that which upholds and maintains the universe, life, and society). So the balance between dharma and adharma (and between good and evil) is always maintained such that adharma never overpowers dharma, and dharma always triumphs over adharma.
One of the core expressions of Vedic literature is “satyam eva jayate” [6], which is usually translated as “truth always triumphs”[7]. This expresses the maxim that dharma always triumphs over adharma (or good triumphs over evil, order triumphs over entropy, etc.).
The above cannot be considered rigorous philosophical reasoning, but it does make the point that, in the big picture, the Vedic mindset is not so concerned with the extinction of life on earth. It may be an “existential risk”, but it is not an “ontological risk”. However, the Vedic viewpoint is not passive, since life requires action (“karma”), and so every human must act in accordance with dharma, which is virtuous and life-supporting.
So to bring this discussion back to AGI: in the Vedic mindset, risks are always present, but the Dharmic Principle gives certain assurances that make the risk of building AGI/SSI acceptable. And since life requires karma in accordance with dharma, the Susiddha project strives to ensure that AGI/SSI is safe and beneficial. But in the end it must be recognized that “gahanā karmaṇo gatiḥ” (“the course of action is unfathomable”[8]); there will always be unknowables and no guarantees of success. Therefore, striving to do what is dharmic is the most important thing.
Obviously, we cannot just create AGI/SSI and assume it will do the right thing. There is much we must do to shape the nascent synthetic mind we create. For starters, we must make sure AGI reaches a high level of natural language understanding (NLU) and machine interpretation. Otherwise, it could be superintelligent without actually understanding anything about humanity or society or dharma.
Also, we will need to instill dharma into AGI/SSI. However, dharma resides in the sounds of the Vedic literature, so to truly know dharma, the AGI/SSI must comprehend the Vedic literature from its sounds. This subject was addressed in a previous chapter and will be further discussed in the chapter on Shruti.
Once we are satisfied that the Susiddha AGI/SSI system can learn directly from the Vedic literature, and store what is heard and learned in deep neural networks[9], we then test its understanding (the models it has built) by observing the opinions and advice it gives in response to questions, and the decisions it makes in response to scenarios.
The period of evaluating AGI will be difficult for humans, because the answers it gives will likely be very novel, answers that humans could never have thought of. Humans will be challenged, and ultimately will have to gain an intuitive sense for whether we are ready to hand over control of the world and our future to AGI/SSI.
Obviously, there are concerns that AGI/SSI would simply take control before humans have a chance to evaluate and decide what to do; this is the so-called “AI control problem” (discussed in the book Superintelligence[10]). This is why it is important to instill dharma (life-supporting values) into AGI as it is being developed.
The Susiddha AI project believes that we have the “Dharmic principle” on our side. We believe the universe wants to be aware of itself, and so it evolves and produces conscious, living creatures such as ourselves. Thus we can feel confident to move forward and build AGI/SSI, with appropriate cautions and safeguards, but more importantly, with dharmic intentions.
And the human race needs to move forward with AGI/SSI, if it is going to solve all the global problems facing us, and reduce the existential threats menacing us.
Notes and References
1. Machine Intelligence Research Institute (MIRI), https://intelligence.org/
2. Definition of “anthropic principle”, Merriam-Webster Dictionary, http://www.merriam-webster.com/dictionary/anthropic%20principle
3. Ibid.
4. Chandogya Upanishad, verse 6.2.3
5. The chapter on Consciousness and Self-awareness describes the two different uses of the term “consciousness”, one being the metaphysical cosmic sense, and the other being the ordinary individual sense. The universe is inherently conscious in the metaphysical sense, and it evolves creatures that are conscious in the ordinary sense (once their brains achieve a sufficient level of complexity and intelligence).
6. Mundaka Upanishad, verse 3.1.6
7. Or “truth alone triumphs”, etc.
8. Bhagavad Gita, verse 4.17
9. The neural network of an SSI brain will greatly exceed that of the human brain, both in size (number of neurons) and connectivity (number of synapses). An average human can only keep about 7 items in mind at one time, whereas an SSI mind will be able to keep thousands or millions of items in mind. This will allow the SSI mind to make much more balanced and rational decisions than humans.
10. Superintelligence: Paths, Dangers, Strategies, Nick Bostrom, Oxford University Press, 2014