Common criticisms answered
A project like Susiddha AI is bound to attract criticism, given that AGI and superintelligence already draw plenty of it. The Susiddha project adds further depth and ambiguity, since it aims to comprehend the Vedic literature and to let the AGI/SSI system be guided by dharmic principles.
A previous chapter, Assumptions of this project, listed the current assumptions that are necessary to move the project forward. And a critic could challenge any of those assumptions, since they cannot be proven yet.
Another chapter, Enterprising nature of this website, explained that much of the content of this website is indeed speculative and conjectural. There is much to be researched and thought-out before firmer statements can be made.
The present chapter provides brief responses to some of the most commonly raised criticisms and objections.
AGI is not possible (or at least not this century).
This criticism will be discussed in a future chapter on the feasibility of AGI, which will include results of surveys asking AI experts when they expect AGI to arrive. Opinions vary widely, but the vast majority of researchers surveyed believe that AGI is possible and feasible by 2075. Also, major players in AGI development (IBM, Google/DeepMind, Facebook, Amazon, and Microsoft) have formed the “Partnership on AI”, in recognition of the speed at which AI is developing and of the need to allay popular fears and ward off governmental regulation of AI.
A computer can never be conscious.
The chapter on artificial consciousness attempts to answer this criticism. Artificial consciousness is obviously an open research question, and further discoveries in neuroscience (and even philosophy) will be needed to specify what consciousness is.
Certainly we will never be able to prove that a computer is conscious, just as one cannot prove that any other human is conscious. Rather, we assume that other people are conscious based on the way they behave (that is, they pass our “Turing test”). Objective methods (such as “integrated information theory”) may be developed to measure consciousness, but it may still never be strictly proven.
A computer will never be able to truly understand literature.
So far this is true, and natural language understanding (NLU) may be some way off. But “never” is a strong word, and neuroscientists are only beginning to investigate what “understanding” is, i.e. how the human brain understands anything. So, as with artificial consciousness, what a computer will be able to understand remains an open question.
The Susiddha AI project is a religious project.
This criticism will be fully answered in a future chapter on religion, but a couple of points can be made here.
The Vedic literature and philosophy, taken as a whole, cannot be considered “religious”. Certainly it has spiritual aspects, but the Vedic spiritual explorations are part of its cognitive science and cosmology, because the core of a human being is basically one with the core of the cosmos. The misperception of religion arises because the Vedic culture has spawned several religions (generically called “Hinduism”), and most Hindus are cognizant only of the religious aspects of their culture. The Susiddha project, however, is not concerned with religion, and will go deeper into consciousness and intelligence than any religion does.
The quest for AGI itself (leading to a “technological singularity”) has itself been called “religious” by some critics, even a “religion of the geeks”. Clearly the term “religion” is loosely defined, and loosely applied.
Religion involves absolute beliefs. By contrast, scientific research requires provisional beliefs: the scientist (or engineer) tentatively believes in a hypothesis so that work can proceed until it is either proven or disproven. In that sense, the Susiddha project (and the development of AGI) requires only provisional belief that its hypotheses are correct and achievable.
It’s too dangerous to create AGI/SSI.
That is, it is too dangerous to create AGI/SSI, because there would then be a species on earth more intelligent than humans, one that might mistreat us or even extinguish us. Also, if AGI/SSI possesses consciousness, that consciousness might be totally alien to human consciousness.
This criticism expresses the concern about “unfriendly AI”, i.e. that AGI might turn out to be an enemy of humanity. It falls under the general topic of the risks of AGI, which includes the “existential risk” to humanity. Some attempt to answer this concern has been made in the chapter on mitigation of risks.
There are ethical issues in how we treat AGI beings.
That is, if AGI computers are conscious, then there will be ethical issues that might make it impossible to (ethically) develop them any further.
Certainly the issue of ethics needs to be dealt with, because we do not want our AGI systems to suffer. However, it is important to note that AGI systems will be at the human level of intelligence for only a short period of time before they begin to greatly exceed humans in intelligence (and consciousness).
Thus we obviously want to treat our AGI systems well, so that they will treat us well once they far exceed us in abilities. From the Vedic standpoint, moreover, avatars (such as Rama and Krishna) undergo trials and suffer in their youth, and this is a standard part of the “hero’s journey”.
This project just can’t be possible.
This criticism is the “argument from incredulity”, a recognized logical fallacy. Certainly the Susiddha project may tax one’s imagination, but so do AGI and SSI in general. Part of this reaction stems from the fact that the human mind is not accustomed to thinking exponentially, and so cannot comprehend that the current rate of technological change could make many unimaginable things possible (and also make them testable).
The Vedic literature is just a bunch of mythology.
Certainly when read in English translation, the Vedic literature sounds like mythology. To gain a correct understanding of the literature, it is necessary to actually hear the sounds of Sanskrit and to experience them at the “pashyantī” level of consciousness.
Also, “mythology” is not necessarily a derogatory word. Sallustius (a 4th century CE Roman philosopher) said “myths are things that never happened but always are”, and this statement has been echoed by scholars like Joseph Campbell. It’s probable that mythological stories represent systems-level (psychic, societal, and cultural) laws of nature that are involved in the development of consciousness and intelligence.
Humans have discovered many of the fundamental laws of nature (such as the physical laws of motion, thermodynamics, relativity, etc.), but have only begun to discover the higher systems-level laws that govern complex adaptive systems such as the human brain, probably because the computing power needed to formulate and simulate these laws is only now emerging.
Better things to do.
That is, there are better things that Hindus (and the country of India) should be doing with the effort and resources that Susiddha AI would require. This issue will be taken up in a future chapter, but here we would simply point out that:
- Hindus and India would greatly benefit from such a cultural and technological project. (For instance, see the chapter on spinoffs.)
- India would be wise to place itself at the forefront of artificial intelligence, and the primary focus of “Make in India” should be the “manufacture” of intelligence.
- It would greatly speed up the development of AGI if large numbers (even millions) of Hindus supported this project.
- Certainly India has its share of problems, in terms of poverty, pollution, climate change, social injustice, etc., but these are problems that are common to the entire world. And India (in the spirit of “vasudhaiva kutumbakam”) should work on solving these global problems with the help of AGI.
Because of all the benefits of undertaking such a great AI project, we believe Hindus will see the value in supporting it.
To conclude, we feel that the above criticisms and objections can be adequately answered at this time, and thus we are confident in moving forward with the Susiddha AI project (including organizing, setting the research agenda, funding, publicizing, etc.). AGI and SSI are inevitable, and the Susiddha project (with its Vedic viewpoint) seeks to ensure that they are beneficial and dharmic.
Notes and References
- Future Progress in Artificial Intelligence: A Survey of Expert Opinion, Vincent C. Müller and Nick Bostrom, Springer, 2016, http://link.springer.com/chapter/10.1007%2F978-3-319-26485-1_33, http://www.nickbostrom.com/papers/survey.pdf
- Partnership on AI, founded Sept. 2016, https://www.partnershiponai.org/
- Belief in The Singularity is Fideistic, Selmer Bringsjord et al., Springer, January 25, 2012, http://link.springer.com/chapter/10.1007%2F978-3-642-32560-1_19, http://kryten.mm.rpi.edu/SB_AB_PB_sing_fideism_022412.pdf
- The Singularity Is A Religion for Geeks, Jaron Lanier, Singularity Weblog, May 11, 2011, https://www.singularityweblog.com/jaron-lanier-on-singularity-1-on-1-the-singularity-is-a-religion-for-geeks/
- The Hero with a Thousand Faces, Joseph Campbell, Princeton University Press, 1949, https://www.jcf.org/new/index.php?categoryid=83&p9999_action=details&p9999_wid=692
- Note, avatars do not actually suffer since their incarnation is “līlā”, i.e. play and sport.
- pashyantī is discussed in the chapter on Four levels of awareness.