Assumptions and dependencies of this project
The Susiddha AI project depends on many assumptions. The main ones are listed below.
Continued increase in the power of computing hardware
This was discussed in the section on hardware improvement in the chapter on Superintelligence.
Continued increase in the capabilities of software
This was discussed in the section on software improvement in the chapter on Superintelligence.
The development of AGI
AGI and its progress were discussed in a previous chapter, and the apparent inevitability of AGI will be discussed in a future chapter.
The development of SSI from AGI
The scaling of AGI to reach the level of SSI was discussed in the chapter on Superintelligence. This requires hardware scaling as well as software scaling, including “recursive self-improvement,” whereby the AGI system becomes smart enough to begin rewriting its own code, designing new functions, and gaining new abilities.
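As a very loose, toy illustration of what an iterative improvement loop looks like (an analogy only, not actual recursive self-improvement), the sketch below repeatedly mutates a candidate solution, keeps mutations that improve its score, and also adjusts its own mutation step size based on success, a crude stand-in for a system refining its own improvement process. The objective function and all constants here are arbitrary placeholders.

```python
import random

def score(params):
    # Hypothetical objective: higher is better (peak at params == [3, -1, 2]).
    target = [3.0, -1.0, 2.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

params = [0.0, 0.0, 0.0]   # candidate "solution"
step = 1.0                 # the loop's own mutation step size
best = score(params)

for generation in range(2000):
    candidate = [p + random.gauss(0, step) for p in params]
    s = score(candidate)
    if s > best:
        params, best = candidate, s
        step *= 1.1        # successful mutation: search more boldly
    else:
        step *= 0.98       # failed mutation: search more cautiously

print(f"best score {best:.4f} with params {[round(p, 2) for p in params]}")
```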
Artificial consciousness
This was discussed in the chapter on Consciousness. It is also assumed that artificial consciousness will scale to super-consciousness as AGI scales to SSI (superintelligence).
Sound and speech processing improvement
Improvement to the point that a computer can store and process both sound and speech as well as the human brain does. Good speech processing is obviously necessary for natural language processing (NLP) and natural language understanding (NLU). But in order for personal robots (and other smart devices) to be truly useful to us, they will have to hear and comprehend sound as well as humans can. Consider the myriad sounds you hear every day; a short list might include: a car, truck, or bus passing nearby, footsteps, a knock on the door, people talking nearby, honking, a faucet running, children playing, a pet barking or mewing, the whistle of a tea kettle, the beeping of a service truck backing up, sirens, sawing, drilling, lawn mowing, a vacuum cleaner, the patter of rain, a clap of thunder, applause, music playing, etc. (You can think of many more.)
Thus, researchers (in neuroscience, computational audition, NLP, deep learning, etc.) seek to improve sound and speech processing in AI and robotics. The Susiddha project will use the results of this research to build a system that can hear the Vedic literature, learn the “texts” aurally, store the oral “texts” in deep neural networks, and process them as well as a human can. Knowing and processing the Vedic literature is necessary for developing a dharmic Avatar.
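To make the computational-audition assumption a bit more concrete, here is a minimal sketch of the kind of component involved: a small convolutional network that classifies short audio clips (such as the everyday sounds listed above) from their mel spectrograms. It assumes PyTorch and torchaudio are available; the label set, clip length, and random “audio” are placeholders, and a real system would be trained on large amounts of labeled environmental sound and speech.

```python
import torch
import torch.nn as nn
import torchaudio

class SoundClassifier(nn.Module):
    """Small CNN that maps a mel spectrogram to one of n_classes sound labels."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),            # collapse frequency/time dimensions
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        x = self.features(spec)
        return self.classifier(x.flatten(1))    # (batch, n_classes) logits

# Placeholder labels; a real system would cover far more everyday sounds.
labels = ["speech", "dog_bark", "siren", "rain", "doorbell",
          "car_horn", "drilling", "footsteps", "thunder", "music"]

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)
model = SoundClassifier(n_classes=len(labels))

# Two seconds of fake mono audio stands in for a real clip loaded with torchaudio.load().
waveform = torch.randn(1, 16000 * 2)
spec = mel(waveform).unsqueeze(0)               # (batch=1, channel=1, mels, frames)
logits = model(spec)
print(labels[logits.argmax(dim=1).item()])      # untrained model: output is arbitrary
```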
Machine interpretation
This topic was discussed in a previous chapter. Like other things in the realm of AGI, it depends on continued improvements in the capabilities of software, including NLP and NLU.
Knowledge from Shabda
Several places on this website mention the use of computational audition and deep learning to gain knowledge from shabda, i.e. from the sounds of the Vedic literature (primarily Shruti). The oldest Vedic language (in which the Rig Veda is composed) is tonal and predates the invention of writing, so the sounds are paramount in its communication. Also, in Vedic philosophy (especially Mimamsa) it is believed that there is a natural relationship between “shabda” (sound) and its “artha” (meaning). This consideration is discussed in the chapter on the Vedic theory of language.
For now, this assumption has enough merit that the Susiddha project can begin exploring the hypothesis that techniques such as unsupervised deep learning can capture useful models and meaning from the Vedic literature (both as sound and as text). These models will reside in neural networks, and explaining how knowledge arises from neural networks is still a challenge for both computer science and neuroscience. But in the absence of full explanations, much progress is being made in generating knowledge via neural networks. One important area of research is question answering[1][2]; obviously we will want the AGI system to provide good answers and sage advice (based on all the knowledge in its information stores and neural networks) for the questions and problems we pose to it.
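As a small, concrete taste of the question-answering research cited above, the sketch below runs an off-the-shelf extractive QA model over a short passage. It assumes the Hugging Face transformers library (which downloads a default pretrained SQuAD-style model); the passage and question are illustrative placeholders, and this is not the neural generative architecture described in the referenced papers.

```python
# Minimal extractive question answering over a passage, assuming the
# Hugging Face `transformers` library is installed; the passage and
# question below are illustrative placeholders.
from transformers import pipeline

qa = pipeline("question-answering")   # loads a default pretrained extractive QA model

context = (
    "The Rig Veda was composed in an early form of Vedic Sanskrit and was "
    "transmitted orally, through precise recitation, long before it was written down."
)
question = "How was the Rig Veda transmitted before it was written down?"

result = qa(question=question, context=context)
print(result["answer"], round(result["score"], 3))
```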
Mitigation of the risks of AGI
This topic was discussed in a previous chapter. We assume that the risks of AGI can and will be mitigated, and that AGI/SSI can be enormously beneficial for humankind. We also assume that AGI/SSI can be made “dharmic” (in the terminology of the Vedic literature).
The continuance of technological civilization
Development of AGI and SSI obviously depends on civilization continuing at its current level of technology. It is sad to have to acknowledge that civilization could collapse for many reasons, such as climate change (with its destructive weather) overwhelming our ability to maintain the infrastructure of power, water, agriculture, transportation, and so on. Another reason might be extreme societal unrest and anarchy, perhaps caused by the growing inequality of wealth and property on this planet.
AGI and SSI could no doubt help solve many of the huge challenges facing humanity (such as climate change), but they will require another couple of decades of development before they can provide answers to these challenges.
Given all these assumptions, it is impossible to set any firm timeframe for the completion of such a project. Part of the research agenda will be to develop the “roadmap” and timeframe, along with milestones. However, the above assumptions seem reasonable and provide confidence to move forward with the project.
Given how fast the critical-path technologies are developing, it seems prudent to start working on the Susiddha AI project now, and to work as if it can be accomplished within twenty years.
Notes and References
[1] “Neural Generative Question Answering,” Jun Yin et al., IJCAI, 2016. https://arxiv.org/pdf/1512.01337v4.pdf
[2] “A Neural Network for Factoid Question Answering over Paragraphs,” Mohit Iyyer et al., EMNLP, 2014. https://cs.umd.edu/~miyyer/pubs/2014_qb_rnn.pdf