Shruti

Shruti is the earliest portion of the Vedic literature. Shruti literally means “hearing” or “listening”, and is also translated as “that which is heard or perceived”. It refers to the portion of the Vedic literature that was heard or “cognized” by ancient Rishis and then transmitted orally from generation to generation of Vedic pandits.

The main “text” of Shruti is the Rig Veda, the oldest work of the Vedic literature. Shruti is composed in the Vedic language, which preceded classical Sanskrit. This Vedic language is “tonal” in the sense that intonation, pitch, and accent are necessary to render the “text” properly. Although systems for writing out Vedic works were later developed, pandits and Vedic scholars say that the written text does not capture the nuances of the orally transmitted text.[1]

As such, an AGI system must take an “oral” and “aural” approach to learning the Shruti literature. The chapter on Shabda (sound, speech, and language representation) has already discussed how computational audition and deep learning will enable such “aural” processing within a decade or two. In essence, AGI will learn oral works aurally, and store them in deep artificial neural networks, analogous to how the human brain stores what it learns in biological neural networks.

Also, the field of neuroscience will provide more knowledge about how the brain actually stores and represents sound and speech, and such research findings will inform the implementation of aural deep learning in the AGI system.

In the next chapter on the Rig Veda, we give more detail about what we expect from such aural processing. For now, here is a simple example: let a pandit chant a verse of the Rig Veda; the AGI system will immediately recognize it and respond by chanting the verse that follows it. (The recognition and response are done directly from the deep neural network, not from an MP3 audio file.) Such a demonstration should soon be feasible, because some research systems[2] in the area of “music information retrieval”[3] can almost do this already.
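To make this concrete, here is a minimal sketch in Python of the recognize-and-respond loop. It is a sketch under stated assumptions, not the actual implementation: the embed() function below is a crude spectral average standing in for a trained deep network, and VerseIndex is a hypothetical store of one embedding per verse in recitation order. (In the envisioned system, both recognition and response would be carried out within the deep neural network itself.)

    # Minimal sketch of "recognize a chanted verse, respond with the next".
    # The embedding here is a crude spectral average; in the envisioned
    # system it would be produced by a trained deep network. All names
    # (embed, VerseIndex, etc.) are hypothetical.
    import numpy as np

    def embed(audio: np.ndarray, frame: int = 1024) -> np.ndarray:
        """Map a mono signal to a fixed-length, unit-norm embedding by
        averaging windowed short-time FFT magnitudes over time."""
        n_frames = len(audio) // frame
        frames = audio[: n_frames * frame].reshape(n_frames, frame)
        spectra = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
        vec = spectra.mean(axis=0)
        return vec / (np.linalg.norm(vec) + 1e-9)

    class VerseIndex:
        """One embedding per verse, stored in recitation order."""
        def __init__(self):
            self.embeddings = []

        def add(self, audio: np.ndarray) -> None:
            self.embeddings.append(embed(audio))

        def recognize(self, query: np.ndarray) -> int:
            """Index of the stored verse most similar to the query
            (dot product = cosine similarity for unit-norm vectors)."""
            sims = np.stack(self.embeddings) @ embed(query)
            return int(np.argmax(sims))

        def respond(self, query: np.ndarray) -> int:
            """Recognize the chanted verse; answer with the next one."""
            return (self.recognize(query) + 1) % len(self.embeddings)

    # Demo with synthetic "verses" (pure tones at distinct pitches):
    sr = 16000
    t = np.arange(sr) / sr
    verses = [np.sin(2 * np.pi * f * t) for f in (220.0, 440.0, 880.0)]
    index = VerseIndex()
    for v in verses:
        index.add(v)
    noisy = verses[0] + 0.05 * np.random.default_rng(0).normal(size=sr)
    assert index.respond(noisy) == 1  # verse 0 recognized; verse 1 follows

The point of the sketch is the shape of the computation rather than the embedding itself: once verses map to nearby points in an embedding space, recognition reduces to a nearest-neighbor lookup, and the response is simply the verse that comes next in recitation order.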

In the chapter on machine interpretation, we discussed the possibility of computers being able to interpret literature, although it could take a couple of decades before they do it well. However, as discussed in the next chapter on the Rig Veda, there may be good reasons for letting an AGI/SSI system develop its own interpretations of the Vedic literature.

Another item to be explored is what the models formed by deep learning will be able to do. On the surface, one might think these models can perform only tasks such as predicting what comes next, composing something in the same style, taking a thought (or sentence) as input and finding the closest matching thought, and recognizing patterns in the input data. These abilities have already been demonstrated in deep learning systems.
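As a toy illustration of the first two of these tasks (prediction and same-style composition), here is a character-level bigram model in Python. It is a deliberately simple statistical stand-in for a deep network; the training string, a rough transliteration of the opening words of the Rig Veda, serves only as sample text, and the function names are hypothetical.

    # Toy illustration of "predict what comes next" and "compose something
    # in the same style" using bigram counts, a very simple stand-in for
    # the far richer predictions a deep network would make.
    from collections import Counter, defaultdict
    import random

    def train_bigram(text: str) -> dict:
        """Count, for each character, which characters follow it."""
        model = defaultdict(Counter)
        for a, b in zip(text, text[1:]):
            model[a][b] += 1
        return model

    def predict_next(model: dict, ch: str) -> str:
        """Most likely character to follow ch."""
        return model[ch].most_common(1)[0][0]

    def compose(model: dict, start: str, length: int, seed: int = 0) -> str:
        """Sample a sequence with the same bigram statistics ("style")
        as the training text."""
        rng = random.Random(seed)
        out = [start]
        for _ in range(length):
            followers = model.get(out[-1])
            if not followers:
                break
            chars, weights = zip(*followers.items())
            out.append(rng.choices(chars, weights=weights)[0])
        return "".join(out)

    text = "agnim ile purohitam yajnasya devam ritvijam"
    model = train_bigram(text)
    print(predict_next(model, "a"))  # most likely successor of 'a'
    print(compose(model, "a", 20))   # short sample in the same 'style'

A deep network performs the same two operations, prediction and sampling, but with a model expressive enough to capture long-range structure rather than adjacent-character counts.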

But there is reason to believe that the deep models formed by processing the Vedic literature (especially Shruti) will contain knowledge about systems-level laws of nature (especially those which govern mind and society). The rishis cognized these laws and put their intuitions (received at the pashyantī level of awareness) into sounds. This occurred at the dawn of civilization, at a time when language was in its purest, most intuitive state, before the invention of writing.

To explain this reasoning would take us far afield, so it will be left for a future chapter. It will require consideration of the origins of language and of the Vedic theory of language (including sphota, nāma-rūpa, and shabda-brahman).

At any rate, we plan to harness the continual, exponential advances (in hardware engineering, computer science, cognitive science, and neuroscience) that are leading to AGI and SSI, and to use them to develop ways of processing Shruti directly from its sounds, thus helping create the cognitive processes we call the “Vedic core” of our system.

In the following chapter on the Rig Veda, which is the paragon of Shruti, we go deeper into the aural processing that Susiddha AI will do.


Notes and References

  1. Veda Recitation in Varanasi, Wayne Howard, Motilal Banarsidass, 1986, page ix
  2. An Associative Memorization Architecture of Extracted Musical Features From Audio Signals by Deep Learning Architecture, Tadaaki Niwa et al., Procedia Computer Science, November 3, 2014, http://www.sciencedirect.com/science/article/pii/S1877050914012812
  3. Music Information Retrieval: Recent Developments and Applications, Markus Schedl, Emilia Gomez, Julian Urbano, Foundations and Trends in Information Retrieval, Vol. 8, No. 2-3, 2014, http://www.nowpublishers.com/article/DownloadSummary/INR-042