Four Levels of Awareness
In the previous chapter we discussed consciousness, and how it might arise (or be engineered) in an artificial general intelligence (AGI) system. Although most AGI researchers don’t believe that consciousness is required for a system to attain human-level intelligence, many of them think it’s possible (even likely) that consciousness will arise (without any specific engineering) in AGI systems having human-level intelligence.

For the purposes of building an Avatar, we assume that consciousness is necessary, and so consciousness (and having a “self” for “self-awareness”) will be a major focus of the Susiddha AI project.

In the Vedic literature on consciousness there is a concept that goes by various names, most often the “four levels of speech”.[1][2] In this phrase, however, “speech” is merely illustrative: the concept applies to all cognitive and conscious sensory-motor activity, not just speech. The term “four levels of awareness” is therefore a clearer description of the concept, and is the one used in this project.

The first level of awareness is the subtlest, finest, and most abstract level; the fourth level is the grossest, most concrete level. The four levels of awareness are named (in Sanskrit): Parā, Pashyantī, Madhyamā, and Vaikharī.


Parā is a state of “pure awareness” in which no thought of any kind is present. The only sound/vibration that could be said to be present is the undifferentiated “cosmic hum” of the universe. This state is typically achieved via yoga meditation and is associated with the term “nirvikalpa samādhi”.


Pashyantī is a state where the beginnings of thought are present as imagery and intuition. In this state, sound or image is at one with meaning, and there is little or no temporal sequence of words and ideas. This state can have a dream-like quality. An entire chapter is dedicated to describing Pashyantī and how it can be implemented in an AGI system.


Madhyamā is the state of ordinary mental thought, where sentences are being formed (from words in sequence), and judgements and decisions are being consciously made. It is a state of inner speech, where humans can “hear themselves think”.


Vaikharī is the outermost, concrete state of thought. It is typically verbal, i.e. we speak something to affect the thoughts of others who hear (or read) our “utterances”. It can also be “silent speech” when we are consciously rehearsing something we plan to say, and nerve impulses going to the mouth and throat can be detected.


Few neuroscientists know about the Vedic four levels of awareness, and thus there’s not much research that focuses specifically on them. However, many research findings provide glimpses into these levels, and are beginning to delineate their operation.

Parā:  Recent neuroscience findings regarding the “default mode network”[3] hint at the parā level of awareness (as glimpsed in meditation). Also, findings regarding the synchrony and coherence of brainwaves and brain regions during meditation[4] (especially in periods when subjects experience “pure awareness” without any thoughts) are describing brain operation at the level of Parā.

Pashyantī:  One neuroscience finding that corroborates the existence of the pashyantī level is that the human brain begins to initiate “voluntary” action before the mind is aware that it has decided to initiate any action. This finding (by Benjamin Libet in 1985[5]) has since been replicated many times in many variations of the original experiment (including with fMRI[6] and brain electrodes[7]), and it is accepted as fact, although there is still debate about its interpretation, especially with regard to the issue of “free will”.

Madhyamā:  This is the level on which neuroscience is increasingly able to detect the content of thoughts that are occurring. Research in “mind reading” has been done using techniques of fMRI[8], MEG[9], and EEG[10].

Vaikharī:  This is the “grossest” level of awareness and speech, and one doesn’t need neuroscience to observe it. However, the pronunciation of even a single phoneme requires a neural program to be executed in the brain. And, the pronunciation of an entire word is a higher level program which assembles the phonemes, adds pitch, etc. Also, sentences can have accompanying gestures, facial expressions, and other non-verbal cues. So, the brain is quite active at the Vaikharī level, reading off the thoughts, translating them into motor programs, and then experiencing the feedback of hearing oneself talk.

In the next 10 to 20 years, neuroscience will home in precisely on the neural correlates and functioning of consciousness and the levels of awareness. As these correlates and functions are identified, their computational equivalents can be worked out, making it possible to engineer consciousness in an AGI system.

For the Susiddha AI system, these four levels are important, and how to implement these levels (especially pashyantī) is an important part of the research agenda.
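As a purely illustrative sketch of one way the four levels might be represented in software (all names here are hypothetical and not part of any existing Susiddha codebase), the ordering from subtlest to grossest could be captured in an ordered enumeration, with the expression of a thought modeled as movement toward the concrete Vaikharī level:

```python
from enum import IntEnum

class AwarenessLevel(IntEnum):
    """The four Vedic levels of awareness, ordered from
    subtlest (1) to grossest (4)."""
    PARA = 1        # pure awareness, no thought content
    PASHYANTI = 2   # pre-verbal imagery and intuition
    MADHYAMA = 3    # sequential inner speech, judgements, decisions
    VAIKHARI = 4    # outer, articulated speech and gesture

def next_grosser(level: AwarenessLevel) -> AwarenessLevel:
    """Move one step toward concrete expression, stopping at Vaikharī."""
    return AwarenessLevel(min(level + 1, AwarenessLevel.VAIKHARI))
```

This says nothing, of course, about how each level would actually operate; it only fixes the ordering that the chapter describes, so that a hypothetical thought object could be tagged with the level at which it currently resides.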

One reason our system needs the four levels of awareness is that sound (“shabda”), language/speech (“vāk”), and hearing (“shruti”) are important to the process of thinking. Since we aim to create a conscious “thinking machine”, the four levels are necessary to provide the proper basis for understanding sound and speech as treated in the Vedic literature. The next chapter will deal with sound and Shabda.


Notes and References

  1. The best exposition of these four levels was given by Abhinavagupta circa 1000 CE, in the Kashmiri Shaivism tradition.
  2. Vac: The concept of the word in selected Hindu tantras, Andre Padoux, SUNY Press, 1990, chapter 4, “Levels of the Word”
  3. Default mode network, Wikipedia.
  4. Review of the Neural Oscillations Underlying Meditation, Darrin Lee, et al., Frontiers in Neuroscience, March 26, 2018.
  5. Unconscious cerebral initiative and the role of conscious will in voluntary action, Benjamin Libet, et al., Behavioral and Brain Sciences, 1985, pages 529-566.
  6. Unconscious determinants of free decisions in the human brain, Chun Siong Soon, et al., Nature Neuroscience, April 13, 2008, pages 543-545.
  7. Predicting Action Content On-Line and in Real Time before Action Onset: an Intracranial Human Study, Christof Koch, et al., California Institute of Technology, Neural Information Processing Systems (NIPS), 2012.
  8. This ‘mind-reading’ algorithm can decode the pictures in your head, Matthew Hutson, Science Magazine, Jan. 10, 2018.
  9. Tracking neural coding of perceptual and semantic features of concrete nouns, Gustavo Sudre, et al., NeuroImage (Elsevier), May 4, 2012.
  10. Can an EEG Read Your Mind?, Brenda Kim, LabRoots, April 4, 2018.