Tuesday, December 10, 2013

VOCATIONALIZATION OF SANSKRIT EDUCATION - +xÉÖ¤ÉxvÉ& SÉiÉÖ¹]õªÉ¨ÉÂ

´ªÉÉQÉ& ºÉä´ÉÊiÉ EòÉxÉxÉÆ ºÉÖMɽþxÉÆ ËºÉ½þÉä MÉÖ½þÉÆ ºÉä´ÉiÉä* ½ÆþºÉ ºÉä´ÉÊiÉ {ÉÎnÂù¨ÉxÉÓ EÖòºÉÖʨÉiÉÉÆ MÉÞwÉ& ¶¨É¶ÉÉxɺlɱÉÒ*
ºÉÉvÉÖ ºÉä´ÉÊiÉ ºÉÉvÉÖ¨Éä´É ºÉiÉiÉÆ xÉÒSÉÉä%Ê{É xÉÒSÉVÉxÉÆ* ªÉÉ ªÉºªÉ |ÉEÞòÊiÉ& º´É¦ÉÉ´ÉVÉÊxÉiÉÉ EäòxÉÉÊ{É xÉ iªÉVªÉiÉä*

Everyone is a creature of his or her habits, which reflect cognitive abilities that are both inherited and culturally acquired. According to Ayurveda, the embryo inherits different but fixed characteristics (crystallized intelligence) from both parents, which in turn are derived from their respective parents. In this way there is a continuous chain of mixing of the genes passed on to the offspring. The Gotra system prevalent in our country, by requiring marriage outside one's paternal line, ensures a greater degree of genetic mixing, making our country a veritable storehouse of rich genetic strands. Bringing up a child in a particular cultural environment modifies the effect of these genetic traits (fluid intelligence). This net modified effect is called “º´É¦ÉÉ´É” – talent, or heritability coefficient. One can excel only if one's profession fully harmonizes with this talent – “º´É¦ÉÉ´É”. Hence it is called “º´ÉvɨÉÇ” – one's own cognitive ability. If one is engaged in a profession where one's heritability coefficients are not harmonized with one's activities, the work becomes ineffective or generates contradictory thoughts and performance, which may create serious problems. Hence the Gita (18/47) says: ¸ÉäªÉÉxº´Évɨ¨ÉÉæ Ê´ÉMÉÖhÉ& {É®úvɨ¨ÉÉÇiº´ÉxÉÖι`öiÉÉiÉÂ* º´É¦ÉÉ´ÉÊxɪÉiÉÆ Eò¨¨ÉÇ EÖò´ÉÇzÉÉ{xÉÉäÊiÉ ÊEòα¤É¹É¨ÉÂ* Even though not performed in the best possible manner, action according to one's own cognitive abilities is better than doing something contrary to them in the best possible manner. Following one's talent or cognitive abilities is not sinful or wrong.

DEFINING EXCELLENCE: YÉÉxÉÉªÉ EÞòiªÉÆ {É®ú¨ÉÆ ÊGòªÉɦªÉ&*

All descriptions of reality are based on existence (+κiÉi´É) that can be cognized (+ʦÉvÉäªÉi´É) and expressed in a language (´ÉÉSªÉi´É). Objects (ʴɹɪÉ) are special combinations of materials (pù´ªÉ) joined by energy that display different degrees of excellence (MÉÖhÉ) through interactions (Eò¨¨ÉÇ) with other bodies based on density gradient (Ê´É ¶É´nùÉä ʽþ ʴɶÉä¹ÉÉlÉÇ& ʺÉxÉÉäiÉä¤ÉÇxvÉ =SªÉiÉä* ʴɶÉä¹ÉähÉ ÊºÉxÉÉäiÉÒÊiÉ Ê´É¹ÉªÉÉä%iÉÉä ÊxɪÉɨÉEò&). These are described through the commonality (ºÉÉvɨ¨ªÉÇ) and differentiation (´Éèvɨ¨ªÉÇ) of objects with one another. Knowledge (YÉÉxɨÉÂ) is the commonality (ºÉɨÉÉxªÉ) evident in everything. Science (Ê´ÉYÉÉxɨÉÂ) is the special (ʴɶÉä¹É) knowledge about the identifying characteristics of a group or of each member of a group. Thus, science is a derivative of knowledge. The Vedas contain knowledge; hence science can be derived from the Vedas. Since knowledge is revealed as science, it is called Ê´ÉtÉ. Without action (Eò¨¨ÉÇ – which includes technology), knowledge is practically useless. However, knowledge cannot induce action directly. Only science (and the technology dependent on it) induces action through freewill. Hence action (Eò¨¨ÉÇ) is called +Ê´ÉtÉ. The Ishopanishad uses these two words in this sense (+Ê´ÉtɪÉÉÆ ¨ÉÞiªÉÖÆ iÉÒi´ÉÉÇ Ê´ÉtªÉɨÉÞiɨɶxÉÖiÉä). Knowledge and science are interdependent. Without proper knowledge and science, technology can be not only misleading but also dangerous.

Knowledge is not data, but the 'awareness' of exposure or of the result of measurement associated with any object, energy or interaction, stored in memory as an invariant concept that can be retrieved (º¨ÉÞÊiÉ{ÉÚ´ÉÉÇxÉÖ¦ÉÚiÉÉJªÉ ʴɹɪÉÆ YÉÉxɨÉÖSªÉiÉä). It describes through a language (xÉɨÉ) the defining characteristics of some previously known thing – physical properties (°ü{É) and chemical interactions (Eò¨¨ÉÇ) – by giving it a name that remains the same as a concept at all times, thus immune to spatiotemporal variations, till it is modified by fresh inputs. The variations of the object, energy or interaction under different specific circumstances, and their predetermined results, also form part of knowledge. In a mathematical format (ºÉÉRÂóEäòÊiÉEò), knowledge depicts the right-hand side of an equation or inequality, representing determinism (EòÉ®úhÉMÉÖhÉ{ÉÚÌ´ÉEòÉ). Once the parameters represented by the left-hand side are chosen and the special conditions represented by the equality sign are met, the right-hand side becomes deterministic and starts a new chain (|ÉÉ®ú¤vÉ). In ancient times, this knowledge was technically called +ÉÎx´ÉÊIÉEòÒ, which literally means describable facts about the invariant nature of everything. The Vedic research methodology is also called +ÉÎx´ÉÊIÉEòÒ.

Engineering and Management, which deal with the efficient use of objects and of persons respectively, are related to the left-hand side of an equation – freewill (ÊGòªÉ¨ÉÉhÉ Eò¨¨ÉÇ). This presupposes knowledge of the deterministic behavior of objects or humans, which can then be chosen or effectively directed to create something or to function in a desired manner in a maximally economic and regenerative way. This was called jɪÉÒ – literally the three phases of mass (the three states of solid, fluid and plasma – wÉÖ´É, vÉjÉÇ, vɯûhÉ), energy and radiation in all combinations – physical and chemical properties (protestation, loyalty and expectation in the case of humans). Since the three Vedas contain knowledge about these three aspects (@ñSÉɨÉÚÌkÉ& ªÉÉVÉÖË¹É MÉÊiÉ ºÉÉ¨É¨ÉªÉ iÉäVÉ&), they are also called jɪÉÒ. The responsive mechanism was called nùhb÷xÉÒÊiÉ – the principle of inducement through reward and punishment (essentially material addition or reduction, both for men in management and for materials in engineering). The regenerative mechanism was called ´ÉÉkÉÉÇ – problem solving. It solved the problems of deficiency through regenerative agriculture, dairy farming, marketing, etc., and kept machines production-ready through maintenance. These four basic tenets, equally valid for both technology and management, are immutable – invariant in time, space and culture, leading to deterministic results. Hence from Manu to Brihaspati (in his ¤ÉɽÇþ{ÉiªÉ +lÉǶÉɺjɨÉÂ, which Kautilya followed in his +lÉǶÉɺjɨÉÂ) to later-day authors like Kamandaki, everyone has described +ÉÎx´ÉÊIÉEòÒ jɪÉÒ ´ÉÉkÉÉÇ nùhb÷xÉÒËiÉ SÉ ¶ÉÉ·ÉiÉÒ. Lack of knowledge of the deterministic behavior needed to guide the choice of the freewill components has led engineering and management astray. The fast-changing technology and management principles point to inherent deficiencies that need immediate rectification. But first, what causes such distortion?

According to the Vedic tradition, excellence in action is Dharma and non-excellence is Adharma (EÖò¶É±ÉÉEÖò¶É±ÉÉèEò¨ÉÉê vɨÉÉÇvɨÉÉê). Excellence ({É®ú¨ÉÉä ʽþ ¨ÉxjÉ&) is the harmonization of one's talents with thought and action (¨ÉxɺªÉäEäò ´ÉSɺªÉäEäò Eò¨¨ÉÇhªÉäEäò ¨É½þÉi¨ÉxÉɨÉÂ). Its opposite is called ÊVÉÀ – ¨ÉxɺªÉäxªÉiÉ ´ÉSɺªÉäxªÉiÉ Eò¨¨ÉÇhªÉäxªÉiÉ nÖù®úÉi¨ÉxÉɨÉÂ. Excellence has two components: 1) acquisition of the presently unavailable – orientation towards growth (ªÉÉäMÉ-¦ÉʴɹªÉ±ÉɦɺªÉ ºÉRÂóMɨÉÉlÉǨÉÂ), and 2) maintenance of the available assets (IÉä¨É- +iÉÒiÉ ±ÉɦɺªÉ ºÉÖ®úIÉhÉÉlÉǨÉÂ), which includes provisions for disaster management (+É{Éi|É{ÉzɺªÉ SÉ ¨ÉÉäIÉhÉÉlÉǨÉÂ). When these principles of ªÉÉäMÉ-IÉä¨É are applied to the body, the result is called nourishment ({ÉÖι]õ); for the sense organs, happiness (ºÉÖJÉ); for the mind, contentment (iÉÖι]õ); and for the self or consciousness, peace (¶ÉÉÎxiÉ). Increasing prosperity of the body and sense organs is called development orientation (+¦ªÉÖnùªÉ), and that of the mind and soul is called the ultimate goal (ÊxÉ&¸ÉäªÉºÉ).

THE MATERIALISTIC WORLD - ºÉ´ÉäÇ MÉÖhÉÉ& EòÉ\SÉxɨÉɸɪÉxiÉä*

Everyone wants to be happy. Happiness (ºÉÖJÉ) in the world depends on six factors. These are:
1. Regular source of income to meet one's requirements (+lÉÉÇMɨÉÊxÉiªÉ¨ÉÂ).
2. Freedom from any disease (+®úÉäÊMÉiÉÉ).
3. A loving spouse (Ê|ɪÉÉ ¦ÉɪÉÉÇ).
4. A soft-spoken and well-behaved spouse (Ê|ɪɴÉÉÊnùxÉÒ ¦ÉɪÉÉÇ).
5. Obedient children (´É¶ªÉ& {ÉÖjÉ&).
6. Productive vocational education (+lÉÇEò®úÒ Ê´ÉtÉ).

Of these, the last is most important, because, happiness is defined as that, which fulfills the following three conditions:
- That which is not curtailed by sorrow (ªÉzÉnÖù&JÉä¹ÉÖºÉÆʦÉzÉ),
- that which does not get exhausted (xÉ SÉ OɺiɨÉxÉxiÉ®ú¨ÉÂ), and
- that which is in conformity with one's desires (+ʦɱÉʹÉiɨÉÉ{xÉÉäÊiÉ).
Cherished happiness fulfills all three conditions (ºÉ& ºÉÖJÉ& ºÉ¨{Énùɺ{Énù¨ÉÂ). While we cannot do much about the first two, the third can in most cases be achieved with money. With the right vocational education, we can get a job and earn good money. With money, we can treat diseases, get a good spouse (if luck favors), and bring up our children properly. In today's world, money is the most cherished object (vÉxÉÉÊxÉ ¶±ÉÉPÉxÉÒªÉÉÊxÉ). Thus, choosing the right branch of education has become most important. The issue becomes clear if we examine the decline in popularity of vernacular languages like Odia, and the increasing popularity of English in general and of obscure languages like Pali in the Civil Service Examinations in particular.

In earlier times, our daily routine was regulated by the seasons. Sporadic hard work in some seasons was followed by long periods of leisure and enjoyment in others. People enjoyed cultural activities including music, drama, etc. As music climaxed, dopamine flooded the right caudate nucleus, correlating with the listener's experience of anticipation; its release from the synapses of neurons in the right nucleus accumbens (NAc) increased heart rate, sweating and rapid breathing, with a drop in skin temperature indicating emotional arousal. This music generated interest in the vernacular languages. But modern life is stereotyped, without such variation. Electronic media have demolished cultural barriers. The "variety" in culture today is a mix with emphasis on "change", which is contrary to happiness. Our body's internal clock is set for two 12-hour periods of light and darkness. Shift duty is affecting the immune system (NFIL3 genes keep the production of TH17 cells synchronized with the periods of light and darkness). With such job-shifts, all five recognized types of boredom – searching, calibrated, indifferent, reactant and apathetic – increase. Thus interest in literature, which was the main source of entertainment, has declined. Increased competition has forced students to search for avenues that give maximum return with minimum effort. Thus English and other foreign languages that facilitate getting jobs are preferred over vernacular languages like Odia or Bengali. Languages like Pali, or subjects like geography, are preferred not out of interest in them, but for getting more marks with comparatively less study. In IT and other international business sectors, learning a foreign language is considered an added asset.

In this scenario, what is the future of the Sanskrit language, or for that matter, of the Sanskrit Universities? The job prospects of Sanskrit scholars are very limited. The problem will become increasingly acute in times to come, particularly when the quality of teachers is declining rapidly. Without jobs, interest in Sanskrit degrees will decline. Can Sanskrit Universities, where compartmentalization due to reductionism is distorting textual meaning (the Vedas can be interpreted correctly only after reading all the Vedangas), survive on Government grants alone? With increasing emphasis on weeding out subsidies to reduce the current account deficit, they may not survive long. Their survival depends on their vocationalization. Even top institutions like IIM Calcutta are changing to tech-enabled education and restructuring their curriculum to stay afloat. Why not the Sanskrit Universities?

This is the age of science. But physics is currently at a crossroads. There are a large number of different approaches to the foundations of Quantum Mechanics (QM). Each approach modifies the theory by introducing some new aspect with new equations that need to be interpreted; thus there are many interpretations of QM. Every theory has its own model of reality, and there is no unanimity regarding what constitutes reality. Quantum Mechanics is not compatible with Einstein's Relativity. General Relativity does not work beyond the solar system. Most of the 'established theories' have been questioned, as the latest observations find mind-boggling anomalies between theoretical prediction and actual measurement. In the hierarchy problem of dark energy, theory and observation differ by mind-boggling factors ranging from 10^57 to 10^120. Similarly, fantasies like extra dimensions have not been proved even after about a century. In short, there is a severe crisis in physics, though no one admits it publicly for fear that international funding will dry up. The Sanskrit Universities should use this opportunity to fill the void by scientifically interpreting the Vedas and presenting the science inherent in them.

The Vedas contain knowledge from which science can be derived. In various international fora, we have challenged most of the leading scientific theories by pointing out their deficiencies and offering alternative theories derived from the Vedas. These are in the public domain, including on our blog: basudeba.blogspot.com. We are receiving many queries indicating interest in Vedic science. There is ample scope for research in Vedic science. But scientists do not understand the Vedas, and Vedic scholars do not know science.

Knowledge cannot be imparted – it has to be acquired (xÉɪɨÉÉi¨ÉÉ |É´ÉSÉxÉäxÉ ±É¦ªÉÉä xÉ ¨ÉävɪÉÉ xÉ ´É½ÖþvÉÉ ¸ÉÖiÉäxÉ* ªÉ¨Éä´Éè¹É ¤ÉÞhÉÖiÉä iÉäxÉ ±É¦ªÉÉä iɺªÉä¹É +Éi¨ÉÉ Ê´É¤ÉÞhÉÖiÉä iÉxÉÖÆ º´ÉɨÉÂ). Hence the schools are called Ê´ÉtÉ±ÉªÉ and not YÉÉxÉɱɪÉ. If the student contemplates a question with an open mind (Ê´ÉÊSÉxiɪÉäÊzÉiªÉ¨ÉÖnùÉ®únù¶ÉÇxɨÉÂ), the answer is revealed to him in a flash. When that flash will appear depends on many factors, which we are not discussing here. Memorization and reproduction of facts is not education. The teacher only teaches – tries to enable the student – to solve problems by himself. The student solves the problems using his talent – heritability coefficients – which is inborn. Education and practice only enhance skill – cognitive abilities. Since talent is inborn, the same education will not develop the same cognitive abilities in all students.

THE PUZZLE: BEòÉänù®ú ºÉ¨ÉÖnÂù¦ÉÚiÉÉ BEòxÉIÉjÉVÉÉiÉEòÉ&* xÉ ¦É´ÉÎxiÉ ºÉ¨ÉÉ& ¶ÉÒ±ÉèªÉÇlÉÉ ´Énù®úÒEòh]õEòÉ&*

Whether and how heritability coefficients vary across specific cognitive abilities has been debated for ages. In the West, the traditional "investment" theory of cognitive abilities classifies intelligence into two main categories, fluid and crystallized (different in detail from our view). Differences in fluid intelligence are thought to reflect novel, on-the-spot reasoning, whereas differences in crystallized intelligence are thought to reflect previously acquired knowledge and skills. Crystallized intelligence develops through the investment of fluid intelligence in a particular body of knowledge. Genetically, this theory predicts that in the general population, in which people differ in their educational experiences, the heritability of crystallized intelligence will be lower than the heritability of fluid intelligence. The theory assumes that fluid intelligence is heavily influenced by genes and relatively fixed, whereas crystallized intelligence is more heavily dependent on acquired skills and learning opportunities.

In a recent study, "On the Nature and Nurture of Intelligence and Specific Cognitive Abilities: The More Heritable, the More Culture Dependent", published in Psychological Science (DOI: 10.1177/0956797613493292), researchers investigated, both theoretically and empirically, how heritability coefficients vary across specific cognitive abilities. They assessed the "cultural load" of various cognitive abilities by taking the average percentage of test items that were adjusted when each test was adapted for use in 13 different countries. The findings suggest that:
1. In adult samples, culture-loaded subtests tend to demonstrate greater heritability coefficients than do culture-reduced subtests; and
2. In samples of both adults and children, a subtest's proportion of variance shared with general intelligence is a function of its cultural load.
These findings imply that the extent to which a test of cognitive ability correlates with Intelligence Quotient (IQ) is the extent to which it reflects societal demands, not cognitive demands. "IQ" here refers to the general intelligence factor, technically defined as the first factor derived from a factor analysis of a diverse battery of cognitive tests, representing a diverse sample of the general population and explaining the largest source of variance in the dataset. Further, in adults, higher heritability of a cognitive test reflects greater dependence of the test on culture. The effects were medium-to-large and statistically significant. Highly culturally loaded tests such as Vocabulary, Spelling, and Information had relatively high heritability coefficients and were also highly related to IQ. This counterintuitive finding is inconsistent with the traditional investment theory and has aggravated the nature-nurture debate about intelligence.
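
To make the "first factor" definition concrete, here is a minimal Python sketch (using numpy, with entirely simulated scores; the test battery, loadings and sample size are hypothetical) of how a general factor is extracted as the largest-variance component of a battery's correlation matrix:

```python
# A minimal sketch, with simulated data only, of extracting a general
# factor "g" as the first (largest-variance) factor of a test battery.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 6

# Simulate scores that share one latent factor plus test-specific noise.
g = rng.normal(size=(n_people, 1))                   # latent general ability
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.7, 0.6])  # how much each test taps g
scores = g * loadings + rng.normal(scale=0.6, size=(n_people, n_tests))

# First principal factor of the correlation matrix.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)              # ascending eigenvalues
first = eigvecs[:, -1] * np.sqrt(eigvals[-1])        # loadings on "g"

print("share of variance explained by g:", eigvals[-1] / n_tests)
print("test loadings on g:", np.round(np.abs(first), 2))
```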

The question – why did the most culturally loaded tests have the highest heritability coefficients – raises many puzzles. Society provides a relatively homogeneous learning environment: school systems are broadly the same, and everyone in a class has the same educational experiences; yet cognitive ability varies. If the traditional investment theory is correct, and crystallized intelligence (vocabulary, general knowledge) is more cognitively demanding than solving the most complex abstract reasoning tests, then tests such as vocabulary would have to depend more on IQ than on fluid intelligence. But why would tests such as vocabulary carry a higher cognitive demand than tests that are less culturally loaded but more cognitively complex (such as tests of abstract reasoning)? Also, the theory does not explain why the heritability of IQ increases linearly from childhood to young adulthood. One way out is to abandon some long-held assumptions in the West. These findings are best understood in terms of genotype-environment covariance, in which cognitive abilities and knowledge dynamically feed off each other. Those with a proclivity to engage in cognitive complexity will tend to seek out intellectually demanding environments. As they develop higher levels of cognitive ability, they will also tend to achieve relatively higher levels of knowledge. More knowledge makes it more likely that they will eventually end up in more cognitively demanding environments, which will facilitate the development of an even wider range of knowledge and skills.

Societal demands influence the development and interaction of multiple cognitive abilities and bodies of knowledge, causing positive correlations among them and giving rise to the general intelligence factor. These findings do not mean that differences in intelligence are entirely determined by culture; the structure of cognitive abilities is strongly influenced by genes as well. What the findings do suggest is that culture, education, and experience play a much greater role in the development of intelligence than mainstream Western theories of intelligence have assumed. Behavioral genetics researchers – who parse out genetic and environmental sources of variation – have often operated on the assumption that genotype and environment are independent and do not co-vary. These findings suggest they very much do co-vary.

Attempts have been made to link perception and intelligence – for instance, do intelligent people see more detail in a scene? Scientists at the University of Rochester and at Vanderbilt University have now demonstrated that high IQ may be best predicted by combining what we perceive and what we cannot. In two studies in the journal Current Biology, researchers found that performance on a simple motion-detection test correlated with IQ more strongly than any other sensory-intelligence link ever explored – but the high-IQ participants were not simply scoring better overall. Individuals with high IQ indeed detected movement accurately within the smallest frame, a finding that suggests that the ability to rapidly process information contributes to intelligence. More intriguing was the fact that subjects with higher IQ struggled more than other subjects to detect motion in the largest frame. The authors suggest that the findings underscore how intelligence requires that we think fast but focus selectively, ignoring distractions. Earlier, analysts of US Army data claiming a black-white difference invented "Spearman's hypothesis" to argue that "the magnitude of the black-white differences on tests of cognitive ability are directly proportional to the test's correlation with IQ". In Psychology, Public Policy, and Law, 2005, Vol. 11 (DOI: 10.1037/1076-8971.11.2.235), the authors made the case that this proves that black-white differences must be genetic in origin. But the recent findings discussed above suggest just the opposite: the bigger the difference in cognitive ability between blacks and whites, the more the difference is determined by cultural influences. More study of the role of genotype-environment covariance in the development of cognitive ability needs to be done.

This conclusion is the same as the one we discussed at the beginning. The Vedic principles are re-establishing themselves. According to Samudra, the originator of Samudrika, the heritability coefficients can be judged from the human body by analyzing a conglomeration of eight characteristics: “ºÉƽþÊiɺÉÉ®úÉxÉÖEòºxÉä½þÉäx¨ÉÉxÉ|ɨÉÉhɨÉÉxÉÉÊxÉ* IÉäjÉÉÊhÉ |ÉEÞòÊiɺlÉÉä ʨɸɨÉäiÉnùÊ{É ¶ÉÉ®úÒ®ú¨ÉÂ*”. He has given detailed descriptions. Similarly, according to Manu, analysis of a conglomeration of seven external signs indicates the cognitive abilities: “+ÉEòÉ®èúÊ®úÎRÂóMÉiÉèMÉÇiªÉÉ SÉä¹]õªÉÉ ¦ÉÉʹÉiÉäxÉ SÉ* xÉäjÉ´ÉCjÉÊ´ÉEòÉ®èú¶SÉ MÉÞÁiÉäxiÉMÉÇiÉÆ ¨ÉxÉ&*”. The mechanism of perceptive or cognitive ability has been elaborately discussed by Gautama (|É´ÉkÉÇxÉɱÉIÉhÉÉ nùÉä¹ÉÉ&) and, in its commentary, by Vatsayana (Eò¨¨ÉDZÉIÉhÉÉ& JɱÉÖ ®úHòÊuù¹]õ¨ÉÚføÉ&, ®úHòÉä ʽþ iÉiÉ Eò¨¨ÉÇ EÖò¯ûiÉä ªÉäxÉ Eò¨¨ÉÇhÉÉ ºÉÖJÉÆ nÖù&JÉÆ ´ÉÉ ±É¦ÉiÉä, iÉlÉÉ Êuù¹]õºiÉlÉÉ ¨ÉÚfø <ÊiÉ). Unfortunately, people have forgotten how to interpret the Vedas and allied texts, as the meanings of most of the technical terms used in the Vedas and related literature have been forgotten. We are not discussing these in detail here.

The Vedic Vak is not limited to speech. It includes all perceptions, as they are expressed only through speech forms; we cannot even think without a speech form. Hence Patanjali in his Mahabhashya says: ´ªÉÉEò®úhÉÉä{ɨÉÉxÉEòÉä¶ÉÉ{iÉ´ÉÉCªÉÉnÂù ´ªÉ´É½þÉ®úiɶSÉ iÉjÉ ¶ÉÊHòOÉɽþEòʶɮúÉä¨ÉhÉä´ÉÞÇrù´ªÉ´É½þÉ®úºªÉ <ÊiÉ |ÉɽÖþÌ´ÉrÆùɺÉ&*. In the mechanism of perception, each sense organ moves outward ({É®úÉÎ\SÉJÉÉÊxÉ) to perceive a different kind of impulse related to the fundamental forces of Nature. The eyes see by comparing the electromagnetic field (={ɪÉɨÉ) set up by the object with that of the electrons in our cornea, which serves as the unit; we cannot see in total darkness because there is nothing comparable to this unit. The tongue perceives when the object dissolves in the mouth, the macro equivalent of the weak nuclear interaction (´ÉʽþªÉÉǨÉ). The nose perceives when the finer parts of an object are brought into close contact with the smell buds, the macro equivalent of the strong nuclear interaction (+xiɪÉÉǨÉ). The skin perceives when some external object reaches or moves out of it, the macro equivalent of radioactive disintegration (beta decay – ªÉÉiɪÉɨÉ). The ear perceives when the impulse reaches it by traveling through space, the macro equivalent of the gravitational interaction (=tɨÉ). Individually these perceptions have no meaning. They become information and acquire meaning only when they are turned inwards to be pooled in memory (|ÉYÉÉxɨÉxÉ) and interpreted by intelligence (Ê´ÉYÉÉxÉ or ¤ÉÖÊrù).

At any moment, our sense organs are bombarded by a multitude of stimuli. But only one of them is given a clear channel at any instant to go up to the thalamus and then to the cerebral cortex, so that, like photographic frames, we perceive one discrete frame at every instant; owing to the high speed of their reception, the frames mix and appear continuous. This happens through an active transport system working against the concentration gradient with an input of energy, like the sodium-potassium pump in our body, which moves the two ions in opposite directions across the plasma membrane through the breakdown of adenosine triphosphate (ATP). The concentrations of the two ions on the two sides of the cell membrane are interdependent, suggesting that the same carrier transports both ions. Similarly, one and the same carrier transports the external stimuli through the sense organs in the opposite direction to the cerebral cortex. This carrier is the mind (|ÉYÉÉxɨÉxÉ).

The first experience in the decoding of the signals is the sensory impression, which is uni-modal and without any symbolism (ÊxÉÌ´ÉEò±{ÉEò), because it is an impression in isolation. This is the simplest of the transactions that occur between our sensory modes and the observable. A sensation is a combination of sensory impressions and is multi-modal, with alternate symbolism (ºÉÊ´ÉEò±{ÉEò). Since measurement is a process of comparison between similars, perception occurs when sensation is accompanied by an interpretation with reference to what has already been experienced and stored in memory. Measurement is done at a time t, when the result is frozen for use at other times t′, t′′, etc., even though the observed evolves in time. As experience becomes less immediate and more remote, and as the processes of inference increase, cognition enters the picture. Then thinking and knowing become predominantly operative (Ê´ÉYÉÉxÉ or ¤ÉÖÊrù).

In the perception "this (object) is like that (the concept)", one can describe "that" only if one has perceived it earlier. Only then do we give the concept a name (following the principle described by Patanjali) and use it. Perception requires prior measurement of multiple aspects or fields and the storing of the results of measurement in a centralized system (memory), to be retrieved when needed. To understand a certain aspect, we refer to this data bank and see whether it (+ʦÉYÉÉxÉ – the specific defining characteristic) matches any of the previous readings or not. The answer is either yes or no; this is why the binary number system is used in computers. The commentator Vatsayana calls this naming process, covering two types of perception (whole by part, or one by another) (´ªÉ{Énäù¶É& - ÊGòªÉÉEò®úhɪÉÉä& EòkÉÉÇ ºÉ¨´ÉxvɺªÉÉʦÉvÉÉxɨÉÂ), general definition (ºÉ{ÉÊ®ú¹EòÉ®úÉ ºÉÆYÉÉ). He divides it into two categories: causative definition (ÊxÉʨÉkɺÉÆYÉÉ) and descriptive definition (+ʦɴªÉ\VÉEòºÉÆYÉÉ). Causative definition is the general description that arises out of the perception (ºÉɨÉÉxªÉYÉÉxÉ) of a concept following the rules of cognitive assimilation (¶ÉÊHòOɽþ). Descriptive definition is the imposition of other characteristics due to similarity (+xªÉäxÉÉxªÉºªÉ ´ªÉ{Énäù¶É&). This imposition differs for each individual, based on both the heritability coefficient (natural attraction or repulsion) and past experience (description), which is regulated by the cultural environment (naming). This is what the Western researchers have found. It is not Nature versus nurture; it is Nature-cum-nurture.
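
As a toy illustration of the yes/no matching described above, here is a short Python sketch; the stored "concepts" and their defining characteristics are invented purely for illustration:

```python
# A toy sketch of the binary matching described above: a new percept is
# compared against concepts already stored in memory; each comparison
# returns a yes/no answer. All names and traits here are hypothetical.
memory = {
    "cow":  {"four-legged", "horned", "gives-milk"},
    "crow": {"two-legged", "winged", "black"},
}

def recognize(percept: set) -> str:
    for name, traits in memory.items():
        # Binary test: do the stored defining characteristics match?
        if traits <= percept:                       # yes
            return f"this is like that: '{name}'"
    return "no match: a new concept to be named and stored"  # no

print(recognize({"four-legged", "horned", "gives-milk", "white"}))
print(recognize({"six-legged", "winged"}))
```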

PREVAILING SYSTEM: ÊxÉ´ÉænùÉnùÉi¨ÉºÉ¨¤ÉÉävÉ&* ´ÉÉSÉɱÉi´É\SÉ {ÉÉÎhb÷i´Éä* {ÉÖVªÉÉ ´ÉÉMnÖùʹÉEòÉ ÊuùVÉÉ&*

In the present-day context of industrialization and economic redistribution, diversified vocational courses have been considered the panacea for all evils. Unfortunately, vocational education has turned into a dumping ground for children, their proclivity for learning the subject never properly judged. In 1936, Gandhiji started the "Nai Talim" Sangh, which imparted training to students for identified occupations. Kulandaiswamy described vocational education as something designed to prepare skilled personnel at lower levels of qualification for one or more jobs, which must include general education, practical training, and related theory. This is the right approach, but it need not aim at skills at lower levels only – it can aim for excellence.

The Radhakrishnan Commission (1948), which had eminent academic leaders from universities in the UK and the US among its members, deemed vocationalization not a break-away from higher education and recommended it in the Intermediate Courses. The Mudaliar Commission (1952) advocated it at the post-secondary stage as a terminal stage that automatically sieved the audience for higher education, as a result of which multipurpose schools were set up. The Education Commission (1964-66) recommended the introduction of Work Experience to combine education with work in schools; it recommended the split between the two streams, academic and vocational, and also the strengthening of research in the university system. The National Policy on Education (1986) was brought into force in 1991 and gave shape to the Action Plan of 1992 for the NPE. It recommended that schools and the community be brought closer through suitable programs of mutual service and support. It led to the encouragement of emerging sectors like Information Technology, which witnessed an upsurge following the opening up of the technical education sector, particularly through capacity expansion in the private sector. Although the 1986 policy spoke against the commercialization of education, the rapid expansion of private institutions has resulted in a deterioration in quality. In some sectors, corruption, ego clashes between administering/regulating agencies (e.g. medical education) and concerns over quality are alarming, necessitating a review of all deemed universities by the Centre.

In his Independence Day speech of 2011, the Prime Minister announced the setting up of a commission "to make suggestions for improvements at all levels of education". The proposed commission should recommend what a new National Policy on Education could be. It is expected to be headed by an eminent educationist, assisted by experts from the fields of higher, technical, medical, secondary, elementary, vocational and other sectors of education, and to draw inputs from the reports of the National Knowledge Commission, the Yashpal Committee and the Valinathan Committee. The proposed commission would address all sectors of education irrespective of the domain interests of Ministries. Here, then, is an opportunity for the Sanskrit Universities to present their case for vocationalization. As can be seen, all the above Committees, which consisted of Western experts or people with Western education, considered the issue from the Western perspective of guided education, which spreads superstition with its "memorize and reproduce" system instead of the desired self-development.

Education sector reviews consist of a critical analysis of both the internal and the external aspects of an education system. Education institutions review the internal dynamics of the education system from various angles, as well as the external conditions affecting educational provision, such as the macro-economic and socio-demographic contexts. Education sector analysis includes: (i) macro-economic and socio-demographic frameworks; (ii) access and equity issues in education; (iii) quality of education; (iv) external efficiency; (v) costs and financing of education; and (vi) managerial and institutional aspects. In all these high-sounding but open-ended goals, real education – which should aim at enabling the student to solve the problems he faces in life all by himself – is overtaken by guided education, with maximum emphasis on economic and material considerations. Whatever a country's level of development, there is a demand for education reform in order to face political, social and cultural changes, scientific and technological transformations, and the need for reconstruction in the wake of armed conflicts and social unrest. Yet 90% of the textbook material is either not required or ineffective for the purpose. Value education is more economic than moral. Modern economic theories are more redistributive (exploitative) than regenerative.

If the UGC gives only the broadest of guidelines and leaves it to the Universities to decide the content, it will lead to a lot of incoherence between them. The right approach would look at how the curriculum can be slimmed down from an "over-prescriptive" approach that contains too much that is not essential. It would emphasize a core of vital knowledge while leaving teachers free to decide how this should be conveyed. Only competent (in quality, not position) teachers should be allowed to decide the specifics of what is taught within a broad and balanced, centrally agreed framework. But in all this, the real goal of education must be preserved.

We have already shown that the West is rediscovering the Vedic values even in the field of education, including cognitive abilities. Excellence can be achieved only if one has the propensity (|É´ÉÞÊkÉ) for the subject of knowledge. The propensity is generated only when the knowledge is eminently meaningful (+lÉÇEò®úÒ Ê´ÉtÉ) to the person concerned, for all knowledge is not equally meaningful to all persons. “º´É¦ÉÉ´É” is a special attribute (ʴɶÉä¹É) of every individual. Hence it has to be tested differently from the general attributes (ºÉɨÉÉxªÉ). Proficiency in general education cannot determine the inherent “º´É¦ÉÉ´É” – talent or instinct – that determines someone's cognitive response. Hence, for vocational and higher education, dependence on marks obtained in general education or on a common entrance test can be not only ineffective but also counter-productive.

EXCELLENCE IN VOCATIONALIZATION: EòkÉÉÇ®ú& ºÉֱɦÉÉ ±ÉÉäEäò Ê´ÉYÉÉiÉÉ®úºiÉÖ nÖù±ÉǦÉÉ&*

Vocationalization is deemed to be predominantly practice-oriented, with theory playing an insignificant role (the difference between a mechanic and an engineer). Theory without technology is lame, but technology without theory is blind; they require each other. The practitioner must "know" that what he or she is doing is right; otherwise it will lead to unintended consequences. Thus, the Vedic method advocates theory before technology. For all these reasons, all ancient texts prescribed four preliminary expositions – +xÉÖ¤ÉxvÉ& SÉiÉÖ¹]õªÉ¨É – that must be fulfilled before any subject is taught. These are:
ʴɹɪÉ& Eò& ¡ò±ÉÆ ËEò Eò& ºÉ¨¤ÉxvÉ& EòÉä%ÊvÉEòÉ®ú´ÉÉxÉ * *

According to the traditional method, any discourse or book starts with four initial or introductory stipulations (+xÉÖ´ÉxPÉ), which define the subject (ʴɹɪÉ&), the necessity of the subject for fulfilling a particular need (|ɪÉÉäVÉxÉ), the relationship of the knowledge to the object sought to be achieved (ºÉ¨¤ÉxvÉ&), and the category of persons that would benefit best from such knowledge (+ÊvÉEòÉ®úÒ). Let us consider one example. The ´Éè¶ÉäʹÉEò ºÉÚjɨÉ begins with “+lÉÉiÉÉä vɨ¨ÉÈ ´ªÉÉJªÉɺÉɨÉ&”. Literally this means: now begins the exposition of Dharma. ºÉÚjɨÉ literally means a thread, which is used to join objects in a proper sequence, like a necklace; thus it cannot be read in isolation. vɨ¨ÉÈ here indicates the subject of the text (ʴɹɪÉ&).

Dharma is different from religion; it is a way of life. The word dharma literally means that which upholds. To understand Dharma, take the example of a boat moving in a river. It is held partially by water and partially by air. Both water and air have their distinctive dynamics independent of the boat. If the boat harmonizes its motion with those which hold it – water and air – it will reach its destination safely; otherwise it will run into trouble. This is the concept of upholding, vɨ¨ÉÈ – harmonization with one's environment. But unless we know the nature or dynamics of our environment, we cannot harmonize our actions with it. Hence the necessity of knowing about everything in our environment. The word +iÉÉä literally means "by this teaching". Here it conveys the necessity of knowledge of the special nature of the objects in the Universe – hence ´Éè¶ÉäʹÉEò – for fulfilling any desired need (|ɪÉÉäVÉxÉ).

The word +lÉ literally means "hereafter", which indicates that something has been done before this. Traditionally, the praṇava (Om) is used before reciting the Vedas; +lÉ is used before reading everything else. Here the word +lÉ indicates that the ´Éè¶ÉäʹÉEò ºÉÚjɨÉ should be read only after reading the Vedas, which discuss only the theory (YÉÉxɨÉÂ) from which science (Ê´ÉYÉÉxɨÉÂ) has to be derived. This can be done only through separate teaching. Hence the word +lÉ indicates the category of persons that would benefit best from such knowledge (+ÊvÉEòÉ®úÒ). This also is the view of Gautama Buddha in the ´ÉÉäÊvÉÊSÉkÉ Ê´É´É®úhÉ (näù¶ÉxÉÉ ±ÉÉäEòxÉÉlÉÉxÉÉÆ ºÉk´ÉɶɪɴɶÉÉxÉÖMÉÉ&). It also indicates that theory should precede trial-and-error based technology, to prevent unintended consequences. Everyone does not possess the same talent. Hence it is necessary to identify the talent needed for learning a technology, and then to test the students to see whether they possess that talent. Only the deserving candidates with the appropriate talent should be trained in the relevant technology. The less talented should be trained properly to come up in life in the fields in which they can excel. But selection should be according to talent, not arbitrarily imposed.

The text is discussed after the student has learnt the Vedas – has theoretical knowledge about the nature of the Universe. But without the requisite technology, we cannot use the theoretical knowledge to fulfill our needs. Hence the text proposes to teach the technology, through the word “´ªÉÉJªÉɺÉɨÉ&”. This word indicates that by knowing the technology described therein, the objective can be achieved (ºÉ¨¤ÉxvÉ&). It may be mentioned that we have references to commentaries on the ´Éè¶ÉäʹÉEò ºÉÚjɨÉ by Atri and Bharadwaja, though these are not available now. Similarly, Ravan had written a commentary named Eò]õxnùÒ on this text. Those were scientific treatises. The book “{ÉnùÉlÉÇvɨÉǺÉÆOɽþ” written by +ÉSÉɪÉÇ |ɶɺiÉ{ÉÉnù does not describe its scientific aspects fully. For example, EòhÉÉnù in ºÉÚjÉ 1-1-6 lists 17 MÉÖhÉÉ& and ends the list with “SÉ”. |ɶɺiÉ{ÉÉnù interprets this “SÉ” by adding 7 other MÉÖhÉÉ&, deriving them from the same text. What distinguishes these 7 from the other 17? The answer is that while the 17 MÉÖhÉÉ& are natural appearances that exhibit the excellence of those aspects based on the net energy content of the objects, the 7 other MÉÖhÉÉ& are induced (|ÉäÊ®úiÉMÉÖhÉÉ&) by the seven types of gravitational interactions (+ɴɽþÉÊnù ºÉ{iÉ´ÉɪɴÉ&) in the Vedic theory (Rk 1-164-15). Similarly, the other ºÉÚjÉÉ& have scientific meanings.
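
To condense the worked example, here is a minimal Python sketch that merely records the four stipulations as fields of a data structure; the English field values are our paraphrases of the discussion above, not translations of the text:

```python
# A minimal sketch treating the four traditional introductory stipulations
# (anubandha catushtaya) as fields of a record, filled in with the
# Vaisheshika Sutra example discussed above. Paraphrases are illustrative.
from dataclasses import dataclass

@dataclass
class Anubandha:
    vishaya:   str  # the subject of the text
    prayojana: str  # the need the subject fulfills
    sambandha: str  # relation of the knowledge to the goal
    adhikari:  str  # who benefits best from the knowledge

vaisheshika = Anubandha(
    vishaya="Dharma: harmonization with one's environment",
    prayojana="knowing the special nature of objects, to fulfill desired needs",
    sambandha="knowing the technology described therein achieves the objective",
    adhikari="one who has already studied the Vedas (the theory)",
)
print(vaisheshika)
```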

The ¨ÉÒ¨ÉÉƺÉÉ also begins with “+lÉÉiÉä vɨ¨ÉÇ ÊVÉYÉɺÉÉ”. Here, the word ¨ÉÒ¨ÉÉƺÉÉ stands for resolution. This text resolves the doubts or apparent contradictions in the various ¥ÉÉÀhÉ texts, which deal with the operational aspects of the Vedic theories. The ¥ÉÀºÉÚjÉ begins with “+lÉÉiÉä ¥ÉÀ ÊVÉYÉɺÉÉ”, because it resolves the doubts or apparent contradictions in the various ={ÉÊxɹÉnù, which deal with consciousness. The Nyaya school is the Vedic research methodology, which has universal application. The Samkhya school deals with the first phase of creation, before structure formation. The Yoga school deals with the applied aspect of consciousness.

Sanskrit teachers should understand these properly and teach the Vedas and allied texts scientifically. We have challenged many modern theories by pointing out their deficiencies and explaining the phenomena by Vedic principles; this has appeared in many international forums, and to date no scientist in the world has found fault with our analysis. The unfortunate part is that scientists are taught to follow the beaten path, to believe only in the so-called "established theories" and to discard everything else; they dismiss Vedic concepts as metaphysics or philosophy. The Sanskrit teachers, for their part, are ignorant of the scientific interpretation of the Vedas and have no idea of modern science, so they cannot link the Vedas to science. They fail to interpret the Vedas correctly, starting from +ÎMxɨÉÒ¡âô (not +ÎMxɨÉÒbä÷). Hence we should start by training Sanskrit teachers in both the Vedic methods of interpretation and science. While presenting the Vedic theories, we should follow the traditional method of the process (|ÉÊGòªÉÉ), concomitant or connected issues (+xÉÖ¹ÉRÂóMÉ), example, counter-arguments, illustrations, analysis, etc., to test the validity of the theories (={ÉÉänÂùPÉÉiÉ), and the final conclusion (={ɺÉƽþÉ®ú), scientifically. Only then can we proudly say:

ªÉäxÉ i´ÉªÉÉ ¦ÉÉ®úiÉiÉè±É{ÉÚhÉÇ& |ÉV´ÉÉäʱÉiÉÉä YÉÉxɨɪÉ& |ÉnùÒ{É&*
xɨɺiÉÖRÂóMÉʶɮú¶SÉÖΨ¤É SÉxpùSÉɨɮúSÉÉ®ú´Éä *
jÉè±ÉÉäCªÉ xÉMÉ®úÉ®ú¨¦É ¨Éڱɺiɨ¦ÉÉªÉ ¶É¨¦É´Éä *
 MÉÞhÉÒiÉä iÉk´É¨ÉÉi¨ÉҪɨÉÉi¨ÉÒEÞòiÉVÉMÉiÉ jɪɨÉÂ* ={ÉɪÉÉä{ÉäªÉ°ü{ÉÉªÉ Ê¶É´ÉÉªÉ MÉÖ®ú¤Éä xɨÉ&*
+ÊEòÎ\SÉÎSSÉxiÉEòºªÉè´É MÉÖ¯ûhÉÉ |ÉÊiɤÉÉävÉiÉ&* VÉɪÉiÉä ªÉ& ºÉ¨ÉÉ´Éä¶É& ¶Éɨ¦É´ÉÉäºÉÉ´ÉÖnùɾþiÉ&*
ºÉ´ÉÇYÉÉä ʽþ ʶɴÉÉä ´ÉäÊkÉ ºÉnùºÉSSÉäι]õiÉÆ xÉÞhÉɨÉÂ* iÉäxÉɺÉÉè xÉÉxÉÖMÉÞ¼hÉÉÊiÉ ÊEòÎ\SÉiYɺªÉ MÉÖ®úÉäÌMÉ®úÉ*
+ÊvÉEòÉ®úÒ Ê´É¦ÉÉMÉäxÉ ¶ÉɺjÉÉhªÉÖHòÉxªÉ¶Éä¹ÉiÉ& * ºÉ´ÉÈ xªÉɪªÉÈ ªÉÖÊHò¨ÉiÉÉÆ Ê´ÉnÖù¹ÉÉÆÊEò¨É¶ÉÉä¦ÉxÉ *

Monday, December 09, 2013

SOLUTIONS TO THE BLACK HOLE FIREWALL PROBLEM



THE BLACK HOLE FIREWALL PROBLEM.
INFORMATION PARADOX RESOLVED
USING RUSSELL’S PARADOX OF SET THEORY.

THE PARADOX:

The concept of the black-hole firewall, postulated by J. Polchinski and others in July 2012 (http://arxiv.org/abs/1207.3123), was extended this year with the suggestion that typical black holes with field theory duals have firewalls at the event horizon (10.1103/PhysRevLett.111.171301). This argument makes no reference to entanglement between the black hole and any distant system; hence it is not evaded by identifying degrees of freedom inside the black hole with those outside. During the last year, more than 100 papers and three conferences/workshops have addressed the idea of firewalls and examined its different aspects. We present three different empirical solutions to the paradox by revisiting the foundational principles in each case. In this paper, we re-examine the foundations of the Equivalence Principle (EP) using Russell's paradox of set theory.

First, the black-hole firewall concept needs to be explained for the uninitiated. Consider a scenario: a frustrated Alice wants to commit suicide by jumping into a very large black hole, leaving Bob outside the event horizon, beyond which nothing, not even light, can escape. According to the EP, if the black hole is large enough, Alice will not notice anything unusual as she falls through the event horizon – she will see the same phenomena as an observer floating in empty space. In this scenario, dubbed "No Drama", the gravitational forces do not become extreme until she approaches a point inside the black hole called the singularity. There, the gravitational pull will tug at her feet more strongly than at her head. As she inexorably plunges downwards, the difference in forces quickly increases, and Alice will be "spaghettified" – crushed and torn (remember the saying of the last century: looking ahead inside a black hole, you will see the back of your head in front of you!). The new hypothesis suggests that as Alice crosses the event horizon, breaking her correlation with Bob (her entangled partner) would release lots of energy, turning the event horizon into a massive firewall that incinerates her.
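
A rough numerical check of why "No Drama" is plausible at the horizon of a very large black hole: using the Newtonian tidal formula (an approximation, for illustration only), the difference in gravitational acceleration across an in-falling observer at the Schwarzschild radius falls off as 1/M^2, so it is fatal for a stellar-mass hole but imperceptible for a supermassive one.

```python
# Back-of-envelope tidal check at the horizon (Newtonian approximation).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
L = 2.0              # observer's height in meters

def tidal_at_horizon(M):
    r_s = 2 * G * M / c**2            # Schwarzschild radius
    return 2 * G * M * L / r_s**3     # head-to-feet difference in acceleration

for solar_masses in (10, 4e6):        # stellar-mass vs supermassive black hole
    da = tidal_at_horizon(solar_masses * M_sun)
    print(f"{solar_masses:g} M_sun: tidal difference ~ {da:.3g} m/s^2")
# ~2e8 m/s^2 for 10 M_sun (lethal), ~1e-3 m/s^2 for 4e6 M_sun (unnoticeable).
```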

Empty space is full of particle-antiparticle pairs that continually pop into existence before rapidly recombining and instantly vanishing, releasing lots of energy. If a pair forms just outside a black hole's event horizon, sometimes one particle may fall inside the event horizon while the other escapes as Hawking radiation. The in-falling particle would balance the positive energy of the outgoing particle by carrying negative energy inwards, which is allowed by Quantum Mechanics (QM). That negative energy would be subtracted from the black hole's mass, causing the hole to shrink and steadily lose mass. If no ordinary matter falls in, the hole would eventually evaporate, and with it all information about the black hole would disappear permanently.
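
The standard textbook formulas for the temperature and lifetime of an evaporating black hole make the slowness of this process concrete. A small Python sketch (photon emission only, no accretion assumed):

```python
# Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B) and evaporation time
# t ~ 5120*pi*G^2*M^3 / (hbar*c^4), evaluated for a solar-mass black hole.
import math

G, c = 6.674e-11, 2.998e8
hbar, k_B = 1.055e-34, 1.381e-23
M_sun = 1.989e30

def hawking_temperature(M):            # kelvin
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time(M):               # seconds
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

print(f"T_H    = {hawking_temperature(M_sun):.2e} K")  # ~6e-8 K, colder than the CMB
print(f"t_evap = {evaporation_time(M_sun):.2e} s")     # ~7e74 s, i.e. ~2e67 years
```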

But the equations of General Relativity (GR) say that black holes can only swallow mass and grow – not evaporate. And QM says that information cannot be destroyed. Now consider another possibility. Since the particle pairs have their states 'entangled', by measuring the state of the radiation coming out through the emitted particles, we could recover all information about the objects that fell into the black hole even after the hole evaporates (it must be encoded in the quantum states of the emitted particles). Which of the possibilities is likely? This is the information paradox.


THE PROBLEM:
If lots of radiating twin particles could somehow break their correlation with their in-falling partners, massive energy should be released, as in the breaking of the bonds of many molecules. The released energy would create a firewall around the black hole's event horizon. But this violates one aspect of the equivalence principle: that free fall should feel the same as floating in empty space. Thus either the firewall exists, or information is lost in black holes permanently. This scenario creates a paradox that brings into focus the inherent conflict between Relativity and Quantum theories, because it means that at least one of the following three established notions of theoretical physics must be wrong.

  • First: the postulate of "No Drama". According to the EP, there is no difference between free fall – even into the strong gravitational field inside a black hole – and inertial motion in empty space. Since Alice is in free fall when she crosses the event horizon, she should not feel extreme effects of gravity. Is the EP universally valid, or does it break down at the event horizon or somewhere else? Are the mathematics and concepts that lead to the singularity and the event horizon correct? What is gravity? Is it like the other interactions? Can gravity be quantized?
  • Second: the postulate of "unitarity". Alice and Bob are like an entangled particle pair, strongly correlated. The information carried by the radiation is emitted from the region near the event horizon, with low-energy effective field theory valid beyond some microscopic distance from it. Can entanglement be bypassed at the event horizon? Can the notion of monogamous quantum entanglement be replaced by two different kinds of entanglement?
  • Third: the postulate of "normality". Physics works normally far away from a black hole, even though it breaks down at some point within it. Is the Hawking radiation in a pure state, or is all information lost in black holes? Can quantum Xeroxing – seeing the same information in the Hawking radiation – be resolved by complementarity? And what about black-hole particle jets and blazars?

Together, these concepts make up what has been dubbed "the menu from hell". Since all three cannot be simultaneously true, the paradox is: which of the three is wrong? One solution lies in Russell's paradox of set theory and in revisiting the foundations of Relativity, instead of building on "accepted theories" in a tangential, reductionist manner – asking "Is time Newtonian or relativistic?" without defining time.

EQUIVALENCE PRINCIPLE REVISITED:

The cornerstone of GR is the principle of equivalence of the inertial and gravitational masses: m_i = m_g. The EP does not flow from any mathematics. No one has given any mathematical reason (such as a consistency constraint) why all matter fields should couple universally to gravity. This is not the case for the other fundamental forces or for the Higgs field (which is why different particles have different masses): the Higgs field is specific as to which particles couple to it. Gravity, by contrast, is a universal field – an all-pervading medium. Every particle in the universe, whether massive or not, couples to it. Since F = ma and universal free fall for all mass types hold, F ≈ g ≈ a holds. This can be explained only if gravity acts like a river current, propelling all objects uniformly based on the local density gradient. The apple fell because its coupling with the stem softened and became weak. The galactic and star systems are like a "free vortex" arising out of conflicting currents, in which the tangential velocity v increases as the center line is approached, so that the angular momentum rv is constant. The orbits are not ellipses, but circles with a shifting center. Hence gravity cannot be quantized and gravitons will never be found.
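
For the "free vortex" profile mentioned above, the defining relation is v = K/r, so that the product rv stays constant as the center line is approached. A minimal sketch (K is an arbitrary, hypothetical circulation constant):

```python
# Free-vortex velocity profile: v = K/r, so r*v is constant everywhere.
K = 100.0   # m^2/s, illustrative circulation constant

for r in (100.0, 10.0, 1.0, 0.1):
    v = K / r
    print(f"r = {r:6.1f} m   v = {v:8.1f} m/s   r*v = {r * v:.1f}")
# v grows without bound toward the center while r*v remains fixed at K.
```
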
The EP has generally been accepted without much questioning. Actually, GR assumes general covariance – invariance under diffeomorphisms – and the equivalence principle, and with it the equality of gravitational and inertial masses, follows from that assumption. It is not a first principle of physics, but merely an ad hoc metaphysical concept designed to induce the uninitiated to imagine that gravity has magical non-local powers of infinite reach. The appeal to believe in such a miraculous form of gravity is very strong. Virtually everyone accepts the EP as an article of faith, even though it has never been positively verified directly by either experimental or observational physics. All indirect experiments show that the equivalence or otherwise of gravitational and inertial masses is only a matter of description, as is shown below.

No one knows why there should be two or more mass terms. In principle there is no reason why m_i = m_g: why should the gravitational charge and the inertial mass be equal? The underlying gauge symmetries that describe the fundamental interactions require the fundamental fields to be massless. The mass generated by the Higgs mechanism of spontaneous symmetry breaking appears in the equation of motion of the field's particle, i.e., as m_i (in the classical limit). If we put the particle in a gravitational field, it will "feel a force" given by the "gravitational charge" times the gravitational field. This appears as two masses, m_g and m_i, though there is only one mass term associated with each field.

The gravitational mass m_g is said to produce and respond to gravitational fields. It is said to supply the mass factor in the inverse square law of gravitation: F = G m_1 m_2 / r^2. The inertial mass m_i is said to supply the mass factor in Newton's second law: F = ma. If gravitation is proportional to g, say F = kg (because the weight of a particle depends on its gravitational mass, i.e. m_g), and the acceleration is a, then by Newton's law ma = kg. Since, according to GR, g = a, combining the two gives m = k. Here m is the so-called "inertial mass" and k the "gravitational mass". But the problem is the difference between the values of G (a constant – though it might be changing: doi:10.1103/PhysRevLett.111.101102) and g (known to be variable).

Alternatively, the inertial mass measures "inertia", while the gravitational mass is the coupling strength to the gravitational field. The gravitational mass plays the same role as the electric charge for electromagnetic interactions, the color charge for strong interactions and the particle flavor for weak interactions. Inertial mass m_i is the mass in Newton's law F = m_i a. Gravitational mass m_g is the coupling strength in Newton's law of gravitation, which for a test particle in the field of a source mass M can be written F_g = (GM/r^2) × m_g. Thus m_i a = F_g = (GM/r^2) × m_g. The quantity GM/r^2 is the "gravitational field" (say 𝒢) and m_g is the "gravitational charge", so that one can write m_i × a = m_g × 𝒢, just as we write m_i × a = q × E for the electric field. This has nothing to do with the Brout-Englert-Higgs mechanism.
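
A small numeric sketch of this analogy, using standard Earth constants; the comparison with an electron in a hypothetical 100 kV/m field is purely illustrative:

```python
# "Gravitational charge" m_g couples to a field GM/r^2 exactly as an
# electric charge q couples to a field E: F = m_g * field vs F = q * E.
G = 6.674e-11
M_earth, R_earth = 5.972e24, 6.371e6

field = G * M_earth / R_earth**2       # "gravitational field" ~ 9.8 m/s^2
m_g = 70.0                             # gravitational charge of a 70 kg body
print(f"gravitational: F = m_g * field = {m_g * field:.1f} N")

q, E = 1.602e-19, 1.0e5                # electron charge, 100 kV/m field
print(f"electric:      F = q * E     = {q * E:.3e} N")
```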

Some think that the EP implies that a test particle travels along a geodesic in the background space-time. The EP assumes that in every locally Lorentz (inertial) frame the laws of Special Relativity (SR) must hold. From this it is concluded that only the geometric structure of spacetime can define the paths of free bodies. If x is a particle's world-line, parameterized by proper time, T is its tangent vector, D denotes covariant differentiation along the world-line, and R is the Ricci tensor, then D(T) = 0 and D(T) = R(T) are both tensorial, hence generally covariant; but only one of them describes a geodesic in a general curved space-time.

Gravity does not couple to the "gravitational mass" but rather to the Ricci tensor, which works only if space-time is flat. The Ricci tensor does not provide a full description in more than three dimensions. The Schwarzschild solution for black holes, where space-time is extremely curved, uses the Riemann tensor. Using the Riemann tensor instead of the Ricci tensor to calculate the energy-momentum tensor in 3+1 dimensions would not lead to meaningful results, though in most cases the Riemann tensor is needed before one can determine the Ricci tensor. Thus there is really no relation between "gravitational mass" and "inertial mass" except in Newtonian physics. This is why photons (with zero inertial mass) are affected by gravity. Only manipulations of the Standard Model (SM) to include classical gravity (field theory in curved spacetime) lead to effects like Hawking radiation and the Unruh effect. This is where gravitation and the SM can hypothetically meet.

Gravitation and GR are not included in the SM. Hence the SM really cannot say anything about gravitational mass. If any theory conclusively unifies gravitation with the SM, it may be able to explain the equivalence of the inertial and gravitational masses. The Higgs boson and the Higgs field are predictions of the SM, which incorporates SR. The Higgs mechanism is intended to explain the rest mass of fundamental particles such as quarks and electrons, which constitute only about 4% of the total theorized mass of the universe. Moreover, the rest mass of the fundamental particles comprises only a tiny fraction (~1%) of the rest mass of atoms. Most of the invariant mass of protons and neutrons is the product of quark kinetic energy and confinement, bound by the strong interaction mediated by gluons; it is not directly the result of the Higgs mechanism. However, since SR is part of the SM and since E = mc^2, the SM may be said to imply that rest mass from the Higgs mechanism and binding energy from the color force both contribute equivalently to the inertial rest mass of all particles.
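
The "~1%" figure can be checked by simple arithmetic: summing representative current masses of the proton's valence quarks (u, u, d) against the proton mass. A quick sketch (central PDG-style values; exact numbers vary with renormalization scheme and scale):

```python
# How much of a proton's mass do the Higgs-generated quark masses supply?
m_u, m_d = 2.2, 4.7          # MeV, current quark masses (approximate)
m_proton = 938.3             # MeV

higgs_part = 2 * m_u + m_d   # mass directly attributable to the Higgs mechanism
print(f"valence quark masses: {higgs_part:.1f} MeV of {m_proton} MeV "
      f"= {100 * higgs_part / m_proton:.1f}%")   # ~1%; the rest is QCD binding
```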

It is believed that the Higgs field obeys the ordinary theory of GR, which would mean that the equivalence of the inertial and gravitational masses holds. The mass-energy of the universe that Dark Energy is said to represent has been reduced from 72.8% to 68.3%, while Dark Matter has been increased from 22.7% to 26.8%. This means the percentage of ordinary matter has gone up only from 4.5% to 4.9%. Yet the constituent particles of these mysterious fields most likely do not couple to the Higgs field at all.

EQUIVALENT OR DIFFERENT?

If we think of gravitational and inertial masses outside the context of a generally covariant theory, there is still no evidence that they are equal; they may differ by an arbitrary factor, which may be absorbed into G, or by a variable G. The equivalence of the inertial and gravitational masses is said to have been proved by the Eötvös experiment and many later experiments. An analysis by some scientists of Eötvös's experiments on the ratio of gravitational to kinetic mass of a few substances yields the result that this ratio for the hydrogen atom and for the binding energies is equal to that for the neutron with a precision of one part in at least 5×10^5 and 10^4 respectively. No conclusion can be drawn about these ratios for the proton and the electron separately.

The Eöt-Wash experiment at the University of Washington tried to measure the difference between these two masses indirectly by considering "charge/mass" ratios. The result can be summarized as: |m_g/m_i − 1| ≤ 10^−13.

The Lunar Laser Ranging (LLR) experiment has tested the equivalence principle for 35 years, with the Moon, Earth and Sun as the test masses, to determine whether, in accordance with the EP, the two celestial bodies are falling toward the Sun at the same rate despite their different masses, compositions, and gravitational self-energies. Analyses of precision laser ranges to the Moon continue to provide increasingly stringent limits on any violation of the equivalence principle. Current LLR solutions give Δ(m_g/m_i)_EP = (−1.0±1.4)×10^−13 for any possible inequality in the ratios of the gravitational and inertial masses for the Earth and Moon. This result, in combination with laboratory experiments on the weak EP, yields a strong equivalence principle (SEP) test of:
Δ(m_g/m_i)_SEP = (−2.0 ± 2.0) × 10^−13.

Also, the corresponding SEP violation parameter η is (4.4±4.5)×10⁻⁴, where η = 4β − γ − 3 and both β and γ are post-Newtonian parameters. Using the Cassini value of γ, the η result yields β − 1 = (1.2±1.1)×10⁻⁴. The geodetic precession test, expressed as a relative deviation from general relativity, is Kgp = −0.0019±0.0064. The limit on time variation of the gravitational constant is Ġ/G = (4±9)×10⁻¹³ yr⁻¹; consequently there is no evidence for local (1 AU) scale expansion of the solar system (DOI: 10.1103/PhysRevLett.93.261101). Apart from the technical problems in these indirect methods and the assumed values of various parameters - including the latest precisely measured value of G - which keep the uncertainty alive, the measured result that the Moon is moving about 3.8 centimeters higher in its orbit each year shows that these indirect results cannot be fully relied upon.
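The arithmetic behind these combined numbers can be reproduced in a few lines. The sketch below assumes the Cassini result γ − 1 = (2.1±2.3)×10⁻⁵ and a laboratory WEP input of (1.0±1.4)×10⁻¹³ (both are assumptions not stated in the text above, chosen as the standard published values); uncertainties are combined in quadrature:

from math import sqrt

# SEP combination: Delta_SEP = Delta_EP(LLR) - Delta_WEP(lab), errors in quadrature
ep, ep_err   = -1.0e-13, 1.4e-13   # LLR result quoted above
wep, wep_err =  1.0e-13, 1.4e-13   # assumed laboratory WEP input
sep = ep - wep
sep_err = sqrt(ep_err**2 + wep_err**2)
print(f"SEP = {sep:.1e} +/- {sep_err:.1e}")      # ~ -2.0e-13 +/- 2.0e-13

# beta - 1 from eta = 4*beta - gamma - 3  =>  beta - 1 = (eta + (gamma - 1)) / 4
eta = 4.4e-4
gamma_minus_1 = 2.1e-5                            # assumed Cassini value
beta_minus_1 = (eta + gamma_minus_1) / 4
print(f"beta - 1 = {beta_minus_1:.1e}")           # ~ 1.2e-4, matching the quoted figure

Both printed values reproduce the figures quoted above, which shows that they are derived quantities, not independent measurements.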

The indirect methods used to prove equivalence or otherwise are questionable. It has been accepted as given that ma = mg. This equivalence is faulty because the description F = ma is faulty. Once a force is applied to move the body along any axis and the body moves, the force ceases to act, and the body moves at a constant velocity v′ due to inertia (assuming no other forces are present). The relation between the original velocity v (zero if the body is at rest) and v′ is the rate of change. To accelerate the body further, we need another force to be applied to the body. Without such a new force, the body cannot be accelerated. What is this new force, and where does it come from? If any other force acts, then it has to be introduced into the equation. Where is that? Further, the new force will change the velocity v′ to v″ - a new action. The “rate of change of the rate of change” means relating v to v′, v″, etc. But why should we compare v″ with v instead of v′?
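To make the distinction at issue concrete, here is a toy numerical sketch (an illustration only, not anyone's formal model; mass, force and step size are arbitrary) contrasting a single impulse, after which the body coasts at v′, with a force re-applied at every instant, which is how standard mechanics reads F = ma:

m, dt = 2.0, 0.1           # mass (kg) and time step (s), illustrative values

# Case 1: a single impulse, then the force ceases - the body coasts at v'
v = 0.0
v += (10.0 / m) * dt        # one application of F = 10 N for one step
for _ in range(9):
    v += (0.0 / m) * dt     # no further force: velocity stays constant
print("after impulse:", v)  # 0.5 m/s, unchanged while coasting

# Case 2: the force is re-applied at every step - velocity keeps changing
v = 0.0
for _ in range(10):
    v += (10.0 / m) * dt    # F = ma read as a fresh push each instant
print("sustained force:", v)  # 5.0 m/s after 10 steps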

When answering a question, one should first determine the framework. If we assume nothing, then there can be no answer. However, if we take as given that we are going to formulate theories in terms of Lagrangians, then there is essentially only one mass parameter that can appear, i.e., the coefficient of the quadratic term. Thus, whatever mass there is, it is only one mass. The Higgs field clearly modifies the on-shell condition in flat space, and general relativity simply says that anyone whose frame is locally flat should reproduce the same result. Thus, the Higgs field appears to modify the gravitational mass. It may also modify the inertial mass by the same amount, as can be verified by analyzing some scattering diagrams. However, knowing that we are working within the context of a Lagrangian theory, the fact that inertial and gravitational mass are equal is essentially a foregone conclusion. Are they really different? Let us examine.
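To spell out “the coefficient of the quadratic term”, here is the standard textbook sketch for a free scalar field (general background material, not specific to the argument above):

\mathcal{L} = \tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - \tfrac{1}{2}\,m^{2}\phi^{2}
\;\;\Longrightarrow\;\;
(\Box + m^{2})\,\phi = 0
\;\;\Longrightarrow\;\;
p^{2} = m^{2} \quad (c = \hbar = 1)

Whatever fixes this single coefficient m² - a bare term, or a Yukawa coupling to the Higgs vacuum value v, which for a fermion gives m = yv/√2 - the same m governs both the propagation (inertial) and the source (gravitational) behaviour, which is the foregone conclusion referred to above.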

RUSSELL’S PARADOX:

Now we will examine the EP in the light of Russell's paradox of set theory. Russell's paradox raises an interesting question: if S is the set of all sets which do not have themselves as a member, is S a member of itself? The general principles are: there cannot be a set without individual elements (for example, a library - a collection of books - cannot exist without individual books); and there cannot be a set of one element, or rather a set of one element is superfluous (for example, a book is not a library). A collection of different objects unrelated to each other remains a collection of individual members, as it does not satisfy the condition of a set. Thus a collection of objects is either a set with its elements, or individual objects that are not the elements of a set.
Let us examine the property p(x): x ∉ x, which means the defining property p(x) of any element x is such that x does not belong to x. Nothing appears unusual about such a property; many sets have it. A library [p(x)] is a collection of books, but a book is not a library [x ∉ x]. Now suppose this property defines the set R = {x : x ∉ x}. It must be possible to determine whether R ∈ R or R ∉ R. However, if R ∈ R, then the defining property of R implies that R ∉ R, which contradicts the supposition that R ∈ R. Similarly, the supposition R ∉ R confers on R the right to be an element of R, again leading to a contradiction. The only possible conclusion is that the property “x ∉ x” cannot define a set. This idea is also enforced by the Axiom of Separation in Zermelo-Fraenkel set theory, which in effect postulates that “objects can only be composed of other objects” or “objects shall not contain themselves”. In order to avoid this paradox, it has to be ensured that a set is not a member of itself. It is convenient to choose a “largest” set in any given context, called the universal set, and confine the study to the elements of that universal set only. This set may vary in different contexts, but in a given set-up the universal set should be so specified that no occasion ever arises to digress from it. Otherwise there is every danger of colliding with paradoxes such as Russell's paradox. And in the case of the EP, we do just that.
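The contradiction can be exhibited mechanically. In the sketch below (an illustration only; Python predicates stand in for set membership), both answers to “is R a member of itself?” refute themselves, and evaluating the question directly never terminates:

# Model "x does not belong to x" as a predicate on membership-test functions.
# A "set" here is represented by a function that answers membership questions.

def russell(x):
    """R = {x : x not in x}: x is a member iff x is not a member of itself."""
    return not x(x)

# Suppose R in R is True: the definition then forces False, and vice versa.
for assumption in (True, False):
    implied = not assumption        # what the defining property forces
    print(f"assume 'R in R' is {assumption} -> definition implies {implied}")

# Evaluating russell(russell) directly recurses without end,
# which is the computational face of the paradox:
try:
    russell(russell)
except RecursionError:
    print("russell(russell) never terminates: the property defines no set")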

THE THOUGHT EXPERIMENTS OF GR AND EP:

There are similar paradoxes in the theories of SR and GR and in the EP. Let us examine the EP. All objects fall in similar ways under the influence of gravity. Hence locally, it is said, one cannot tell the difference between an accelerated frame and an un-accelerated frame. But must not the two frames be related before they can be compared as equivalent or not? Let us take the example of a person in an elevator. The person sits in an elevator that is falling down a shaft. It is assumed that locally (i.e., during any sufficiently small amount of time, or over a sufficiently small space) the person in the elevator can make no distinction between being in the falling elevator and being stationary in completely empty space, where there is no gravity. This is a wrong assumption. We have experienced the effect of gravity in closed elevators. Even otherwise, unless the door opens and we find a different floor in front of us, we cannot relate the motion of the elevator to the un-accelerated structure of the building - hence there is no equivalence. The moment we relate to the structure beyond the elevator, we can know the relative motion of the elevator, because unlike the effects of inertia or gravitation, both of which induce motion, the building is stationary.

Inside a spaceship in deep space, objects behave like suspended particles in a fluid (un-accelerated) or like the asteroids in the asteroid belt (accelerated). Usually they are relatively stationary (at fixed velocity) within the medium unless some other force acts upon them. This is because of the relative distribution of mass and energy inside the spaceship and its dimensional volume, which determines the average density at each point in the medium. Further, the average density of the local medium of space is factored into this calculation. If the person is in a spaceship where he can see outside objects, then he can know the relative motions by comparing objects at different distances. In a train, if we look only at nearby trees, we may think the trees are moving; but when we compare them with distant objects, we realize the truth. If we cannot see outside objects, then we will consider only our position with reference to the spaceship - stationary or floating within a frame. There is no equivalence, because there is no other frame for comparison. The same principle works for the other examples.

It is said that a ray of light, which moves in a straight line, will appear curved to the occupants of the spaceship. The light ray from outside can be related to the spaceship only if we consider the bigger frame of reference containing both the space emitting the light and the spaceship. If the passengers could observe the scene outside the spaceship, they would notice this difference and know that the spaceship is moving. In that case, the reason for the apparent curvature of the light path would be known. If we consider outside space as a separate frame of reference unrelated to the spaceship, the ray emitted by it cannot be considered inside the spaceship; the consideration will be restricted to those rays emanating from within the spaceship. In that case, the ray will move straight inside the spaceship. In either case, Einstein's description is faulty. Thus the foundation of GR - the EP - is a wrong description of reality. Hence all mathematical derivatives built upon such a wrong description are also wrong. There is only one type of mass.

The shifting of Mercury's perihelion that is used to validate GR can be explained by (v/c)² radians per revolution, where v is not the escape velocity but the velocity component induced by the Sun's motion in the galaxy, which drags the planets along. Mercury being the smallest planet and the closest to the Sun, the effect on it is most pronounced. Before Einstein, Gerber had solved the problem differently. Eddington's experiment on gravitational lensing has been questioned repeatedly. The effect is due to the contrasting refractive indices of the media - like the time dilation seen in GPS, where light bends and travels a longer path (and also slows down) after entering the denser atmosphere of the Earth. Every material that light can travel through has a refractive index, denoted by the letter n. The velocity of light in a vacuum is about 3.0 × 10⁸ m/s. The refractive index equals the ratio of the velocity of light in vacuum (c) to that in the medium (v), that is, n = c/v. Light slows down when traveling through a medium; thus the refractive index of any medium will be greater than one. By definition, the refractive index of vacuum is 1. For air at STP it is 1.000277; for air at 0 °C and 1 atm it is 1.000293. This, and not time dilation, slows down light.
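Both figures in this paragraph can be checked numerically. The sketch below assumes v ≈ 230 km/s for the Sun's galactic orbital speed (an assumed value; the text above does not fix it) and compares (v/c)² with the standard 43 arc-seconds-per-century figure; it also applies n = c/v to the quoted refractive index of air:

from math import radians

c = 299_792_458.0                  # speed of light in vacuum, m/s

# (v/c)^2 per revolution, with v ~ 230 km/s (assumed galactic orbital speed)
v = 230e3
per_rev = (v / c) ** 2
# Standard figure: 43 arcsec/century; Mercury completes ~415 orbits per century
gr_per_rev = radians(43.0 / 3600.0) / (36525 / 87.97)
print(f"(v/c)^2 per revolution : {per_rev:.2e} rad")     # ~5.9e-7
print(f"43 arcsec/cy per orbit : {gr_per_rev:.2e} rad")  # ~5.0e-7

# Speed of light in air from n = c/v  =>  v = c/n
n_air = 1.000293                   # air at 0 deg C and 1 atm, as quoted
print(f"light speed in air     : {c / n_air:,.0f} m/s")  # about 88 km/s below c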

SPECIAL RELATIVITY REVISITED:

Now let us examine the Lorentz transformation. The description of the measured state at a given instant is physics; the use of the magnitude of change at two or more designated instants to predict the outcome at other times is mathematics. Measurement is a comparison between similars, of which the constant one is called the unit. The factor v²/c² or (v/c)² is a ratio or comparison of two dynamical quantities where c is the constant - hence a unit of measurement of a dynamical variable. It can be used to measure only comparative dynamical velocities - not changes in mass or dimension, which are possible only through accumulation or reduction of similars. The two-dimensional factor (v/c)² represents the modification of the incoming light signal (the third dimension, like e.m. radiation) as seen by an observer, without changing any physical characteristics of the observed. This is why we have three dimensions of ocular perception.

The concept of measurement has undergone a big change over the last century. It all began with the problem of measuring the length of a moving rod. The two possibilities of measurement suggested by Einstein in his 1905 paper (published as “Zur Elektrodynamik bewegter Körper” in Annalen der Physik 17:891, 1905) were as follows:

(a) “The observer moves together with the given measuring-rod and the rod to be measured, and measures the length of the rod directly by superposing the measuring-rod, in just the same way as if all three were at rest”, or
(b) “By means of stationary clocks set up in the stationary system and synchronizing with a clock in the moving frame, the observer ascertains at what points of the stationary system the two ends of the rod to be measured are located at a definite time. The distance between these two points, measured by the measuring-rod already employed, which in this case is at rest, is the length of the rod”
The method described at (b) is misleading. We can do this only by setting up a measuring device to record the emissions from both ends of the rod at the designated time (which is the same as taking a photograph of the moving rod) and then measuring the distance between the two points on the recording device in units of the velocity of light or any other unit. But the picture will not give a correct reading for two reasons:
·  If the length of the rod or its velocity is small, then the length contraction will not be perceptible according to the formula given by Einstein.
·  If the length of the rod is big or its velocity is comparable to that of light, then light from different points of the rod will take different times to reach the recording device, and the picture we get will be distorted by the Doppler shift of the different points, as the sketch below illustrates. Thus, there is only one way of measuring the length of the rod: as in (a).
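Here is a toy calculation of the light-travel-time distortion mentioned in the second point (a purely classical sketch: the rod is assumed to approach the camera along the line of sight, relativistic contraction is ignored, and all the numbers are illustrative):

c = 299_792_458.0       # m/s
v = 0.5 * c             # rod speed toward the camera (assumed)
L = 1.0                 # true rod length in metres (assumed)
d0 = 1000.0             # distance of the near end from the camera at t = 0
T = d0 / c              # the photograph is received when the t = 0 light arrives

# A photon received at time T from a point at position x(t) = x0 - v*t
# must be emitted at t satisfying x0 - v*t = c*(T - t)  =>  t = (c*T - x0)/(c - v)
def emission_time(x0):
    return (c * T - x0) / (c - v)

t_near, t_far = emission_time(d0), emission_time(d0 + L)
apparent_length = c * (t_near - t_far)   # separation of the two recorded points
print(f"apparent length: {apparent_length:.3f} m")   # 2.000 m = L / (1 - v/c)

The photograph records a 1 m rod as 2 m long, because light from the far end had to leave earlier, when the far end was farther away.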

Here also we are reminded of an anecdote relating to a famous scientist, who once directed two of his students to precisely measure the wave-length of sodium light. The students returned with two different results - one resembling the normally accepted value and the other a different value. Upon enquiry, the latter replied that he had also come up with the same result as the accepted value, but since everything, including the Earth and the scale on it, is moving, for a precision measurement he had applied length contraction to the scale, treating the star Betelgeuse as a reference point. This changed the result. The scientist told him to treat the scale and the object to be measured as moving with the same velocity and recalculate the wave-length of light again without any reference to Betelgeuse. After some time, both the students returned to say that the wave-length of sodium light is infinite. To the surprised scientist, they explained that since the scale is moving with light, its length would shrink to zero; hence it would require an infinite number of scales to measure the wave-length of sodium light!

Some scientists try to overcome this difficulty by pointing out that length contraction occurs only in the direction of motion. They claim that if we hold the rod in a direction transverse to the direction of motion, then there will be no length contraction. But how can the length be measured by holding the rod in a transverse direction? If the light path is also transverse to the direction of motion, then the terms c+v and c−v vanish from the equation, making the entire theory redundant. If the observer moves together with the given measuring-rod and the rod to be measured, and measures the length of the rod directly by superposing the measuring-rod while moving with it, he will not find any difference, because the length contraction, if real, will be in the same proportion for both.

The fallacy in Einstein's description is that if one treats the systems “as if all three were at rest”, one cannot measure dynamical variables such as velocity or momentum, as the object will be relatively at rest, which means zero relative velocity. Either Einstein missed this point or he was clever enough to camouflage it when he said: “Now to the origin of one of the two systems (k) let a constant velocity v be imparted in the direction of the increasing x of the other stationary system (K), and let this velocity be communicated to the axes of the co-ordinates, the relevant measuring-rod, and the clocks”. But is this the velocity of k as measured from k, or is it the velocity as measured from K? This is crucial, because K and k each have their own clocks and measuring rods, which are not treated as equivalent by Einstein. Therefore, according to his theory, each will measure the velocity of k differently. But Einstein does not assign the velocity specifically to either system. His spinning disc and his other examples in SR and GR fail for the same reason.

Before we discuss time orderings or whether time is Newtonian or Relativistic, let us define time precisely. In his 1905 paper, Einstein says: “It might appear possible to overcome all the difficulties attending the definition of ‘time’ by substituting ‘the position of the small hand of my watch’ for ‘time’. And in fact such a definition is satisfactory when we are concerned with defining a time exclusively for the place where the watch is located; but it is no longer satisfactory when we have to connect in time series of events occurring at different places, or - what comes to the same thing - to evaluate the times of events occurring at places remote from the watch”.

It is not a precise or scientific definition of time, but a description of the readings of a clock, which is subject to mechanical error in its functioning. Space, time and coordinates, like matter, have no physical existence. They arise out of orderings or sequences - our notions of priority and posterity. When the orderings are of objects, the interval between them is called space. When they are of transformations in objects (events), the intervals are called time. When we describe the specific nature of the orderings of space (straight line, geodesic, angular, etc.), it is called a coordinate system. Since measurement is a comparison between similars (Einstein uses the fixed speed of light per second to measure distance), we use a similar, but easily intelligible and uniformly transforming, natural sequence, such as the day or the year or their subdivisions, as the unit of time. If a clock stops or functions erratically, time does not stop or become erratic. “Now” is a fleeting interface between two events. Hence, while at the universal level it is the minimum perceivable interval between two events, in specific cases it can have a longer duration as the present continuous, or the continued existence of that form. For example, all created life cycles undergo six stages of evolution: transformation from the quantum state to the macro state (from being to becoming), linear growth due to accumulation of similar particles, non-linear growth or transformation due to accumulation of dissimilar particles, transmutation, and the reverse processes of decomposition and disintegration. The total duration is a life cycle, and is continued existence for those individuals or objects. Comparison between two different natural life cycles is the time dilation between them. Hence Einstein's definition of time is scientifically wrong. His definition of synchronization is also wrong, as shown below.

 Einstein uses a privileged frame of reference to define synchronization between clocks and then denies the existence of any privileged frame of reference – a universal “now” - for time. We quote from his 1905 paper: 

“We have so far defined only an ‘A time’ and a ‘B time’. We have not defined a common ‘time’ for A and B, for the latter cannot be defined at all unless we establish by definition that the ‘time’ required by light to travel from A to B equals the ‘time’ it requires to travel from B to A. Let a ray of light start at the ‘A time’ tA from A towards B, let it at the ‘B time’ tB be reflected at B in the direction of A, and arrive again at A at the ‘A time’ t′A. In accordance with definition the two clocks synchronize if: tB − tA = t′A − tB.
We assume that this definition of synchronism is free from contradictions, and possible for any number of points; and that the following relations are universally valid:—
  1. If the clock at B synchronizes with the clock at A, the clock at A synchronizes with the clock at B.
  2. If the clock at A synchronizes with the clock at B and also with the clock at C, the clocks at B and C also synchronize with each other.”
The concept of relativity is valid only between two objects. Introduction of a third object brings in the concept of a privileged frame of reference, and all the equations of relativity fail. Yet Einstein does precisely the same while claiming the very opposite. In the above description, the clock at A is treated as a privileged frame of reference for proving the synchronization of the clocks at B and C. Yet he claims it is relative! Thus his conclusion - that there are many quite different but equally valid ways of assigning times to events, or that different observers moving at constant velocity relative to one another require different notions of time, as their clocks run differently - is wrong. Paradoxically, standard formulations of quantum mechanics use the universal “now” frequently.
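The synchronization condition quoted above is easy to state numerically. In the sketch below, the distance and the clock error are assumed purely for illustration; the two intervals come out equal exactly when B's clock carries no offset relative to the one-way light time:

d = 300_000.0            # assumed A-B distance in km
c = 300_000.0            # light speed in km/s (rounded for the illustration)

def intervals(delta):
    """delta = error of B's clock relative to A's; returns the two intervals."""
    tA = 0.0                      # light leaves A (A's clock)
    tB = d / c + delta            # light reflected at B (B's clock)
    tA_return = 2 * d / c         # light back at A (A's clock)
    return tB - tA, tA_return - tB

for delta in (0.0, 0.3):          # a true clock and one running 0.3 s ahead
    out, back = intervals(delta)
    print(f"delta={delta}: tB-tA={out:.1f} s, t'A-tB={back:.1f} s")
# delta=0.0 gives 1.0 s each way (synchronized by Einstein's definition);
# delta=0.3 gives 1.3 s vs 0.7 s, which the definition flags as unsynchronized.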

SPEED OF LIGHT:

The constant speed of light, which is one of the foundations of SR, only measures equal distances in equal time units in a medium of uniform density. Using this, or a multiple or a fraction of it, as the unit, the fixed distance between A and B can be measured by way of length comparison in any uniform medium. But this will not be time measurement, as A and B are not time-variant events or states, but time-invariant positions. Of course, we have the choice of taking the interval between the events when light leaves A and reaches B as the unit and comparing the other intervals with it to get the time measured. But light travels at different velocities in different media, and the interval needed to cross the same distance in various media will not be the same. The GPS proof has already been discussed. The same is true for particle-accelerator experiments, which are contained in high-flux magnetic tubes. The speedometer reading and the actual kilometer reading in a car do not match: the latter is always slower due to friction. This puts severe restrictions on Einstein's proposition, which cannot be used universally. For example, if there is a very hot or very cold cloud of gas between points A and B, not equidistant from both, the results would be different, as is evident from absorption and emission spectra. Some of the wave-lengths are absorbed by the gas cloud. If the cloud is not at the center, this will happen at different intervals for the two directions of motion.
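A small calculation of the interval point (a hedged illustration: the distance, slab size and refractive index are all assumed, and the cloud is treated as a uniform slab):

c = 299_792_458.0                 # m/s

def transit_time(total_km, cloud_km, n_cloud):
    """One-way light time over total_km, of which cloud_km lies in a medium
    with refractive index n_cloud; the rest is treated as vacuum."""
    vac = (total_km - cloud_km) * 1e3 / c
    cloud = cloud_km * 1e3 * n_cloud / c
    return vac + cloud

empty = transit_time(3e5, 0.0, 1.0)
with_cloud = transit_time(3e5, 5e4, 1.0003)   # assumed 50,000 km slab of gas
print(f"no cloud   : {empty*1e3:.6f} ms")
print(f"with cloud : {with_cloud*1e3:.6f} ms")  # longer: light is slower in the gas

The same distance thus takes a different interval once a medium intervenes, which is the restriction claimed above.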

After his SR paper of 1905, Einstein frequently held that the speed of light is not constant. In his 1911 paper “ON THE INFLUENCE OF GRAVITATION ON THE PROPAGATION OF LIGHT”, he says: “For measuring time at a place which, relatively to the origin of the co-ordinates, has the gravitation potential Φ, we must employ a clock which - when removed to the origin of co-ordinates - goes (1 + Φ/c²) times more slowly than the clock used for measuring time at the origin of co-ordinates. If we call the velocity of light at the origin of co-ordinates c₀, then the velocity of light c at a place with the gravitation potential Φ will be given by the relation: c = c₀ (1 + Φ/c²) … (3).

The principle of the constancy of the velocity of light holds good according to this theory in a different form from that which usually underlies the ordinary theory of relativity (italics ours).

4. Bending of Light-Rays in the Gravitational Field
FROM the proposition which has just been proved, that the velocity of light in the gravitational field is a function of the place, we may easily infer, by means of Huyghens's principle, that light-rays propagated across a gravitational field undergo deflexion”.

Interestingly, it was not the only occasion when Einstein maintained that the velocity of light is not constant. In 1912, he wrote: “On the other hand I am of the view that the principle of the constancy of the velocity of light can be maintained only insofar as one restricts oneself to spatio-temporal regions of constant gravitational potential.” He repeated this in 1913 when he said: “I arrived at the result that the velocity of light is not to be regarded as independent of the gravitational potential. Thus the principle of the constancy of the velocity of light is incompatible with the equivalence hypothesis.” In 1915, he wrote in Die Relativitätstheorie, on page 259: “the writer of these lines is of the opinion that the theory of relativity is still in need of generalization, in the sense that the principle of the constancy of the velocity of light is to be abandoned.”
                                                                                                                            
He repeated it again in late 1915, on page 150 of “The Foundation of the General Theory of Relativity”, where he says: “the principle of the constancy of the velocity of light in vacuo must be modified”. He really spells it out in section 22 of the 1916 book “Relativity: The Special and General Theory”, where he wrote: “In the second place our result shows that, according to the general theory of relativity, the law of the constancy of the velocity of light in vacuo, which constitutes one of the two fundamental assumptions in the special theory of relativity and to which we have already frequently referred, cannot claim any unlimited validity. A curvature of rays of light can only take place when the velocity of propagation of light varies with position. Now we might think that as a consequence of this, the special theory of relativity and with it the whole theory of relativity would be laid in the dust. But in reality this is not the case. We can only conclude that the special theory of relativity cannot claim an unlimited domain of validity; its results hold only so long as we are able to disregard the influences of gravitational fields on the phenomena (e.g. of light).” Thus, Einstein himself contradicted one of the fundamental postulates that went into developing SR, without abandoning the findings based on such a wrong postulate.

Einstein used the equations x² + y² + z² − c²t² = 0 and ξ² + η² + ζ² − c²τ² = 0 to describe the two spheres that the observers see of the evolution of the same light pulse. The above equation of the sphere is mathematically wrong. Since x² + y² = 0 describes a circle, x² + y² − c² = 0 describes a sphere with the z-axis zero, and x² + y² − c²t² = 0 describes a circle that evolves in time. Multiplying, and not adding, another factor z² would transform a two-dimensional circle (representing an area) into a three-dimensional sphere (a volume). Both the equations mentioned by Einstein can at best describe two spheres with origin at (0,0,0) and the points (x, y, z) and (ξ, η, ζ) on the circumferences of the respective spheres. Since the second person is moving away from the origin, the second equation is not relevant in his case (he is there). Assuming he sees the other sphere, he should know its origin (because he has already seen it; otherwise he would not know that it is the same light pulse, and there would be no way to relate the two pulses) and its present location. In other words, he will measure the same radius as the other person, implying c²t² = c²τ², or t = τ.
Again, if x² + y² + z² − c²t² = x′² + y′² + z′² − c²τ², then t ≠ τ.
This creates a contradiction, which invalidates his mathematics.

Since space is not empty and the local density of space can vary, light emitted from a source moves at constant velocity due to inertia, irrespective of the motion of the body; but such velocity is not a universal constant, as it depends on the local density of space. This is proved by the bending of light while passing near big stars. It is not due to relativistic effects, but due to refraction. We have seen how a glass rod immersed in water appears to bend because of the relative densities of water and air. Similarly, since most of the mass near a star is concentrated in one area, the local density of space near that area is higher than that of far-off places. This variation causes different density gradients, which bend the light rays near the star.
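The glass-rod illustration follows Snell's law, n₁ sin θ₁ = n₂ sin θ₂. A minimal sketch with textbook indices for air and water (assumed values, and an arbitrary 45° incidence angle):

from math import asin, degrees, radians, sin

n_air, n_water = 1.000293, 1.333      # assumed textbook refractive indices

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2)
theta_air = radians(45.0)             # ray hits the water surface at 45 degrees
theta_water = asin(n_air * sin(theta_air) / n_water)
print(f"45.0 deg in air bends to {degrees(theta_water):.1f} deg in water")
# ~32.0 deg: this abrupt change of direction is what makes the rod appear bent.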

Relativity is an operational concept, not an existential concept. The equations apply to data and not to particles. When we approach a mountain from a distance, its volume appears to increase. The visual perception of volume (the scaling up of the angle of the incoming radiation) changes at a particular rate; but there is no such impact on the mountain. It exists as it was. The same principle applies to the perception of objects with high velocities. The changing volume is perceived at different times depending upon our relative velocity: if we move fast, it appears earlier; if we move slowly, it appears later. Our differential perception is related to the changing angles of radiation and not to changing states of the object. It does not apply to locality. Thus Galilean relativity is real, and the Lorentz transformation is apparent to the observer only. Einstein's assertion that the clash between Lorentz invariance and the Galilean invariance of Newtonian mechanics was inconsistent with the physical principle of relativity is misplaced and wrong.

CONCLUSION:

Thus, it is clear that simultaneity - the notion of “now” - is not relative, the universal clock is not a fiction, and time is not a proxy for the movement and change of objects in the universe - it is the rate of change in objects. It is not true that two events are truly simultaneous only if they are causally related - unless we assign that cause to the application of energy. However, since the application of energy at one position on one object cannot generate an action (event) at another position involving another object, the two events cannot be causally related.

Einstein wrongly assigned several length and time variables in SR, giving them to the wrong coordinate systems or to no specific coordinate system. He skipped an entire coordinate system, achieving two degrees of relativity when he thought he had achieved only one. Because his x and t transformations were compromised, his velocity transformations were also compromised. He carried this error into the mass transformations, which infected them as well. This problem then infected the tensor calculus and GR. This explains the various anomalies, variations and so-called violations within relativity. Since Einstein's field equations are not correct, Schwarzschild's solution of 1916 is not correct. Israel's non-rotating solution is not correct. Kerr's rotating solution is not correct. And the solutions of Penrose, Wheeler, Hawking, Carter and Robinson are not correct. The three Friedmann models of the universe and the equation-of-state parameter are not correct. The so-called expansion of the universe only at galactic scales and not at lesser scales is actually temporary and will be reversed in future, as the galactic clusters rotate around a common center like the planets around the Sun. The concepts of dark matter and dark energy are not correct, because energy is perceived only through its interactions; hence it cannot be dark. The smoothness and persistence indicate a background structure, which is what it is.

“Lorentz invariance” is the symmetry of SR. It is limited to space-time coordinate systems related to each other by uniform relative motion only - “inertial frames”. General covariance extends Lorentz invariance and treats it as a local property of GR. The EP deals with the equivalence of gravitational and inertial mass. We have shown that both general covariance and the EP are wrong descriptions of reality. Thus we have solved one paradox. In the next paper, we will discuss the macro representation of entanglement and the mathematics that leads to the singularity and the event horizon. We will also explain gravity, and discuss the misconceptions about dark matter and dark energy, to show their true nature.