A survey covering 1995 to 2010 has shown that American children are increasingly being born underweight. It is not due to malnutrition, as the mothers were found not to smoke and were, if anything, overweight. Nor is it due to cesarean operations, which shorten the gestation period. The researchers dubbed it a mystery, as they could not find any explanation for the phenomenon. But it can be explained easily if it is not seen in isolation; the pattern should be viewed in the wider global context. Indian and Chinese children are being born increasingly taller – a trend seen earlier in America. Germans are becoming overweight. The European birth rate has declined. Worldwide there is a general weakening of the immune system. All of these are linked like the individual pieces of a jigsaw puzzle. To understand them we have to widen the base further, to plant life.
Plants, like human beings, have a life cycle of their own; the similarities between plants and animals, including human beings, need not be recounted. Earlier, fruits, pulses and vegetables grew smaller in size but stayed fresh for longer. Gradually, with the use of non-biological chemical fertilizers, the plants grew bigger and started giving bigger fruits of lower quality (at least in taste). The fruits also gradually started to shrink and to perish at a much faster rate. If the fertilizers are stopped, reduced or merely maintained at a fixed level, the plants and the produce grow smaller over a period of time. Genetic tampering and the use of pesticides bring their own problems: without this intoxicating poison, the plants become infertile. This is the universal pattern.
Since we take these fruits, pulses and vegetables directly, it affects us also, though at a slightly slower rate, because unlike in the plants, the non-biological chemical fertilizers and intoxicating substances enter our system only after being processed by the plant. In India, the toxic content is the yardstick for classifying food into vegetarian and non-vegetarian. Our body maintains a constant temperature of about 37–38° centigrade. It takes about 5 to 6 hours for the body to digest food. Thereafter the juices are processed and the waste goes to a separate sack. If the juices come from unspoiled food, they nourish our system. If they come from spoiled food, they create conditions for disease. Vegetarian food is necessary for survival. This is borne out by the fact that vegetarian animals are bigger, stronger and live longer, while non-vegetarian animals are smaller, more agile (they spend energy quickly and hence) live shorter lives. Even then the non-vegetarian animals prey on animals that are primarily vegetarian. Any food in an edible condition (broken up or cut) that can remain unspoiled for at least 6 hours at a temperature of 37–38° centigrade is described as vegetarian and the rest is non-vegetarian. Thus, the milk of pure-bred cows (not Jersey), which is an animal product, is classified as vegetarian, while onion or garlic, which are plant produce, are classified as non-vegetarian. Genetically modified food, and produce modified by the use of non-biological chemical fertilizers and intoxicating and poisonous substances, are non-vegetarian food, which harms our system.
The rule is that Nature produces sufficient food (including food with medicinal properties needed to maintain hygienic balance in any area) required by the living beings there at any time. We only have to know the produce and consume the required quantity without wasting it unnecessarily. We destroy this balance by tampering with the natural system. For example, the different seasonal fruits and vegetables of a particular place are suited to provide nourishment and keep the immune system functional there; they may not be suitable for people in a different geo-climatic condition. According to the Indian system of medicine, we should consume only seasonal fruits and vegetables, as Nature generates these to counter the negative effects of the seasonal change for a particular climatic condition. Thus, transportation of farm produce to another place with a different geo-climatic condition is prohibited; if we want to consume it, we should go to that land. Similarly, un-seasonal food is prohibited, as it introduces into our body different chemicals that are not really wanted and could be harmful. According to the Indian system of medicine, the root cause of diabetes can be ascribed to the use of freezers and air-conditioners in summer; increased blood-pressure and eye-disease problems can be ascribed to it indirectly. Unfortunately, our greed and false vanity have guided our activities, and even science is now no longer a pursuit of knowledge but is commercially guided. Thus, the same impact that is seen in the plant kingdom is surfacing among the human race. Gradually it will lead to a world-wide epidemic. Global warming is Nature's way of tiding over the imbalance created by us. Irrespective of what we do, it is bound to increase. Cutting down emission of green-house gases alone is not going to solve the problem. Besides, there is an extra-terrestrial angle to it.
According to an ancient Indian prophecy, ground water will cease to be pure from a date coinciding roughly with the beginning of the twentieth century (Kali era 5000). That is the time when the average height of living beings starts reducing; it may go down to 4 feet on average. Gradually people will move to and stay underground, as the climatic conditions will become unbearable. By 6900 AD (Kali era 10000), the human race in its present form will be extinct. We are heading in that direction.
Basudeba.
Sunday, February 28, 2010
Wednesday, February 10, 2010
OVERCOMING "SCIENTIFIC" SUPERSTITION
“It is easy to explain something to a layman. It is easier to explain the same thing to an expert. But even the most knowledgeable person cannot explain something to one who has limited half-baked knowledge.” ------------- (Hitopadesha).
“To my mind there must be, at the bottom of it all, not an equation, but an utterly simple idea. And to me that idea, when we finally discover it, will be so compelling, so inevitable, that we will say to one another: ‘Oh, how wonderful! How could it have been otherwise.’” ----------- (John Wheeler).
“All these fifty years of conscious brooding have brought me no nearer to the answer to the question, 'What are light quanta?' Nowadays every Tom, Dick and Harry thinks he knows it, but he is mistaken”. --------------- Einstein, 1954
The twentieth century was a marvel of technological advancement. But except for its first quarter, theoretical physics advanced little that is worth writing about. The principle of mass-energy equivalence, which is treated as the cornerstone principle of all nuclear interactions, binding energies of atoms and nucleons, etc., enters physics only as a corollary of the transformation equations between frames of reference in relative motion. Quantum Mechanics (QM) cannot justify this equivalence principle on its own, even though it is the theory concerned with the energy exchanges and interactions of fundamental particles. Quantum Field Theory (QFT) is the extension of QM (dealing with particles) over to fields. In spite of the reported advancements in QFT, there is very little experimental proof to back up many of its postulates, including the Higgs mechanism, bare mass/charge, infinite charge, etc. It seems almost impossible to think of QFT without thinking of particles which are accelerated and scattered in colliders. But interestingly, the particle interpretation has the best arguments against QFT. Till recently, the Big Bang hypothesis held the center stage in cosmology. Now Loop Quantum Cosmology (LQC), with its postulate of the “Big Bounce”, is taking over. Yet there are two distinctly divergent streams of thought on this subject also. The confusion surrounding the interpretation of quantum physics is further compounded by the modern proponents, who often search historical documents of discarded theories and come up with new meanings to back up their own theories. For example, the cosmological constant, first proposed and subsequently rejected by Einstein as the greatest blunder of his life, has made a comeback in cosmology. Bohr’s complementarity principle, originally central to his vision of quantum particles, has been reduced to a corollary and is often identified with the frameworks in Consistent Histories.
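For reference, the “corollary” character referred to here can be seen in the standard special-relativistic expression for the energy of a body (textbook form, quoted only as a reminder):

E = \gamma m c^2 = \frac{m c^2}{\sqrt{1 - v^2/c^2}} \approx m c^2 + \tfrac{1}{2} m v^2 + \cdots

so the rest energy mc² appears as the velocity-independent term of a purely kinematic expansion, not as something quantum mechanics derives on its own.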
There are a large number of different approaches or formulations of the foundations of Quantum Mechanics: Heisenberg’s Matrix Formulation, Schrödinger’s Wave-function Formulation, Feynman’s Path Integral Formulation, the Second Quantization Formulation, Wigner’s Phase Space Formulation, the Density Matrix Formulation, Schwinger’s Variational Formulation, the de Broglie-Bohm Pilot Wave Formulation, the Hamilton-Jacobi Formulation, etc. There are several quantum mechanical pictures based on where the time-dependence is placed: the Schrödinger Picture (time-dependent wave-functions), the Heisenberg Picture (time-dependent operators) and the Interaction Picture (time-dependence split between the two). The different approaches are, in fact, modifications of the theory. Each one introduces some prominent new theoretical aspect with new equations, which needs to be interpreted or explained. Thus, there are many different interpretations of Quantum Mechanics, which are very difficult to characterize. Prominent among them are: the Realistic Interpretation (the wave-function describes reality), the Positivistic Interpretation (the wave-function contains only information about reality), and the famous Copenhagen Interpretation, which is the orthodox interpretation. Then there are Bohm’s Causal Interpretation, Everett’s Many Worlds Interpretation, Mermin’s Ithaca Interpretation, etc. With so many contradictory views, quantum physics is not a coherent theory, but truly weird.
General relativity breaks down when gravity is very strong: for example, when describing the big bang or the heart of a black hole. And the standard model has to be stretched to breaking point to account for the masses of the universe’s fundamental particles. The two main theories, quantum theory and relativity, are also incompatible, having entirely different notions of such things as the concept of time. The incompatibility of quantum theory and relativity has made it difficult to unite the two in a single “Theory of Everything”. There is an almost infinite number of candidate “Theories of Everything” or “Grand Unified Theories”, but none of them is free from contradictions. There is a vertical split between those pursuing the superstrings route and others who follow the little Higgs route.
String theory, which was developed with a view to harmonize General Relativity with Quantum theory, is said to be a high order theory where other models, such as supergravity and quantum gravity appear as approximations. Unlike super-gravity, string theory is said to be a consistent and well-defined theory of quantum gravity, and therefore calculating the value of the cosmological constant from it should, at least in principle, be possible. On the other hand, the number of vacuum states associated with it seems to be quite large, and none of these features three large spatial dimensions, broken super-symmetry, and a small cosmological constant. The features of string theory which are at least potentially testable - such as the existence of super-symmetry and cosmic strings - are not specific to string theory. In addition, the features that are specific to string theory - the existence of strings - either do not lead to precise predictions or lead to predictions that are impossible to test with current levels of technology.
There are many unexplained questions relating to strings. For example, given the measurement problem of quantum mechanics, what happens when a string is measured? Does the uncertainty principle apply to the whole string? Or does it apply only to some section of the string being measured? Does string theory modify the uncertainty principle? If we measure its position, do we get only the average position of the string? If the position of a string is measured with arbitrarily high accuracy, what happens to the momentum of the string? Does the momentum become undefined, as opposed to simply unknown? What about the location of an end-point? If the measurement returns an end-point, then which end-point? Does the measurement return the position of some point along the string? (The string is said to be a one-dimensional object extended in space, tracing out a two-dimensional world-sheet. Hence its position cannot be described by a finite set of numbers and thus cannot be described by a finite set of measurements.) How do Bell’s inequalities apply to string theory? We must get answers to these questions before we probe further and spend (waste!) more money on such research. These questions should not be swept under the carpet as inconvenient, or on the ground that some day we will find the answers. That “someday” has been a very long period indeed!
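On the narrower question of whether string theory modifies the uncertainty principle, a heuristic “generalized uncertainty principle” is often quoted in the string literature (the exact coefficients are model-dependent, so treat this as indicative rather than a settled prediction):

\Delta x \;\gtrsim\; \frac{\hbar}{\Delta p} + \alpha' \frac{\Delta p}{\hbar}

where α′ is the square of the string length. It implies a minimum resolvable length of order √α′, but it does not by itself answer the other questions listed above.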
The energy “uncertainty” introduced in quantum theory combines with the mass-energy equivalence of special relativity to allow the creation of particle/anti-particle pairs by quantum fluctuations when the theories are merged. As a result there is no self-consistent theory which generalizes the simple, one-particle Schrödinger equation into a relativistic quantum wave equation. Quantum Electro-Dynamics began not with a single relativistic particle, but with a relativistic classical field theory, such as Maxwell’s theory of electromagnetism. This classical field theory was then “quantized” in the usual way and the resulting quantum field theory is claimed to be a combination of quantum mechanics and relativity. However, this theory is inherently a many-body theory with the quanta of the normal modes of the classical field having all the properties of physical particles. The resulting many-particle theory can be relatively easily handled if the particles are heavy on the energy scale of interest or if the underlying field theory is essentially linear. Such is the case for atomic physics where the electron-volt energy scale for atomic binding is about a million times smaller than the energy required to create an electron positron pair and where the Maxwell theory of the photon field is essentially linear.
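For concreteness, the one-particle equations at issue are, in their standard textbook forms, the non-relativistic Schrödinger equation and its naive relativistic generalization, the Klein-Gordon equation, whose negative-energy solutions are what push one towards a many-body field theory:

i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi
\qquad\text{versus}\qquad
\left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 + \frac{m^2 c^2}{\hbar^2}\right)\psi = 0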
However, the situation is completely reversed for the theory of the quarks and gluons that compose the strongly interacting particles in the atomic nucleus. While the natural energy scale of these particles, the proton, the ρ meson, etc., is on the order of hundreds of millions of electron volts, the quark masses are about one hundred times smaller. Likewise, the gluons are quanta of a Yang-Mills field which obeys highly non-linear field equations. As a result, strong interaction physics has no known analytical approach, and numerical methods are said to be the only possibility for making predictions from first principles and developing a fundamental understanding of the theory. This theory of the strongly interacting particles is called quantum chromodynamics or QCD, where the non-linearities in the theory have dramatic physical effects. One coherent, non-linear effect of the gluons is to “confine” both the quarks and gluons so that none of these particles can be found directly as excitations of the vacuum. Likewise, a continuous “chiral symmetry”, normally exhibited by a theory of light quarks, is broken by the condensation of chirally oriented quark/anti-quark pairs in the vacuum. The resulting physics of QCD is thus entirely different from what one would expect from the underlying theory, with the interaction effects having a dominant influence.
It is known that the much-celebrated Standard Model of particle physics is incomplete, as it relies on certain arbitrarily determined constants as inputs – as “givens”. Newer formulations of physics such as superstring theory and M-theory do allow mechanisms by which these constants can arise from the underlying model. However, the problem with these theories is that they postulate the existence of extra dimensions that are said to be either “extra-large” or “compactified” down to the Planck length, where they have no impact on the visible world we live in. In other words, we are told to believe blindly that extra dimensions must exist, but on a scale that we cannot observe. The existence of these extra dimensions has not been proved. However, they are postulated not to be fixed in size. Thus, the ratio between the compactified dimensions and our normal four space-time dimensions could cause some of the fundamental constants to change! If this could happen, then it might lead to physics that is in contradiction with the universe we observe.
The concept of “absolute simultaneity” – an off-shoot of quantum entanglement and non-locality – poses the gravest challenge to Special Relativity. But here also, a different interpretation is possible for the double-slit experiment, Bell’s inequality, entanglement and decoherence, which can strip them of their mystic character. The Ives-Stilwell experiment, conducted by Herbert E. Ives and G. R. Stilwell in 1938, is considered to be one of the fundamental tests of the special theory of relativity. The experiment was intended to use a primarily longitudinal test of light-wave propagation to detect and quantify the effect of time dilation on the relativistic Doppler effect of light waves received from a moving source. It also intended to indirectly verify and quantify the more difficult-to-detect transverse Doppler effect associated with detection at a substantial angle to the path of motion of the source - specifically the effect associated with detection at a 90° angle to the path of motion of the source. In both respects it is believed that a longitudinal test can be used to indirectly verify an effect that actually occurs at a 90° transverse angle to the path of motion of the source.
Based on recent theoretical findings on the relativistic transverse Doppler effect, some scientists have shown that such a comparison between longitudinal and transverse effects is fundamentally flawed and thus invalid, because it assumes compatibility between two different mathematical treatments. The experiment was designed to detect the predicted time-dilation-related red-shift effect (increase in wave-length with corresponding decrease in frequency) of special relativity at the fundamentally longitudinal angles at or near 0° and 180°, even though the time dilation effect is based on the transverse angle of 90°. Thus, the results of the said experiment do not prove anything. More specifically, it can be shown that the mathematical treatment of special relativity for the transverse Doppler effect is invalid and thus incompatible with the longitudinal mathematical treatment at distances close to the moving source. Any direct comparisons between the longitudinal and transverse mathematical predictions under the specified conditions of the experiment are invalid.
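For reference, the standard formulas being compared are (textbook forms, with β = v/c and θ the reception angle in the observer's frame):

f_{\text{obs}} = \frac{f_0\,\sqrt{1-\beta^2}}{1-\beta\cos\theta}

At θ = 0° or 180° the first-order Doppler term dominates; at θ = 90° only the √(1−β²) time-dilation factor remains. Ives and Stilwell inferred the transverse factor indirectly, by averaging the blue- and red-shifted longitudinal lines so that the first-order shift cancels; whether that inference is legitimate is exactly what the objection above disputes.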
Cosmic rays are particles - mostly protons but sometimes heavy atomic nuclei - that travel through the universe at close to the speed of light. Some cosmic rays detected on Earth are produced in violent events such as supernovae, but physicists still don’t know the origins of the highest-energy particles, which are the most energetic particles ever seen in nature. As cosmic-ray particles travel through space, they lose energy in collisions with the low-energy photons that pervade the universe, such as those of the cosmic microwave background radiation. The special theory of relativity dictates that any cosmic rays reaching Earth from a source outside our galaxy will have suffered so many energy-shedding collisions that their maximum possible energy cannot exceed 5 × 10^19 electron-volts. This is known as the Greisen-Zatsepin-Kuzmin (GZK) limit. Over the past decade, the University of Tokyo’s Akeno Giant Air Shower Array (AGASA), an arrangement of 111 particle detectors, has detected several cosmic rays above the GZK limit. In theory, they could only have come from within our galaxy, avoiding an energy-sapping journey across the cosmos. However, astronomers cannot find any source for these cosmic rays in our galaxy. One possibility is that there is something wrong with the observed results. Another possibility is that Einstein was wrong. His special theory of relativity says that space is the same in all directions, but what if particles found it easier to move in certain directions? Then the cosmic rays could retain more of their energy, allowing them to beat the GZK limit. A recent report (Physics Letters B, Vol. 668, p. 253) suggests that the fabric of space-time is not as smooth as Einstein and others have predicted.
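The order of magnitude of the GZK figure can be checked with a rough, head-on photo-pion-production threshold estimate. The Python sketch below is illustrative only: it ignores the spread of the blackbody photon spectrum and of collision angles, which is why it lands somewhat above the quoted 5 × 10^19 eV.

    # Rough head-on threshold for photo-pion production, p + gamma_CMB -> p + pi0.
    # Order-of-magnitude sketch only: the published GZK figure (~5e19 eV) is lower because
    # it averages over the blackbody photon spectrum and over collision angles.
    m_p   = 938.272e6        # proton rest energy, eV
    m_pi  = 134.977e6        # neutral pion rest energy, eV
    kT    = 2.35e-4          # k_B times 2.725 K, eV
    E_gam = 2.70 * kT        # mean CMB photon energy, eV

    # Threshold from s = m_p^2 + 4*E_p*E_gam >= (m_p + m_pi)^2 for a head-on collision
    E_p_threshold = (2 * m_p * m_pi + m_pi**2) / (4 * E_gam)
    print(f"rough threshold: {E_p_threshold:.1e} eV")    # roughly 1e20 eV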
During 1919, Eddington started his much-publicised eclipse expedition to observe the bending of light by a massive object (here the Sun) in order to verify the correctness of General Relativity. The experiment in question concerned the problem of whether light rays are deflected by gravitational forces, and took the form of astrometric observations of the positions of stars near the Sun during a total solar eclipse. The consequence of Eddington’s theory-led attitude to the experiment, along with alleged data fudging, was claimed to favor Einstein’s theory over Newton’s, when in fact the data supported no such strong construction. In reality, both the predictions were based on Einstein’s own calculations, made in 1908 and again in 1911 using Newton’s theory of gravitation. In 1911, Einstein wrote: “A ray of light going past the Sun would accordingly undergo deflection to an amount of 4·10⁻⁶ = 0.83 seconds of arc”. He never clearly explained which fundamental principle of physics used in that paper, which gave the value of 0.83 seconds of arc (dubbed the half deflection), was wrong. He revised his calculation in 1916 to hold that light coming from a star far away from the Earth and passing near the Sun will be deflected by the Sun’s gravitational field by an amount that is inversely proportional to the star’s radial distance from the Sun (1.745” at the Sun’s limb - dubbed the full deflection). Einstein never explained why he revised his earlier figures. Eddington set out to test which of the two values calculated by Einstein was correct.
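The two disputed values can be reproduced from the standard grazing-ray formulas 2GM/(c²R) and 4GM/(c²R). A quick numerical check in Python (using modern solar constants, which is why the “half deflection” comes out near 0.87″ rather than Einstein’s 1911 figure of 0.83″, obtained with slightly different solar data):

    import math

    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    M = 1.989e30             # solar mass, kg
    c = 2.998e8              # speed of light, m/s
    R = 6.957e8              # solar radius, m (ray grazing the limb)
    rad_to_arcsec = 180 / math.pi * 3600

    half = 2 * G * M / (c**2 * R) * rad_to_arcsec    # 1911 "Newtonian" value
    full = 4 * G * M / (c**2 * R) * rad_to_arcsec    # 1915/16 general-relativistic value
    print(f"{half:.2f}  {full:.2f}")                 # about 0.88 and 1.75 arcseconds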
Specifically, it has been alleged that a sort of data fudging took place when Eddington decided to reject the plates taken by the one instrument (the Greenwich Observatory’s Astrographic lens, used at Sobral) whose results tended to support the alternative “Newtonian” prediction of light bending (as calculated by Einstein). Instead, the data from the inferior (because of cloud cover) plates taken by Eddington himself at Principe and from the inferior (because of a reduced field of view) 4-inch lens used at Sobral were promoted as confirming the theory. While he claimed that the result proved Einstein right and Newton wrong, an objective analysis of the actual photographs shows no such clear-cut result; both theories are consistent with the data obtained. It may be recalled that when someone remarked that there were only two persons in the world besides Einstein who understood relativity, Eddington had replied that he did not know who the other person was. This arrogance clouded his scientific acumen, as was confirmed by his distaste for the theories of Dr. S. Chandrasekhar, which subsequently won Chandrasekhar the Nobel Prize.
Heisenberg’s Uncertainty relation is still a postulate, though many of its predictions have been verified and found to be correct. Heisenberg never called it a principle. Eddington was the first to call it a principle and others followed him. But as Karl Popper pointed out, uncertainty relations cannot be granted the status of a principle because theories are derived from principles, but uncertainty relation does not lead to any theory. We can never derive an equation like the Schrödinger equation or the commutation relation from the uncertainty relation, which is an inequality. Einstein’s distinction between “constructive theories” and “principle theories” does not help, because this classification is not a scientific classification. Serious attempts to build up quantum theory as a full fledged Theory of Principle on the basis of the uncertainty relation have never been carried out. At best it can be said that Heisenberg created “room” or “freedom” for the introduction of some non-classical mode of description of experimental data. But these do not uniquely lead to the formalism of quantum mechanics.
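For reference, in the standard formalism the logical direction indeed runs the other way: the uncertainty relation is derived from the commutator (the Robertson inequality, quoted here in textbook form), not vice versa:

\Delta A\,\Delta B \;\geq\; \tfrac{1}{2}\bigl|\langle[\hat A,\hat B]\rangle\bigr|, \qquad [\hat x,\hat p]=i\hbar \;\Rightarrow\; \Delta x\,\Delta p \geq \tfrac{\hbar}{2}

which supports the point that no theory is derived from the inequality itself.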
There is a plethora of other postulates in Quantum Mechanics, such as the Operator postulate, the Hermitian property postulate, the Basis set postulate, the Expectation value postulate, the Time evolution postulate, etc. The list goes on and on and includes such undiscovered entities as strings and such exotic particles as the Higgs particle (dubbed the “God particle”) and the graviton, not to speak of squarks et al. Yet, till now it is not clear what quantum mechanics is about. What does it describe? It is said that a quantum mechanical system is completely described by its wave-function. From this it would appear that quantum mechanics is fundamentally about the behavior of wave-functions. But do the scientists really believe that wave-functions describe reality? Even Schrödinger, the founder of the wave-function, found this impossible to believe! He writes (Schrödinger 1935): “That it is an abstract, unintuitive mathematical construct is a scruple that almost always surfaces against new aids to thought and that carries no great message”. Rather, he was worried about the “blurring” suggested by the spread-out character of the wave-function, which he says “affects macroscopically tangible and visible things, for which the term ‘blurring’ seems simply wrong”.
Schrödinger goes on to note that it may happen in radioactive decay that “the emerging particle is described … as a spherical wave … that impinges continuously on a surrounding luminescent screen over its full expanse. The screen however, does not show a more or less constant uniform surface glow, but rather lights up at one instant at one spot …”. He observed further that one can easily arrange, for example by including a cat in the system, “quite ridiculous cases” with the ψ-function of the entire system having in it the living and the dead cat mixed or smeared out in equal parts. Resorting to epistemology cannot save such doctrines.
The situation was further complicated by Bohr’s interpretation of quantum mechanics. But how many scientists truly believe in his interpretation? Apart from the issues relating to the observer and observation, it is usually believed to address the measurement problem. Quantum mechanics is fundamentally about micro-particles such as quarks and strings, etc., and not about the macroscopic regularities associated with the measurement of their various properties. But if these entities are somehow not to be identified with the wave-function itself, and if the description is not about measurements, then where is their place in the quantum description? Where is the quantum description of the objects that quantum mechanics should be describing? This question has led to the issues raised in the EPR argument. As we will see, this question has not been settled satisfactorily.
The formulations of quantum mechanics describe the deterministic unitary evolution of a wave-function. This wave-function is never observed experimentally. The wave-function allows computation of the probability of certain macroscopic events being observed. However, there are no events, and no mechanism for creating events, in the mathematical model. It is this dichotomy between the wave-function model and observed macroscopic events that is the source of the various interpretations of quantum mechanics. In classical physics, the mathematical model relates to the objects we observe. In quantum mechanics, the mathematical model by itself never produces an observation. We must interpret the wave-function in order to relate it to experimental observation. Often these interpretations reflect the personal and socio-cultural bias of the scientist, which gets weightage based on his standing in the community. Thus, the arguments of Einstein against Bohr’s position have roots in Lockean notions of perception, which oppose the Kantian metaphor of the “veil of perception” that pictures the apparatus of observation as a pair of spectacles through which a highly mediated sight of the world can be glimpsed. According to Kant, “appearances” simply do not reflect an independently existing reality. They are constituted through the act of perception in such a way as to conform them to the fundamental categories of sensible intuition. Bohr maintained that “measurement has an essential influence on the conditions on which the very definition of physical quantities in question rests” (Bohr 1935, 1025).
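The dichotomy described here can be stated compactly in standard notation (quoted only for reference, not as an endorsement of any interpretation): the model evolves the state deterministically, while probabilities for observed outcomes enter through a separate rule applied only at “measurement”:

|\psi(t)\rangle = e^{-i\hat H t/\hbar}\,|\psi(0)\rangle \quad\text{(deterministic, unitary)}, \qquad P(a) = \bigl|\langle a|\psi(t)\rangle\bigr|^2 \quad\text{(Born rule)}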
In modern science, there is no unambiguous and precise definition of the words time, space, dimension, numbers, zero, infinity, charge, quantum particle, wave-function etc. The operational definitions have been changed from time to time to take into account newer facts that facilitate justification of the new “theory”. For example, the fundamental concept of the quantum mechanical theory is the concept of “state”, which is supposed to be completely characterized by the wave-function. However, till now it is not certain “what” a wave-function is. Is the wave-function real - a concrete physical object or is it something like a law of motion or an internal property of particles or a relation among spatial points? Or is it merely our current information about the particles? Quantum mechanical wave-functions cannot be represented mathematically in anything smaller than a 10 or 11 dimensional space called configuration space. This is contrary to experience and the existence of higher dimensions is still in the realm of speculation. If we accept the views of modern physicists, then we have to accept that the universe’s history plays itself out not in the three dimensional space of our everyday experience or the four-dimensional space-time of Special Relativity, but rather in this gigantic configuration space, out of which the illusion of three-dimensionality somehow emerges. Thus, what we see and experience is illusory! Maya?
The measurement problem in quantum mechanics is the unresolved problem of how (or if) wave-function collapse occurs. The inability to observe this process directly has given rise to different interpretations of quantum mechanics, and poses a key set of questions that each interpretation must answer. If it is postulated that a particle does not have a value before measurement, there has to be conclusive evidence to support this view. The wave-function in quantum mechanics evolves according to the Schrödinger equation into a linear superposition of different states, but actual measurements always find the physical system in a definite state. Any future evolution is based on the state the system was “discovered” to be in when the measurement was made, implying that the measurement “did something” to the process under examination. Whatever that “something” may be does not appear to be explained by the basic theory. Further, quantum systems described by linear wave-functions should be incapable of non-linear behavior. But chaotic quantum systems have been observed. Though chaos appears to be probabilistic, it is actually deterministic. Further, if the collapse causes the quantum state to jump from superposition of states to a fixed state, it must be either an illusion or an approximation to the reality at quantum level. We can rule out illusion as it is contrary to experience. In that case, there is nothing to suggest that events in quantum level are not deterministic. We may very well be able to determine the outcome of a quantum measurement provided we set up an appropriate measuring device!
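A minimal toy sketch in Python (illustrative only, not a model of any real apparatus or of the collapse mechanism itself) of the tension just described: the state is a single definite superposition, yet every simulated “measurement” returns one definite outcome, with frequencies fixed by the Born rule.

    import numpy as np

    # A qubit in the superposition |psi> = a|0> + b|1> (arbitrary illustrative amplitudes)
    a, b = 1 / np.sqrt(3), np.sqrt(2 / 3)
    probs = [abs(a)**2, abs(b)**2]                        # Born-rule probabilities, ~[0.33, 0.67]

    rng = np.random.default_rng(0)
    outcomes = rng.choice([0, 1], size=10_000, p=probs)   # each trial yields ONE definite result

    print("relative frequencies:", np.bincount(outcomes) / outcomes.size)
    # The superposition itself is never seen; only definite outcomes with these frequencies.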
The operational definitions and the treatment of the term wave-function used by researchers in quantum theory progressed through intermediate stages. Schrödinger viewed the wave-function associated with the electron as the charge density of an object smeared out over an extended (possibly infinite) volume of space. He did not regard the waveform as real nor did he make any comment on the waveform collapse. Max Born interpreted it as the probability distribution in the space of the electron’s position. He differed from Bohr in describing quantum systems as being in a state described by a wave-function which lives longer than any specific experiment. He considered the waveform as an element of reality. According to this view, also known as State Vector Interpretation, measurement implied the collapse of the wave function. Once a measurement is made, the wave-function ceases to be smeared out over an extended volume of space and the range of possibilities collapse to the known value. However, the nature of the waveform collapse is problematic and the equations of Quantum Mechanics do not cover the collapse itself.
The view known as “Consciousness Causes Collapse” regards measuring devices also as quantum systems for consistency. The measuring device changes state when a measurement is made, but its wave-function does not collapse. The collapse of the wave-function can be traced back to its interaction with a conscious observer. Let us take the example of measurement of the position of an electron. The waveform does not collapse when the measuring device initially measures the position of the electron. Human eye can also be considered a quantum system. Thus, the waveform does not collapse when the photon from the electron interacts with the eye. The resulting chemical signals to the brain can also be treated as a quantum system. Hence it is not responsible for the collapse of the wave-form. However, a conscious observer always sees a particular outcome. The wave-form collapse can be traced back to its first interaction with the consciousness of the observer. This begs the question: what is consciousness? At which stage in the above sequence of events did the wave-form collapse? Did the universe behave differently before life evolved? If so, how and what is the proof for that assumption? No answers.
Many-worlds Interpretation tries to overcome the measurement problem in a different way. It regards all possible outcomes of measurement as “really happening”, but holds that somehow we select only one of those realities (or in their words - universes). But this view clashes with the second law of thermodynamics. The direction of the thermodynamic arrow of time is defined by the special initial conditions of the universe which provides a natural solution to the question of why entropy increases in the forward direction of time. But what is the cause of the time asymmetry in the Many-worlds Interpretation? Why do universes split in the forward time direction? It is said that entropy increases after each universe-branching operation – the resultant universes are slightly more disordered. But some interpretations of decoherence contradict this view. This is called macroscopic quantum coherence. If particles can be isolated from the environment, we can view multiple interference superposition terms as a physical reality in this universe. For example, let us consider the case of the electric current being made to flow in opposite directions. If the interference terms had really escaped to a parallel universe, then we should never be able to observe them both as physical reality in this universe. Thus, this view is questionable.
Transactional Interpretation accepts the statistical nature of waveform, but breaks it into an “offer” wave and an “acceptance” wave, both of which are treated as real. Probabilities are assigned to the likelihood of interaction of the offer waves with other particles. If a particle interacts with the offer wave, then it “returns” a confirmation wave to complete the transaction. Once the transaction is complete, energy, momentum, etc., are transferred in quanta as per the normal probabilistic quantum mechanics. Since Nature always takes the shortest and the simplest path, the transaction is expected to be completed at the first opportunity. But once that happens, classical probability and not quantum probability will apply. Further, it cannot explain how virtual particles interact. Thus, some people defer the waveform collapse to some unknown time. Since the confirmation wave in this theory is smeared all over space, it cannot explain when the transaction begins or is completed and how the confirmation wave determines which offer wave it matches up to.
Quantum decoherence, which was proposed in the context of the many-worlds interpretation, but has also become an important part of some modern updates of the Copenhagen interpretation based on consistent histories, allows physicists to identify the fuzzy boundary between the quantum micro-world and the world where the classical intuition is applicable. But it does not describe the actual process of the wave-function collapse. It only explains the conversion of the quantum probabilities (that are able to interfere) to the ordinary classical probabilities. Some people have tried to reformulate quantum mechanics as probability or logic theories. In some theories, the requirements for probability values to be real numbers have been relaxed. The resulting non-real probabilities correspond to quantum waveform. But till now a fully developed theory is missing.
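What “conversion of quantum probabilities into ordinary classical probabilities” means can be indicated schematically (standard decoherence notation, quoted only for reference): interaction with the environment suppresses the interference (off-diagonal) terms of the reduced density matrix,

\rho = \begin{pmatrix} |\alpha|^2 & \alpha\beta^* \\ \alpha^*\beta & |\beta|^2 \end{pmatrix} \;\longrightarrow\; \begin{pmatrix} |\alpha|^2 & 0 \\ 0 & |\beta|^2 \end{pmatrix}

leaving an ordinary probability mixture but, as noted above, saying nothing about which single outcome is actually realized.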
Hidden Variables Theories treat quantum mechanics as incomplete. Until a more sophisticated theory underlying quantum mechanics is discovered, it is not possible to make any definitive statement. This view regards quantum objects as having properties with well-defined values that exist separately from any measuring devices. According to this view, chance plays no role at all and everything is fully deterministic. Every material object invariably does occupy some particular region of space. This theory takes the form of a single set of basic physical laws that apply in exactly the same way to every physical object that exists. The waveform may be a purely statistical creation or it may have some physical role. The Causal Interpretation of Bohm and its later development, the Ontological Interpretation, emphasize “beables” rather than “observables”, in contradistinction to the predominantly epistemological approach of the standard model. This interpretation is causal, but non-local and non-relativistic, while being capable of being extended beyond the domain of the current quantum theory in several ways.
There are divergent views on the nature of reality and the role of science in dealing with reality. Measuring a quantum object was supposed to force it to collapse from a waveform into one position. According to quantum mechanical dogma, this collapse makes objects “real”. But new verifications of “collapse reversal” suggest that we can no longer assume that measurements alone create reality. It is possible to take a “weak” measurement of a quantum particle, continuously and partially collapsing the quantum state, then “unmeasure” it, altering certain properties of the particle, and perform the same weak measurement again. In one such experiment reported in Nature News, the particle was found to have returned to its original quantum state, just as if no measurement had ever been taken. This implies that we cannot assume that measurements create reality, because it is possible to erase the effects of a measurement and start again.
Newton gave his laws of motion in the second chapter, entitled “Axioms, or Laws of Motion”, of his book The Mathematical Principles of Natural Philosophy, published in Latin in 1687. The second law says that the change of motion is proportional to the motive force impressed. Newton relates the force to the change of momentum (not to the acceleration, as most textbooks do). Momentum is accepted as one of two quantities that, taken together, yield the complete information about a dynamic system at any instant. The other quantity is position, which is said to determine the strength and direction of the force. Since then the earlier ideas have changed considerably. The pairing of momentum and position is no longer viewed in the Euclidean space of three dimensions. Instead, it is viewed in phase space, which is said to have six dimensions: three for position and three for momentum. But here the term dimension has actually been used for direction, which is not a scientific description.
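The distinction mentioned here can be written out in the standard forms: Newton's second law relates force to the rate of change of momentum, and the “six dimensions” of phase space are simply the six numbers that label the state of a single particle:

\vec F = \frac{d\vec p}{dt} \quad\text{(reducing to } \vec F = m\vec a \text{ only when the mass is constant)}, \qquad \text{state} = (x,\, y,\, z,\; p_x,\, p_y,\, p_z)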
In fact most of the terms used by modern scientists have not been precisely defined - they have only an operational definition, which is not only incomplete, but also does not stand scientific scrutiny, though it is often declared “reasonable”. This has been done not by chance, but by design, as modern science is replete with such instances. For example, we quote from the paper of Einstein and his colleagues Boris Podolsky and Nathan Rosen, which is known as the EPR argument (Phys. Rev. 47, 777 (1935)):
“A comprehensive definition of reality is, however, unnecessary for our purpose. We shall be satisfied with the following criterion, which, we regard as reasonable. If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity. It seems to us that this criterion, while far from exhausting all possible ways of recognizing a physical reality, at least provides us with one such way, whenever the conditions set down in it occur. Regarded not as necessary, but merely as a sufficient, condition of reality, this criterion is in agreement with classical as well as quantum-mechanical ideas of reality.”
Prima facie, what Einstein and his colleagues argued was that under ideal conditions, observation (which includes measurement) functions like a mirror reflecting an independently existing, external reality. The specific criterion for describing reality characterizes it in terms of objectivity understood as independence from any direct measurement. This implies that, when a direct measurement of physical reality occurs, it merely passively reflects rather than actively constitutes the object under observation. It further implies that ideal observations reflect not only the state of the object during observation, but also its state before and after observation, just like a photograph. A photograph has a separate and fixed identity from the object whose photograph has been taken: while the object may be evolving in time, the photograph depicts a time-invariant state. Bohr and Heisenberg opposed this notion, based on the Kantian view, by describing acts of observation and measurement more generally as constitutive of phenomena. More on this will be discussed later.
The fact that our raw sense impressions and experiences are compatible with widely differing concepts of the world has led some philosophers to suggest that we should dispense with the idea of an “objective world” altogether and base our physical theories on nothing but direct sense impressions. Berkeley expressed the positivist identification of sense impressions with objective existence by the famous phrase “esse est percipi” (to be is to be perceived). This has led to the changing idea of “objective reality”. However, if we can predict with certainty “the value of a physical quantity”, it only means that we have partial and not complete “knowledge” – which is the “total” result of “all” measurements – of the system. It has not been shown that knowledge is synonymous with reality. We may have “knowledge” of a mirage, but it is not real. Based on the result of our measurement, we may have knowledge that something is not real, but only apparent.
The partial definition of reality is not correct, as it talks about “the value of a physical quantity” and not “the value of all physical quantities”. We can predict with certainty “the value of a physical quantity” such as position or momentum, which are classical concepts, without in any way disturbing the system. This has been accepted for past events by Heisenberg himself, as discussed in later pages. Further, measurement is a process of comparison between similars, and not a matter of bouncing light off something to disturb it. This has been discussed in detail while discussing the measurement problem. We cannot classify an object being measured (observed) separately from the apparatus performing the measurement (though there is a lot of confusion in this area); they must belong to the same class. This is clearly shown in the quantum world, where it is accepted that we cannot divorce the property we are trying to measure from the type of observation we make: the property is dependent on the type of measurement, and the measuring instrument must be designed to use that particular property. However, this interpretation can be misleading and may not have anything to do with reality, as described below. Such limited treatment of the definition of “reality” has given the authors the freedom to manipulate the facts to suit their convenience. Needless to say, the conclusions arrived at in that paper have been successively proved wrong by John S. Bell, Alain Aspect, etc., though for a different reason.
In the double slit experiment, it is often said that whether the electron has gone through the hole No.1 or No. 2 is meaningless. The electron, till we observe which hole it goes through, exists in a superposition state of equal measure of probability wave for going through the hole 1 and through the hole 2. This is a highly misleading notion as after it went through, we can always see its imprint on the photographic plate at a particular position and that is real. Before such observation we do not know which hole it went through, but there is no reason to presume that it went through a mixed state of both holes. Our inability to measure or know cannot change physical reality. It can only limit our knowledge of such physical reality. This aspect and the interference phenomenon have been discussed elaborately in later pages.
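For reference, the quantity at issue in the two-slit discussion is the interference (cross) term of the standard formalism:

P = |\psi_1+\psi_2|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}(\psi_1^*\psi_2)

The cross term is what the “both holes” language is meant to capture; whether it licenses any claim about which hole a single electron “really” went through is precisely what is being disputed here.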
If we accept the modern view of superposition of states, we land in many complex situations. Suppose the Schrödinger's cat is somewhere in deep space and a team of astronauts is sent to measure its state. According to the Copenhagen interpretation, the astronauts, by opening the box and performing the observation, have now put the cat into a definite quantum state; say they find it alive. For them, the cat is no longer in a superposition state of equal measures of probability of being alive or dead. But for their Earth-bound colleagues, the cat and the astronauts on board the space shuttle who know the state of the cat (did they change to a quantum state?) are still in a probability-wave superposition state of live cat and dead cat. Finally, when the astronauts communicate with a computer down on Earth, they pass on the information, which is stored in the magnetic memory of the computer. After the computer receives the information, but before its memory is read by the Earth-bound scientists, the computer is part of the superposition state for the Earth-bound scientists. Finally, in reading the computer output, the Earth-bound scientists reduce the superposition state to a definite one. Reality springs into being, or rather passes from being to becoming, only after we observe it. Is the above description sensible?
What really happens is that the cat interacts with the particles around it – protons, electrons, air molecules, dust particles, radiation, etc. – which has the effect of “observing” it. The state is accessed by each of the conscious observers (as well as by the other particles) by intercepting on the retina a small fraction of the light that has interacted with the cat. Thus, in reality, the field set up by the retina is perturbed and the impulse is carried to the brain, where it is compared with previous similar impressions. If the impression matches any previous impression, we cognize it as such. Only thereafter do we cognize the result of the measurement: the cat is alive or dead at the moment of observation. Thus, the process of measurement is carried out constantly without disturbing the system, and the evolution of the observed has nothing to do with the observation. This has been elaborated while discussing the measurement problem.
Further, someone has put the cat and the deadly apparatus in the box. Thus, according to the generally accepted theory, the wave-function had collapsed for him at that time, and the information is available to us. Only afterwards is the evolutionary state of the cat – whether living or dead – not known to anyone, including the person who put the cat in the box in the first place. But according to the above description, the cat, whose wave-function has collapsed for the person who put it in the box, again goes into a “superposition of states of both alive and dead” and needs another observation – directly or indirectly through a set of apparatus – to describe its proper state at any subsequent time. This implies that after the second observation, the cat again goes into a “superposition of states of both alive and dead” till it is again observed, and so on ad infinitum till it is found dead. But then the same story repeats for the dead cat – this time about its state of decomposition!
The cat example shows three distinct aspects: the state of the cat, i.e., dead or alive at the moment of observation (which information is time invariant, as it is fixed); the state of the cat prior to and after the moment of observation (which information is time variant, as the cat will die at some unspecified time due to unspecified reasons); and the cognition of this information by a conscious observer, which is time invariant but concerns the time evolution of the states of the cat. In his book “Popular Astronomy”, Prof. Bigelow says that Force, Mass, Surface, Electricity, Magnetism, etc., “are apprehended only during instantaneous transfer of energy”. He further adds: “Energy is the great unknown quantity, and its existence is recognized only during its state of change”. This is an eternal truth, and we endorse the above view. It is well known that the Universe is so called because everything in it is ever moving. Thus the view that observation describes not only the state of the object during observation, but also the state before and after it, is misleading. The result of measurement is the description of a state frozen in time, thus a fixed quantity. Its time evolution is not self-evident in the result of measurement. It has meaning only after it is cognized by a conscious agent, as consciousness is time invariant. Thus, the observable, the observation and the observer depict three aspects of a single phenomenon depicting reality: confined mass, displacing energy and revealing radiation. Quantum physics has to explain these phenomena scientifically. We will discuss this later.
When one talks about what an electron is “doing”, one implies what sort of a wave function is associated with it. But the wave function is not a physical object in the sense that a proton or an electron or a billiard ball is. In fact, the rules of quantum theory do not even allot a unique wave function to a given state of motion, since multiplying the wave function by a factor of modulus unity does not change any physical consequence. Thus, Heisenberg opined that “the atoms or elementary particles are not as real; they form a world of potentialities or possibilities rather than one of things or facts”. This shows the helplessness of the physicists to explain the quantum phenomena in terms of the macro world. The activities of the elementary particles appear essential as long as we believe in the independent existence of fundamental laws that we can hope to understand better.
Reality cannot differ from person to person or from measurement to measurement because it has existence independent of these factors. The elements of our “knowledge” are actually derived from our raw sense impressions, by automatically interpreting them in conventional terms based on our earlier impressions. Since these impressions vary, our responses to the same data also vary. Yet, unless an event is observed, it has no meaning by itself. Thus, it can be said that while observables have a time evolution independent of observation, it depends upon observation for any meaningful description in relation to others. For this reason the individual responses/readings to the same object may differ based on their earlier (at a different time and may be space) experience/environment. As the earlier example of the cat shows, it requires a definite link between the observer and the observed – a split (from time evolution), and a link (between the measurement representing its state and the consciousness of the observer for describing such state in communicable language). This link varies from person to person. At every interaction, the reality is not “created”, but the “presently evolved state” of the same reality gets described and communicated. Based on our earlier experiences/experimental set-up, it may return different responses/readings.
There is no proof to show that a particle does not have a value before measurement. The static attributes of a proton or an electron, such as its charge or its mass, have well-defined properties and will remain so before and after observation, even though it may change its position or composition due to the effect of the forces acting on it – spatial translation. The dynamical attributes will continue to evolve – temporal translation. The life cycles of stars and galaxies will continue till we notice their extinction in a supernova explosion. The Moon will exist even when we are not observing it. The proof for this is that their observed position after a given time matches our theoretical calculation. Before measurement, we do not know the “present” state. Since the present is a dynamical entity describing the time evolution of the particle, it evolves continuously from past to future. This does not mean that the observer creates reality – after observation at a given instant he only discovers the spatial and temporal state of its static and dynamical aspects.
The prevailing notion of superposition (an unobserved proposition) only means that we do not know how the actual fixed value obtained after measurement has been arrived at (described elaborately in later pages), as the same value could be arrived at in an infinite number of ways. We superimpose our ignorance on the particle and claim that the value of that particular aspect is undetermined, whereas in reality the value might already have been fixed (the cat might have died). The observer cannot influence the state of the observed (the moment of death of the cat) before or after observation. He can only report the “present state”. Quantum mechanics has failed to describe the collapse mechanism satisfactorily. In fact many models (such as the Copenhagen interpretation) treat the concept of collapse as nonsense. The few models that accept collapse as real are incomplete and fail to come up with a satisfactory mechanism to explain it. In 1932, John von Neumann argued that if electrons are ordinary objects with inherent properties (which would include hidden variables), then the behavior of those objects must contradict the predictions of quantum theory. Because of his stature in those days, no one contradicted him. But in 1952, David Bohm showed that hidden variables theories are plausible if super-luminal velocities are possible. Bohm's mechanics has returned predictions equivalent to other interpretations of quantum mechanics. Thus, it cannot be discarded lightly. If Bohm is right, then the Copenhagen interpretation and its extensions are wrong.
There is no proof to show that the characteristics of particle states are randomly chosen instantaneously at the time of observation/measurement. Since the value remains fixed after measurement, it is reasonable to assume that it remained so before measurement also. For example, if we measure the temperature of a particle by a thermometer, it is generally assumed that a little heat is transferred from the particle to the thermometer thereby changing the state of the particle. This is an absolutely wrong assumption. No particle in the Universe is perfectly isolated. A particle inevitably interacts with its environment. The environment might very well be a man-made measuring device.
Introduction of the thermometer does not change the environment as all objects in the environment are either isothermic or heat is flowing from higher concentration to lower concentration. In the former case there is no effect. In the latter case also it does not change anything as the thermometer is isothermic with the environment. Thus the rate of heat flow from the particle to the thermometer remains constant – same as that of the particle to its environment. When exposed to heat, the expansion of mercury shows a uniform gradient in proportion to the temperature of its environment. This is sub-divided over a randomly chosen range and taken as the unit. The expansion of mercury when exposed to the heat flow from a particle till both become isothermic is compared with this unit and we get a scalar quantity, which we call the result of measurement at that instant. Similarly, the heat flow to the thermometer does not affect the object as it was in any case continuing with the heat flow at a steady rate and continued to do so even after measurement. This is proved from the fact that the thermometer reading does not change after sometime (all other conditions being unchanged). This is common to all measurements. Since the scalar quantity returned as the result of measurement is a number, it is sometimes said that numbers are everything.
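A minimal sketch in Python (hypothetical numbers, a simple relaxation model, not a claim about any specific apparatus) of the point that the reading stabilizes and thereafter stops changing:

    # Toy model: a thermometer reading T_th relaxing towards an object/environment at T_obj.
    # Illustrative only; the rate constant k and the temperatures are made-up numbers.
    T_obj, T_th, k, dt = 37.0, 25.0, 0.5, 0.1    # degrees C, per-second rate, time step in seconds

    for step in range(200):
        T_th += k * (T_obj - T_th) * dt          # heat flows until both are isothermic

    print(round(T_th, 2))                        # ~37.0; further readings no longer change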
While there is no proof that measurement determines reality, there is proof to the contrary. Suppose we have a random group of people and we measure three of their properties: sex, height and skin-color. They can be male or female, tall or short, and their skin-color could be fair or brown. If we take 30 people at random and measure the sex and height first (male and tall), and then the skin-color (fair) for the same sample, we will get one result (how many tall men are fair). If we measure the sex and the skin-color first (male and fair), and then the height (tall), we will get a different result (how many fair males are tall). If we measure the skin-color and the height first (fair and tall), and then the sex (male), we will get yet a different result (how many fair and tall persons are male). The order of measurement apparently changes the result of measurement. But the result of measurement really does not change anything. The tall will continue to be tall and the fair will continue to be fair. The male and female will not change sex either. This proves that measurement does not determine reality, but only exposes selected aspects of reality in a desired manner – depending upon the nature of measurement. It is also wrong to say that whenever any property of a microscopic object affects a macroscopic object, that property is observed and becomes physical reality. We have experienced situations when an insect bite is not really felt (a measure of pain) by us immediately even though it affects us. A viral infection does not affect us immediately.
We measure position, which is the distance from a fixed reference point in different coordinates, by a tape of unit distance from one end point to the other end point or its sub-divisions. We measure mass by comparing it with another unit mass. We measure time, which is the interval between events, by a clock, whose ticks are repetitive events of equal duration (interval) which we take as the unit, and so on. There is no proof to show that this principle is not applicable to the quantum world. These measurements are possible when both the observer with the measuring instrument and the object to be measured are in the same frame of reference (state of motion), thus without disturbing anything. For this reason, results of measurement are always scalar quantities – multiples of the unit. Light is only an accessory for knowing the result of measurement and not a pre-condition for measurement. Simultaneous measurement of both position and momentum is not possible, which is correct, though due to different reasons explained in later pages. Incidentally, both position and momentum are regarded as classical concepts.
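This view of measurement as comparison with a unit can be put in a few lines of code (a minimal sketch; the quantities and units below are purely illustrative and not taken from any experiment):

# A minimal sketch of measurement as comparison with a chosen unit.
# The numbers below are purely illustrative.

def measure(quantity, unit):
    """Return the scalar reading: how many units fit into the quantity."""
    return quantity / unit

length = 2.5                       # an interval to be measured, in metres
print(measure(length, 1.0))        # 2.5   -> multiples of the metre
print(measure(length, 0.01))       # 250.0 -> multiples of the centimetre
# The reading is always a pure scalar number; only the chosen unit changes.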
In classical mechanics and electromagnetism, properties of a point mass or properties of a field are described by real numbers or functions defined on two- or three-dimensional sets. These have direct, spatial meaning, and in these theories there seems to be less need to provide a special interpretation for those numbers or functions. The accepted mathematical structure of quantum mechanics, on the other hand, is based on fairly abstract mathematics (?), such as Hilbert spaces (the quantum mechanical counterpart of the classical phase space) and operators on those Hilbert spaces. Here again, there is no precise definition of space. The proof for the existence of, and the justification for, the different classifications of “space” and “vacuum” are left unexplained.
When developing new theories, physicists tend to assume that quantities such as the strength of gravity, the speed of light in vacuum or the charge on the electron are all constant. The so-called universal constants are neither self-evident in Nature nor derived from fundamental principles (though there are some claims to the contrary, each has its problems). They have been deduced mathematically and their values have been determined by actual measurement. For example, the fine structure constant has been postulated in QED, but its value has been derived only experimentally (we have derived the measured value from fundamental principles). Yet, the regularity with which such constants of Nature have been discovered points to some important principle underlying them. But are these quantities really constant?
The velocity of light varies according to the density of the medium. The acceleration due to gravity “g” varies from place to place. We have measured the value of “G” from Earth. But we do not know whether the value is the same beyond the solar system. The current value of the distance between the Sun and the Earth has been pegged at 149,597,870.696 kilometers. A recent (2004) study shows that the Earth is moving away from the Sun at about 15 cm per annum. Since this value is 100 times greater than the measurement error, something must really be pushing the Earth outwards. While one possible explanation for this phenomenon is that the Sun is losing mass via fusion (radiated away as light) and the solar wind, alternative explanations include the influence of dark matter and a changing value of G. We will explain it later.
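As a rough cross-check (a back-of-the-envelope sketch using standard textbook values, not figures from the study cited above), the recession expected from solar mass loss alone can be estimated from the scaling a ∝ 1/M for slow mass loss:

# Rough estimate of the orbital recession predicted by solar mass loss alone.
# Standard textbook values are assumed; this is only an order-of-magnitude sketch.

L_sun   = 3.846e26      # solar luminosity, W
c       = 2.998e8       # speed of light, m/s
M_sun   = 1.989e30      # solar mass, kg
a_earth = 1.496e11      # Earth-Sun distance, m
year    = 3.156e7       # seconds in a year
wind    = 1.5e9         # approximate solar-wind mass loss, kg/s

dM = (L_sun / c**2 + wind) * year          # mass lost per year, kg
da = a_earth * dM / M_sun                  # da/a ~ dM/M for slow mass loss
print(f"mass lost per year : {dM:.2e} kg")
print(f"predicted recession: {da*100:.1f} cm per year")
# The result is of the order of 1-2 cm per year, well short of the ~15 cm
# quoted above - which is why alternative explanations are also considered.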
Einstein proposed the Cosmological Constant to allow static homogeneous solutions to his equations of General Relativity in the presence of matter. When the expansion of the Universe was discovered, it was thought to be unnecessary, forcing Einstein to declare that it was his greatest blunder. There have been a number of subsequent episodes in which a non-zero cosmological constant was put forward as an explanation for a set of observations and later withdrawn when the observational case evaporated. Meanwhile, the particle theorists are postulating that the cosmological constant can be interpreted as a measure of the energy density of the vacuum. This energy density is the sum of a number of apparently unrelated contributions: potential energies from scalar fields and zero-point fluctuations of each field-theory degree of freedom, as well as a bare cosmological constant λ₀, each of magnitude much larger than the upper limits on the cosmological constant as measured now. However, the observed vacuum energy is vanishingly small in comparison to the theoretical prediction: a discrepancy of about 120 orders of magnitude between the theoretical and observational values of the cosmological constant. This has led some people to postulate an unknown mechanism which would set it precisely to zero. Others postulate a mechanism to suppress the cosmological constant by just the right amount to yield an observationally accessible quantity. However, all agree that this elusive quantity does play an important dynamical role in the Universe. The confusion can be settled if we accept the changing value of G, which can be related to the energy density of the vacuum. Thus, the so-called constants of Nature could also be thought of as equilibrium points, where different forces acting on a system in different proportions balance each other.
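The size of this mismatch can be sketched in a few lines (a rough estimate that assumes the naive Planck-scale cutoff for the vacuum energy and standard cosmological parameters):

import math

# Naive vacuum energy density (Planck-scale cutoff) versus the observed
# dark-energy density.  Standard values are assumed; order of magnitude only.

hbar = 1.055e-34     # J s
c    = 2.998e8       # m/s
G    = 6.674e-11     # m^3 kg^-1 s^-2
H0   = 2.2e-18       # Hubble constant, 1/s (~68 km/s/Mpc)

l_p = math.sqrt(hbar * G / c**3)            # Planck length, ~1.6e-35 m
E_p = math.sqrt(hbar * c**5 / G)            # Planck energy, ~2e9 J
rho_theory = E_p / l_p**3                   # ~1e113 J/m^3

rho_crit = 3 * H0**2 / (8 * math.pi * G)    # critical density, kg/m^3
rho_obs  = 0.7 * rho_crit * c**2            # observed dark energy, ~6e-10 J/m^3

print(f"naive prediction : {rho_theory:.1e} J/m^3")
print(f"observed value   : {rho_obs:.1e} J/m^3")
print(f"discrepancy      : about 10^{math.log10(rho_theory / rho_obs):.0f}")
# Roughly 120 orders of magnitude, in line with the figure quoted above.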
For example, let us consider the Libration points called L4 and L5, which are said to be places that gravity forgot. They are vast regions of space, sometimes millions of kilometers across, in which celestial forces cancel out gravity and trap anything that falls into them. The Libration points, known by other names in the earlier Indian astronomical tradition, were rediscovered in 1772 by the mathematician Joseph-Louis Lagrange. He calculated that the Earth’s gravitational field neutralizes the gravitational pull of the Sun at five regions in space, making them the only places near our planet where an object is truly weightless. Astronomers call them Libration points, also Lagrangian points, or L1, L2, L3, L4 and L5 for short. Of the five Libration points, L4 and L5 are the most intriguing.
Two such Libration points sit in the Earth’s orbit also, one marching ahead of our planet, the other trailing along behind. They are the only ones that are stable. While a satellite parked at L1 or L2 will wander off after a few months unless it is nudged back into place (like the SOHO satellite), any object at L4 or L5 will stay put due to a complex web of forces (like the asteroids). Evidence for such gravitational potholes appears around other planets too. In 1906, Max Wolf discovered an asteroid outside of the main belt between Mars and Jupiter, and recognized that it was sitting at Jupiter’s L4 point. The mathematics for L4 uses the “brute force approach”, making it approximate. Lying 150 million kilometers away along the line of Earth’s orbit, L4 circles the sun about 60 degrees (slightly more, according to our calculation) in front of the planet, while L5 lies at the same angle behind. Wolf named the asteroid Achilles, leading to the tradition of naming these asteroids after characters from the Trojan War.
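For the collinear points L1 and L2, mentioned above as the parking place of satellites such as SOHO, the distance from the Earth can be estimated with the standard Hill-sphere approximation (a sketch with textbook values, not the full five-point calculation):

# Approximate distance of the Sun-Earth L1/L2 points from the Earth,
# using the Hill-sphere formula r ~ a * (m / 3M)**(1/3).
# Textbook values are assumed; this is a sketch, not an ephemeris.

a = 1.496e11      # Earth-Sun distance, m
m = 5.972e24      # mass of the Earth, kg
M = 1.989e30      # mass of the Sun, kg

r = a * (m / (3 * M)) ** (1.0 / 3.0)
print(f"L1/L2 distance from Earth: {r/1e9:.2f} million km")   # about 1.5 million km
# L4 and L5, by contrast, lie a full 60 degrees ahead of and behind the Earth
# along its orbit, about 150 million km away, as described above.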
The realization that Achilles would be trapped in its place and forced to orbit with Jupiter, never getting much closer or further away, started a flurry of telescopic searches for more examples. There are now more than 1000 asteroids known to reside at each of Jupiter’s L4 and L5 points. Of these, about two-thirds reside at L4 while the remaining one-third are at L5. Perturbations by the other planets (primarily Saturn) cause these asteroids to oscillate around L4 and L5 by about 15-20° and at inclinations of up to 40° to the orbital plane. These oscillations generally take between 150 and 200 years to complete. Such planetary perturbations may also be the reason why there have been so few Trojans found around other planets. Searches for “Trojan” asteroids around other planets have met with mixed results. Mars has a handful, mostly at L5. Saturn seemingly has none. Neptune has a few.
The asteroid belt surrounds the inner Solar system like a rocky, ring-shaped moat, extending out from the orbit of Mars to that of Jupiter. But there are voids in that moat in distinct locations called Kirkwood gaps that are associated with orbital resonances with the giant planets - where the orbital influence of Jupiter is especially potent. Any asteroid unlucky enough to venture into one of these locations will follow a chaotic orbit and be perturbed and ejected from the cozy confines of the belt, often winding up on a collision course with one of the inner, rocky planets (such as Earth) or the moon. But Jupiter’s pull cannot account for the extent of the belt’s depletion seen at present or for the spotty distribution of asteroids across the belt - unless there was a migration of planets early in the history of the solar system. According to a report (Nature 457, 1109-1111, 26 February 2009), the observed distribution of main belt asteroids does not fill uniformly even those regions that are dynamically stable over the age of the Solar System. There is a pattern of excess depletion of asteroids, particularly just outward of the Kirkwood gaps associated with the 5:2, the 7:3 and the 2:1 Jovian resonances. These features are not accounted for by planetary perturbations in the current structure of the Solar System, but are consistent with dynamical ejection of asteroids by the sweeping of gravitational resonances during the migration of Jupiter and Saturn.
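The locations of the resonances named in that report follow directly from Kepler's third law: an asteroid in a p:q resonance completes p orbits for every q orbits of Jupiter, so its semi-major axis is a_J·(q/p)^(2/3). A short sketch (assuming only Jupiter's semi-major axis of about 5.2 AU) reproduces the familiar Kirkwood gap positions:

# Semi-major axes of the Jovian mean-motion resonances named above,
# from Kepler's third law.  Only Jupiter's semi-major axis is assumed.

a_jupiter = 5.204   # AU

for p, q in [(2, 1), (5, 2), (7, 3)]:       # asteroid orbits : Jupiter orbits
    a = a_jupiter * (q / p) ** (2.0 / 3.0)
    print(f"{p}:{q} resonance at about {a:.2f} AU")

# 2:1 at ~3.28 AU, 5:2 at ~2.82 AU and 7:3 at ~2.96 AU - all inside the
# main belt (~2.1-3.3 AU), where the corresponding Kirkwood gaps are seen.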
Some researchers designed a computer model of the asteroid belt under the influence of the outer “gas giant” planets, allowing them to test the distribution that would result from changes in the planets’ orbits over time. A simulation wherein the orbits remained static did not agree with observational evidence. There were places where there should have been a lot more asteroids than we saw. On the other hand, a simulation with an early migration of Jupiter inward and Saturn outward - the result of interactions with lingering planetesimals (small bodies) from the creation of the solar system - fit the observed layout of the belt much better. The uneven spacing of asteroids is readily explained by this planet-migration process, which others have also worked on. In particular, if Jupiter had started somewhat farther from the sun and then migrated inward toward its current location, the gaps it carved into the belt would also have inched inward, leaving the belt looking much like it does now. The good agreement between the simulated and observed asteroid distributions is quite remarkable.
One significant question not addressed in this paper is the pattern of migration - whether the asteroid belt can be used to rule out one of the presently competing theories of migratory patterns. The new study deals with the speed at which the planets’ orbits have changed. The simulation presumes a rather rapid migration, over a million or two million years, but other models of Neptune’s early orbital evolution tend to show that migration proceeds much more slowly, over many millions of years. We hold this period as 4.32 million years for the Solar system. This example shows that the orbits of planets, which are stabilized due to the balancing of the centripetal force and gravity, might be changing from time to time. This implies that either the masses of the Sun and the planets or their distance from each other or both are changing over long periods of time (which is true). It can also mean that G is changing. Thus, the so-called constants of Nature may not be so constant after all.
Earlier, a cosmology with changing physical values for the gravitational constant G was proposed by P.A.M. Dirac in 1937. Field theories applying this principle have been proposed by P. Jordan and D.W. Sciama and, in 1961, by C. Brans and R.H. Dicke. According to these theories the value of G is diminishing. Brans and Dicke suggested a change of about 0.00000000002 (that is, 2 × 10⁻¹¹) per year; integrated over the roughly 14-billion-year age of the universe, such a drift would amount to a change of about a quarter of the present value. This theory has not been accepted on the ground that it would have profound effects on phenomena ranging from the evolution of the Universe to the evolution of the Earth. For instance, stars evolve faster if G is greater. Thus, the stellar evolutionary ages computed with constant G at its present value would be too great. The Earth, compressed by gravitation, would expand, with profound effects on surface features. The Sun would have been hotter than it is now and the Earth’s orbit would have been smaller. No one bothered to check whether such a scenario existed or is possible. Our studies in this regard show that the above scenario did happen. We have data to prove the above point.
Precise measurements in 1999 gave values of G so divergent from the currently accepted value that the results were swept under the carpet, as otherwise most theories of physics would have tumbled. Presently, physicists are measuring gravity by bouncing atoms up and down off a laser beam (arXiv:0902.0109). The experiments have been modified to perform atom interferometry, whereby quantum interference between atoms can be used to measure tiny accelerations. Those still using the earlier value of G in their calculations end up with trajectories much different from their theoretical predictions. Thus, modern science is based on a value of G that has been shown to be wrong. The Pioneer and fly-by anomalies and the change of direction of Voyager 2 after it passed the orbit of Saturn have cast a shadow on the authenticity of the theory of gravitation. Till now these have not been satisfactorily explained. We have discussed these problems and explained a different theory of gravitation in later pages.
According to reports published in several scientific journals, precise measurements of the light from distant quasars and of the only known natural nuclear reactor, which was active nearly 2 billion years ago at what is now Oklo in Gabon, suggest that the value of the fine-structure constant may have changed over the history of the universe (Physical Review D, vol 69, p 121701). If confirmed, the results will be of enormous significance for the foundations of physics. Alpha is an extremely important constant that determines how light interacts with matter - and it should not be able to change. Its value depends on, among other things, the charge on the electron, the speed of light and Planck’s constant. Could one of these really have changed?
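That dependence can be made explicit with a short calculation (standard CODATA values are assumed): α = e²/(4πε₀ħc), so any genuine drift in α implies a drift in e, ħ, c or some combination of them.

import math

# The fine-structure constant from its defining combination of constants:
#   alpha = e^2 / (4 * pi * epsilon_0 * hbar * c)
# CODATA values are assumed.

e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J s
c    = 299792458.0         # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha   = {alpha:.9f}")
print(f"1/alpha = {1/alpha:.3f}")    # about 137.036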
If the fine-structure constant changes over time, one may postulate that the velocity of light is not constant either. This would explain the flatness, horizon and monopole problems in cosmology. Recent work has shown that the universe appears to be expanding at an ever faster rate, and there may well be a non-zero cosmological constant. There is a class of theories in which the speed of light is determined by a scalar field (the agent making the cosmos expand, akin to the cosmological constant) that couples to the gravitational effect of pressure. Changes in the speed of light convert the energy density of this field into energy. One offshoot of this view is that in the young, hot universe of the radiation epoch, this prevents the scalar field from dominating the universe. As the universe expands, pressure-less matter dominates and the variations in c decrease, making α (alpha) fixed and stable. The scalar field then begins to dominate, driving a faster expansion of the universe. Whether or not the claimed variation of the fine-structure constant exists, putting bounds on its rate of change places tight constraints on new theories of physics.
One of the most mysterious objects in the universe is what is known as the black hole – a derivative of the general theory of relativity. It is said to be the ultimate fate of a super-massive star that has exhausted the fuel that sustained it for millions of years. In such a star, gravity overwhelms all other forces and the star collapses under its own gravity to the size of a pinprick. It is called a black hole because nothing – not even light – can escape it. A black hole has two parts. At its core is a singularity, the infinitesimal point into which all the matter of the star gets crushed. Surrounding the singularity is the region of space from which escape is impossible - the perimeter of which is called the event horizon. Once something enters the event horizon, it loses all hope of exiting. It is generally believed that a large star eventually collapses to a black hole. Roger Penrose conjectured that the formation of a singularity during stellar collapse necessarily entails the formation of an event horizon. According to him, Nature forbids us from ever seeing a singularity because a horizon always cloaks it. Penrose’s conjecture is termed the cosmic censorship hypothesis. It is only a conjecture. But some theoretical models suggest that instead of a black hole, a collapsing star might become a naked singularity.
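The scale involved can be illustrated with the Schwarzschild radius, r_s = 2GM/c², the size of the event horizon for a non-rotating, uncharged mass (a textbook formula; the values below are standard):

# Schwarzschild radius r_s = 2 G M / c^2 for a non-rotating, uncharged mass.
# Standard constants are assumed.

G     = 6.674e-11     # m^3 kg^-1 s^-2
c     = 2.998e8       # m/s
M_sun = 1.989e30      # kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

print(f"1 solar mass   : {schwarzschild_radius(M_sun)/1e3:.1f} km")       # ~3 km
print(f"10 solar masses: {schwarzschild_radius(10 * M_sun)/1e3:.1f} km")  # ~30 km
# A collapsing stellar core must be crushed inside a region only a few
# kilometres across before an event horizon can form around it.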
Most physicists operate under the assumption that a horizon must indeed form around a black hole. What exactly happens at a singularity - what becomes of the matter after it is infinitely crushed into oblivion - is not known. By hiding the singularity, the event horizon isolates this gap in our knowledge. General relativity does not account for the quantum effects that become important for microscopic objects, and those effects presumably intervene to prevent the strength of gravity from becoming truly infinite. Whatever happens in a black hole stays in a black hole. Yet researchers have found a wide variety of stellar collapse scenarios in which an event horizon does not form, so that the singularity remains exposed to our view. Physicists call it a naked singularity. In such a case, matter and radiation can both fall in and come out, whereas matter falling into the singularity inside a black hole is on a one-way trip.
In principle, we can come as close as we like to a naked singularity and return. Naked singularities might account for unexplained high-energy phenomena that astronomers have seen, and they might offer a laboratory to explore the fabric of the so-called space-time on its finest scales. The results of simulations by different scientists show that most naked singularities are stable to small variations of the initial setup. Thus, these situations appear to be generic and not contrived. These counterexamples to Penrose’s conjecture suggest that cosmic censorship is not a general rule.
The discovery of naked singularities would transform the search for a unified theory of physics, not least by providing direct observational tests of such a theory. It has taken so long for physicists to accept the possibility of naked singularities because they raise a number of conceptual puzzles. A commonly cited concern is that such singularities would make nature inherently unpredictable. Unpredictability is actually common in general relativity and not always directly related to the violation of cosmic censorship described above. The theory permits time travel, which could produce causal loops with unforeseeable outcomes, and even ordinary black holes can become unpredictable. For example, if we drop an electric charge into an uncharged black hole, the shape of space-time around the hole radically changes and is no longer predictable. A similar situation holds when the black hole is rotating.
Specifically, what happens is that space-time no longer neatly separates into space and time, so that physicists cannot consider how the black hole evolves from some initial time into the future. Only the purest of pure black holes, with no charge or rotation at all, is fully predictable. The loss of predictability and other problems with black holes actually stem from the occurrence of singularities; it does not matter whether they are hidden or not. Cosmologists dread the singularity because at this point gravity becomes infinite, along with the temperature and density of the universe. As its equations cannot cope with such infinities, general relativity fails to describe what happens at the big bang.
In the mid-1980s, Abhay Ashtekar rewrote the equations of general relativity in a quantum-mechanical framework to show that the fabric of space-time is woven from loops of gravitational field lines. The theory is called loop quantum gravity. If we zoom out far enough, space appears smooth and unbroken, but a closer look reveals that space comes in indivisible chunks, or quanta, about 10⁻³⁵ metres across. In 2000, some scientists used loop quantum gravity to create a simple model of the universe. This is known as loop quantum cosmology (LQC). Unlike general relativity, the physics of LQC did not break down at the big bang. Some others developed computer simulations of the universe according to LQC. Early versions of the theory described the evolution of the universe in terms of quanta of area, but a closer look revealed a subtle error. After this mistake was corrected, it was found that the calculations now involved tiny volumes of space. It made a crucial difference. Now the universe according to LQC agreed brilliantly with general relativity when expansion was well advanced, while still eliminating the singularity at the big bang. When they ran time backwards, instead of becoming infinitely dense at the big bang, the universe stopped collapsing and reversed direction. The big bang singularity had disappeared (Physical Review Letters, vol. 96, p. 141301). The era of the Big Bounce has arrived. But the scientists are far from explaining all the conundrums.
Often it is said that the language of physics is mathematics. In a famous essay, Wigner wrote about the “unreasonable effectiveness of mathematics”. Most physicists resonate with the perplexity expressed by Wigner and with Einstein’s dictum that “the most incomprehensible thing about the universe is that it is comprehensible”. They marvel at the fact that the universe is not anarchic - that atoms obey the same laws in distant galaxies as in the lab. Yet, Gödel’s Theorem implies that we can never be certain that mathematics is consistent: it leaves open the possibility that a proof exists demonstrating that 0=1. Quantum theory tells us that, on the atomic scale, nature is intrinsically fuzzy. Nonetheless, atoms behave in precise mathematical ways when they emit and absorb light, or link together to make molecules. Yet, is Nature mathematical?
Language is a means of communication. Mathematics cannot communicate in the same manner as a language. Mathematics on its own does not lead to a sensible universe. A mathematical formula has to be interpreted in communicable language to acquire some meaning. Thus, mathematics is only a tool for describing some, and not all, ideas. For example, the “observer” has an important place in quantum physics. Everett addressed the measurement problem by making the observer an integral part of the system observed: introducing a universal wave function that links observers and objects as parts of a single quantum system. But there is no equation for the “observer”.
We have not come across any precise and scientific definition of mathematics. The Concise Oxford Dictionary defines mathematics as “the abstract science of numbers, quantity, and space studied in its own right”, or “as applied to other disciplines such as physics, engineering, etc”. This is not a scientific description, as the definition of number itself leads to circular reasoning. Even mathematicians do not have a common opinion on the content of mathematics. There are at least four views among mathematicians on what mathematics is. John D. Barrow describes these views as follows:
Platonism: It is the view that concepts like groups, sets, points, infinities, etc., are “out there” independent of us – “pie in the sky”. Mathematicians discover them and use them to explain Nature in mathematical terms. There is an offshoot of this view called “neo-Platonism”, which likens mathematics to the composition of a cosmic symphony by independent contributors, each moving it towards some grand final synthesis. The proof offered: completely independent mathematical discoveries by mathematicians working in different cultures so often turn out to be identical.
Conceptualism: It is the antithesis of Platonism. According to this view, scientists create an array of mathematical structures, symmetries and patterns and force the world into this mould, as they find it so compelling. The so-called constants of Nature, which arise as theoretically undetermined constants of proportionality in the mathematical equations, are solely artifacts of the peculiar mathematical representation they have chosen to use for different purposes.
Formalism: This was developed during the last century, when a number of embarrassing logical paradoxes were discovered. There were proofs which established the existence of particular objects but offered no way of constructing them explicitly in a finite number of steps. Hilbert’s formalism belongs to this category, which defines mathematics as nothing more than the manipulation of symbols according to specified rules (not natural rules, but sometimes unphysical, man-made ones). The resultant paper edifice has no special meaning at all. If the manipulations are done correctly, it should result in a vast collection of tautological statements: an embroidery of logical connections.
Intuitionism: Prior to Cantor’s work on infinite sets, mathematicians had not made use of actual infinities, but only exploited the existence of quantities that could be made arbitrarily large or small – the concept of limit. To avoid founding whole areas of mathematics upon the assumption that infinite sets share the “obvious” properties possessed by finite ones, it was proposed that only quantities that can be constructed from the natural numbers 1, 2, 3, …, in a finite number of logical steps should be regarded as proven true.
None of the above views is complete, because none is a description derived from fundamental principles, nor does any conform to a proper definition of mathematics, whose foundation is built upon logical consistency. The Platonic view arose from the fact that mathematical quantities transcend human minds and manifest the intrinsic character of reality. A number, say three or five, is coded differently in various languages, but conveys the same concept in all civilizations. Numbers are abstract entities, and mathematical truth means correspondence between the properties of these abstract objects and our system of symbols. We associate transitory physical objects such as three worlds or five sense organs with these immutable abstract quantities as a secondary realization. These ideas are somewhat misplaced. Numbers are a property of all objects by which we distinguish between similars. If there is nothing similar to an object, it is one. If there are similars, the number is decided by the number of times we perceive such similars (we may call it a set). Since perception is universal, the concept of numbers is also universal.
Believers in eternal truth often point to mathematics as a model of a realm with timeless truths. Mathematicians explore this realm with their minds and discover truths that exist outside of time, in the same way that we discover the laws of physics by experiment. But mathematics is not only self-consistent; it also plays a central role in formulating the fundamental laws of physics, which the physics Nobel laureate Eugene Wigner once referred to as the “unreasonable effectiveness of mathematics” in physics. One way to explain this success within the dominant metaphysical paradigm of the timeless multiverse is to suppose that physical reality is mathematical, i.e. we are creatures within the timeless Platonic realm. The cosmologist Max Tegmark calls this the mathematical universe hypothesis. A slightly less provocative approach is to posit that since the laws of physics can be represented mathematically, not only is their essential truth outside of time, but there is in the Platonic realm a mathematical object, a solution to the equations of the final theory, that is “isomorphic” in every respect to the history of the universe. That is, any truth about the universe can be mapped into a theorem about the corresponding mathematical object. If nothing exists or is true outside of time, then this description is void. However, if mathematics is not the description of a different timeless realm of reality, what is it? What are the theorems of mathematics about if numbers, formulas and curves do not exist outside of our world?
Let us consider a game of chess. It was invented at a particular time, before which there is no reason to speak of any truths of chess. But once the game was invented, a long list of facts became demonstrable. These are provable from the rules and can be called the theorems of chess. These facts are objective in that any two minds that reason logically from the same rules will reach the same conclusions about whether a conjectured theorem is true or not. Platonists would say that chess always existed timelessly in an infinite space of mathematically describable games. By such an assertion, we do not achieve anything except a feeling of doing something elevated. Further, we have to explain how we finite beings embedded in time can gain knowledge about this timeless realm. It is much simpler to think that at the moment the game was invented, a large set of facts became objectively demonstrable, as a consequence of the invention of the game. There is no need to think of the facts as eternally existing truths, which are suddenly discoverable. Instead we can say they are objective facts that are evoked into existence by the invention of the game of chess. The bulk of mathematics can be treated the same way, even if the subjects of mathematics such as numbers and geometry are inspired by our most fundamental observations of nature. Mathematics is no less objective, useful or true for being evoked by and dependent on discoveries of living minds in the process of exploring the time-bound universe.
The Mandelbrot Set is often cited as a mathematical object with an independent existence of its own. The Mandelbrot Set is produced by a remarkably simple mathematical formula – a few lines of code (f(z) = z² + c) describing a recursive feedback loop – but can be used to produce beautiful colored computer plots. It is possible to endlessly zoom in to the set, revealing ever more beautiful structures which never seem to repeat themselves. Penrose called it “not an invention of the human mind: it was a discovery”. It was just out there. On the other hand, fractals – geometrical shapes found throughout Nature – are self-similar: however far you zoom into them, they still resemble the original structure. Some people use these factors to plead that mathematics, and not evolution, is the sole factor in designing Nature. They miss the deep inner meaning of these, which will be described later while describing the structure of the Universe.
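The “few lines of code” mentioned here can be written out in full (a minimal escape-time sketch; the grid and iteration limit are arbitrary illustrative choices):

# Minimal escape-time sketch of the Mandelbrot set: iterate z -> z*z + c and
# ask whether the orbit stays bounded.  Grid and iteration limit are arbitrary.

def in_mandelbrot(c, max_iter=50):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # the orbit has escaped; c lies outside the set
            return False
    return True               # still bounded after max_iter steps

# Crude text rendering of the region -2 < Re(c) < 1, -1.2 < Im(c) < 1.2.
for row in range(24):
    im = 1.2 - row * 0.1
    print("".join("*" if in_mandelbrot(complex(-2.0 + col * 0.05, im)) else " "
                  for col in range(60)))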
The opposing view reflects the ideas of Kant regarding the innate categories of thought whereby all our experience is ordered by our minds. Kant pointed out the difference between the internal mental models we build of the external world and the real objects that we know through our sense organs. The views of Kant have many similarities with those of Bohr. The Consciousness of Kant is described as intelligence by Bohr. The sense organs of Kant are described as measuring devices by Bohr. Kant’s mental models are Bohr’s quantum mechanical models. This view of mathematics stresses “mathematical modeling” more than mathematical rules or axioms. In this view, the so-called constants of Nature that arise as theoretically undetermined constants of proportionality in our mathematical equations are solely artifacts of the particular mathematical representation we have chosen to use for explaining different natural phenomena. For example, we use G as the Gravitational constant because of our inclination to express the gravitational interaction in a particular way. This view is misleading, as the large number of the so-called constants of Nature points to some underlying reality behind them. We will discuss this point later.
The debate over the definition of “physical reality” led to the notion that it should be external to the observer – an observer-independent objective reality. The statistical formulation of the laws of atomic and sub-atomic physics has added a new dimension to the problem. In quantum mechanics, the experimental arrangements are treated in classical terms, whereas the observed objects are treated in probabilistic terms. In this way, the measuring apparatus and the observer are effectively joined into one complex system which has no distinct, well defined parts, and the measuring apparatus does not have to be described as an isolated physical entity.
As Max Tegmark in his External Reality Hypothesis puts it: If we assume that reality exists independently of humans, then for a description to be complete, it must also be well-defined according to non-human entities that lack any understanding of human concepts like “particle”, “observation”, etc. A description of objects in this external reality and the relations between them would have to be completely abstract, forcing any words or symbols to be mere labels with no preconceived meanings whatsoever. To understand the concept, you have to distinguish between two ways of viewing reality. The first is from outside, like the overview of a physicist studying its mathematical structure – a bird’s eye view. The second way is the inside view of an observer living in the structure – the view of a frog in the well.
Though Tegmark’s view is nearer the truth (it will be discussed later), it has been contested by others on the ground that it contradicts logical consistency. Tegmark relies on a quote of David Hilbert: “Mathematical existence is merely freedom from contradiction”. This implies that mathematical structures simply do not exist unless they are logically consistent. The critics cite Russell’s paradox (discussed in detail in later pages) and the devices needed to avoid it - such as the Zermelo-Fraenkel set theory, which is designed to escape Russell’s paradox - to point out that mathematics on its own does not lead to a sensible universe. We seem to need to apply constraints in order to obtain consistent physical reality from mathematics. Unrestricted axioms lead to Russell’s paradox.
Conventional bivalent logic is assumed to be based on the principle that every proposition takes exactly one of two truth values: “true” or “false”. This is a wrong conclusion based on the European tradition, as in ancient times students were advised to observe, listen (to the teachings of others), analyze and test with practical experiments before accepting anything as true. Till a proposition was conclusively proved or disproved, it remained “undecided”. The so-called discovery of multi-valued logic is nothing new. If we extend modern logic in this way, why stop at ternary truth values: it could be four-valued or more-valued logic. But then what are they? We will discuss this later.
Though Euclid with his Axioms appears to be a Formalist, his Axioms were abstracted from the real physical world. But the focus of attention of modern Formalists is upon the relations between entities and the rules governing them, rather than the question of whether the objects being manipulated have any intrinsic meaning. The connection between the Natural world and the structure of mathematics is totally irrelevant to them. Thus, when they thought that Euclidean geometry is not applicable to curved surfaces, they had no hesitation in accepting the view that the sum of the three angles of a triangle need not be equal to 180°. It could be more or less depending upon the curvature. This is a wholly misguided view. The lines or the sides drawn on a curved surface are not straight lines. Hence the Axioms of Euclid are not violated, but are wrongly applied. Riemannian geometry, which led to the chain of non-Euclidean geometries, was developed out of Riemann’s interest in trying to solve the problems of distortion of metal sheets when they were heated. Einstein used this idea to suggest curvature of space-time without precisely defining space or time or space-time. But such curvature is a temporary phenomenon due to the application of heat energy. The moment the external heat energy is removed, the metal plate is restored to its original position and Euclidean geometry is applicable. If gravity changes the curvature of space, then it should be like the external energy that distorts the metal plate. Then who applies gravity to mass, or what is the mechanism by which gravity is applied to mass? If no external agency is needed and it acts perpetually, then all mass should be changing perpetually, which is contrary to observation. This has been discussed elaborately in later pages.
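The geometric fact at issue - that arcs drawn on a curved surface are not Euclidean straight lines, and that a triangle built from them has an angle sum different from 180° - can be checked numerically (a sketch for great-circle triangles on a unit sphere; the choice of vertices is arbitrary):

import numpy as np

# Angle sum of a spherical triangle whose sides are great-circle arcs.
# This only illustrates the geometric point discussed above.

def vertex_angle(A, B, C):
    """Angle at vertex A between the great-circle arcs AB and AC."""
    tAB = B - np.dot(A, B) * A        # tangent to arc AB at A
    tAC = C - np.dot(A, C) * A        # tangent to arc AC at A
    cosang = np.dot(tAB, tAC) / (np.linalg.norm(tAB) * np.linalg.norm(tAC))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# The "octant" triangle: one vertex on each coordinate axis of the unit sphere.
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = np.array([0.0, 0.0, 1.0])

total = vertex_angle(A, B, C) + vertex_angle(B, C, A) + vertex_angle(C, A, B)
print(f"angle sum = {total:.1f} degrees")    # 270.0, not 180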
Once the notion of the minimum distance scale was firmly established, questions were raised about infinity and irrational numbers. Feynman raised doubts about the relevance of infinitely small scales as follows: “It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space?” Paul Davies asserted: “the use of differential equations assumes the continuity of space-time on arbitrarily small scales.
The frequent appearance of π implies that their numerical values may be computed to arbitrary precision by an infinite sequence of operations. Many physicists tacitly accept these mathematical idealizations and treat the laws of physics as implementable in some abstract and perfect Platonic realm. Another school of thought, represented most notably by Wheeler and Landauer, stresses that real calculations involve physical objects, such as computers, and take place in the real physical universe, with its specific available resources. In short, information is physical. That being so, it follows that there will be fundamental physical limitations to what may be calculated in the real world”. Thus, Intuitionism or Constructivism divides mathematical structures into “physically relevant” and “physically irrelevant”. It says that mathematics should only include statements which can be deduced by a finite sequence of step-by-step constructions starting from the natural numbers. Thus, according to this view, infinity and irrational numbers cannot be part of mathematics.
Infinity is qualitatively different from even the largest number. Finite numbers, however large, obey the laws of arithmetic. We can add, multiply and divide them, and put different numbers unambiguously in order of size. But infinity is the same as a part of itself, and the mathematics of other numbers is not applicable to it. Often the term “Hilbert’s hotel” is used as a metaphor to describe infinity. Suppose a hotel is full and each guest wants to bring a colleague who would need another room. This would be a nightmare for the management, who could not double the size of the hotel instantly. In an infinite hotel, though, there is no problem. The guest from room 1 goes into room 2, the guest in room 2 into room 4, and so on. All the odd-numbered rooms are then free for new guests. This is a wrong analogy. The numbers are divided into two categories based on whether there is similar perception or not. If after the perception of one object there is further similar perception, they are many, which can range from 2, 3, 4, …, n depending upon the sequence of perceptions. If there is no similar perception after the perception of one object, then it is one. In the case of Infinity, neither of the above conditions applies. However, Infinity is more like the number ‘one’ – without a similar – except for one characteristic. While one object has a finite dimension, infinity has infinite dimensions. The perception of higher numbers is generated by repetition of ‘one’ that many times, but the perception of infinity is ever incomplete.
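Whatever one makes of the analogy, the reshuffling rule itself is a one-line mapping - the guest in room n moves to room 2n, freeing every odd-numbered room - as the following sketch shows (only the first few rooms are listed, purely for illustration):

# The Hilbert-hotel reshuffle described above: guest in room n moves to room 2n.
# Only finitely many rooms can be shown here, purely for illustration.

def new_room(n):
    return 2 * n

for n in range(1, 6):
    print(f"guest in room {n} -> room {new_room(n)}")

occupied = {new_room(n) for n in range(1, 11)}            # guests from rooms 1..10
freed = [r for r in range(1, 21) if r % 2 == 1 and r not in occupied]
print("odd rooms now free:", freed)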
Since interaction requires a perceptible change anywhere in the system under examination or measurement, normal interactions are not applicable in the case of infinity. For example, space and time in their absolute terms are infinite. Space and time cannot be measured, as they are not directly perceptible through our sense organs, but are deemed to be perceived. Actually what we measure as space is the interval between objects or points on objects. These intervals are mental constructs and have no physical existence other than the objects, which are used to describe space through alternative symbolism. Similarly, what we measure as time is the interval between events. Space and time do not and cannot interact with each other or with other objects or events, as no mathematics is possible between infinities. Our measurements of an arbitrary segment of space or time (which are really the intervals) do not affect space or time in any way. We have explained the quantum phenomena with real numbers derived from fundamental principles and correlated them to the macro world. Quantities like π and φ have other significances, which will be discussed later.
The fundamental “stuff” of the Universe is the same and the differences arise only due to the manner of their accumulation and reduction – magnitude and sequential arrangement. Since number is a property of all particles, physical phenomena have some associated mathematical basis. However, the perceptible structures and processes of the physical world are not the same as their mathematical formulations, many of which are neither perceptible nor feasible. Thus the relationship between physics and mathematics is that of the map and the territory. A map facilitates the study of a territory, but it does not tell everything about the territory. Knowing all about the territory from the map is impossible. This creates the difficulty. Science is increasingly becoming less objective. The scientists are presenting data as if it is absolute truth merely liberated by their able hands for the benefit of lesser mortals. Thus, it has to be presented to the lesser mortals in a language that they do not understand – and thus do not question. This leads to misinterpretations to the extent that some classic experiments become dogma even when they are fatally flawed. One example is Olbers’ paradox.
In order to understand our environment and interact effectively with it, we engage in the activities of counting the total effect of each of the systems. Such counting is called mathematics. It covers all aspects of life. We are central to everything in a mathematical way. As Barrow points out: “While Copernicus’s idea that our position in the universe should not be special in every sense is sound, it is not true that it cannot be special in any sense”. If we consider our positioning as opposed to our position in the Universe, we will find our special place. For example, if we plot a graph of the mass of the star relative to the Sun (with the Sun at 1) against the radius of the orbit relative to the Earth (with the Earth at 1), and consider the scale of the planet, its distance from the Sun, its surface conditions, the positioning of the neighboring planets, etc., as variables in a mathematical space, we will find that the Earth’s positioning is very special indeed. It is in a narrow band called the Habitable zone (for details, please refer to the Wikipedia article on planetary habitability).
If we imagine the complex structure of the Mandelbrot Set as representative of the Universe (since it is self-similar), then we could say that we are right in the border region of the fractal structure. If we consider the relationship between the different dimensions of space or of a bubble, we find their exponential nature. If we consider the center of the bubble as 0 and the edge as 1 and map it on a logarithmic scale, we will find an interesting zone at 0.5. Starting from the Galaxy, to the Sun, to the Earth, to the atoms, everything comes in this zone. For example, if we consider the galactic core as the equivalent of the s orbital of the atom, the bar as the equivalent of the p orbital and the spiral arms as the equivalent of the d orbital, and apply the logarithmic scale, we will find the Sun at the 0.5 position. The same is true for the Earth. It is known that both fusion and fission push atoms towards iron. Iron finds itself in the middle group of the middle period of the periodic table; again 0.5. Thus, there can be no doubt that Nature is mathematical. But the structures and the processes of the world are not the same as mathematical formulations. The map is not the territory. Hence there are various ways of representing Nature. Mathematics is one of them. However, mathematics alone cannot describe Nature in any meaningful way.
Even modern mathematicians and physicists do not agree on many concepts. Mathematicians insist that zero has existence, but no dimension, whereas the physicists insist that since the minimum possible length is the Planck scale, the concept of zero has vanished! The Lie algebra corresponding to SU(n) is a real and not a complex Lie algebra. The physicists introduce the imaginary unit i to make it complex. This is different from the convention of the mathematicians. Mathematicians treat any operation involving infinity as void, since infinity does not change by addition or subtraction of, or multiplication or division by, any number. The history of the development of science shows that whenever infinity appears in an equation, it points to some novel phenomenon or some missing parameters. Yet, physicists use renormalization by manipulation to generate another infinity on the other side of the equation and then cancel both! Certainly it is not mathematics!
Often physicists apply the “brute force approach”, in which many parameters are arbitrarily reduced to zero or unity to get the desired result. One example is the mathematics for solving the equations for the libration points. But such arbitrary reduction changes the nature of the system under examination (the modern values are slightly different from our computation). This aspect is overlooked by the physicists. We can cite many such instances where the conventions of mathematicians differ from those of physicists. The famous Cambridge coconut puzzle is a clear representation of the differences between physics and mathematics. Yet, the physicists insist that unless a theory is presented in a mathematical form, they will not even look at it. We do not accept that the laws of physics break down at a singularity. At a singularity, only the rules of the game change and the mathematics of infinities takes over.
Modern scientists claim to depend solely on mathematics. But most of what is called “mathematics” in modern science fails the test of logical consistency that is a cornerstone for judging the truth content of a mathematical statement. For example, the mathematics for a multi-body system like a lithium or higher atom is done by treating the atom as a number of two-body systems. Similarly, the Schrödinger equation in so-called one dimension (it is a second-order equation, as it contains a term in x², which is in two dimensions and mathematically implies area) is converted to three dimensions by the addition of two similar terms for the y and z axes. Three dimensions mathematically imply volume. Addition of three areas does not generate volume, and x² + y² + z² ≠ x·y·z. Similarly, all mathematical operations involving infinity are void. Hence renormalization is not mathematical. Thus, the so-called mathematics of modern physicists is not mathematical at all!
In fact, some recent studies appear to hint that perception is mathematically impossible. Imagine a black-and-white line drawing of a cube on a sheet of paper. Although this drawing looks to us like a picture of a cube, there are actually an infinite number of other three-dimensional objects that could have produced the same set of lines when collapsed on the page. But we don’t notice any of these alternatives. The reason is that our visual systems have more to go on than just bare perceptual input. They are said to use heuristics and shortcuts, based on the physics and statistics of the natural world, to make the “best guesses” about the nature of reality. Just as we interpret a two-dimensional drawing as representing a three-dimensional object, we interpret the two-dimensional visual input of a real scene as indicating a three-dimensional world. Our perceptual system makes this inference automatically, using educated guesses to fill in the gaps and make perception possible. Our brains use the same intelligent guessing process to reconstruct the past and help in perceiving the world.
Memory functions differently than a video-recording with a moment-by-moment sensory image. In fact, it’s more like a puzzle: we piece together our memories, based on both what we actually remember and what seems most likely given our knowledge of the world. Just as we make educated guesses – inferences - in perception, our minds’ best inferences help “fill in the gaps” of memory, reconstructing the most plausible picture of what happened in our past. The most striking demonstration of the minds’ guessing game occurs when we find ways to fool the system into guessing wrong. When we trick the visual system, we see a “visual illusion” - a static image might appear as if it’s moving, or a concave surface will look convex. When we fool the memory system, we form a false memory - a phenomenon made famous by researcher Elizabeth Loftus, who showed that it is relatively easy to make people remember events that never occurred. As long as the falsely remembered event could plausibly have occurred, all it takes is a bit of suggestion or even exposure to a related idea to create a false memory.
Earlier, visual illusions and false memories were studied separately. After all, they seem qualitatively different: visual illusions are immediate, whereas false memories seemed to develop over an extended period of time. A recent study blurs the line between these two phenomena. The study reveals an example of false memory occurring within 42 milliseconds - about half the amount of time it takes to blink your eye. It relied upon a phenomenon known as “boundary extension”, an example of false memory found when recalling pictures. When we see a picture of a location - say, a yard with a garbage can in front of a fence - we tend to remember the scene as though more of the fence were visible surrounding the garbage can. In other words, we extend the boundaries of the image, believing that we saw more fence than was actually present. This phenomenon is usually interpreted as a constructive memory error - our memory system extrapolates the view of the scene to a wider angle than was actually present. The new study, published in the November 2008 issue of the journal Psychological Science, asked how quickly this boundary extension happens.
The researchers showed subjects a picture, erased it for a very short period of time by overlaying a new image, and then showed a new picture that was either the same as the first image or a slightly zoomed-out view of the same place. They found that when people saw the exact same picture again, they thought the second picture was more zoomed-in than the first one they had seen. When they saw a slightly zoomed-out version of the picture they had seen before, however, they thought this picture matched the first one. This experience is the classic boundary extension effect. However, the gap between the first and second picture was less than 1/20th of a second. In less than the blink of an eye, people remembered a systematically modified version of pictures they had seen. This modification is, by far, the fastest false memory ever found.
Although it is still possible that boundary extension is purely a result of our memory system, the incredible speed of this phenomenon suggests a more parsimonious explanation: that boundary extension may in part be caused by the guesses of our visual system itself. The new dataset thus blurs the boundaries between the initial representation of a picture (via the visual system) and the storage of that picture in memory. This raises the question: is boundary extension a visual illusion or a false memory? Perhaps these two phenomena are not as different as previously thought. False memories and visual illusions both occur quickly and easily, and both seem to rely on the same cognitive mechanism: the fundamental property of perception and memory to fill in gaps with educated guesses, information that seems most plausible given the context. The work adds to a growing movement that suggests that memory and perception may be simply two sides of the same coin. This, in turn, implies that mathematics, which is based on perception of numbers and other visual imagery, could be misleading for developing theories of physics.
The essence of creation is accumulation and reduction of the number of particles in each system in various combinations. Thus, Nature has to be mathematical. But then physics should obey the laws of mathematics, just as mathematics should comply with the laws of physics. We have shown elsewhere that all of mathematics cannot be physics. We may have a mathematical equation without a corresponding physical explanation. Accumulation or reduction can be linear or non-linear. If they are linear, the mathematics is addition and subtraction. If they are non-linear, the mathematics is multiplication and division. Yet, this principle is violated in a large number of equations. For example, the Schrödinger equation in one dimension has been discussed earlier. Then there are unphysical combinations. For example, certain combinations of protons and neutrons are prohibited physically, though there is no restriction on devising one such mathematical formula. There is no equation for the observer. Thus, sole dependence on mathematics for discussing physics is neither desirable nor warranted.
We accept “proof” – mathematical or otherwise – to validate the reality of any physical phenomenon. We depend on proof to validate a theory as long as it corresponds to reality. The modern system of proof takes five stages: observation/experiment, developing a hypothesis, testing the hypothesis, acceptance or rejection or modification of the hypothesis based on the additional information and, lastly, reconstruction of the hypothesis if it was not accepted. We also adopt a five-stage approach to proof. First we observe/experiment and hypothesize. Then we look for corroborative evidence. In the third stage we try to prove that the opposite of the hypothesis is wrong. In the fourth stage we try to prove whether the hypothesis is universally valid or has any limitations. In the last stage we try to prove that any theory other than this is wrong.
Mathematics is one of the tools of “proof” because of its logical consistency. It is a universal law that the tools are selected based on the nature of operations and not vice-versa. The tools can only restrict the choice of operations. Hence mathematics by itself does not provide proof, but the proof may use mathematics as a tool. We also depend on symmetry, as it is a fundamental property of Nature. In our theory, different infinities co-exist and do not interact with each other. Thus, we agree that the evolutionary process of the Universe could be explained mathematically, as basically it is a process of non-linear accumulation and corresponding reduction of particles and energies in different combinations. But we differ on the interpretation of the equation. For us, the left hand side of the equation represents the cause and the right hand side the effect, which is reversible only in the same order. If the magnitudes of the parameters of one side are changed, the effect on the other side also correspondingly changes. But such changes must be according to natural laws and not arbitrary changes. For example, we agree that e/m = c² or m/e = 1/c², which we derive from fundamental principles. But we do not agree that e = mc². This is because we treat mass and energy as inseparable conjugates with variable magnitude and not as interchangeable, as each has characteristics not found in the other. Thus, they are not fit to be used in an equation as cause and effect. Simultaneously, we agree with the factor c², as energy flow is perceived in fields, which are represented by second-order quantities.
If we accept the equation e = mc², according to modern principles, it will lead to m = e/c². In that case, we will land in many self-contradicting situations. For example, if the photon has zero rest mass, then m₀ = 0/c² (at rest, the external energy that moves a particle has to be zero. Internal energy is not relevant, as a stable system has zero net energy). This implies that m₀c² = 0, or e = 0, which makes c² = 0/0, which is meaningless. But if we accept e/m = c² and both sides of the equation as cause and effect, then there is no such contradiction. As we have proved in our book “Vaidic Theory of Numbers”, all operations involving zero except multiplication are meaningless. Hence if either e or m becomes zero, the equation becomes meaningless and in all other cases, it matches the modern values. Here we may point out that the statement that the rest mass of matter is determined by its total energy content is not susceptible of a simple test, since there is no independent measure of the latter quantity. This proves our view that mass and energy are inseparable conjugates.
The domain that astronomers call “the universe” – the space, extending more than 10 billion light years around us and containing billions of galaxies, each with billions of stars, billions of planets (and maybe billions of biospheres) – could be an infinitesimal part of the totality. There is a definite horizon to direct observations: a spherical shell around us, such that no light from beyond it has had time to reach us since the big bang. However, there is nothing physical about this horizon. If we were in the middle of an ocean, it would be conceivable that the water ends just beyond our horizon – except that we know it doesn’t. Likewise, there are reasons to suspect that our universe – the aftermath of our big bang – extends hugely further than we can see.
An idea called eternal inflation suggested by some cosmologists envisages big bangs popping off, endlessly, in an ever-expanding substratum. Or there could be other space-times alongside ours – all embedded in a higher-dimensional space. Ours could be but one universe in a multiverse. Other branches of mathematics may then become relevant. This has encouraged the use of exotic mathematics such as the transfinite numbers. It may require a rigorous language to describe the number of possible states that a universe could possess and to compare the probability of different configurations. It may just be too hard for human brains to grasp. A fish may be barely aware of the medium in which it lives and swims; certainly it has no intellectual powers to comprehend that water consists of interlinked atoms of hydrogen and oxygen. The microstructure of empty space could, likewise, be far too complex for unaided human brains to grasp. Can we guarantee that with the present mathematics we can overcome all obstacles and explain all complexities of Nature? Should we then resort to the so-called exotic mathematics? Let us see where it lands us.
The manipulative mathematical nature of the descriptions of quantum physics has created difficulties in its interpretation. For example, the mathematical formalism used to describe the time evolution of a non-relativistic system proposes two somewhat different kinds of transformations:
· Reversible transformations described by unitary operators on the state space. These transformations are determined by solutions to the Schrödinger equation.
· Non-reversible and unpredictable transformations described by mathematically more complicated transformations. Examples of these transformations are those that are undergone by a system as a result of measurement.
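The difference between these two kinds of transformation can be seen in a minimal numerical sketch. The following Python fragment is our own illustration (the particular state, the particular unitary and all variable names are assumptions made only for this example): it applies a reversible unitary to a single qubit and then a measurement that cannot be undone.

    import numpy as np

    # A normalized qubit state a|0> + b|1>
    state = np.array([0.6, 0.8j])

    # 1. Reversible transformation: a unitary U acts on the state and is undone by its adjoint.
    U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # a Hadamard-like unitary
    evolved = U @ state
    recovered = U.conj().T @ evolved                  # U†U = I restores the original state
    print(np.allclose(recovered, state))              # True: fully reversible

    # 2. Measurement: probabilities come from |amplitude|^2 and the state is replaced
    #    by one eigenstate at random; the original amplitudes cannot be recovered.
    probs = np.abs(state) ** 2                        # [0.36, 0.64]
    outcome = np.random.choice([0, 1], p=probs)
    collapsed = np.eye(2)[outcome]                    # |0> or |1>, irreversibly
    print(outcome, collapsed)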
The truth content of a mathematical statement is judged from its logical consistency. We agree that mathematics is a way of representing and explaining the Universe in a symbolic way because evolution is logically consistent. This is because everything is made up of the same “stuff”. Only the quantities (number or magnitude) and their ordered placement or configuration create the variation. Since numbers are a property by which we differentiate between similar objects and all natural phenomena are essentially accumulation and reduction of the fundamental “stuff” in different permissible combinations, physics has to be mathematical. But then mathematics must conform to Natural laws: not un-physical manipulations or the brute force approach of arbitrarily reducing some parameters to zero to get a result that goes in the name of mathematics. We suspect that the over-dependence on mathematics is not due to the fact that it is unexceptionable, but due to some other reason described below.
In his book “The Myth of the Framework”, Karl R Popper, acknowledged as the major influence in modern philosophy and political thought, has said: “Many years ago, I used to warn my students against the wide-spread idea that one goes to college in order to learn how to talk and write “impressively” and incomprehensibly. At that time many students came to college with this ridiculous aim in mind, especially in Germany … They unconsciously learn that highly obscure and difficult language is the intellectual value par excellence … Thus arose the cult of incomprehensibility, of “impressive” and high sounding language. This was intensified by the impenetrable and impressive formalism of mathematics …” It is unfortunate that even now many Professors, not to speak of their students, are still devotees of the above cult.
Modern scientists justify the cult of incomprehensibility in the garb of research methodology – how “big science” is really done. “Big science” presents a big opportunity for methodologists. With their constant meetings and exchanges of e-mail, collaboration scientists routinely put their reasoning on public display (not to the general public, but only to those who subscribe to similar views), long before they write up their results for publication in a journal. In reality, it is done to test the reaction of others, as often bitter debate takes place on such ideas. Further, when particle physicists try to find a particular set of events among the trillions of collisions that occur in a particle accelerator, they focus their search by ignoring data outside a certain range. Clearly, there is a danger in admitting a non-conformist to such raw material, since a lack of acceptance of their reasoning and conventions can easily lead to very different conclusions, which may contradict their theories. Thus, they offer their own theory of “error-statistical evidence”, as in the statement: “The distinction between the epistemic and causal relevance of epistemic states of experimenters may also help to clarify the debate over the meaning of the likelihood principle”. Frequently they refer to ceteris paribus (other things being equal) without specifying which other things are equal (and then face a challenge to justify their statement).
The cult of incomprehensibility has been used by even the most famous scientists, with devastating effect. Even obvious mistakes in their papers have been blindly accepted by the scientific community and have remained unnoticed for decades. Here we quote from an article written by W. H. Furry of the Department of Physics, Harvard University, published in the March 1, 1936 issue of Physical Review, Volume 49. The paper, “Note on the Quantum-Mechanical Theory of Measurement”, was written in response to the famous EPR argument and its counter by Bohr. The quote relates to the differentiation between a “pure state” and a “mixture state”.
“2. POSSIBLE TYPES OF STATISTICAL INFORMATION ABOUT A SYSTEM.
Our statistical information about a system may always be expressed by giving the expectation values of all observables. Now the expectation value of an arbitrary observable F, for a state whose wave function is φ, is

⟨F⟩ = (φ, Fφ).     (1)

If we do not know the state of the system, but know that wi are the respective probabilities of its being in states whose wave functions are φi, then we must assign as the expectation value of F the weighted average of its expectation values for the states φi. Thus,

⟨F⟩ = Σi wi (φi, Fφi).     (2)

This formula for ⟨F⟩ is the appropriate one when our system is one of an ensemble of systems of which numbers proportional to wi are in the states φi. It must not be confused with any such formula as

⟨F⟩ = (Σi ai φi, F Σi ai φi),

which corresponds to the system’s having a wave function which is a linear combination of the φi. This last formula is of the type of (1), while (2) is an altogether different type.
An alternative way of expressing our statistical information is to give the probability that measurement of an arbitrary observable F will give as result an arbitrary one of its eigenvalues, say δ. When the system is in the state φ, this probability is

P(δ) = |(xδ, φ)|²,     (1’)

where xδ is the eigenfunction of F corresponding to the eigenvalue δ. When we know only that wi are the probabilities of the system’s being in the states φi, the probability in question is

P(δ) = Σi wi |(xδ, φi)|².     (2’)

Formula (2’) is not the same as any special case of (1’) such as

P(δ) = |(xδ, Σi ai φi)|² = |Σi ai (xδ, φi)|².

It differs generically from (1’) as (2) does from (1).
When such equations as (1), (1’) hold, we say that the system is in the “pure state” whose wave function is φ. The situation represented by Eqs. (2), (2’) is called a “mixture” of the states φi with the weights wi. It can be shown that the most general type of statistical information about a system is represented by a mixture. A pure state is a special case, with only one non-vanishing wi. The term mixture is usually reserved for cases in which there is more than one non-vanishing wi. It must again be emphasized that a mixture in this sense is essentially different from any pure state whatever.”
Now we quote from a recent Quantum Reality web site the same description of a “pure state” and a “mixed state”:
“The statistical properties of both systems before measurement, however, could be described by a density matrix. So for an ensemble system such as this the density matrix is a better representation of the state of the system than the vector.
So how do we calculate the density matrix? The density matrix is defined as the weighted sum of the tensor products over all the different states:

ρ = p│ψ><ψ│ + q│φ><φ│,

where p and q refer to the relative probability of each state. For the example of particles in a box, p would represent the number of particles in state│ψ>, and q would represent the number of particles in state │φ>.
Let’s imagine we have a number of qubits in a box (these can take the value │0> or │1>).
Let’s say all the qubits are in the following superposition state: 0.6│0> +0.8i│1>.
In other words, the ensemble system is in a pure state, with all of the particles in an identical quantum superposition of states │0> and│1>. As we are dealing with a single, pure state, the construction of the density matrix is particularly simple: we have a single probability p, which is equal to 1.0 (certainty), while q (and all the other probabilities) are equal to zero. The density matrix then simplifies to: │ψ><ψ│
This state can be written as a column (“ket”) vector. Note the imaginary component (the expansion coefficients are in general complex numbers):

│ψ> = [ 0.6  ]
      [ 0.8i ]

In order to generate the density matrix we need to use the Hermitian conjugate (or adjoint) of this column vector (the transpose of the complex conjugate of │ψ>). So in this case the adjoint is the following row (“bra”) vector:

<ψ│ = [ 0.6   −0.8i ]

Multiplying the ket by the bra then gives the density matrix │ψ><ψ│:

[ 0.36    −0.48i ]
[ 0.48i    0.64  ]
What does this density matrix tell us about the statistical properties of our pure state ensemble quantum system? For a start, the diagonal elements tell us the probabilities of finding the particle in the│0> or│1> eigenstate. For example, the 0.36 component informs us that there will be a 36% probability of the particle being found in the │0> state after measurement. Of course, that leaves a 64% chance that the particle will be found in the │1> state (the 0.64 component).
The way the density matrix is calculated, the diagonal elements can never have imaginary components (this is similar to the way the eigenvalues are always real). However, the off-diagonal terms can have imaginary components (as shown in the above example). These imaginary components have an associated phase (complex numbers can be written in polar form). It is the phase differences of these off-diagonal elements which produces interference (for more details, see the book Quantum Mechanics Demystified). The off-diagonal elements are characteristic of a pure state. A mixed state is a classical statistical mixture and therefore has no off-diagonal terms and no interference.
So how do the off-diagonal elements (and related interference effects) vanish during decoherence?
The off-diagonal (imaginary) terms have a completely unknown relative phase factor which must be averaged over during any calculation, since it is different for each separate measurement (each particle in the ensemble). As the phase of these terms is not correlated (not coherent), the sums cancel out to zero. The matrix becomes diagonalised (all off-diagonal terms become zero). Interference effects vanish. The quantum state of the ensemble system is then apparently “forced” into one of the diagonal eigenstates (the overall state of the system becomes a mixture state), with the probability of a particular eigenstate selection predicted by the value of the corresponding diagonal element of the density matrix.
Consider the following density matrix for a pure state ensemble in which the off-diagonal terms have a phase factor of θ:

[ 0.36           0.48 e^(−iθ) ]
[ 0.48 e^(iθ)    0.64         ]”
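The quoted example can be checked numerically. The short sketch below is our own illustration (only the state 0.6│0> + 0.8i│1> is taken from the quotation; the sample size and all names are assumptions): it reproduces the 0.36/0.64 diagonal and then averages the off-diagonal terms over random phases, showing how the phase-averaging argument turns a pure state into a diagonal mixture.

    import numpy as np

    ket = np.array([[0.6], [0.8j]])          # column ("ket") vector for 0.6|0> + 0.8i|1>
    bra = ket.conj().T                       # row ("bra") vector: the Hermitian conjugate
    rho_pure = ket @ bra                     # density matrix |psi><psi|
    print(rho_pure)                          # diagonal 0.36, 0.64; off-diagonal ±0.48i

    # Give the off-diagonal terms a random relative phase for each member of the
    # ensemble and average: the off-diagonal terms cancel and only the diagonal
    # (classical mixture) survives, as described in the quotation.
    rng = np.random.default_rng(0)
    samples = []
    for _ in range(20000):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        k = np.array([[0.6], [0.8j * np.exp(1j * theta)]])
        samples.append(k @ k.conj().T)
    rho_avg = np.mean(samples, axis=0)
    print(np.round(rho_avg, 3))              # approximately [[0.36, 0], [0, 0.64]]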
The above statement can be written in a simplified manner as follows: Selection of a particular eigenstate is governed by a purely probabilistic process. This requires a large number of readings. For this purpose, we must consider an ensemble – a large number of quantum particles in a similar state and treat them as a single quantum system. Then we measure each particle to ascertain a particular value; say color. We tabulate the results in a statement called the density matrix. Before measurement, each of the particles is in the same state with the same state vector. In other words, they are all in the same superposition state. Hence this is called a pure state. After measurement, all particles are in different classical states – the state (color) of each particle is known. Hence it is called a mixed state.
In common-sense language, what it means is this: if we take a box of, say, 100 billiard balls of random colors – say blue and green – then before counting the balls of each color we could not say what percentage of the balls are blue and what percentage are green. But after we count the balls of each color and tabulate the results, we know that (in the above example) 36% of the balls are of one color and 64% of the other. If we have to describe the balls after counting, we will give the above percentages, or say that 36 balls are blue and 64 balls are green. That will be a pure state. But before such measurement, we can only describe the box as 100 balls of blue and green color. This will be a mixed state.
As can be seen, our common-sense description is the opposite of the quantum-mechanical classification, which was written by two scientists about 75 years apart and which is accepted by all scientists unquestioningly. Thus, it is no wonder that one scientist jokingly said: “A good working definition of quantum mechanics is that things are the exact opposite of what you thought they were. Empty space is full, particles are waves, and cats can be both alive and dead at the same time.”
We quote another example from the famous EPR argument of Einstein and others (Phys. Rev. 47, 777 (1935)): “To illustrate the ideas involved, let us consider the quantum-mechanical description of the behavior of a particle having a single degree of freedom. The fundamental concept of the theory is the concept of state, which is supposed to be completely characterized by the wave function ψ, which is a function of the variables chosen to describe the particle’s behavior. Corresponding to each physically observable quantity A there is an operator, which may be designated by the same letter.
If ψ is an eigenfunction of the operator A, that is, if ψ’ ≡ Aψ = aψ (1)
where a is a number, then the physical quantity A has with certainty the value a whenever the particle is in the state given by ψ. In accordance with our criterion of reality, for a particle in the state given by ψ for which Eq. (1) holds, there is an element of physical reality corresponding to the physical quantity A”.
We can write the above statement, and the concept behind it, in various ways that would be far easier for the common man to understand. We can also give various examples to demonstrate the physical content of the above statement. However, such statements and examples would be difficult to twist and interpret differently when necessary. Putting the concept in an ambiguous format helps in its subsequent manipulation, as is explained below, citing from the same example:
“In accordance with quantum mechanics we can only say that the relative probability that a measurement of the coordinate will give a result lying between a and b is

P(a, b) = ∫ₐᵇ ψ̄ψ dx = b − a.
Since this probability is independent of a, but depends only upon the difference b - a, we see that all values of the coordinate are equally probable”.
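The quoted conclusion is easy to verify numerically: for a plane wave ψ = exp(2πip₀x/h), the squared magnitude is 1 everywhere, so the probability of the coordinate lying between a and b depends only on b − a. The sketch below is our own check; the numerical values of a, b and p₀ are arbitrary assumptions.

    import numpy as np

    h, p0 = 6.626e-34, 1.0e-27
    a, b = 2.0, 5.0
    x = np.linspace(a, b, 100001)
    psi = np.exp(2j * np.pi * p0 * x / h)
    prob = np.mean(np.abs(psi) ** 2) * (b - a)   # |psi|^2 = 1 everywhere, so this is just b - a
    print(prob, b - a)                           # both 3.0, wherever a and b sit on the axis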
The above conclusion has been arrived at based on the following logic: “More generally, it is shown in quantum mechanics that, if the operators corresponding to two physical quantities, say A and B, do not commute, that is, if AB ≠ BA, then the precise knowledge of one of them precludes such a knowledge of the other. Furthermore, any attempt to determine the latter experimentally will alter the state of the system in such a way as to destroy the knowledge of the first”.
The above statement is highly misleading. The law of commutation is a special case of non-linear accumulation, as explained below. All interactions involve application of force, which leads to accumulation and corresponding reduction. Where such accumulation is between similars, it is linear accumulation and its mathematics is called addition. If such accumulation is not fully between similars, but between partial similars (partially similar and partially dissimilar), it is non-linear accumulation and its mathematics is called multiplication. For example, 10 cars and another 10 cars are twenty cars through addition. But if there are 10 cars in a row and there are two rows of cars, then “cars in a row” is common to both statements, while one statement gives the number of cars in a row and the other gives the number of rows of cars. Because of this partial dissimilarity, the mathematics has to be multiplication: 10 × 2 or 2 × 10. We are free to use either of the two sequences and the result will be the same. This is the law of commutation. However, no multiplication is possible if the two factors are not partially similar. In such cases, the two factors are said to be non-commutable. If the two terms are mutually exclusive, i.e., one of the terms will always be zero, the result of their multiplication will always be zero. Hence they may be said to be not commutable, though in reality they are commutable, but the result of their multiplication is always zero. This implies that the knowledge of one precludes the knowledge of the other. The commutability or otherwise depends on the nature of the quantities – whether they are partially related and partially non-related to each other or not.
Position is a fixed co-ordinate in a specific frame of reference. Momentum is a mobile co-ordinate in the same frame of reference. Fixity and mobility are mutually exclusive. If a particle has a fixed position, its momentum is zero. If it has momentum, it does not have a fixed position. Since “particle” is similar in both the above statements, i.e., since both are related to the particle, they can be multiplied, hence are commutable. But since one or the other factor is always zero, the result will always be zero and the relation AB ≠ BA does not hold. In other words, while uncertainty is established due to other reasons, the equation Δx·Δp ≥ h is a mathematically wrong statement, as mathematically the answer will always be zero. The validity of a physical statement is judged by its correspondence to reality or, as Einstein and others put it, “by the degree of agreement between the conclusions of the theory and human experience”. Since in this case the degree of agreement between the conclusions of the theory and human experience is zero, it cannot be a valid physical statement either. Hence, it is no wonder that Heisenberg’s uncertainty relation is still a hypothesis and not proven. In later pages we have discussed this issue elaborately.
In modern science there is a tendency of generalization, or extension of one principle to others. For example, the Schrödinger equation in the so-called one dimension (actually it contains a second order term, hence cannot be an equation in one dimension) is generalized (?) to three dimensions by adding two more terms for the y and z dimensions (mathematically and physically a wrong procedure). We have discussed it in later pages. While position and momentum are specific quantities, the generalizations are done by replacing these quantities with A and B. When a particular statement is changed to a general statement by following algebraic principles, the relationship between the quantities of the particular statement is not changed. However, physicists often bypass or overlook this mathematical rule. A and B could be any set of two quantities. Since they are not specified, it is easy to use them in any way one wants. Even if the two quantities are commutable, since they are not precisely described, one has the freedom to manipulate by claiming that they are not commutable, and vice-versa. Modern science is full of such manipulations.
Here we give another example to prove that physics and modern mathematics are not always compatible. Bell’s inequality is one of the important equations used by all quantum physicists. We will discuss it repeatedly for different purposes. Briefly, the theorem holds that if a system consists of an ensemble of particles having three Boolean properties A, B and C, and if there is a reciprocal relationship between the values obtained when A is measured on two particles, and the same type of relationship exists between the particles with respect to the quantity B, then, when the value measured on one particle is found to be a and the value measured on the other is found to be b, the first particle must have started in the state (A = a, B = b). In that event, the theorem says that P(A, C) ≤ P(A, B) + P(B, C). In the case of classical particles, the theorem appears to be correct.
Quantum mechanically: P(A, C) = ½ sin²(θ), where θ is the angle between the analyzers. Let an apparatus emit entangled photons that pass through separate polarization analyzers. Let A, B and C be the events that a single photon will pass through analyzers with axis set at 0°, 22.5°, and 45° to the vertical respectively. It can be proved that C → C.
Thus, according to Bell’s theorem: P(A, C) ≤ P(A, B) + P(B, C),
Or ½ sin²(45°) ≤ ½ sin²(22.5°) + ½ sin²(22.5°),
Or 0.25 ≤ 0.1464, which is clearly absurd.
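The arithmetic can be restated in a few lines of Python (our own check; the probability formula and the angles are exactly those quoted above):

    import numpy as np

    def P(theta_deg):
        # quantum-mechanical prediction: half the squared sine of the analyzer angle
        return 0.5 * np.sin(np.radians(theta_deg)) ** 2

    lhs = P(45.0)                  # P(A, C): analyzers 45° apart
    rhs = P(22.5) + P(22.5)        # P(A, B) + P(B, C): each pair 22.5° apart
    print(round(lhs, 4), round(rhs, 4), lhs <= rhs)
    # 0.25 0.1464 False -> the quantum prediction violates the classical inequality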
This inequality has been used by quantum physicists to prove entanglement and distinguish quantum phenomena from classical phenomena. We will discuss it in detail to show that the above interpretation is wrong and the same set of mathematics is applicable to both macro and the micro world. The real reason for such deviation from common sense is that because of the nature of measurement, measuring one quantity affects the measurement of another. The order of measurement becomes important in such cases. Even in the macro world, the order of measurement leads to different results. However, the real implication of Bell’s original mathematics is much deeper and points to one underlying truth that will be discussed later.
A wave function is said to describe all possible states in which a particle may be found. To describe probability, some people give the example of a large, irregular thundercloud that fills up the sky. The darker the thundercloud, the greater the concentration of water vapor and dust at that point. Thus by simply looking at a thundercloud, we can rapidly estimate the probability of finding large concentrations of water and dust in certain parts of the sky. The thundercloud may be compared to a single electron's wave function. Like a thundercloud, it fills up all space. Likewise, the greater its value at a point, the greater the probability of finding the electron there! Similarly, wave functions can be associated with large objects, like people. As one sits in his chair, he has a Schrödinger probability wave function. If we could somehow see his wave function, it would resemble a cloud very much in the shape of his body. However, some of the cloud would spread out all over space, out to Mars and even beyond the solar system, although it would be vanishingly small there. This means that there is a very large likelihood that he is, in fact, sitting here in his chair and not on the planet Mars. Although part of his wave function has spread even beyond the Milky Way galaxy, there is only an infinitesimal chance that he is sitting in another galaxy. This description is highly misleading.
The mathematics for the above assumption is funny. Suppose we choose a fixed point A and walk in the north-eastern direction by 5 steps. We mark that point as B. There are an infinite number of ways of reaching the point B from A. For example, we can walk 4 steps to the north of A and then walk 3 steps to the east. We will reach B. Similarly, we can walk 6 steps in the northern direction, 3 steps in the eastern direction and 2 steps in the southern direction. We will reach B. Alternatively, we can walk 8 steps in the northern direction, 3 steps in the eastern direction and then 4 steps in the southern direction. We will again reach B. It is presumed that since the vector addition or “superposition” of these paths, which are of a different sort from the straight path, leads to the same point, the point B could be thought of as a superposition of paths of a different sort from A. Since we are free to choose any of these paths, at any instant, we could be “here” or “there”. This description is highly misleading.
To put the above statement mathematically, we take a vector V which can be resolved into two component vectors V₁ and V₂ along the directions 1 and 2, so that we can write: V = V₁ + V₂. If a unit displacement along the direction 1 is represented by the unit vector ê₁, then V₁ = v₁ê₁, wherein v₁ denotes the magnitude of the displacement V₁. Similarly, V₂ = v₂ê₂. Therefore:
V = V₁ + V₂ = v₁ê₁ + v₂ê₂. [ê₁ and ê₂ are also denoted as (1,0) and (0,1) respectively.]
This equation is also written as: V = λ₁ê₁ + λ₂ê₂, where each λ is treated as the magnitude of the corresponding displacement. Here V is treated as a superposition of the standard vectors (1,0) and (0,1) with coefficients given by the ordered pair (λ₁, λ₂). This is the concept of a vector space. Here the vector has been represented in two dimensions. For three dimensions, the equation is written as V = λ₁ê₁ + λ₂ê₂ + λ₃ê₃. For an n-tuple in n dimensions, the equation is written as V = λ₁ê₁ + λ₂ê₂ + λ₃ê₃ + … + λₙêₙ.
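The decomposition, and the fact that many different sequences of steps end at the same point, can also be written out numerically. The following sketch is only an illustration of ours (the particular step sequences are assumptions chosen to match the walking example above):

    import numpy as np

    e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # the standard vectors (1,0) and (0,1)
    V = 3 * e1 + 4 * e2                                    # the point B: 3 east, 4 north

    path1 = [4 * e2, 3 * e1]                               # 4 north, then 3 east
    path2 = [6 * e2, 3 * e1, -2 * e2]                      # 6 north, 3 east, 2 south
    for path in (path1, path2):
        print(np.allclose(sum(path), V))                   # True: both paths end at B

    lam = np.array([V @ e1, V @ e2])                       # the coefficients (lambda1, lambda2)
    print(lam)                                             # [3. 4.]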
It is said that the choice of dimensions appropriate to a quantum mechanical problem depends on the number of independent possibilities the system possesses. In the case of polarization of light, there are only two possibilities. The same is true for electrons. But in the case of electrons, it is not dimensions, but spin. If we choose a direction and look at the electron’s spin in relation to that direction, then either its axis of rotation points along that direction or it points wholly in the reverse direction. Thus, electron spin is described as “up” and “down”. Scientists describe the spin of the electron as something like that of a top, but different from it. In reality, it is something like the nodes of the Moon. At one node, the Moon always appears to be going in the northern direction and at the other node, it always appears to be going in the southern direction. It is said that the value of “up” and “down” for an electron spin is always valid irrespective of the directions we may choose. There is no contradiction here, as direction is not important in the case of nodes. It is only the layout of the two intersecting planes that is relevant. In many problems, the number of possibilities is said to be unbounded. Thus, scientists use infinite dimensional spaces to represent them. For this they use something called Hilbert space. We will discuss these later.
Any intelligent reader would have seen through the fallacy of the vector space. Still, we describe it again. Firstly, as we describe in the wave phenomena in later pages, superposition is a merger of two waves, which lose their own identity to create something different. What we see is the net effect, which is different from the individual effects. There are many ways in which it could occur at one point. But all waves do not stay in superposition. Similarly, the superposition is momentary, as the waves submit themselves to the local dynamics. Thus, only because there is a probability of two waves joining to cancel the effect of each other and merging to give a different picture, we cannot formulate a general principle such as the equation V = λ₁ê₁ + λ₂ê₂ to cover all cases, because the resultant wave or flat surface is also transitory.
Secondly, the generalization of the equation V = λ₁ê₁ + λ₂ê₂ to V = λ₁ê₁ + λ₂ê₂ + λ₃ê₃ + … + λₙêₙ is mathematically wrong, as explained below. Even though initially we mentioned 1 and 2 as directions, they are essentially dimensions, because they are perpendicular to each other. Direction is the information contained in the relative position of one point with respect to another point without the distance information. Directions may be either relative to some indicated reference (the violins in a full orchestra are typically seated to the left of the conductor), or absolute according to some previously agreed upon frame of reference (Kolkata lies due north-east of Puri). Direction is often indicated manually by an extended index finger or written as an arrow. On a vertically oriented sign representing a horizontal plane, such as a road sign, “forward” is usually indicated by an upward arrow. Mathematically, direction may be uniquely specified by a unit vector in a given basis, or equivalently by the angles made by the most direct path with respect to a specified set of axes. These angles can have any value and their inter-relationship can take an infinite number of values. But in the case of dimensions, they have to be at right angles to each other, which remains invariant under mutual transformation.
According to Vishwakarma, the perception that arises from length is the same as that which arises from the perception of breadth and height – thus they belong to the same class, so that the shape of the particle remains invariant under directional transformations. There is no fixed rule as to which of the three spreads constitutes either length or breadth or height. They are exchangeable in re-arrangement. Hence, they are treated as belonging to one class. These three directions have to be mutually perpendicular on the consideration of equilibrium of forces (for example, the electric field and the corresponding magnetic field) and symmetry. Thus, these three directions are equated with “forward-backward”, “right-left”, and “up-down”, which remain invariant under mutual exchange of position. Thus, dimension is defined as the spread of an object in mutually perpendicular directions, which remains invariant under directional transformations. This definition leads to only three spatial dimensions with ten variants. For this reason, the general equation in three dimensions uses x, y, and z (and/or c) co-ordinates or at least third order terms (such as a³ + 3a²b + 3ab² + b³), which implies that with regard to any frame of reference, they are not arbitrary directions, but fixed frames at right angles to one another, making them dimensions. A one dimensional geometric shape is impossible. A point has imperceptible dimension, but not zero dimensions. The modern definition of a one dimensional sphere or “one sphere” is not in conformity with this view. It cannot be exhibited physically, as anything other than a point or a straight line has a minimum of two dimensions.
While the mathematicians insist that a point has existence but no dimensions, the theoretical physicists insist that the minimum perceptible dimension is the Planck length. Thus, they differ from the mathematicians over the dimension of a point. For a straight line, the modern mathematician uses the first order equation ax + by + c = 0, which uses two co-ordinates besides a constant. A second order equation always implies area in two dimensions. A three dimensional structure has volume, which can be expressed only by an equation of the third order. This is the reason why Born had to use the term “d³r” to describe the differential volume element in his equations.
The Schrödinger equation was devised to find the probability of finding the particle in the narrow region between x and x + dx, which is denoted by P(x) dx. The function P(x) is the probability distribution function or probability density, which is found from the wave function ψ(x) through the equation P(x) = [ψ(x)]². The wave function is determined by solving the Schrödinger differential equation: d²ψ/dx² + (8π²m/h²)[E − V(x)]ψ = 0, where E is the total energy of the system and V(x) is the potential energy of the system. By using a suitable energy operator term, the equation is written as Hψ = Eψ. The equation is also written as iħ ∂/∂t│ψ> = H│ψ>, where the left hand side represents iħ times the rate of change with time of a state vector. The right hand side equates this with the effect of an operator, the Hamiltonian, which is the observable corresponding to the energy of the system under consideration. The symbol ψ indicates that it is a generalization of Schrödinger’s wave-function. The equation appears to be an equation in one dimension, but in reality it is a second order equation signifying a two dimensional field, as the original equation and the energy operator contain a term in x². A third order equation implies volume. Three areas cannot be added to create volume. Thus, the Schrödinger equation described above is an equation not in one, but in two dimensions. The method of the generalization of the said Schrödinger equation to the three spatial dimensions does not stand mathematical scrutiny.
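For concreteness, the one-dimensional equation quoted above can be solved numerically. The sketch below is our own illustration, in units where ħ = m = 1, for a particle in a box of width L; the grid size and all names are assumptions. It discretizes the second derivative as a tridiagonal matrix and recovers the familiar eigenvalues.

    import numpy as np

    N, L = 500, 1.0
    x = np.linspace(0.0, L, N + 2)[1:-1]     # interior grid points; psi = 0 at the walls
    dx = x[1] - x[0]
    V = np.zeros(N)                          # V(x) = 0 inside the box

    # -(1/2) d^2psi/dx^2 + V psi = E psi, discretized as a tridiagonal matrix
    main = 1.0 / dx**2 + V
    off = -0.5 / dx**2 * np.ones(N - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    E, psi = np.linalg.eigh(H)
    exact = np.array([(n * np.pi / L) ** 2 / 2 for n in (1, 2, 3)])
    print(E[:3])     # lowest numerical eigenvalues
    print(exact)     # analytic (n*pi/L)^2 / 2 - the two sets agree closely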
Three areas cannot be added to create volume. Any simple mathematical model will prove this. Hence, the Schrödinger equation could not be solved exactly for atoms other than hydrogen. For many-electron atoms, the so-called solutions simply treat them as many one-electron atoms, ignoring the electrostatic energy of repulsion between the electrons and treating them as point charges frozen at some instantaneous position. Even then, the problem remains to be solved. The first ionization potential of helium is theorized to be 20.42 eV, against the experimental value of 24.58 eV. Further, the atomic spectra show that for every series of lines (Lyman, Balmer, etc.) found for hydrogen, there is a corresponding series found at shorter wavelengths for helium, as predicted by theory. But in the spectrum of helium, there are two series of lines observed for every single series of lines observed for hydrogen. Not only does helium possess the normal Balmer series, but it also has a second “Balmer” series starting at λ = 3889 Å. This shows that, for the helium atom, the whole series repeats at shorter wavelengths.
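The 20.42 eV figure quoted above can be reproduced with simple arithmetic, assuming the standard first-order perturbation estimate for helium (a textbook reconstruction on our part, not taken from the text itself):

    Ry = 13.6                                         # eV, hydrogen ground-state energy scale
    Z = 2                                             # nuclear charge of helium
    E_he = -2 * Z**2 * Ry + (5 * Z / 8) * (2 * Ry)    # two electrons plus first-order repulsion: -74.8 eV
    E_he_plus = -Z**2 * Ry                            # He+ with one electron left: -54.4 eV
    first_ip = E_he_plus - E_he                       # energy needed to remove the first electron
    print(round(first_ip, 1))                         # 20.4 eV, well below the measured 24.58 eV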
For the lithium atom, it is even worse, as the total energy of repulsion between the electrons is more complex. Here, it is assumed that, as in the case of hydrogen and helium, the most stable energy of the lithium atom will be obtained when all three electrons are placed in the 1s atomic orbital, giving the electronic configuration 1s³, even though this is contradicted by experimental observation. Following the same basis as for helium, the first ionization potential of lithium is theorized to be 20.4 eV, whereas experimentally it takes 202.5 eV to remove all three electrons and only 5.4 eV to remove one electron from lithium. Experimentally, it requires less energy to ionize lithium than it does to ionize hydrogen, but the theory predicts an ionization energy one and a half times larger. More serious than this is the fact that the theory should never predict the system to be more stable than it actually is. The method should always predict an energy less negative than is actually observed. If this is not found to be the case, then it means that an incorrect assumption has been made or that some physical principle has been ignored.
Further, it contradicts the principle of periodicity, as the calculation places each succeeding electron in the 1s orbital as the nuclear charge increases by unity. It must be remembered that, with every increase in n, all the preceding values of l are repeated, and a new l value is introduced. The reason why more than two electrons could not be placed in the 1s orbital has not been explained. Thus, the mathematical formulations are contrary to the physical conditions based on observation. To overcome this problem, scientists take the help of operators. An operator is something which turns one vector into another. Often scientists describe robbery as an operator that transforms a state of wealth into a state of penury for the robbed and vice versa for the robber. Another example of an operator often given is the operation that rotates a frame clockwise or anticlockwise, changing motion in the northern direction into motion in the eastern or western direction. The act of passing light through a polarizer is called an operator, as it changes the physical state of the photon’s polarization. Thus, the use of a polarizer is described as a measurement of polarization, since the transmitted beam has to have its polarization in the direction perpendicular to it. We will come back to operators later.
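The operator idea in the preceding paragraph can be made concrete with two small matrices, a rotation and a projection standing in for the polarizer (our own illustration; the particular numbers are arbitrary assumptions):

    import numpy as np

    def rotation(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    north = np.array([0.0, 1.0])
    east = rotation(-np.pi / 2) @ north          # rotating "north" by -90 degrees gives "east"
    print(np.round(east, 6))                     # approximately [1, 0]

    # A polarizer along the x axis as a projection operator: it keeps the x component
    # and discards the y component. Applying it twice changes nothing more (P @ P = P).
    P = np.array([[1.0, 0.0], [0.0, 0.0]])
    light = np.array([0.6, 0.8])
    print(P @ light, np.allclose(P @ (P @ light), P @ light))   # [0.6 0.] True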
The probability does not refer (as is commonly believed) to whether the particle will be observed at any specific position at a specific time or not. Similarly, the description of different probabilities of finding the particle at any point of space is misleading. A particle will be observed only at a particular position at a particular time and nowhere else. Since a mobile particle does not have a fixed position, the probability actually refers to the state in which the particle is likely to be observed. This is because all the forces acting on it and their dynamics, which influence the state of the particle, may not be known to us. Hence we cannot predict with certainty whether the particle will be found here or elsewhere. After measurement, the particle is said to acquire a time invariant “fixed state” by “wave-function collapse”. This is referred to as the result of measurement, which is an arbitrarily frozen, time invariant, non-real (since in reality it continues to change) state. This is because the actual state, with all influences on the particle, has been measured at “here-now”, which is a perpetually changing state. Since all mechanical devices are subject to time variance in their operational capacities, they have to be “operated” by a “conscious agent” – directly or indirectly – because, as will be shown later, only consciousness is time invariant. This transition from a time variant initial state to a time invariant hypothetical “fixed state” through “now” or “here-now” is the dividing line between quantum physics and classical physics, as well as between conscious actions and mechanical actions. To prove the above statement, we have examined what “information” is in later pages, because only conscious agents can cognize information and use it to achieve desired objects. However, before that we will briefly discuss the chaos prevailing in this area among the scientists.
Modern science fails to answer the question “why” on many occasions. In fact it avoids such inconvenient questions. Here we may quote an interesting anecdote from the lives of two prominent persons. Once, Arthur Eddington was explaining the theory of the expanding universe to Bertrand Russell. Eddington told Russell that the expansion was so rapid and powerful that even a most powerful dictator would not be able to control the entire universe. He explained that even if his orders were sent with the speed of light, they would not reach the farthest parts of the universe. Bertrand Russell asked, “If that is so, how does God supervise what is going on in those parts?” Eddington looked keenly at Russell and replied, “That, dear Bertrand, does not lie in the province of the physicists.” This begs the question: what is physics? We cannot take the stand that the role of physics is not to explain, but to describe reality. Description is also an explanation. Otherwise, why and to whom do you describe? If the validity of a physical statement is judged by its correspondence to reality, we cannot hide behind the veil of reductionism, but must explain scientifically the theory behind the seeming “acts of God”.
There is a general belief that we can understand all physical phenomena if we can relate them to the interactions of atoms and molecules. After all, the Universe is made up of these particles only. Their interactions – in different combinations – create everything in the Universe. This is called a reductionist approach, because it is claimed that everything else can be reduced to this supposedly more fundamental level. But this approach runs into problems with thermodynamics and its arrow of time. In the microscopic world, no such arrow of time is apparent, irrespective of whether it is being described by Newtonian mechanics, relativistic mechanics or quantum mechanics. One consequence of this description is that there can be no state of microscopic equilibrium. Time-symmetric laws do not single out a special end-state where all potential for change is reduced to zero, since all instants in time are treated as equivalent.
The apparent time reversibility of motion within the atomic and molecular regimes, in direct contradiction to the irreversibility of thermodynamic processes, constitutes the celebrated irreversibility paradox put forward in 1876 by Loschmidt among others (L. Boltzmann: Lectures on Gas Theory – University of California Press, 1964, page 9). The paradox suggests that the two great edifices – thermodynamics and mechanics – are at best incomplete. It represents a very clear problem in need of an explanation, which should not be swept under the carpet. As Lord Kelvin says: if the motion of every particle of matter in the Universe were precisely reversed at any instant, the course of Nature would be simply reversed for ever after. The bursting bubble of foam at the foot of a waterfall would reunite and descend into the water. The thermal motions would reconcentrate energy and throw the mass up the fall in drops reforming into a close column of ascending water. Living creatures would grow backwards – from old age to infancy till they are unborn again – with conscious knowledge of the future but no memory of the past. We will solve this paradox in later pages.
The modern view on reductionism is faulty. Reductionism is based on the concept of differentiation. When an object is perceived as a composite that can be reduced to different components having perceptibly different properties, which can be differentiated from one another and from the composite as a whole, the process of such differentiation is called reductionism. Some objects may generate a similar perception of some properties, or the opposite of some properties, as a group of other substances. In such cases the objects with similar properties are grouped together and the objects with opposite properties are grouped together. The only universally perceived aspect that is common to all objects is physical existence in space and time, as the radiation emitted by, or the field set up by, all objects creates a perturbation in our sense organs always in identical ways. Since intermediate particles exhibit some properties similar to those of other particles, are perceived similarly to such objects and are not fully differentiated from them, reductionism applies only to the fundamental particles. This principle is violated in most modern classifications.
To give one example, x-rays and γ-rays exhibit exclusive characteristics that are not shared by other rays of the electromagnetic spectrum or between themselves – such as the place of their origin. Yet, they are clubbed under one category. If wave nature of propagation is the criterion for such categorisation, then sound waves, which travel through a medium such as air or other gases in addition to liquids and solids of all kinds, should also have been added to the classification. Then there are mechanical waves, such as the waves that travel through a vibrating string or other mechanical object or surface, and waves that travel through a fluid or along the surface of a fluid, such as water waves. If electromagnetic properties are the criteria for such categorisation, then it is not scientific, as these rays do not interact with electromagnetic fields. If they have been clubbed together on the ground that theoretically they do not require any medium for their propagation, then firstly there is no true vacuum and secondly, they are known to travel through various mediums such as glass. There are many such examples of wrong classification due to reductionism and developmental history.
The cults of incomprehensibility and reductionism have led to another deficiency. Both cosmology and elementary particle physics share the same theory of the plasma and radiation. These have independent existence that is seemingly eternal and may be cyclic. Their combinations lead to the sub-atomic particles that belong to the micro world of quantum physics. The atoms are a class by themselves, whose different combinations lead to the perceivable particles and bodies that belong to the macro world of so-called classical physics. The two worlds merge in the stars, which contain the plasma of the micro world and the planetary systems of the macro world. Thus, the study of the evolution of stars can reveal the transition from the micro world to the macro world. For example, the internal structures of the planet Jupiter and of protons are identical and, like protons, Jupiter-like bodies are abundant among the stars. Yet, instead of unification of all branches of science, cosmology and nuclear physics have been fragmented into several “specialized” branches.
Here we are reminded of an anecdote related to Lord Chaitanya. While in his southern sojourn, a debate was arranged between him and a great scholar of yore. The scholar went off explaining many complex doctrines while Lord Chaitanya sat quietly and listened with rapt attention without any response. Finally the scholar told Lord Chaitanya that he was not responding at all to his discourse. Was it too complex for him? The Scholar was sure from the look on Lord Chaitanya’s face that he did not understand anything. To this, Lord Chaitanya replied; “I fully understand what you are talking about. But I was wondering why you are making the simple things look so complicated?” Then he explained the same theories in plain language after which the scholar fell at his feet.
There have been very few attempts to list out the essence of all branches and develop “one” science. Each branch has its huge data bank with its specialized technical terms, glorifying some person at the cost of a scientific nomenclature and thereby enhancing incomprehensibility. Even if we read the descriptions of all six proverbial blind men repeatedly, one who has not seen an elephant cannot visualize it. This leaves the students with little opportunity to get a macro view of all theories and evaluate their inter-relationship. The educational system, with its examination method of emphasizing “memorization and reproduction at a specific instant”, compounds the problem. Thus, the students have to accept many statements and theories as “given” without questioning them even in the face of ambiguities. Further, we have never come across any book on science which does not glorify the discoveries in superlative terms, while leaving out the uncomfortable and ambiguous aspects, often with an assurance that they are correct and should be accepted as such. This creates an impression on the minds of young students to accept the theories unquestioningly, making them superstitious. Thus, whenever some deficiencies have been noticed in any theory, there is an attempt at patchwork within the broad parameters of the same theory. There have been few attempts to review the theories ab initio. Thus, the scientists cannot relate the tempest at a distant land to the flapping of the wings of the butterfly elsewhere.
Till now scientists do not know “what” electrons, photons and the other subatomic objects that have made the amazing technological revolution possible actually are. Even the modern description of the nucleus and the nucleons leaves many aspects unexplained. The photo-electric effect, for which Einstein got his Nobel Prize, deals with electrons and photons. But it does not clarify “what” these particles are. The scientists who framed the current theories were not gifted with the benefit of the presently available data. Thus, without undermining their efforts, it is necessary to re-formulate the theories ab initio based on the presently available data. Only in this way can we develop a theory that corresponds to reality. Here is an attempt in this regard from a different perspective. Like the child revealing the secret of the Emperor’s clothes, we, a novice in this field, are attempting to point the lamp in the direction of the Sun.
Thousands of papers are read every year in various forums on as yet undiscovered particles. This reminds us of the saying: after taking a bath in the water of the mirage, wearing the flower of the sky on his head, holding a bow made of the horns of a rabbit, here goes the son of the barren woman! Modern scientists are making precisely similar statements. This is a sheer waste not only of valuable time but also of public money worth trillions, for the pleasure of a few. In addition, it amounts to misguiding the general public for generations. This is unacceptable, because a scientific theory must stand up to experimental scrutiny within a reasonable time period. Till it is proved or disproved, it cannot be accepted, though not rejected either. We cannot continue for three quarters of a century and more to develop “theories” based on such unproven postulates in the hope that we may succeed someday – maybe after a couple of centuries! We cannot continue research on the properties of the “flowers of the sky” on the ground that someday they may be discovered.
Experiments with subatomic phenomena show effects that have not been reconciled with our normal view of an objective world. Yet, they cannot be treated separately. This implies the existence of two different states – classical and quantum – with different dynamics, but linked to each other in some fundamentally similar manner. Since the validity of a physical statement is judged by its correspondence to reality, there is a big question mark over the direction in which theoretical physics is moving. Technology has acquired a pre-eminent position in the global epistemic order. However, engineers and technologists, who progress by trial and error methods, have projected themselves as experimental scientists. Their search for new technology has been touted as the progress of science, and questioning its legitimacy is treated as sacrilege. Thus, everything that exposes the hollowness or deficiencies of science is consigned to defenestration. The time has come to seriously consider the role, the ends and the methods of scientific research. If we are to believe that the sole objective of the scientists is to make their impressions mutually consistent, then we lose all motivation in theoretical physics. These impressions are not of a kind that occurs in our daily life. They are extremely special and are produced at great cost, time and effort. Hence it is doubtful whether the mere pleasure their harmony gives to a selected few can justify the huge public spending on such “scientific research”.
A report published in the October 2005 issue of the Notices of the American Mathematical Society shows that the theory of dynamical systems that is used for calculating the trajectories of space flights and the theory of transition states for chemical reactions share the same mathematics. This is proof of the universally true statement that the microcosm and the macrocosm replicate each other. The only problem is to find the exact correlations. For example, as we have repeatedly pointed out, the internal structure of a proton and that of the planet Jupiter are identical. We will frequently use this and other similarities between the microcosm and the macrocosm (from astrophysics) in this presentation to prove the above statement. We will also frequently refer to the definitions of technical terms as defined precisely in our book “Vaidic Theory of Numbers”.
“It is easy to explain something to a layman. It is easier to explain the same thing to an expert. But even the most knowledgeable person cannot explain something to one who has limited half-baked knowledge.” – (Hitopadesha)
“To my mind there must be, at the bottom of it all, not an equation, but an utterly simple idea. And to me that idea, when we finally discover it, will be so compelling, so inevitable, that we will say to one another: ‘Oh, how wonderful! How could it have been otherwise?’” – (John Wheeler)
“All these fifty years of conscious brooding have brought me no nearer to the answer to the question, 'What are light quanta?' Nowadays every Tom, Dick and Harry thinks he knows it, but he is mistaken.” – Einstein, 1954
The twentieth century was a marvel in technological advancement. But except for the first quarter, the advancement of theoretical physics has nothing much to be written about. The principle of mass-energy equivalence, which is treated as the corner-stone principle of all nuclear interactions, binding energies of atoms and nucleons, etc., enters physics only as a corollary of the transformation equations between frames of reference in relative motion. Quantum Mechanics (QM) cannot justify this equivalence principle on its own, even though it is the theory concerned with the energy exchanges and interactions of fundamental particles. Quantum Field Theory (QFT) is the extension of QM (dealing with particles) over to fields. In spite of the reported advancements in QFT, there is very little experimental proof to back up many of its postulates, including the Higgs mechanism, bare mass/charge, infinite charge, etc. It seems almost impossible to think of QFT without thinking of particles which are accelerated and scattered in colliders. But interestingly, the particle interpretation has the best arguments against QFT. Till recently, the Big Bang hypothesis held center stage in cosmology. Now Loop Quantum Cosmology (LQC), with its postulate of the “Big Bounce”, is taking over. Yet there are two distinctly divergent streams of thought on this subject also. The confusion surrounding the interpretation of quantum physics is further compounded by the modern proponents, who often search historical documents of discarded theories and come up with new meanings to back up their own theories. For example, the cosmological constant, first proposed by Einstein and subsequently rejected by him as the greatest blunder of his life, has made a comeback in cosmology. Bohr’s complementarity principle, originally central to his vision of quantum particles, has been reduced to a corollary and is often identified with the frameworks in Consistent Histories.
There are a large number of different approaches or formulations to the foundations of Quantum Mechanics. There is the Heisenberg’s Matrix Formulation, Schrödinger’s Wave-function Formulation, Feynman’s Path Integral Formulation, Second Quantization Formulation, Wigner’s Phase Space Formulation, Density Matrix Formulation, Schwinger’s Variational Formulation, de Broglie-Bohm’s Pilot Wave Formulation, Hamilton-Jacobi Formulation etc. There are several Quantum Mechanical pictures based on placement of time-dependence. There is the Schrödinger Picture: time-dependent Wave-functions, the Heisenberg Picture: time-dependent operators and the Interaction Picture: time-dependence split. The different approaches are in fact, modifications of the theory. Each one introduces some prominent new theoretical aspect with new equations, which needs to be interpreted or explained. Thus, there are many different interpretations of Quantum Mechanics, which are very difficult to characterize. Prominent among them are; the Realistic Interpretation: wave-function describes reality, the Positivistic Interpretation: wave-function contains only the information about reality, the famous Copenhagen Interpretation: which is the orthodox Interpretation. Then there is Bohm’s Causal Interpretation, Everett’s Many World’s Interpretation, Mermin’s Ithaca Interpretation, etc. With so many contradictory views, quantum physics is not a coherent theory, but truly weird.
General relativity breaks down when gravity is very strong: for example, when describing the big bang or the heart of a black hole. And the standard model has to be stretched to breaking point to account for the masses of the universe’s fundamental particles. The two main theories, quantum theory and relativity, are also incompatible, resting on entirely different notions of such basic concepts as time. This incompatibility has made it difficult to unite the two in a single “Theory of Everything”. There are almost innumerable candidate “Theories of Everything” or “Grand Unified Theories”, but none of them is free from contradictions. There is a vertical split between those pursuing the superstrings route and others who follow the little Higgs route.
String theory, which was developed with a view to harmonizing General Relativity with quantum theory, is said to be a higher-order theory in which other models, such as supergravity and quantum gravity, appear as approximations. Unlike supergravity, string theory is said to be a consistent and well-defined theory of quantum gravity, and therefore calculating the value of the cosmological constant from it should, at least in principle, be possible. On the other hand, the number of vacuum states associated with it seems to be quite large, and none of these exhibits three large spatial dimensions, broken super-symmetry and a small cosmological constant together. The features of string theory which are at least potentially testable - such as the existence of super-symmetry and cosmic strings - are not specific to string theory. In addition, the features that are specific to string theory - the existence of strings - either do not lead to precise predictions or lead to predictions that are impossible to test with current levels of technology.
There are many unexplained questions relating to strings. For example, given the measurement problem of quantum mechanics, what happens when a string is measured? Does the uncertainty principle apply to the whole string, or only to the section of the string being measured? Does string theory modify the uncertainty principle? If we measure its position, do we get only the average position of the string? If the position of a string is measured with arbitrarily high accuracy, what happens to the momentum of the string? Does the momentum become undefined, as opposed to simply unknown? What about the location of an end-point? If the measurement returns an end-point, then which end-point? Does the measurement return the position of some point along the string? (A string is said to be a one-dimensional object extended in space. Hence its position cannot be described by a finite set of numbers and thus cannot be described by a finite set of measurements.) How do Bell’s inequalities apply to string theory? We must get answers to these questions before we probe further and spend (waste!) more money on such research. These questions should not be swept under the carpet as inconvenient, or on the ground that some day we will find the answers. That someday has been a very long time coming indeed!
The energy “uncertainty” introduced in quantum theory combines with the mass-energy equivalence of special relativity to allow the creation of particle/anti-particle pairs by quantum fluctuations when the theories are merged. As a result, there is no self-consistent theory which generalizes the simple, one-particle Schrödinger equation into a relativistic quantum wave equation. Quantum Electro-Dynamics began not with a single relativistic particle, but with a relativistic classical field theory, such as Maxwell’s theory of electromagnetism. This classical field theory was then “quantized” in the usual way, and the resulting quantum field theory is claimed to be a combination of quantum mechanics and relativity. However, this theory is inherently a many-body theory, with the quanta of the normal modes of the classical field having all the properties of physical particles. The resulting many-particle theory can be handled relatively easily if the particles are heavy on the energy scale of interest or if the underlying field theory is essentially linear. Such is the case for atomic physics, where the electron-volt energy scale for atomic binding is about a million times smaller than the energy required to create an electron-positron pair, and where the Maxwell theory of the photon field is essentially linear.
However, the situation is completely reversed for the theory of the quarks and gluons that compose the strongly interacting particles in the atomic nucleus. While the natural energy scale of these particles - the proton, the ρ meson, etc. - is of the order of hundreds of millions of electron-volts, the quark masses are about one hundred times smaller. Likewise, the gluons are quanta of a Yang-Mills field which obeys highly non-linear field equations. As a result, strong interaction physics has no known analytical approach, and numerical methods are said to be the only possibility for making predictions from first principles and developing a fundamental understanding of the theory. This theory of the strongly interacting particles is called quantum chromodynamics, or QCD, where the non-linearities in the theory have dramatic physical effects. One coherent, non-linear effect of the gluons is to “confine” both the quarks and gluons, so that none of these particles can be found directly as excitations of the vacuum. Likewise, a continuous “chiral symmetry”, normally exhibited by a theory of light quarks, is broken by the condensation of chirally oriented quark/anti-quark pairs in the vacuum. The resulting physics of QCD is thus entirely different from what one would expect from the underlying theory, with the interaction effects having a dominant influence.
It is known that the much celebrated Standard Model of Particle Physics is incomplete, as it relies on certain arbitrarily determined constants as inputs - as “givens”. The new formulations of physics such as Super String Theory and M-theory do allow mechanisms by which these constants can arise from the underlying model. However, the problem with these theories is that they postulate the existence of extra dimensions that are said to be either “extra-large” or “compactified” down to the Planck length, where they have no impact on the visible world we live in. In other words, we are told to believe blindly that extra dimensions must exist, but on a scale that we cannot observe. The existence of these extra dimensions has not been proved. However, they are postulated not to be fixed in size. Thus, the ratio between the compactified dimensions and our normal four space-time dimensions could cause some of the fundamental constants to change! If this could happen, it might lead to physics in contradiction with the universe we observe.
The concept of “absolute simultaneity” - an off-shoot of quantum entanglement and non-locality - poses the gravest challenge to Special Relativity. But here also, a different interpretation is possible for the double-slit experiment, Bell’s inequality, entanglement and decoherence, which can strip them of their mystic character. The Ives-Stilwell experiment, conducted by Herbert E. Ives and G. R. Stilwell in 1938, is considered to be one of the fundamental tests of the special theory of relativity. The experiment was intended to use a primarily longitudinal test of light wave propagation to detect and quantify the effect of time dilation on the relativistic Doppler effect of light waves received from a moving source. It was also intended to indirectly verify and quantify the more-difficult-to-detect transverse Doppler effect associated with detection at a substantial angle to the path of motion of the source - specifically the effect associated with detection at a 90° angle to the path of motion of the source. In both respects it is believed that a longitudinal test can be used to indirectly verify an effect that actually occurs at a 90° transverse angle to the path of motion of the source.
Based on recent theoretical findings on the relativistic transverse Doppler effect, some scientists have shown that such a comparison between longitudinal and transverse effects is fundamentally flawed and thus invalid, because it assumes compatibility between two different mathematical treatments. The experiment was designed to detect the predicted time-dilation-related red-shift effect (increase in wave-length with corresponding decrease in frequency) of special relativity at the fundamentally longitudinal angles at or near 0° and 180°, even though the time dilation effect is based on the transverse angle of 90°. Thus, the results of the said experiment do not prove anything. More specifically, it can be shown that the mathematical treatment of special relativity for the transverse Doppler effect is invalid, and thus incompatible with the longitudinal mathematical treatment at distances close to the moving source. Any direct comparison between the longitudinal and transverse mathematical predictions under the specified conditions of the experiment is invalid.
Cosmic rays are particles - mostly protons but sometimes heavy atomic nuclei - that travel through the universe at close to the speed of light. Some cosmic rays detected on Earth are produced in violent events such as supernovae, but physicists still don’t know the origins of the highest-energy particles, which are the most energetic particles ever seen in nature. As cosmic-ray particles travel through space, they lose energy in collisions with the low-energy photons that pervade the universe, such as those of the cosmic microwave background radiation. The special theory of relativity dictates that any cosmic rays reaching Earth from a source outside our galaxy will have suffered so many energy-shedding collisions that their maximum possible energy cannot exceed 5 × 10^19 electron-volts. This is known as the Greisen-Zatsepin-Kuzmin (GZK) limit. Over the past decade, the University of Tokyo’s Akeno Giant Air Shower Array - 111 particle detectors - has detected several cosmic rays above the GZK limit. In theory, they could only have come from within our galaxy, avoiding an energy-sapping journey across the cosmos. However, astronomers cannot find any source for these cosmic rays in our galaxy. One possibility is that there is something wrong with the observed results. Another possibility is that Einstein was wrong. His special theory of relativity says that space is the same in all directions, but what if particles found it easier to move in certain directions? Then the cosmic rays could retain more of their energy, allowing them to beat the GZK limit. A recent report (Physics Letters B, Vol. 668, p. 253) suggests that the fabric of space-time is not as smooth as Einstein and others have predicted.
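As a rough illustration of where a figure of this order comes from, the sketch below estimates the threshold energy for photo-pion production on a cosmic-microwave-background photon. The particle masses and the assumed mean CMB photon energy are standard round values; the spread of the real CMB spectrum is ignored, so this is only an order-of-magnitude check, not a derivation of the precise GZK cutoff.

    # Rough threshold for p + gamma_CMB -> p + pion in a head-on collision.
    # Standard rest energies and an assumed mean CMB photon energy (T ~ 2.7 K).
    m_p   = 938.272e6     # proton rest energy, eV
    m_pi  = 134.977e6     # neutral pion rest energy, eV
    E_cmb = 6.4e-4        # mean CMB photon energy, eV

    E_threshold = m_pi * (m_p + m_pi / 2.0) / (2.0 * E_cmb)
    print("threshold ~ %.1e eV" % E_threshold)   # ~1e20 eV, the same order as the 5e19 eV GZK limit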
During 1919, Eddington started his much-publicised eclipse expedition to observe the bending of light by a massive object (here the Sun) in order to verify the correctness of General Relativity. The experiment in question concerned the problem of whether light rays are deflected by gravitational forces, and took the form of astrometric observations of the positions of stars near the Sun during a total solar eclipse. The consequence of Eddington’s theory-led attitude to the experiment, along with alleged data fudging, was that the result was claimed to favor Einstein’s theory over Newton’s when in fact the data supported no such strong construction. In reality, both predictions were based on Einstein’s own calculations, made in 1908 and again in 1911 using Newton’s theory of gravitation. In 1911, Einstein wrote: “A ray of light going past the Sun would accordingly undergo deflection to an amount of 4 × 10^-6 = 0.83 seconds of arc”. He never clearly explained which fundamental principle of physics used in that paper, which gave the value of 0.83 seconds of arc (dubbed the half deflection), was wrong. He revised his calculation in 1916 to hold that light coming from a distant star and passing near the Sun will be deflected by the Sun’s gravitational field by an amount inversely proportional to the ray’s radial distance from the Sun (1.745” at the Sun’s limb - dubbed the full deflection). Einstein never explained why he revised his earlier figures. Eddington’s experiment was meant to determine which of the two values calculated by Einstein was correct.
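For comparison, the two deflection values can be reproduced from the standard formulae 2GM/(c²R) (the “Newtonian” half deflection) and 4GM/(c²R) (the 1916 full deflection). The sketch below, using present-day values of G, the solar mass and the solar radius, is given only to show where figures of the size of the quoted 0.83″ and 1.745″ come from; Einstein’s own numbers were computed with the constants of his day.

    import math

    G = 6.674e-11          # m^3 kg^-1 s^-2
    M = 1.989e30           # solar mass, kg
    R = 6.957e8            # solar radius (grazing ray), m
    c = 2.998e8            # speed of light, m/s
    rad_to_arcsec = 180.0 / math.pi * 3600.0

    half_deflection = 2 * G * M / (c**2 * R) * rad_to_arcsec   # ~0.87 arcsec
    full_deflection = 4 * G * M / (c**2 * R) * rad_to_arcsec   # ~1.75 arcsec
    print(half_deflection, full_deflection)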
Specifically, it has been alleged that a sort of data fudging took place when Eddington decided to reject the plates taken by the one instrument (the Greenwich Observatory’s Astrographic lens, used at Sobral) whose results tended to support the alternative “Newtonian” prediction of light bending (as calculated by Einstein). Instead, the data from the inferior (because of cloud cover) plates taken by Eddington himself at Principe and from the inferior (because of a reduced field of view) 4-inch lens used at Sobral were promoted as confirming the theory. While he claimed that the result proved Einstein right and Newton wrong, an objective analysis of the actual photographs shows no such clear-cut result. Both theories are consistent with the data obtained. It may be recalled that when someone remarked that there were only two persons in the world besides Einstein who understood relativity, Eddington had replied that he did not know who the other person was. This arrogance clouded his scientific acumen, as was confirmed by his distaste for the theories of Dr. S. Chandrasekhar, which subsequently won Chandrasekhar the Nobel Prize.
Heisenberg’s uncertainty relation is still a postulate, though many of its predictions have been verified and found to be correct. Heisenberg never called it a principle. Eddington was the first to call it a principle, and others followed him. But as Karl Popper pointed out, the uncertainty relations cannot be granted the status of a principle, because theories are derived from principles, whereas the uncertainty relation does not lead to any theory. We can never derive an equation like the Schrödinger equation or the commutation relation from the uncertainty relation, which is an inequality. Einstein’s distinction between “constructive theories” and “principle theories” does not help, because this classification is not a scientific classification. Serious attempts to build up quantum theory as a full-fledged theory of principle on the basis of the uncertainty relation have never been carried out. At best it can be said that Heisenberg created “room” or “freedom” for the introduction of some non-classical mode of description of experimental data. But these do not uniquely lead to the formalism of quantum mechanics.
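In fact, the standard textbook route runs the other way: the inequality is obtained from the commutation relation via the Robertson relation, not the commutation relation from the inequality, which is consistent with the point made above. In the usual notation,

    \sigma_A \,\sigma_B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [\hat{A},\hat{B}] \rangle\bigr|,
    \qquad
    [\hat{x},\hat{p}] = i\hbar
    \;\Longrightarrow\;
    \sigma_x \,\sigma_p \;\ge\; \tfrac{\hbar}{2}.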
There is a plethora of other postulates in Quantum Mechanics, such as the operator postulate, the Hermitian property postulate, the basis set postulate, the expectation value postulate, the time evolution postulate, etc. The list goes on and on and includes such undiscovered entities as strings and such exotic particles as the Higgs particle (dubbed the “God particle”) and the graviton, not to speak of squarks and the like. Yet, till now it is not clear what quantum mechanics is about. What does it describe? It is said that a quantum mechanical system is completely described by its wave-function. From this it would appear that quantum mechanics is fundamentally about the behavior of wave-functions. But do scientists really believe that wave-functions describe reality? Even Schrödinger, the originator of the wave-function, found this impossible to believe! He writes (Schrödinger 1935): “That it is an abstract, unintuitive mathematical construct is a scruple that almost always surfaces against new aids to thought and that carries no great message”. Rather, he was worried about the “blurring” suggested by the spread-out character of the wave-function, which, as he puts it, “affects macroscopically tangible and visible things, for which the term ‘blurring’ seems simply wrong”.
Schrödinger goes on to note that it may happen in radioactive decay that “the emerging particle is described … as a spherical wave … that impinges continuously on a surrounding luminescent screen over its full expanse. The screen however, does not show a more or less constant uniform surface glow, but rather lights up at one instant at one spot …”. He observed further that one can easily arrange, for example by including a cat in the system, “quite ridiculous cases” with the ψ-function of the entire system having in it the living and the dead cat mixed or smeared out in equal parts. Resorting to epistemology cannot save such doctrines.
The situation was further complicated by Bohr’s interpretation of quantum mechanics. But how many scientists truly believe in his interpretation? Apart from the issues relating to the observer and observation, it is usually believed to address the measurement problem. Quantum mechanics is supposed to be fundamentally about micro-particles such as quarks and strings, and not about the macroscopic regularities associated with measurement of their various properties. But if these entities are somehow not to be identified with the wave-function itself, and if the description is not about measurements, then where is their place in the quantum description? Where is the quantum description of the objects that quantum mechanics should be describing? This question has led to the issues raised in the EPR argument. As we will see, this question has not been settled satisfactorily.
The formulations of quantum mechanics describe the deterministic unitary evolution of a wave-function. This wave-function is never observed experimentally. The wave-function allows computation of the probability of certain macroscopic events being observed. However, there are no events and no mechanism for creating events in the mathematical model. It is this dichotomy between the wave-function model and observed macroscopic events that is the source of the various interpretations of quantum mechanics. In classical physics, the mathematical model relates to the objects we observe. In quantum mechanics, the mathematical model by itself never produces an observation. We must interpret the wave-function in order to relate it to experimental observation. Often these interpretations are related to the personal and socio-cultural bias of the scientist, which gets weightage based on his standing in the community. Thus, the arguments of Einstein against Bohr’s position have roots in Lockean notions of perception, which oppose the Kantian metaphor of the “veil of perception” that pictures the apparatus of observation as a pair of spectacles through which a highly mediated sight of the world can be glimpsed. According to Kant, “appearances” simply do not reflect an independently existing reality. They are constituted through the act of perception in such a way as to conform to the fundamental categories of sensible intuition. Bohr maintained that “measurement has an essential influence on the conditions on which the very definition of physical quantities in question rests” (Bohr 1935, 1025).
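In symbols, the two halves of this dichotomy are the deterministic evolution law and the separate probabilistic rule for observed outcomes; both are standard textbook statements, quoted here only to make the contrast explicit:

    i\hbar\,\frac{\partial \psi(t)}{\partial t} = \hat{H}\,\psi(t)
    \qquad\text{(deterministic, unitary evolution)},

    P(a) = \bigl|\langle a \,|\, \psi(t) \rangle\bigr|^{2}
    \qquad\text{(Born rule: probability of observing outcome } a\text{)}.

The first equation never produces a single definite event; the second is imposed on top of it whenever an observation is described.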
In modern science, there is no unambiguous and precise definition of the words time, space, dimension, number, zero, infinity, charge, quantum particle, wave-function, etc. The operational definitions have been changed from time to time to take into account newer facts that facilitate justification of the new “theory”. For example, the fundamental concept of quantum mechanical theory is the concept of “state”, which is supposed to be completely characterized by the wave-function. However, till now it is not certain what a wave-function is. Is the wave-function real - a concrete physical object - or is it something like a law of motion, or an internal property of particles, or a relation among spatial points? Or is it merely our current information about the particles? The wave-functions of quantum mechanics cannot be represented mathematically in anything smaller than a vast, many-dimensional space called configuration space. This is contrary to experience, and the existence of higher dimensions is still in the realm of speculation. If we accept the views of modern physicists, then we have to accept that the universe’s history plays itself out not in the three-dimensional space of our everyday experience or the four-dimensional space-time of Special Relativity, but rather in this gigantic configuration space, out of which the illusion of three-dimensionality somehow emerges. Thus, what we see and experience is illusory! Maya?
The measurement problem in quantum mechanics is the unresolved problem of how (or if) wave-function collapse occurs. The inability to observe this process directly has given rise to different interpretations of quantum mechanics, and poses a key set of questions that each interpretation must answer. If it is postulated that a particle does not have a value before measurement, there has to be conclusive evidence to support this view. The wave-function in quantum mechanics evolves according to the Schrödinger equation into a linear superposition of different states, but actual measurements always find the physical system in a definite state. Any future evolution is based on the state the system was “discovered” to be in when the measurement was made, implying that the measurement “did something” to the process under examination. Whatever that “something” may be does not appear to be explained by the basic theory. Further, quantum systems described by linear wave-functions should be incapable of non-linear behavior. But chaotic quantum systems have been observed. Though chaos appears to be probabilistic, it is actually deterministic. Further, if the collapse causes the quantum state to jump from superposition of states to a fixed state, it must be either an illusion or an approximation to the reality at quantum level. We can rule out illusion as it is contrary to experience. In that case, there is nothing to suggest that events in quantum level are not deterministic. We may very well be able to determine the outcome of a quantum measurement provided we set up an appropriate measuring device!
The operational definitions and the treatment of the term wave-function used by researchers in quantum theory progressed through intermediate stages. Schrödinger viewed the wave-function associated with the electron as the charge density of an object smeared out over an extended (possibly infinite) volume of space. He did not regard the waveform as real nor did he make any comment on the waveform collapse. Max Born interpreted it as the probability distribution in the space of the electron’s position. He differed from Bohr in describing quantum systems as being in a state described by a wave-function which lives longer than any specific experiment. He considered the waveform as an element of reality. According to this view, also known as State Vector Interpretation, measurement implied the collapse of the wave function. Once a measurement is made, the wave-function ceases to be smeared out over an extended volume of space and the range of possibilities collapse to the known value. However, the nature of the waveform collapse is problematic and the equations of Quantum Mechanics do not cover the collapse itself.
The view known as “Consciousness Causes Collapse” regards measuring devices also as quantum systems for consistency. The measuring device changes state when a measurement is made, but its wave-function does not collapse. The collapse of the wave-function can be traced back to its interaction with a conscious observer. Let us take the example of measurement of the position of an electron. The waveform does not collapse when the measuring device initially measures the position of the electron. Human eye can also be considered a quantum system. Thus, the waveform does not collapse when the photon from the electron interacts with the eye. The resulting chemical signals to the brain can also be treated as a quantum system. Hence it is not responsible for the collapse of the wave-form. However, a conscious observer always sees a particular outcome. The wave-form collapse can be traced back to its first interaction with the consciousness of the observer. This begs the question: what is consciousness? At which stage in the above sequence of events did the wave-form collapse? Did the universe behave differently before life evolved? If so, how and what is the proof for that assumption? No answers.
The Many-worlds Interpretation tries to overcome the measurement problem in a different way. It regards all possible outcomes of a measurement as “really happening”, but holds that somehow we select only one of those realities (or, in its language, universes). But this view clashes with the second law of thermodynamics. The direction of the thermodynamic arrow of time is defined by the special initial conditions of the universe, which provides a natural solution to the question of why entropy increases in the forward direction of time. But what is the cause of the time asymmetry in the Many-worlds Interpretation? Why do universes split in the forward time direction? It is said that entropy increases after each universe-branching operation - the resultant universes are slightly more disordered. But some interpretations of decoherence contradict this view through what is called macroscopic quantum coherence. If particles can be isolated from the environment, we can observe multiple interference superposition terms as a physical reality in this universe - for example, the case of an electric current made to flow in opposite directions simultaneously. If the interference terms had really escaped to a parallel universe, then we should never be able to observe them both as physical reality in this universe. Thus, this view is questionable.
The Transactional Interpretation accepts the statistical nature of the waveform, but breaks it into an “offer” wave and a “confirmation” wave, both of which are treated as real. Probabilities are assigned to the likelihood of interaction of the offer waves with other particles. If a particle interacts with the offer wave, then it “returns” a confirmation wave to complete the transaction. Once the transaction is complete, energy, momentum, etc., are transferred in quanta as per normal probabilistic quantum mechanics. Since Nature always takes the shortest and simplest path, the transaction is expected to be completed at the first opportunity. But once that happens, classical probability, and not quantum probability, will apply. Further, the interpretation cannot explain how virtual particles interact. Thus, some people defer the waveform collapse to some unknown time. Since the confirmation wave in this theory is smeared all over space, it cannot explain when the transaction begins or is completed, nor how the confirmation wave determines which offer wave it matches up to.
Quantum decoherence, which was proposed in the context of the many-worlds interpretation, but has also become an important part of some modern updates of the Copenhagen interpretation based on consistent histories, allows physicists to identify the fuzzy boundary between the quantum micro-world and the world where the classical intuition is applicable. But it does not describe the actual process of the wave-function collapse. It only explains the conversion of the quantum probabilities (that are able to interfere) to the ordinary classical probabilities. Some people have tried to reformulate quantum mechanics as probability or logic theories. In some theories, the requirements for probability values to be real numbers have been relaxed. The resulting non-real probabilities correspond to quantum waveform. But till now a fully developed theory is missing.
Hidden Variables theories treat quantum mechanics as incomplete; until a more sophisticated theory underlying quantum mechanics is discovered, it is not possible to make any definitive statement. This view regards quantum objects as having properties with well-defined values that exist separately from any measuring devices. According to it, chance plays no role at all and everything is fully deterministic. Every material object invariably occupies some particular region of space. This theory takes the form of a single set of basic physical laws that apply in exactly the same way to every physical object that exists. The waveform may be a purely statistical creation, or it may have some physical role. The Causal Interpretation of Bohm and its later development, the Ontological Interpretation, emphasize “beables” rather than “observables”, in contradistinction to the predominantly epistemological approach of the standard model. This interpretation is causal but non-local and non-relativistic, while being capable of extension beyond the domain of the current quantum theory in several ways.
There are divergent views on the nature of reality and the role of science in dealing with reality. Measuring a quantum object was supposed to force it to collapse from a waveform into one position. According to quantum mechanical dogma, this collapse makes objects “real”. But new verifications of “collapse reversal” suggest that we can no longer assume that measurements alone create reality. It is possible to take a “weak” measurement of a quantum particle, partially collapsing the quantum state; then to “un-measure” it by altering certain properties of the particle; and then to perform the same weak measurement again. In one such experiment, reported in Nature News, the particle was found to have returned to its original quantum state, just as if no measurement had ever been taken. This implies that we cannot assume that measurements create reality, because it is possible to erase the effects of a measurement and start again.
Newton gave his laws of motion in the section entitled “Axioms, or Laws of Motion” of his book Mathematical Principles of Natural Philosophy, published in Latin in 1687. The second law says that the change of motion is proportional to the motive force impressed. Newton relates the force to the change of momentum (not to the acceleration, as most textbooks do). Momentum is accepted as one of the two quantities that, taken together, yield complete information about a dynamic system at any instant. The other quantity is position, which is said to determine the strength and direction of the force. Since then the earlier ideas have changed considerably. The pairing of momentum and position is no longer viewed in the Euclidean space of three dimensions. Instead, it is viewed in phase space, which is said to have six dimensions, three for position and three for momentum. But here the term dimension has actually been used for direction, which is not a scientific description.
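In modern notation, the distinction noted above can be written as follows (a standard restatement, not Newton’s own symbols):

    \vec{F} \;=\; \frac{d\vec{p}}{dt} \;=\; \frac{d(m\vec{v})}{dt}
    \;=\; m\,\frac{d\vec{v}}{dt} \;+\; \vec{v}\,\frac{dm}{dt},

which reduces to the familiar textbook form F = ma only when the mass m is constant.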
In fact, most of the terms used by modern scientists have not been precisely defined - they have only an operational definition, which is not only incomplete, but also does not stand scientific scrutiny, though it is often declared “reasonable”. This has been done not by chance, but by design, as modern science is replete with such instances. For example, we quote from the paper of Einstein and his colleagues Boris Podolsky and Nathan Rosen, which is known as the EPR argument (Phys. Rev. 47, 777 (1935)):
“A comprehensive definition of reality is, however, unnecessary for our purpose. We shall be satisfied with the following criterion, which we regard as reasonable. If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity. It seems to us that this criterion, while far from exhausting all possible ways of recognizing a physical reality, at least provides us with one such way, whenever the conditions set down in it occur. Regarded not as a necessary, but merely as a sufficient, condition of reality, this criterion is in agreement with classical as well as quantum-mechanical ideas of reality.”
Prima facie, what Einstein and his colleagues argued was that under ideal conditions, observation (including measurement) functions like a mirror reflecting an independently existing, external reality. The specific criterion for describing reality characterizes it in terms of objectivity, understood as independence from any direct measurement. This implies that, when a direct measurement of physical reality occurs, it merely passively reflects rather than actively constitutes the object under observation. It further implies that ideal observations reflect not only the state of the object during observation, but also its state before and after observation, just like a photograph. The photograph has a separate and fixed identity from the object whose photograph has been taken: while the object may be evolving in time, the photograph depicts a time-invariant state. Bohr and Heisenberg opposed this notion, taking the Kantian view and describing acts of observation and measurement more generally as constitutive of phenomena. More on this will be discussed later.
The fact that our raw sense impressions and experiences are compatible with widely differing concepts of the world has led some philosophers to suggest that we should dispense with the idea of an “objective world” altogether and base our physical theories on nothing but direct sense impressions. Berkeley expressed the positivist identification of sense impressions with objective existence in the famous phrase “esse est percipi” (to be is to be perceived). This has led to the changing idea of “objective reality”. However, if we can predict with certainty “the value of a physical quantity”, it only means that we have partial and not complete “knowledge” - which would be the “total” result of “all” measurements - of the system. It has not been shown that knowledge is synonymous with reality. We may have the “knowledge” of a mirage, but it is not real. Based on the result of our measurement, we may have knowledge that something is not real, but only apparent.
The partial definition of reality is not correct, as it talks about “the value of a physical quantity” and not “the values of all physical quantities”. We can predict with certainty “the value of a physical quantity” such as position or momentum, which are classical concepts, without in any way disturbing the system. This has been accepted for past events by Heisenberg himself, as discussed in later pages. Further, measurement is a process of comparison between similars, not a matter of bouncing light off something to disturb it. This is discussed in detail under the measurement problem. We cannot classify an object being measured (observed) separately from the apparatus performing the measurement (though there is a lot of confusion in this area): they must belong to the same class. This is clearly shown in the quantum world, where it is accepted that we cannot divorce the property we are trying to measure from the type of observation we make: the property is dependent on the type of measurement, and the measuring instrument must be designed to use that particular property. However, this interpretation can be misleading and may not have anything to do with reality, as described below. Such limited treatment of the definition of “reality” gave the authors the freedom to manipulate the facts to suit their convenience. Needless to say, the conclusions arrived at in that paper have been successively proved wrong by John S. Bell, Alain Aspect, etc., though for a different reason.
In the double-slit experiment, it is often said that the question of whether the electron has gone through hole No. 1 or hole No. 2 is meaningless. The electron, till we observe which hole it goes through, is said to exist in a superposition state with equal measures of probability for going through hole 1 and through hole 2. This is a highly misleading notion, as after it has gone through, we can always see its imprint on the photographic plate at a particular position, and that is real. Before such observation we do not know which hole it went through, but there is no reason to presume that it went through a mixed state of both holes. Our inability to measure or know cannot change physical reality. It can only limit our knowledge of that physical reality. This aspect and the interference phenomenon are discussed elaborately in later pages.
If we accept the modern view of superposition of states, we land in many complex situations. Suppose Schrödinger’s cat is somewhere in deep space and a team of astronauts is sent to measure its state. According to the Copenhagen interpretation, the astronauts, by opening the box and performing the observation, have now put the cat into a definite quantum state; say they find it alive. For them, the cat is no longer in a superposition state of equal probabilities of being alive or dead. But for their Earth-bound colleagues, the cat and the astronauts on board the space shuttle who know the state of the cat (did they change to a quantum state?) are still in a probability-wave superposition state of live cat and dead cat. Finally, when the astronauts communicate with a computer down on Earth, they pass on the information, which is stored in the magnetic memory of the computer. After the computer receives the information, but before its memory is read by the Earth-bound scientists, the computer is part of the superposition state for the Earth-bound scientists. Finally, in reading the computer output, the Earth-bound scientists reduce the superposition state to a definite one. Reality springs into being, or rather passes from being to becoming, only after we observe it. Is the above description sensible?
What really happens is that the cat interacts with the particles around it - protons, electrons, air molecules, dust particles, radiation, etc. - which has the effect of “observing” it. The state is accessed by each of the conscious observers (as well as by the other particles) by intercepting on the retina a small fraction of the light that has interacted with the cat. Thus, in reality, the field set up by the observer’s retina is perturbed and the impulse is carried to the brain, where it is compared with previous similar impressions. If the impression matches any previous impression, we cognize it as such. Only thereafter do we cognize the result of the measurement: the cat is alive or dead at the moment of observation. Thus, the process of measurement is carried out constantly without disturbing the system, and the evolution of the observed has nothing to do with the observation. This is elaborated while discussing the measurement problem.
Further, someone put the cat and the deadly apparatus in the box in the first place. Thus, according to the generally accepted theory, the wave-function had collapsed for that person at that time, and the information is available to us. Only afterwards is the evolutionary state of the cat - whether living or dead - unknown to us, including to the person who put the cat in the box. But according to the above description, the cat, whose wave-function has collapsed for the person who put it in the box, again goes into a “superposition of states of both alive and dead” and needs another observation - directly or indirectly through a set of apparatus - to describe its proper state at any subsequent time. This implies that after the second observation the cat again goes into a “superposition of states of both alive and dead” till it is again observed, and so on ad infinitum till it is found dead. But then the same story repeats for the dead cat - this time about its state of decomposition!
The cat example shows three distinct aspects: the state of the cat, i.e., dead or alive at the moment of observation (which information is time-invariant, as it is fixed); the state of the cat prior to and after the moment of observation (which information is time-variant, as the cat will die at some unspecified time due to unspecified causes); and the cognition of this information by a conscious observer, which is time-invariant but concerns the time evolution of the states of the cat. In his book “Popular Astronomy”, Prof. Bigelow says that Force, Mass, Surface, Electricity, Magnetism, etc., “are apprehended only during instantaneous transfer of energy”. He further adds: “Energy is the great unknown quantity, and its existence is recognized only during its state of change”. This is an eternal truth, and we endorse the above view. It is well known that the Universe is so called because everything in it is ever moving. Thus the view that observation describes not only the state of the object during observation, but also the state before and after it, is misleading. The result of measurement is the description of a state frozen in time, thus a fixed quantity. Its time evolution is not self-evident in the result of measurement. It has meaning only after it is cognized by a conscious agent, as consciousness is time-invariant. Thus, the observable, the observation and the observer depict three aspects - confined mass, displacing energy and revealing radiation - of a single phenomenon depicting reality. Quantum physics has to explain these phenomena scientifically. We will discuss this later.
When one talks about what an electron is “doing”, one implies what sort of wave function is associated with it. But the wave function is not a physical object in the sense that a proton, an electron or a billiard ball is. In fact, the rules of quantum theory do not even allot a unique wave function to a given state of motion, since multiplying the wave function by a factor of modulus unity does not change any physical consequence. Thus, Heisenberg opined that “the atoms or elementary particles are not as real; they form a world of potentialities or possibilities rather than one of things or facts”. This shows the helplessness of physicists to explain quantum phenomena in terms of the macro world. The activities of the elementary particles appear essential as long as we believe in the independent existence of fundamental laws that we can hope to understand better.
Reality cannot differ from person to person or from measurement to measurement, because it has an existence independent of these factors. The elements of our “knowledge” are actually derived from our raw sense impressions by automatically interpreting them in conventional terms based on our earlier impressions. Since these impressions vary, our responses to the same data also vary. Yet, unless an event is observed, it has no meaning by itself. Thus, it can be said that while observables have a time evolution independent of observation, they depend upon observation for any meaningful description in relation to others. For this reason individual responses/readings to the same object may differ based on earlier (at a different time, and maybe a different place) experience/environment. As the earlier example of the cat shows, this requires a definite link between the observer and the observed - a split (from time evolution) and a link (between the measurement representing its state and the consciousness of the observer, for describing that state in communicable language). This link varies from person to person. At every interaction, reality is not “created”; rather, the “presently evolved state” of the same reality gets described and communicated. Based on our earlier experiences/experimental set-up, it may return different responses/readings.
There is no proof to show that a particle does not have a value before measurement. The static attributes of a proton or an electron, such as its charge or its mass, have well-defined values and will remain so before and after observation, even though the particle may change its position or composition due to the effect of the forces acting on it - spatial translation. The dynamical attributes will continue to evolve - temporal translation. The life cycles of stars and galaxies will continue till we notice their extinction in a supernova explosion. The moon will exist even when we are not observing it. The proof for this is that their observed positions after a given time match our theoretical calculations. Before measurement, we do not know the “present” state. Since the present is a dynamical entity describing the time evolution of the particle, it evolves continuously from past to future. This does not mean that the observer creates reality - after observation at a given instant he only discovers the spatial and temporal state of its static and dynamical aspects.
The prevailing notion of superposition (an unobserved proposition) only means that we do not know how the fixed value found after measurement has been arrived at (described elaborately in later pages), as the same value could be arrived at in an infinite number of ways. We superimpose our ignorance on the particle and claim that the value of that particular aspect is undetermined, whereas in reality the value might already have been fixed (the cat might have died). The observer cannot influence the state of the observed (the moment of death of the cat) before or after observation. He can only report the “present state”. Quantum mechanics has failed to describe the collapse mechanism satisfactorily. In fact, many models (such as the Copenhagen interpretation) treat the concept of collapse as meaningless. The few models that accept collapse as real are incomplete and fail to come up with a satisfactory mechanism to explain it. In 1932, John von Neumann argued that if electrons are ordinary objects with inherent properties (which would include hidden variables), then the behavior of those objects must contradict the predictions of quantum theory. Because of his stature in those days, no one contradicted him. But in 1952, David Bohm showed that hidden variables theories are plausible if super-luminal influences are possible. Bohmian mechanics has returned predictions equivalent to those of other interpretations of quantum mechanics. Thus, it cannot be discarded lightly. If Bohm is right, then the Copenhagen interpretation and its extensions are wrong.
There is no proof to show that the characteristics of particle states are randomly chosen, instantaneously, at the time of observation/measurement. Since the value remains fixed after measurement, it is reasonable to assume that it remained so before measurement also. For example, if we measure the temperature of a particle by a thermometer, it is generally assumed that a little heat is transferred from the particle to the thermometer, thereby changing the state of the particle. This is an absolutely wrong assumption. No particle in the Universe is perfectly isolated. A particle inevitably interacts with its environment, and the environment might very well be a man-made measuring device.
Introduction of the thermometer does not change the environment, as all objects in the environment are either isothermal with one another or heat is flowing from higher concentration to lower concentration. In the former case there is no effect. In the latter case also nothing changes, as the thermometer is isothermal with the environment. Thus the rate of heat flow from the particle to the thermometer remains constant - the same as that from the particle to its environment. When exposed to heat, the expansion of mercury shows a uniform gradient in proportion to the temperature of its environment. This is sub-divided over a conventionally chosen range and taken as the unit. The expansion of mercury when exposed to the heat flow from an object, till both become isothermal, is compared with this unit, and we get a scalar quantity, which we call the result of measurement at that instant. Similarly, the heat flow to the thermometer does not affect the object, as the object was in any case continuing with the heat flow at a steady rate and continues to do so even after measurement. This is proved by the fact that the thermometer reading does not change after some time (all other conditions being unchanged). This is common to all measurements. Since the scalar quantity returned as the result of measurement is a number, it is sometimes said that numbers are everything.
While there is no proof that measurement determines reality, there is proof to the contrary. Suppose we have a random group of people and we measure three of their properties: sex, height and skin-color. They can be male or female, tall or short, and their skin-color can be fair or brown. If we take 30 people at random and measure sex and height first (male and tall), and then skin-color (fair) for the same sample, we get one result (how many tall men are fair). If we measure sex and skin-color first (male and fair), and then height (tall), we get a different result (how many fair males are tall). If we measure skin-color and height first (fair and tall), and then sex (male), we get yet another result (how many fair and tall persons are male). The order of measurement apparently changes the result of measurement. But the result of measurement really does not change anything. The tall will continue to be tall and the fair will continue to be fair. The male and female will not change sex either. This proves that measurement does not determine reality, but only exposes selected aspects of reality in a desired manner - depending upon the nature of the measurement. It is also wrong to say that whenever any property of a microscopic object affects a macroscopic object, that property is observed and becomes physical reality. We have experienced situations when an insect bite is not really felt (a measure of pain) by us immediately, even though it affects us. A viral infection does not affect us immediately.
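A minimal sketch of the classical tally described above is given below; the attribute values are generated randomly, and the three properties and the sample size of 30 are taken from the example purely for illustration. Its only point is that tallying the properties, in whatever order, leaves the recorded attributes of every person untouched.

    import random

    random.seed(1)
    people = [{"sex": random.choice(["male", "female"]),
               "height": random.choice(["tall", "short"]),
               "skin": random.choice(["fair", "brown"])}
              for _ in range(30)]

    snapshot = [dict(p) for p in people]   # copy of the attributes before any "measurement"

    def tally(order):
        # Count the people who are male, tall and fair, checking the properties in the given order.
        wanted = {"sex": "male", "height": "tall", "skin": "fair"}
        return sum(all(p[k] == wanted[k] for k in order) for p in people)

    print(tally(["sex", "height", "skin"]))
    print(tally(["sex", "skin", "height"]))
    print(tally(["skin", "height", "sex"]))
    print(people == snapshot)   # True: counting in any order has not altered anyone's attributes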
We measure position, which is the distance from a fixed reference point along different coordinates, by a tape of unit distance from one end-point to the other end-point, or by its sub-divisions. We measure mass by comparing it with another, unit mass. We measure time, which is the interval between events, by a clock, whose ticks are repetitive events of equal duration (interval) which we take as the unit, and so on. There is no proof to show that this principle is not applicable to the quantum world. These measurements are possible when both the observer with the measuring instrument and the object to be measured are in the same frame of reference (state of motion); thus nothing is disturbed. For this reason, results of measurement are always scalar quantities - multiples of the unit. Light is only an accessory for knowing the result of measurement, not a pre-condition for measurement. Simultaneous measurement of both position and momentum is not possible, which is correct, though for different reasons, explained in later pages. Incidentally, both position and momentum are regarded as classical concepts.
In classical mechanics and electromagnetism, the properties of a point mass or of a field are described by real numbers or by functions defined on two- or three-dimensional sets. These have a direct, spatial meaning, and in these theories there seems to be less need to provide a special interpretation for those numbers or functions. The accepted mathematical structure of quantum mechanics, on the other hand, is based on fairly abstract mathematics (?), such as Hilbert spaces (the quantum mechanical counterpart of the classical phase-space) and operators on those Hilbert spaces. Here again, there is no precise definition of space. The proof for the existence, and the justification, of the different classifications of “space” and “vacuum” are left unexplained.
When developing new theories, physicists tend to assume that quantities such as the strength of gravity, the speed of light in vacuum or the charge on the electron are all constant. These so-called universal constants are neither self-evident in Nature nor have they been derived from fundamental principles (though there are some claims to the contrary, each with its own problems). They have been deduced mathematically and their values have been determined by actual measurement. For example, the fine structure constant has been postulated in QED, but its value has been derived only experimentally (we have derived the measured value from fundamental principles). Yet the regularity with which such constants of Nature have been discovered points to some important principle underlying them. But are these quantities really constant?
The velocity of light varies according to the density of the medium. The acceleration due to gravity, “g”, varies from place to place. We have measured the value of “G” from Earth, but we do not know whether the value is the same beyond the solar system. The current value of the mean distance between the Sun and the Earth has been pegged at 149,597,870.696 kilometers. A recent (2004) study shows that the Earth is moving away from the Sun at about 15 cm per annum. Since this value is 100 times greater than the measurement error, something must really be pushing the Earth outwards. While one possible explanation for this phenomenon is that the Sun is losing enough mass via fusion and the solar wind, alternative explanations include the influence of dark matter and a changing value of G. We will explain it later.
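To put the quoted figures in proportion, the short sketch below simply divides the reported 15 cm per year recession by the Sun-Earth distance given above; the numbers are those quoted in the text, used only to show how tiny the fractional change is.

    au_m = 149_597_870.696e3      # Sun-Earth distance quoted above, in metres
    recession_m_per_year = 0.15   # 15 cm per annum, as reported

    fractional_change = recession_m_per_year / au_m
    print("fractional change per year: %.1e" % fractional_change)   # ~1e-12 per year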
Einstein proposed the Cosmological Constant to allow static, homogeneous solutions to his equations of General Relativity in the presence of matter. When the expansion of the Universe was discovered, it was thought to be unnecessary, forcing Einstein to declare that it was his greatest blunder. There have been a number of subsequent episodes in which a non-zero cosmological constant was put forward as an explanation for a set of observations and later withdrawn when the observational case evaporated. Meanwhile, particle theorists postulate that the cosmological constant can be interpreted as a measure of the energy density of the vacuum. This energy density is the sum of a number of apparently unrelated contributions: potential energies from scalar fields and zero-point fluctuations of each field-theory degree of freedom, as well as a bare cosmological constant λ0, each of magnitude much larger than the upper limits on the cosmological constant as measured now. However, the observed vacuum energy is extremely small in comparison with the theoretical prediction: a discrepancy of some 120 orders of magnitude between the theoretical and observational values of the cosmological constant. This has led some people to postulate an unknown mechanism which would set it precisely to zero. Others postulate a mechanism to suppress the cosmological constant by just the right amount to yield an observationally accessible quantity. However, all agree that this elusive quantity does play an important dynamical role in the Universe. The confusion can be settled if we accept the changing value of G, which can be related to the energy density of the vacuum. Thus, the so-called constants of Nature could also be thought of as equilibrium points, where the different forces acting on a system in different proportions balance one another.
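The often-quoted figure of roughly 120 orders of magnitude can be reproduced with a back-of-the-envelope estimate. The sketch below compares a naive Planck-scale-cutoff estimate of the vacuum energy density with an assumed round figure for the observed dark-energy density (about 0.7 of the critical density for a Hubble constant near 70 km/s/Mpc). The exact exponent depends on the cutoff chosen, so this is only an illustration of the scale of the discrepancy.

    import math

    hbar = 1.054571817e-34    # J s
    G    = 6.67430e-11        # m^3 kg^-1 s^-2
    c    = 2.99792458e8       # m/s

    # Naive theoretical estimate: one Planck energy per Planck volume
    E_planck   = math.sqrt(hbar * c**5 / G)      # ~2e9 J
    l_planck   = math.sqrt(hbar * G / c**3)      # ~1.6e-35 m
    rho_theory = E_planck / l_planck**3          # ~5e113 J/m^3

    # Assumed observed dark-energy density (round figure for illustration)
    rho_observed = 6e-10                         # J/m^3

    print("discrepancy: about 10^%.0f" % math.log10(rho_theory / rho_observed))
    # prints roughly 10^123 with this cutoff - "about 120 orders of magnitude" in round numbers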
For example, let us consider the Libration points called L4 and L5, which are said to be places that gravity forgot. They are vast regions of space, sometimes millions of kilometers across, in which celestial forces cancel out gravity and trap anything that falls into them. The Libration points, known in earlier times by the Sanskrit names mandocca and pāta, were rediscovered in 1772 by the mathematician Joseph-Louis Lagrange. He calculated that the Earth’s gravitational field neutralizes the gravitational pull of the Sun at five regions in space, making them the only places near our planet where an object is truly weightless. Astronomers call them Libration points, also Lagrangian points, or L1, L2, L3, L4 and L5 for short. Of the five Libration points, L4 and L5 are the most intriguing.
Two such Libration points sit in the Earth’s orbit as well, one marching ahead of our planet, the other trailing along behind. They are the only ones that are stable. While a satellite parked at L1 or L2 will wander off after a few months unless it is nudged back into place (like the American satellite SOHO), any object at L4 or L5 will stay put due to a complex web of forces (like the Trojan asteroids). Evidence for such gravitational potholes appears around other planets too. In 1906, Max Wolf discovered an asteroid outside the main belt between Mars and Jupiter, and recognized that it was sitting at Jupiter’s L4 point. The mathematics for L4 uses the “brute force approach”, making it approximate. Lying 150 million kilometers away along the line of Earth’s orbit, Earth’s L4 circles the Sun about 60 degrees (slightly more, according to our calculation) in front of the planet, while L5 lies at the same angle behind. Wolf named his asteroid Achilles, leading to the tradition of naming these asteroids after characters from the Trojan wars.
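The 60-degree geometry can be checked numerically: at the point forming an equilateral triangle with the Sun and the Earth, the combined gravitational pull of the two bodies points at the barycentre with exactly the magnitude needed to keep that point co-rotating with the Earth. The sketch below, using round values for the masses and the Sun-Earth distance, is only a consistency check of that balance, not a stability analysis.

    import numpy as np

    G   = 6.674e-11        # m^3 kg^-1 s^-2
    M_s = 1.989e30         # Sun, kg
    M_e = 5.972e24         # Earth, kg
    d   = 1.496e11         # Sun-Earth distance, m

    # Positions in a frame with the barycentre at the origin
    r_sun   = np.array([-d * M_e / (M_s + M_e), 0.0])
    r_earth = np.array([ d * M_s / (M_s + M_e), 0.0])

    # L4 forms an equilateral triangle with the Sun and the Earth, 60 degrees ahead
    r_L4 = r_sun + d * np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])

    def grav(r, r_body, M):
        dr = r_body - r
        return G * M * dr / np.linalg.norm(dr) ** 3

    a_gravity     = grav(r_L4, r_sun, M_s) + grav(r_L4, r_earth, M_e)
    a_centripetal = -G * (M_s + M_e) / d ** 3 * r_L4   # needed to co-rotate with the Earth

    print(a_gravity)       # ~[-3.0e-3, -5.1e-3] m/s^2
    print(a_centripetal)   # matches a_gravity: gravity alone supplies the centripetal pull at L4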
The realization that Achilles would be trapped in its place and forced to orbit with Jupiter, never getting much closer or further away, started a flurry of telescopic searches for more examples. There are now more than 1000 asteroids known to reside at Jupiter’s L4 and L5 points. Of these, about two-thirds reside at L4 while the remaining one-third are at L5. Perturbations by the other planets (primarily Saturn) cause these asteroids to oscillate around L4 and L5 by about 15-20° and at inclinations of up to 40° to the orbital plane. These oscillations generally take between 150 and 200 years to complete. Such planetary perturbations may also be the reason why so few Trojans have been found around other planets. Searches for “Trojan” asteroids around other planets have met with mixed results. Mars has 5 of them, at L5 only. Saturn seemingly has none. Neptune has two.
The asteroid belt surrounds the inner Solar system like a rocky, ring-shaped moat, extending out from the orbit of Mars to that of Jupiter. But there are voids in that moat at distinct locations called Kirkwood gaps, which are associated with orbital resonances with the giant planets - where the orbital influence of Jupiter is especially potent. Any asteroid unlucky enough to venture into one of these locations will follow a chaotic orbit and eventually be ejected from the cozy confines of the belt, often winding up on a collision course with one of the inner, rocky planets (such as Earth) or the moon. But Jupiter’s pull cannot account for the extent of the belt’s depletion seen at present or for the spotty distribution of asteroids across the belt - unless there was a migration of planets early in the history of the solar system. According to a report (Nature 457, 1109-1111, 26 February 2009), the observed distribution of main-belt asteroids does not uniformly fill even those regions that are dynamically stable over the age of the Solar System. There is a pattern of excess depletion of asteroids, particularly just outward of the Kirkwood gaps associated with the 5:2, the 7:3 and the 2:1 Jovian resonances. These features are not accounted for by planetary perturbations in the current structure of the Solar System, but are consistent with dynamical ejection of asteroids by the sweeping of gravitational resonances during the migration of Jupiter and Saturn.
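The locations of these gaps follow directly from Kepler’s third law: an asteroid in a p:q mean-motion resonance completes p orbits while Jupiter completes q, so its semi-major axis is a_J·(q/p)^(2/3). The small Python sketch below reproduces the gap positions for the resonances mentioned above, assuming Jupiter’s semi-major axis to be 5.204 AU:

```python
# Semi-major axis of a p:q mean-motion resonance with Jupiter, from Kepler's
# third law: (a / a_jup)^(3/2) = q / p, where the asteroid completes p orbits
# while Jupiter completes q.
a_jupiter = 5.204  # AU (assumed value)

for p, q in [(3, 1), (5, 2), (7, 3), (2, 1)]:
    a_res = a_jupiter * (q / p) ** (2 / 3)
    print(f"{p}:{q} Jovian resonance -> Kirkwood gap near {a_res:.2f} AU")
```

The computed values, roughly 2.50, 2.82, 2.96 and 3.28 AU, match the observed gap locations in the belt.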
Some researchers designed a computer model of the asteroid belt under the influence of the outer “gas giant” planets, allowing them to test the distribution that would result from changes in the planets’ orbits over time. A simulation in which the orbits remained static did not agree with observational evidence: there were places where there should have been a lot more asteroids than we see. On the other hand, a simulation with an early migration of Jupiter inward and Saturn outward - the result of interactions with lingering planetesimals (small bodies) left over from the creation of the solar system - fit the observed layout of the belt much better. The uneven spacing of asteroids is readily explained by this planet-migration process, which others have also worked on. In particular, if Jupiter had started somewhat farther from the sun and then migrated inward toward its current location, the gaps it carved into the belt would also have inched inward, leaving the belt looking much like it does now. The agreement between the simulated and observed asteroid distributions is quite remarkable.
One significant question not addressed in this work is the pattern of migration - whether the asteroid belt can be used to rule out one of the presently competing theories of migratory patterns. The new study deals with the speed at which the planets’ orbits have changed. The simulation presumes a rather rapid migration, of a million or two million years, but other models of Neptune’s early orbital evolution tend to show that migration proceeds much more slowly, over many millions of years. We hold this period to be 4.32 million years for the Solar system. This example shows that the orbits of planets, which are stabilized by the balancing of the centripetal force and gravity, might be changing from time to time. This implies that either the masses of the Sun and the planets, or their distances from each other, or both are changing over long periods of time (which is true). It can also mean that G is changing. Thus, the so-called constants of Nature may not be so constant after all.
Earlier, a cosmology with a changing value for the gravitational constant G was proposed by P.A.M. Dirac in 1937. Field theories applying this principle were proposed by P. Jordan and D.W. Sciama, and in 1961 by C. Brans and R.H. Dicke. According to these theories the value of G is diminishing. Brans and Dicke suggested a change of about 2 × 10⁻¹¹ (0.00000000002) per year. This theory has not been accepted on the ground that it would have a profound effect on phenomena ranging from the evolution of the Universe to the evolution of the Earth. For instance, stars evolve faster if G is greater. Thus, stellar evolutionary ages computed with G held constant at its present value would be too great. The Earth, compressed by gravitation, would expand, having a profound effect on surface features. The Sun would have been hotter than it is now and the Earth’s orbit would have been smaller. No one bothered to check whether such a scenario existed or is possible. Our studies in this regard show that the above scenario did happen. We have data to prove the above point.
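To get a feel for the size of the suggested effect, the short calculation below integrates the Brans-Dicke rate over the conventionally assumed age of the solar system, taken here as about 4.5 billion years (an assumed figure used only for illustration):

```python
rate = 2e-11      # suggested fractional decrease of G per year (Brans-Dicke)
age  = 4.5e9      # years, conventionally assumed age of the solar system

linear_change   = rate * age          # simple linear estimate of the total change
compound_factor = (1 - rate) ** age   # if the decrease compounds year by year

print(f"linear estimate   : G smaller by ~{linear_change * 100:.0f}% over {age:.1e} yr")
print(f"compound estimate : G(now)/G(then) ~ {compound_factor:.3f}")
```

A decrease at that rate would thus amount to roughly a ten per cent change in G over the lifetime of the solar system, large enough to matter for the stellar and planetary evolution arguments listed above.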
Precise measurements in 1999 gave values of G so divergent from the currently accepted value that the results had to be pushed under the carpet, as otherwise most theories of physics would have tumbled. Presently, physicists are measuring gravity by bouncing atoms up and down off a laser beam (arXiv:0902.0109). The experiments have been modified to perform atom interferometry, whereby quantum interference between atoms can be used to measure tiny accelerations. Those still using the earlier value of G in their calculations end up with trajectories much different from their theoretical calculations. Thus, modern science is based on a value of G that has been proved to be wrong. The Pioneer and fly-by anomalies and the change of direction of Voyager 2 after it passed the orbit of Saturn have cast a shadow on the authenticity of the theory of gravitation. Till now these have not been satisfactorily explained. We have discussed these problems and explained a different theory of gravitation in later pages.
According to reports published in several scientific journals, precise measurements of the light from distant quasars and of the only known natural nuclear reactor, which was active nearly 2 billion years ago at what is now Oklo in Gabon, suggest that the value of the fine-structure constant (alpha) may have changed over the history of the universe (Physical Review D, vol. 69, p. 121701). If confirmed, the results will be of enormous significance for the foundations of physics. Alpha is an extremely important constant that determines how light interacts with matter - and it shouldn’t be able to change. Its value depends on, among other things, the charge on the electron, the speed of light and Planck’s constant. Could one of these really have changed?
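For reference, the fine-structure constant is the dimensionless combination α = e²/(4πε₀ħc). The sketch below evaluates it from standard CODATA values (assumed here, not taken from the reports above), giving the familiar 1/137:

```python
import math

e       = 1.602176634e-19    # C,   elementary charge
epsilon = 8.8541878128e-12   # F/m, vacuum permittivity
hbar    = 1.054571817e-34    # J*s, reduced Planck constant
c       = 2.99792458e8       # m/s, speed of light

alpha = e**2 / (4 * math.pi * epsilon * hbar * c)
print(f"alpha   = {alpha:.9f}")      # ~0.007297353
print(f"1/alpha = {1 / alpha:.3f}")  # ~137.036
```

A change in any of the ingredient constants would shift this number, which is why claims of a varying alpha immediately raise the question asked above.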
If the fine-structure constant changes over time, it becomes possible to postulate that the velocity of light might not be constant. This would explain the flatness, horizon and monopole problems in cosmology. Recent work has shown that the universe appears to be expanding at an ever faster rate, and there may well be a non-zero cosmological constant. There is a class of theories in which the speed of light is determined by a scalar field (the force making the cosmos expand, the cosmological constant) that couples to the gravitational effect of pressure. Changes in the speed of light convert the energy density of this field into energy. One off-shoot of this view is that in a young and hot universe, during the radiation epoch, this conversion prevents the scalar field from dominating the universe. As the universe expands, pressure-less matter dominates and the variations in c decrease, making α (alpha) fixed and stable. The scalar field then begins to dominate, driving a faster expansion of the universe. Whether the claimed variation of the fine-structure constant exists or not, putting bounds on its rate of change puts tight constraints on new theories of physics.
One of the most mysterious objects in the universe is what is known as the black hole – a derivative of the general theory of relativity. It is said to be the ultimate fate of a super-massive star that has exhausted the fuel that sustained it for millions of years. In such a star, gravity overwhelms all other forces and the star collapses under its own gravity to the size of a pinprick. It is called a black hole because nothing – not even light – can escape it. A black hole has two parts. At its core is a singularity, the infinitesimal point into which all the matter of the star gets crushed. Surrounding the singularity is the region of space from which escape is impossible, the perimeter of which is called the event horizon. Once something enters the event horizon, it loses all hope of exiting. It is generally believed that a large star eventually collapses to a black hole. Roger Penrose conjectured that the formation of a singularity during stellar collapse necessarily entails the formation of an event horizon. According to him, Nature forbids us from ever seeing a singularity because a horizon always cloaks it. Penrose’s conjecture is termed the cosmic censorship hypothesis. It is only a conjecture. But some theoretical models suggest that instead of a black hole, a collapsing star might become a naked singularity.
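The size of the event horizon follows from one simple formula: for a non-rotating, uncharged body of mass M the Schwarzschild radius is r_s = 2GM/c². The sketch below evaluates it for a few illustrative masses (the values of G, c and the masses are assumed standard figures):

```python
G     = 6.67430e-11    # m^3 kg^-1 s^-2
c     = 2.99792458e8   # m/s
M_sun = 1.989e30       # kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating, uncharged mass."""
    return 2 * G * mass_kg / c**2

for label, mass in [("Sun", M_sun),
                    ("10 solar masses", 10 * M_sun),
                    ("Earth", 5.972e24)]:
    print(f"{label:16s}: r_s = {schwarzschild_radius(mass):.3e} m")
```

The Sun would have to be squeezed inside about 3 km, and the Earth inside about 9 mm, before a horizon could form, which is why only the collapse of very massive stars is expected to produce one.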
Most physicists operate under the assumption that a horizon must indeed form around such a singularity. What exactly happens at a singularity - what becomes of the matter after it is infinitely crushed into oblivion - is not known. By hiding the singularity, the event horizon isolates this gap in our knowledge. General relativity does not account for the quantum effects that become important for microscopic objects, and those effects presumably intervene to prevent the strength of gravity from becoming truly infinite. Whatever happens in a black hole stays in a black hole. Yet researchers have found a wide variety of stellar collapse scenarios in which an event horizon does not form, so that the singularity remains exposed to our view. Physicists call it a naked singularity. In such a case, matter and radiation can both fall in and come out, whereas matter falling into the singularity inside a black hole is on a one-way trip.
In principle, we can come as close as we like to a naked singularity and return. Naked singularities might account for unexplained high-energy phenomena that astronomers have seen, and they might offer a laboratory to explore the fabric of the so-called space-time on its finest scales. The results of simulations by different scientists show that most naked singularities are stable to small variations of the initial setup. Thus, these situations appear to be generic and not contrived. These counterexamples to Penrose’s conjecture suggest that cosmic censorship is not a general rule.
The discovery of naked singularities would transform the search for a unified theory of physics, not least by providing direct observational tests of such a theory. It has taken so long for physicists to accept the possibility of naked singularities because they raise a number of conceptual puzzles. A commonly cited concern is that such singularities would make nature inherently unpredictable. Unpredictability is actually common in general relativity and is not always directly related to the violation of cosmic censorship described above. The theory permits time travel, which could produce causal loops with unforeseeable outcomes, and even ordinary black holes can become unpredictable. For example, if we drop an electric charge into an uncharged black hole, the shape of space-time around the hole radically changes and is no longer predictable. A similar situation holds when the black hole is rotating.
Specifically, what happens is that space-time no longer neatly separates into space and time, so that physicists cannot consider how the black hole evolves from some initial time into the future. Only the purest of pure black holes, with no charge or rotation at all, is fully predictable. The loss of predictability and other problems with black holes actually stem from the occurrence of singularities; it does not matter whether they are hidden or not. Cosmologists dread the singularity because at this point gravity becomes infinite, along with the temperature and density of the universe. As its equations cannot cope with such infinities, general relativity fails to describe what happens at the big bang.
In the mid 1980s, Abhay Ashtekar rewrote the equations of general relativity in a quantum-mechanical framework to show that the fabric of space-time is woven from loops of gravitational field lines. The theory is called loop quantum gravity. If we zoom out far enough, space appears smooth and unbroken, but a closer look reveals that space comes in indivisible chunks, or quanta, about 10⁻³⁵ meters in size. In 2000, some scientists used loop quantum gravity to create a simple model of the universe, known as loop quantum cosmology (LQC). Unlike general relativity, the physics of LQC did not break down at the big bang. Others developed computer simulations of the universe according to LQC. Early versions of the theory described the evolution of the universe in terms of quanta of area, but a closer look revealed a subtle error. After this mistake was corrected, it was found that the calculations now involved tiny volumes of space. It made a crucial difference. Now the universe according to LQC agreed brilliantly with general relativity when expansion was well advanced, while still eliminating the singularity at the big bang. When they ran time backwards, instead of becoming infinitely dense at the big bang, the universe stopped collapsing and reversed direction. The big bang singularity had disappeared (Physical Review Letters, vol. 96, p. 141301). The era of the Big Bounce has arrived. But the scientists are far from explaining all the conundrums.
Often it is said that the language of physics is mathematics. In a famous essay, Wigner wrote about the “unreasonable effectiveness of mathematics”. Most physicists share the perplexity expressed by Wigner and Einstein’s dictum that “the most incomprehensible thing about the universe is that it is comprehensible”. They marvel at the fact that the universe is not anarchic - that atoms obey the same laws in distant galaxies as in the lab. Yet Gödel’s Theorem implies that we can never be certain that mathematics is consistent: it leaves open the possibility that a proof exists demonstrating that 0 = 1. Quantum theory tells us that, on the atomic scale, nature is intrinsically fuzzy. Nonetheless, atoms behave in precise mathematical ways when they emit and absorb light, or link together to make molecules. Yet, is Nature mathematical?
Language is a means of communication. Mathematics cannot communicate in the same manner as a language. Mathematics on its own does not lead to a sensible universe. A mathematical formula has to be interpreted in communicable language to acquire some meaning. Thus, mathematics is only a tool for describing some, and not all, ideas. For example, the “observer” has an important place in quantum physics. Everett addressed the measurement problem by making the observer an integral part of the system observed: introducing a universal wave function that links observers and objects as parts of a single quantum system. But there is no equation for the “observer”.
We have not come across any precise and scientific definition of mathematics. The Concise Oxford Dictionary defines mathematics as “the abstract science of numbers, quantity, and space studied in its own right”, or “as applied to other disciplines such as physics, engineering, etc”. This is not a scientific description, as the definition of number itself leads to circular reasoning. Even mathematicians do not have a common opinion on the content of mathematics. There are at least four views among mathematicians on what mathematics is. John D. Barrow describes these views as follows:
Platonism: It is the view that concepts like groups, sets, points, infinities, etc., are “out there” independent of us – “the pie is in the sky”. Mathematicians discover them and use them to explain Nature in mathematical terms. There is an offshoot of this view called “neo-Platonism”, which likens mathematics to the composition of a cosmic symphony by independent contributors, each moving it towards some grand final synthesis. The proof offered is that completely independent mathematical discoveries by different mathematicians working in different cultures so often turn out to be identical.
Conceptualism: It is the antithesis of Platonism. According to this view, scientists create an array of mathematical structures, symmetries and patterns and force the world into this mould, as they find it so compelling. The so-called constants of Nature, which arise as theoretically undetermined constants of proportionality in the mathematical equations, are solely artifacts of the peculiar mathematical representation they have chosen to use for different purposes.
Formalism: This was developed during the last century, when a number of embarrassing logical paradoxes were discovered. There were proofs which established the existence of particular objects but offered no way of constructing them explicitly in a finite number of steps. Hilbert’s formalism belongs to this category, which defines mathematics as nothing more than the manipulation of symbols according to specified rules (not natural, but sometimes un-physical, man-made rules). The resulting paper edifice has no special meaning at all. If the manipulations are done correctly, it should result in a vast collection of tautological statements: an embroidery of logical connections.
Intuitionism: Prior to Cantor’s work on infinite sets, mathematicians had not made use of actual infinities, but had only exploited the existence of quantities that could be made arbitrarily large or small – the concept of a limit. To avoid founding whole areas of mathematics upon the assumption that infinite sets share the “obvious” properties possessed by finite ones, it was proposed that only quantities that can be constructed from the natural numbers 1, 2, 3, …, in a finite number of logical steps, should be regarded as proven true.
None of the above views is complete, because none is derived from fundamental principles or conforms to a proper definition of mathematics, whose foundation is built upon logical consistency. The Platonic view arose from the fact that mathematical quantities transcend human minds and manifest the intrinsic character of reality. A number, say three or five, is coded differently in various languages, but conveys the same concept in all civilizations. Numbers are treated as abstract entities, and mathematical truth means correspondence between the properties of these abstract objects and our system of symbols. We associate transitory physical objects such as three worlds or five sense organs with these immutable abstract quantities as a secondary realization. These ideas are somewhat misplaced. Numbers are a property of all objects by which we distinguish between similars. If there is nothing similar to an object, it is one. If there are similars, the number is decided by the number of times we perceive such similars (we may call it a set). Since perception is universal, the concept of numbers is also universal.
Believers in eternal truth often point to mathematics as a model of a realm with timeless truths. Mathematicians explore this realm with their minds and discover truths that exist outside of time, in the same way that we discover the laws of physics by experiment. Mathematics is not only self-consistent, but also plays a central role in formulating fundamental laws of physics - what the physics Nobel laureate Eugene Wigner referred to as the “unreasonable effectiveness of mathematics” in physics. One way to explain this success within the dominant metaphysical paradigm of the timeless multiverse is to suppose that physical reality is mathematical, i.e. we are creatures within the timeless Platonic realm. The cosmologist Max Tegmark calls this the mathematical universe hypothesis. A slightly less provocative approach is to posit that since the laws of physics can be represented mathematically, not only is their essential truth outside of time, but there is in the Platonic realm a mathematical object, a solution to the equations of the final theory, that is “isomorphic” in every respect to the history of the universe. That is, any truth about the universe can be mapped into a theorem about the corresponding mathematical object. If nothing exists or is true outside of time, then this description is void. However, if mathematics is not the description of a different, timeless realm of reality, what is it? What are the theorems of mathematics about if numbers, formulas and curves do not exist outside of our world?
Let us consider a game of chess. It was invented at a particular time, before which there is no reason to speak of any truths of chess. But once the game was invented, a long list of facts became demonstrable. These are provable from the rules and can be called the theorems of chess. These facts are objective in that any two minds that reason logically from the same rules will reach the same conclusions about whether a conjectured theorem is true or not. Platonists would say that chess always existed timelessly in an infinite space of mathematically describable games. By such an assertion, we do not achieve anything except a feeling of doing something elevated. Further, we have to explain how we finite beings embedded in time can gain knowledge about this timeless realm. It is much simpler to think that at the moment the game was invented, a large set of facts became objectively demonstrable as a consequence of the invention of the game. There is no need to think of the facts as eternally existing truths that are suddenly discoverable. Instead we can say they are objective facts that are evoked into existence by the invention of the game of chess. The bulk of mathematics can be treated the same way, even if the subjects of mathematics such as numbers and geometry are inspired by our most fundamental observations of nature. Mathematics is no less objective, useful or true for being evoked by and dependent on discoveries of living minds in the process of exploring the time-bound universe.
The Mandelbrot Set is often cited as a mathematical object with an independent existence of its own. The Mandelbrot Set is produced by a remarkably simple mathematical formula – a few lines of code describing the recursive feedback loop f(z) = z² + c – yet it can be used to produce beautiful colored computer plots. It is possible to endlessly zoom in to the set, revealing ever more beautiful structures which never seem to repeat themselves. Penrose called it “not an invention of the human mind: it was a discovery”. It was just out there. On the other hand, fractals – geometrical shapes found throughout Nature – are self-similar: however far you zoom into them, they still resemble the original structure. Some people use these facts to argue that mathematics, and not evolution, is the sole factor in designing Nature. They miss the deep inner meaning of these, which will be described later while describing the structure of the Universe.
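The recursion really is only a few lines of code. The sketch below is a minimal escape-time test for membership in the set, together with a crude text rendering; the iteration limit and the plotting window are arbitrary choices made for illustration:

```python
def in_mandelbrot(c, max_iter=100):
    """Escape-time test for the recursion z -> z**2 + c, starting from z = 0."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| exceeds 2, the orbit escapes to infinity
            return False
    return True

# Crude text rendering of the set over a window of the complex plane
for im in (y / 10 for y in range(12, -13, -1)):
    row = "".join("#" if in_mandelbrot(complex(re / 30, im)) else " "
                  for re in range(-60, 21))
    print(row)
```

Zooming in simply means shrinking the window; the boundary keeps revealing new filigree at every scale, which is what gives the set its reputation as something discovered rather than invented.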
The opposing view reflects the ideas of Kant regarding the innate categories of thought, whereby all our experience is ordered by our minds. Kant pointed out the difference between the internal mental models we build of the external world and the real objects that we know through our sense organs. The views of Kant have many similarities with those of Bohr. The Consciousness of Kant is described as intelligence by Bohr. The sense organs of Kant are described as measuring devices by Bohr. Kant’s mental models are Bohr’s quantum mechanical models. This view of mathematics lays more stress on “mathematical modeling” than on mathematical rules or axioms. In this view, the so-called constants of Nature that arise as theoretically undetermined constants of proportionality in our mathematical equations are solely artifacts of the particular mathematical representation we have chosen to use for explaining different natural phenomena. For example, we use G as the Gravitational constant because of our inclination to express the gravitational interaction in a particular way. This view is misleading, as the large number of the so-called constants of Nature points to some underlying reality behind them. We will discuss this point later.
The debate over the definition of “physical reality” led to the notion that it should be external to the observer – an observer-independent objective reality. The statistical formulation of the laws of atomic and sub-atomic physics has added a new dimension to the problem. In quantum mechanics, the experimental arrangements are treated in classical terms, whereas the observed objects are treated in probabilistic terms. In this way, the measuring apparatus and the observer are effectively joined into one complex system which has no distinct, well defined parts, and the measuring apparatus does not have to be described as an isolated physical entity.
As Max Tegmark in his External Reality Hypothesis puts it: if we assume that reality exists independently of humans, then for a description to be complete, it must also be well-defined according to non-human entities that lack any understanding of human concepts like “particle”, “observation”, etc. A description of objects in this external reality and the relations between them would have to be completely abstract, forcing any words or symbols to be mere labels with no preconceived meanings whatsoever. To understand the concept, we have to distinguish between two ways of viewing reality. The first is from outside, like the overview of a physicist studying its mathematical structure – a bird’s eye view. The second is the inside view of an observer living in the structure – the view of a frog in the well.
Though Tegmark’s view is nearer the truth (it will be discussed later), it has been contested by others on the ground that it contradicts logical consistency. Tegmark relies on a quote of David Hilbert: “Mathematical existence is merely freedom from contradiction”. This implies that mathematical structures simply do not exist unless they are logically consistent. The critics cite Russell’s paradox (discussed in detail in later pages) and the remedies devised to avoid it - such as the Zermelo-Fraenkel set theory - to point out that mathematics on its own does not lead to a sensible universe. We seem to need to apply constraints in order to obtain a consistent physical reality from mathematics. Unrestricted axioms lead to Russell’s paradox.
Conventional bivalent logic is assumed to be based on the principle that every proposition takes exactly one of two truth values: “true” or “false”. This is a wrong conclusion based on the European tradition: in ancient times students were advised to observe, listen (to the teachings of others), analyze and test with practical experiments before accepting anything as true. Till a proposition was conclusively proved or disproved, it remained “undecided”. The so-called discovery of multi-valued logic is therefore nothing new. And if we extend modern logic, why stop at ternary truth values: it could be four- or more-valued logic. But then what are these values? We will discuss this later.
Though Euclid, with his Axioms, appears to be a Formalist, his Axioms were abstracted from the real physical world. But the focus of attention of modern Formalists is upon the relations between entities and the rules governing them, rather than the question of whether the objects being manipulated have any intrinsic meaning. The connection between the Natural world and the structure of mathematics is totally irrelevant to them. Thus, when they thought that Euclidean geometry is not applicable to curved surfaces, they had no hesitation in accepting the view that the sum of the three angles of a triangle need not be equal to 180°. It could be more or less depending upon the curvature. This is a wholly misguided view. The lines or the sides drawn on a curved surface are not straight lines. Hence the Axioms of Euclid are not violated, but are wrongly applied. Riemannian geometry, which led to the chain of non-Euclidean geometries, was developed out of Riemann’s interest in trying to solve the problems of distortion of metal sheets when they are heated. Einstein used this idea to suggest curvature of space-time without precisely defining space or time or space-time. But such curvature is a temporary phenomenon due to the application of heat energy. The moment the external heat energy is removed, the metal plate is restored to its original position and Euclidean geometry is applicable. If gravity changes the curvature of space, then it should be like the external energy that distorts the metal plate. Then who applies gravity to mass, or what is the mechanism by which gravity is applied to mass? If no external agency is needed and it acts perpetually, then all mass should be changing perpetually, which is contrary to observation. This has been discussed elaborately in later pages.
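The statement about angle sums can be checked numerically. The sketch below is a small illustration of the standard geometric result, using the spherical law of cosines on a unit sphere with two hand-picked example triangles; it is offered only to put numbers on the claim, not to settle the interpretive argument above:

```python
import math

def angle_sum_on_sphere(A, B, C):
    """Sum of interior angles (degrees) of a geodesic triangle on the unit sphere."""
    def arc(u, v):                    # great-circle side length, in radians
        return math.acos(max(-1.0, min(1.0, sum(x * y for x, y in zip(u, v)))))
    a, b, c = arc(B, C), arc(A, C), arc(A, B)
    def vertex(opp, s1, s2):          # spherical law of cosines for an angle
        return math.acos((math.cos(opp) - math.cos(s1) * math.cos(s2))
                         / (math.sin(s1) * math.sin(s2)))
    return math.degrees(vertex(a, b, c) + vertex(b, a, c) + vertex(c, a, b))

# North pole plus two equatorial points 90 degrees apart: three right angles.
print(angle_sum_on_sphere((0, 0, 1), (1, 0, 0), (0, 1, 0)))        # 270.0

# A tiny, nearly flat triangle: the sum approaches the Euclidean 180 degrees.
e = 0.01
print(angle_sum_on_sphere((0, 0, 1),
                          (math.sin(e), 0, math.cos(e)),
                          (0, math.sin(e), math.cos(e))))          # ~180.0
```

Whether one reads the excess over 180° as a failure of Euclid’s axioms or, as argued above, as their misapplication to lines that are not straight, the numbers themselves are not in dispute.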
Once the notion of the minimum distance scale was firmly established, questions were raised about infinity and irrational numbers. Feynman raised doubts about the relevance of infinitely small scales as follows: “It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space?” Paul Davies asserted: “the use of differential equations assumes the continuity of space-time on arbitrarily small scales.
The frequent appearance of π implies that their numerical values may be computed to arbitrary precision by an infinite sequence of operations. Many physicists tacitly accept these mathematical idealizations and treat the laws of physics as implementable in some abstract and perfect Platonic realm. Another school of thought, represented most notably by Wheeler and Landauer, stresses that real calculations involve physical objects, such as computers, and take place in the real physical universe, with its specific available resources. In short, information is physical. That being so, it follows that there will be fundamental physical limitations to what may be calculated in the real world”. Thus, Intuitionism or Constructivism divides mathematical structures into “physically relevant” and “physically irrelevant”. It says that mathematics should only include statements which can be deduced by a finite sequence of step-by-step constructions starting from the natural numbers. Thus, according to this view, infinity and irrational numbers cannot be part of mathematics.
Infinity is qualitatively different from even the largest number. Finite numbers, however large, obey the laws of arithmetic. We can add, multiply and divide them, and put different numbers unambiguously in order of size. But infinity is the same as a part of itself, and the mathematics of other numbers is not applicable to it. Often the term “Hilbert’s hotel” is used as a metaphor to describe infinity. Suppose a hotel is full and each guest wants to bring a colleague who would need another room. This would be a nightmare for the management, who could not double the size of the hotel instantly. In an infinite hotel, though, there is no problem. The guest from room 1 goes into room 2, the guest in room 2 into room 4, and so on. All the odd-numbered rooms are then free for new guests. This is a wrong analogy. Numbers are divided into two categories based on whether there is similar perception or not. If after the perception of one object there is further similar perception, they are many, which can range from 2, 3, 4, … n depending upon the sequence of perceptions. If there is no similar perception after the perception of one object, then it is one. In the case of Infinity, neither of the above conditions applies. However, Infinity is more like the number ‘one’ – without a similar – except for one characteristic. While one object has a finite dimension, infinity has infinite dimensions. The perception of higher numbers is generated by repetition of ‘one’ that many times, but the perception of infinity is ever incomplete.
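For what it is worth, the room-shifting rule in the story is just the mapping n → 2n. The toy sketch below shows a finite window onto that rule; it illustrates the metaphor as usually told, nothing more:

```python
# The "guest in room n moves to room 2n" rule of the Hilbert's-hotel story:
# every existing guest still gets a room, and every odd-numbered room is freed.
def new_room(n):
    return 2 * n

window = range(1, 11)                        # a finite glimpse of the hotel
moves  = {n: new_room(n) for n in window}
freed  = [n for n in window if n % 2 == 1]   # odd-numbered rooms fall vacant

print("guests moved    :", moves)            # {1: 2, 2: 4, ...}
print("rooms now free  :", freed)            # [1, 3, 5, 7, 9]
```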
Since interaction requires a perceptible change somewhere in the system under examination or measurement, normal interactions are not applicable in the case of infinity. For example, space and time in their absolute terms are infinite. Space and time cannot be measured, as they are not directly perceptible through our sense organs, but are deemed to be perceived. Actually, what we measure as space is the interval between objects or points on objects. These intervals are mental constructs and have no physical existence apart from the objects, which are used to describe space through alternative symbolism. Similarly, what we measure as time is the interval between events. Space and time do not and cannot interact with each other or with other objects or events, as no mathematics is possible between infinities. Our measurements of an arbitrary segment of space or time (which are really the intervals) do not affect space or time in any way. We have explained the quantum phenomena with real numbers derived from fundamental principles and correlated them to the macro world. Quantities like π and φ have other significances, which will be discussed later.
The fundamental “stuff” of the Universe is the same, and the differences arise only from the manner of its accumulation and reduction – magnitude and sequential arrangement. Since number is a property of all particles, physical phenomena have some associated mathematical basis. However, the perceptible structures and processes of the physical world are not the same as their mathematical formulations, many of which are neither perceptible nor feasible. Thus the relationship between physics and mathematics is that of the map and the territory. A map facilitates study of the territory, but it does not tell everything about the territory. Knowing all about the territory from the map alone is impossible. This creates the difficulty. Science is increasingly becoming less objective. Scientists present data as if it were absolute truth merely liberated by their able hands for the benefit of lesser mortals. Thus, it has to be presented to the lesser mortals in a language that they do not understand – and thus do not question. This leads to misinterpretations, to the extent that some classic experiments become dogma even when they are fatally flawed. One example is Olbers’ paradox.
In order to understand our environment and interact effectively with it, we engage in the activity of counting the total effect of each of the systems. Such counting is called mathematics. It covers all aspects of life. We are central to everything in a mathematical way. As Barrow points out: “While Copernicus’s idea that our position in the universe should not be special in every sense is sound, it is not true that it cannot be special in any sense”. If we consider our positioning, as opposed to our position, in the Universe, we will find our special place. For example, if we plot a graph with the mass of the star relative to the Sun (with the Sun at 1) against the radius of the orbit relative to the Earth (with the Earth at 1), and consider the scale of the planets, their distances from the Sun, their surface conditions, the positioning of the neighboring planets and so on, and treat these variables in a mathematical space, we will find that the Earth’s positioning is very special indeed. It lies in a narrow band called the Habitable zone (for details, see the Wikipedia article on planetary habitability).
If we imagine the complex structure of the Mandelbrot Set as representative of the Universe (since it is self-similar), then we could say that we are right in the border region of the fractal structure. If we consider the relationship between the different dimensions of space, or of a bubble, we find their exponential nature. If we consider the center of the bubble as 0 and the edge as 1 and map it on a logarithmic scale, we will find an interesting zone at 0.5. From the Galaxy to the Sun, to the Earth, to the atoms, everything comes in this zone. For example, if we consider the galactic core as the equivalent of the S orbital of the atom, the bars as the equivalent of the P orbital and the spiral arms as the equivalent of the D orbital, and apply the logarithmic scale, we will find the Sun at the 0.5 position. The same is true for the Earth. It is known that both fusion and fission push atoms towards iron. That element finds itself in the middle group of the middle period of the periodic table; again 0.5. Thus, there can be no doubt that Nature is mathematical. But the structures and the processes of the world are not the same as mathematical formulations. The map is not the territory. Hence there are various ways of representing Nature. Mathematics is one of them. However, mathematics alone cannot describe Nature in any meaningful way.
Even modern mathematicians and physicists do not agree on many concepts. Mathematicians insist that zero has existence but no dimension, whereas physicists insist that since the minimum possible length is the Planck scale, the concept of zero has vanished! The Lie algebra corresponding to SU(n) is a real and not a complex Lie algebra. The physicists introduce the imaginary unit i to make it complex. This is different from the convention of the mathematicians. Mathematicians treat any operation involving infinity as void, since infinity does not change by the addition or subtraction of, or multiplication or division by, any number. The history of the development of science shows that whenever infinity appears in an equation, it points to some novel phenomenon or some missing parameters. Yet physicists use renormalization, manipulating the equation to generate another infinity on the other side and then cancelling both! Certainly that is not mathematics!
Often the physicists apply the “brute force approach”, in which many parameters are arbitrarily reduced to zero or unity to get the desired result. One example is the mathematics for solving the equations for the libration points. But such arbitrary reduction changes the nature of the system under examination (the modern values are slightly different from our computation). This aspect is overlooked by the physicists. We can cite many such instances where the conventions of mathematicians are different from those of physicists. The famous Cambridge coconut puzzle is a clear representation of the differences between physics and mathematics. Yet the physicists insist that unless a theory is presented in a mathematical form, they will not even look at it. We do not accept that the laws of physics break down at a singularity. At a singularity only the rules of the game change and the mathematics of infinities takes over.
Modern scientists claim to depend solely on mathematics. But most of what passes for “mathematics” in modern science fails the test of logical consistency that is the cornerstone for judging the truth content of a mathematical statement. For example, the mathematics for a multi-body system like a lithium or higher atom is done by treating the atom as a number of two-body systems. Similarly, the Schrödinger equation in so-called one dimension (it is a second-order equation, as it contains a term in x², which is in two dimensions and mathematically implies an area) is converted to three dimensions by the addition of two similar factors for the y and z axes. Three dimensions mathematically imply volume. Addition of three areas does not generate volume, and x² + y² + z² ≠ x·y·z. Similarly, mathematically all operations involving infinity are void. Hence renormalization is not mathematical. Thus, the so-called mathematics of modern physicists is not mathematical at all!
In fact, some recent studies appear to hint that perception is mathematically impossible. Imagine a black-and-white line drawing of a cube on a sheet of paper. Although this drawing looks to us like a picture of a cube, there are actually an infinite number of other three-dimensional objects that could have produced the same set of lines when collapsed on the page. But we don’t notice any of these alternatives. The reason is that our visual systems have more to go on than just bare perceptual input. They are said to use heuristics and short cuts, based on the physics and statistics of the natural world, to make the “best guesses” about the nature of reality. Just as we interpret a two-dimensional drawing as representing a three-dimensional object, we interpret the two-dimensional visual input of a real scene as indicating a three-dimensional world. Our perceptual system makes this inference automatically, using educated guesses to fill in the gaps and make perception possible. Our brains use the same intelligent guessing process to reconstruct the past and help in perceiving the world.
Memory functions differently from a video recording with a moment-by-moment sensory image. In fact, it is more like a puzzle: we piece together our memories, based on both what we actually remember and what seems most likely given our knowledge of the world. Just as we make educated guesses – inferences – in perception, our minds’ best inferences help “fill in the gaps” of memory, reconstructing the most plausible picture of what happened in our past. The most striking demonstration of the mind’s guessing game occurs when we find ways to fool the system into guessing wrong. When we trick the visual system, we see a “visual illusion” - a static image might appear as if it is moving, or a concave surface will look convex. When we fool the memory system, we form a false memory - a phenomenon made famous by the researcher Elizabeth Loftus, who showed that it is relatively easy to make people remember events that never occurred. As long as the falsely remembered event could plausibly have occurred, all it takes is a bit of suggestion, or even exposure to a related idea, to create a false memory.
Earlier, visual illusions and false memories were studied separately. After all, they seem qualitatively different: visual illusions are immediate, whereas false memories seemed to develop over an extended period of time. A recent study blurs the line between these two phenomena. The study reveals an example of false memory occurring within 42 milliseconds - about half the amount of time it takes to blink your eye. It relied upon a phenomenon known as “boundary extension”, an example of false memory found when recalling pictures. When we see a picture of a location - say, a yard with a garbage can in front of a fence - we tend to remember the scene as though more of the fence were visible surrounding the garbage can. In other words, we extend the boundaries of the image, believing that we saw more fence than was actually present. This phenomenon is usually interpreted as a constructive memory error - our memory system extrapolates the view of the scene to a wider angle than was actually present. The new study, published in the November 2008 issue of the journal Psychological Science, asked how quickly this boundary extension happens.
The researchers showed subjects a picture, erased it for a very short period of time by overlaying a new image, and then showed a new picture that was either the same as the first image or a slightly zoomed-out view of the same place. They found that when people saw the exact same picture again, they thought the second picture was more zoomed-in than the first one they had seen. When they saw a slightly zoomed-out version of the picture they had seen before, however, they thought this picture matched the first one. This experience is the classic boundary extension effect. However, the gap between the first and second picture was less than 1/20th of a second. In less than the blink of an eye, people remembered a systematically modified version of pictures they had seen. This modification is, by far, the fastest false memory ever found.
Although it is still possible that boundary extension is purely a result of our memory system, the incredible speed of this phenomenon suggests a more parsimonious explanation: that boundary extension may in part be caused by the guesses of our visual system itself. The new dataset thus blurs the boundaries between the initial representation of a picture (via the visual system) and the storage of that picture in memory. This raises the question: is boundary extension a visual illusion or a false memory? Perhaps these two phenomena are not as different as previously thought. False memories and visual illusions both occur quickly and easily, and both seem to rely on the same cognitive mechanism: the fundamental property of perception and memory to fill in gaps with educated guesses, information that seems most plausible given the context. The work adds to a growing movement that suggests that memory and perception may be simply two sides of the same coin. This, in turn, implies that mathematics, which is based on perception of numbers and other visual imagery, could be misleading for developing theories of physics.
The essence of creation is accumulation and reduction of the number of particles in each system in various combinations. Thus, Nature has to be mathematical. But then physics should obey the laws of mathematics, just as mathematics should comply with the laws of physics. We have shown elsewhere that all of mathematics cannot be physics. We may have a mathematical equation without a corresponding physical explanation. Accumulation or reduction can be linear or non-linear. If they are linear, the mathematics is addition and subtraction. If they are non-linear, the mathematics is multiplication and division. Yet this principle is violated in a large number of equations. For example, the Schrödinger equation in one dimension has been discussed earlier. Then there are unphysical combinations. For example, certain combinations of protons and neutrons are prohibited physically, though there is no restriction on devising such a mathematical formula. There is no equation for the observer. Thus, sole dependence on mathematics for discussing physics is neither desirable nor warranted.
We accept “proof” – mathematical or otherwise - to validate the reality of any physical phenomena. We depend on proof to validate a theory as long as it corresponds to reality. The modern system of proof takes five stages: observation/experiment, developing hypothesis, testing the hypothesis, acceptance or rejection or modification of hypothesis based on the additional information and lastly, reconstruction of the hypothesis if it was not accepted. We also adopt a five stage approach to proof. First we observe/experiment and hypothesize. Then we look for corroborative evidence. In the third stage we try to prove that the opposite of the hypothesis is wrong. In the fourth stage we try to prove whether the hypothesis is universally valid or has any limitations. In the last stage we try to prove that any theory other than this is wrong.
Mathematics is one of the tools of “proof” because of its logical consistency. It is a universal law that tools are selected based on the nature of operations, and not vice versa. The tools can only restrict the choice of operations. Hence mathematics by itself does not provide proof, but a proof may use mathematics as a tool. We also depend on symmetry, as it is a fundamental property of Nature. In our theory, different infinities co-exist and do not interact with each other. Thus, we agree that the evolutionary process of the Universe can be explained mathematically, as basically it is a process of non-linear accumulation and corresponding reduction of particles and energies in different combinations. But we differ on the interpretation of the equation. For us, the left hand side of the equation represents the cause and the right hand side the effect, which is reversible only in the same order. If the magnitudes of the parameters on one side are changed, the effect on the other side also correspondingly changes. But such changes must be according to natural laws and not arbitrary. For example, we agree that e/m = c² or m/e = 1/c², which we derive from fundamental principles. But we do not agree that e = mc². This is because we treat mass and energy as inseparable conjugates with variable magnitude and not as interchangeable, since each has characteristics not found in the other. Thus, they are not fit to be used in an equation as cause and effect. At the same time, we agree with c², as energy flow is perceived in fields, which are represented by second-order quantities.
If we accept the equation e = mc², then according to modern principles it leads to m = e/c². In that case, we land in many self-contradicting situations. For example, if the photon has zero rest mass, then m₀ = 0/c² (at rest, the external energy that moves a particle has to be zero; internal energy is not relevant, as a stable system has zero net energy). This implies that m₀c² = 0, or e = 0, which makes c² = 0/0, which is meaningless. But if we accept e/m = c² and treat the two sides of the equation as cause and effect, then there is no such contradiction. As we have proved in our book “Vaidic Theory of Numbers”, all operations involving zero except multiplication are meaningless. Hence if either e or m becomes zero, the equation becomes meaningless, and in all other cases it matches the modern values. Here we may point out that the statement that the rest mass of matter is determined by its total energy content is not susceptible of a simple test, since there is no independent measure of the latter quantity. This proves our view that mass and energy are inseparable conjugates.
The domain that astronomers call “the universe” - the space, extending more than 10 billion light years around us, containing billions of galaxies, each with billions of stars and billions of planets (and maybe billions of biospheres) - could be an infinitesimal part of the totality. There is a definite horizon to direct observations: a spherical shell around us, such that no light from beyond it has had time to reach us since the big bang. However, there is nothing physical about this horizon. If we were in the middle of an ocean, it would be conceivable that the water ends just beyond our horizon - except that we know it doesn’t. Likewise, there are reasons to suspect that our universe - the aftermath of our big bang - extends hugely further than we can see.
An idea called eternal inflation suggested by some cosmologists envisages big bangs popping off, endlessly, in an ever-expanding substratum. Or there could be other space-times alongside ours - all embedded in a higher-dimensional space. Ours could be but one universe in a multiverse. Other branches of mathematics then may become relevant. This has encouraged the use of exotic mathematics such as the transfinite numbers. It may require a rigorous language to describe the number of possible states that a universe could possess and to compare the probability of different configurations. It may just be too hard for human brains to grasp. A fish may be barely aware of the medium in which it lives and swims; certainly it has no intellectual powers to comprehend that water consists of interlinked atoms of hydrogen and oxygen. The microstructure of empty space could, likewise, be far too complex for unaided human brains to grasp. Can we guarantee that with the present mathematics we can overcome all obstacles and explain all complexities of Nature? Should we not resort to the so-called exotic mathematics? But let us see where it lands us.
The manipulative mathematical nature of the descriptions of quantum physics has created difficulties in its interpretation. For example, the mathematical formalism used to describe the time evolution of a non-relativistic system proposes two somewhat different kinds of transformations, illustrated in the sketch after the list below:
· Reversible transformations described by unitary operators on the state space. These transformations are determined by solutions to the Schrödinger equation.
· Non-reversible and unpredictable transformations described by mathematically more complicated transformations. Examples of these transformations are those that are undergone by a system as a result of measurement.
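The contrast between the two kinds of transformation can be made concrete with a toy two-level system. The sketch below uses an arbitrary qubit state and a Hadamard-type unitary purely for illustration; it shows that the unitary step can be undone exactly, while the measurement step picks a random outcome and discards the rest of the state:

```python
import numpy as np

# An arbitrary normalized state |psi> = 0.6|0> + 0.8|1>
psi = np.array([0.6, 0.8], dtype=complex)

# 1. Reversible, unitary evolution (here a Hadamard-like rotation)
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
evolved   = U @ psi
recovered = U.conj().T @ evolved          # applying U-dagger undoes U exactly
print("reversible:", np.allclose(recovered, psi))        # True

# 2. Projective measurement in the |0>, |1> basis: probabilistic, not reversible
probs   = np.abs(psi) ** 2                # Born rule: [0.36, 0.64]
outcome = np.random.choice([0, 1], p=probs)
collapsed = np.zeros(2, dtype=complex)
collapsed[outcome] = 1.0                  # the state jumps to an eigenvector
print("outcome:", outcome, "| post-measurement state:", collapsed)
```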
The truth content of a mathematical statement is judged from its logical consistency. We agree that mathematics is a way of representing and explaining the Universe in a symbolic way because evolution is logically consistent. This is because everything is made up of the same “stuff”. Only the quantities (number or magnitude) and their ordered placement or configuration create the variation. Since numbers are a property by which we differentiate between similar objects and all natural phenomena are essentially accumulation and reduction of the fundamental “stuff” in different permissible combinations, physics has to be mathematical. But then mathematics must conform to Natural laws: not un-physical manipulations or the brute force approach of arbitrarily reducing some parameters to zero to get a result that goes in the name of mathematics. We suspect that the over-dependence on mathematics is not due to the fact that it is unexceptionable, but due to some other reason described below.
In his book “The Myth of the Framework”, Karl R. Popper, acknowledged as the major influence in modern philosophy and political thought, has said: “Many years ago, I used to warn my students against the wide-spread idea that one goes to college in order to learn how to talk and write ‘impressively’ and incomprehensibly. At that time many students came to college with this ridiculous aim in mind, especially in Germany … They unconsciously learn that highly obscure and difficult language is the intellectual value par excellence … Thus arose the cult of incomprehensibility, of ‘impressive’ and high sounding language. This was intensified by the impenetrable and impressive formalism of mathematics …” It is unfortunate that even now many Professors, not to speak of their students, are still devotees of the above cult.
Modern scientists justify the cult of incomprehensibility in the garb of research methodology - how “big science” is really done. “Big science” presents a big opportunity for methodologists. With their constant meetings and exchanges of e-mail, collaboration scientists routinely put their reasoning on public display (not to the general public, but only to those who subscribe to similar views), long before they write up their results for publication in a journal. In reality, this is done to test the reactions of others, as often bitter debate takes place over such ideas. Further, when particle physicists try to find a particular set of events among the trillions of collisions that occur in a particle accelerator, they focus their search by ignoring data outside a certain range. Clearly, there is a danger in admitting a non-conformist to such raw material, since a lack of acceptance of their reasoning and conventions can easily lead to very different conclusions, which may contradict their theories. Thus, they offer their own theory of “error-statistical evidence”, as in the statement, “The distinction between the epistemic and causal relevance of epistemic states of experimenters may also help to clarify the debate over the meaning of the likelihood principle”. Frequently they refer to ceteris paribus (other things being equal), without specifying which other things are equal (and then face a challenge to justify their statement).
The cult of incomprehensibility has been used by even the most famous scientists, with devastating effect. Even obvious mistakes in their papers have been blindly accepted by the scientific community and remained unnoticed for hundreds of years. Here we quote from an article written by W.H. Furry of the Department of Physics, Harvard University, published in the March 1, 1936 issue of Physical Review, Volume 49. The paper, “Note on the Quantum-Mechanical Theory of Measurement”, was written in response to the famous EPR argument and its counter by Bohr. The quote relates to the differentiation between a “pure state” and a “mixture”.
“2. POSSIBLE TYPES OF STATISTICAL INFORMATION ABOUT A SYSTEM.
Our statistical information about a system may always be expressed by giving the expectation values of all observables. Now the expectation value of an arbitrary observable F, for a state whose wave function is φ, is
⟨F⟩ = (φ, Fφ).    (1)
If we do not know the state of the system, but know that wᵢ are the respective probabilities of its being in states whose wave functions are φᵢ, then we must assign as the expectation value of F the weighted average of its expectation values for the states φᵢ. Thus,
⟨F⟩ = Σᵢ wᵢ (φᵢ, Fφᵢ).    (2)
This formula for ⟨F⟩ is the appropriate one when our system is one of an ensemble of systems of which numbers proportional to wᵢ are in the states φᵢ. It must not be confused with any such formula as
⟨F⟩ = (Σᵢ aᵢφᵢ, F Σⱼ aⱼφⱼ),
which corresponds to the system’s having a wave function which is a linear combination of the φᵢ. This last formula is of the type of (1), while (2) is an altogether different type.
An alternative way of expressing our statistical information is to give the probability that measurement of an arbitrary observable F will give as result an arbitrary one of its eigenvalues, say δ. When the system is in the state φ, this probability is
P(δ) = |(xδ, φ)|²,    (1′)
where xδ is the eigenfunction of F corresponding to the eigenvalue δ. When we know only that wᵢ are the probabilities of the system’s being in the states φᵢ, the probability in question is
P(δ) = Σᵢ wᵢ |(xδ, φᵢ)|².    (2′)
Formula (2′) is not the same as any special case of (1′) such as
P(δ) = |(xδ, Σᵢ √wᵢ φᵢ)|².
It differs generically from (1′) as (2) does from (1).
When such equations as (1), (1′) hold, we say that the system is in the “pure state” whose wave function is φ. The situation represented by Eqs. (2), (2′) is called a “mixture” of the states φᵢ with the weights wᵢ. It can be shown that the most general type of statistical information about a system is represented by a mixture. A pure state is a special case, with only one non-vanishing wᵢ. The term mixture is usually reserved for cases in which there is more than one non-vanishing wᵢ. It must again be emphasized that a mixture in this sense is essentially different from any pure state whatever.”
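To make Furry’s distinction concrete, here is a minimal numerical sketch in Python with NumPy (our own illustration; the observable F and the two basis states are illustrative choices, not taken from his paper). It computes the mixture-type expectation value of formula (2) and the pure-state expectation value of formula (1) for a wave function that is a linear combination of the same two states, and shows that the two are genuinely different:

import numpy as np

# An illustrative observable F and two orthogonal basis states phi1, phi2.
F = np.array([[0, 1],
              [1, 0]], dtype=complex)
phi1 = np.array([1, 0], dtype=complex)
phi2 = np.array([0, 1], dtype=complex)
w1, w2 = 0.5, 0.5                      # weights of the mixture

# Formula (2): weighted average of the expectation values for phi1 and phi2.
exp_mixture = w1 * (phi1.conj() @ F @ phi1) + w2 * (phi2.conj() @ F @ phi2)

# Formula (1): a single pure state that is a linear combination of phi1 and phi2.
phi = (phi1 + phi2) / np.sqrt(2)
exp_pure = phi.conj() @ F @ phi

print(exp_mixture.real, exp_pure.real)   # 0.0 versus 1.0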
Now we quote from a recent quantum-reality website the same description of “pure state” and “mixed state”:
“The statistical properties of both systems before measurement, however, could be described by a density matrix. So for an ensemble system such as this the density matrix is a better representation of the state of the system than the vector.
So how do we calculate the density matrix? The density matrix is defined as the weighted sum of the tensor products over all the different states:
ρ = p│ψ><ψ│ + q│φ><φ│
where p and q refer to the relative probability of each state. For the example of particles in a box, p would represent the number of particles in state │ψ>, and q would represent the number of particles in state │φ>.
Let’s imagine we have a number of qubits in a box (these can take the value │0> or │1>).
Let’s say all the qubits are in the following superposition state: 0.6│0> +0.8i│1>.
In other words, the ensemble system is in a pure state, with all of the particles in an identical quantum superposition of states │0> and│1>. As we are dealing with a single, pure state, the construction of the density matrix is particularly simple: we have a single probability p, which is equal to 1.0 (certainty), while q (and all the other probabilities) are equal to zero. The density matrix then simplifies to: │ψ><ψ│
This state can be written as a column (“ket”) vector. Note the imaginary component (the expansion coefficients are in general complex numbers):
│ψ> = [0.6, 0.8i]ᵀ
In order to generate the density matrix we need to use the Hermitian conjugate (or adjoint) of this column vector (the transpose of the complex conjugate of │ψ>). So in this case the adjoint is the following row (“bra”) vector:
<ψ│ = [0.6, −0.8i]
The density matrix │ψ><ψ│ is then
[ 0.36    −0.48i ]
[ 0.48i    0.64  ]
What does this density matrix tell us about the statistical properties of our pure state ensemble quantum system? For a start, the diagonal elements tell us the probabilities of finding the particle in the │0> or │1> eigenstate. For example, the 0.36 component informs us that there will be a 36% probability of the particle being found in the │0> state after measurement. Of course, that leaves a 64% chance that the particle will be found in the │1> state (the 0.64 component).
The way the density matrix is calculated, the diagonal elements can never have imaginary components (this is similar to the way the eigenvalues are always real). However, the off-diagonal terms can have imaginary components (as shown in the above example). These imaginary components have an associated phase (complex numbers can be written in polar form). It is the phase differences of these off-diagonal elements which produce interference (for more details, see the book Quantum Mechanics Demystified). The off-diagonal elements are characteristic of a pure state. A mixed state is a classical statistical mixture and therefore has no off-diagonal terms and no interference.
So how do the off-diagonal elements (and related interference effects) vanish during decoherence?
The off-diagonal (imaginary) terms have a completely unknown relative phase factor which must be averaged over during any calculation, since it is different for each separate measurement (each particle in the ensemble). As the phases of these terms are not correlated (not coherent), the sums cancel out to zero. The matrix becomes diagonalised (all off-diagonal terms become zero). Interference effects vanish. The quantum state of the ensemble system is then apparently “forced” into one of the diagonal eigenstates (the overall state of the system becomes a mixture state), with the probability of a particular eigenstate selection predicted by the value of the corresponding diagonal element of the density matrix.
Consider the following density matrix for a pure state ensemble in which the off-diagonal terms have a phase factor of θ:
[ 0.36          0.48 e^(−iθ) ]
[ 0.48 e^(iθ)   0.64         ]”
The above statement can be written in a simplified manner as follows. Selection of a particular eigenstate is governed by a purely probabilistic process. This requires a large number of readings. For this purpose, we must consider an ensemble – a large number of quantum particles in a similar state – and treat them as a single quantum system. Then we measure each particle to ascertain a particular value, say color. We tabulate the results in a statement called the density matrix. Before measurement, each of the particles is in the same state with the same state vector. In other words, they are all in the same superposition state. Hence this is called a pure state. After measurement, all particles are in different classical states – the state (color) of each particle is known. Hence it is called a mixed state.
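The arithmetic behind the quoted example can be checked directly. The following short Python/NumPy sketch (our own illustration, not the website’s code) builds the density matrix of the pure state 0.6│0> + 0.8i│1>, reads off the 0.36 and 0.64 diagonal probabilities, and then averages over a random relative phase to show how the off-diagonal terms wash out into a diagonal mixture:

import numpy as np

# Pure state 0.6|0> + 0.8i|1> and its density matrix |psi><psi|.
psi = np.array([0.6, 0.8j])
rho_pure = np.outer(psi, psi.conj())
print(np.round(rho_pure, 2))
# diagonal 0.36, 0.64    -> measurement probabilities for |0> and |1>
# off-diagonal -0.48i, +0.48i -> the coherences responsible for interference

# Dephasing: average the density matrix over a random relative phase theta.
rng = np.random.default_rng(0)
thetas = rng.uniform(0.0, 2.0 * np.pi, 20000)

def rho_with_phase(theta):
    p = np.array([0.6, 0.8j * np.exp(1j * theta)])
    return np.outer(p, p.conj())

rho_mixed = np.mean([rho_with_phase(t) for t in thetas], axis=0)
print(np.round(rho_mixed, 2))   # off-diagonal terms average out to about 0: a diagonal "mixture"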
In common-sense language, what this means is the following. Suppose we take a box of, say, 100 billiard balls of two random colors – blue and green. Before counting the balls of each color, we could not say what percentage of the balls are blue and what percentage green. But after we count the balls of each color and tabulate the results, we know that (in the above example) 36% of the balls belong to one color and 64% to the other. If we have to describe the balls after counting, we will give the above percentages, or say that 36 balls are blue and 64 balls are green. That would be a pure state. But before such measurement, we can only describe the box as 100 balls of blue and green color. That would be a mixed state.
As can be seen, our common-sense description is the opposite of the quantum-mechanical classification, which was written by two scientists about 75 years apart and which is accepted by all scientists unquestioningly. Thus, it is no wonder that one scientist jokingly said: “A good working definition of quantum mechanics is that things are the exact opposite of what you thought they were. Empty space is full, particles are waves, and cats can be both alive and dead at the same time.”
We quote another example from the famous EPR argument of Einstein and others (Phys. Rev. 47, 777 (1935)): “To illustrate the ideas involved, let us consider the quantum-mechanical description of the behavior of a particle having a single degree of freedom. The fundamental concept of the theory is the concept of state, which is supposed to be completely characterized by the wave function ψ, which is a function of the variables chosen to describe the particle’s behavior. Corresponding to each physically observable quantity A there is an operator, which may be designated by the same letter.
If ψ is an eigenfunction of the operator A, that is, if ψ’ ≡ Aψ = aψ (1)
where a is a number, then the physical quantity A has with certainty the value a whenever the particle is in the state given by ψ. In accordance with our criterion of reality, for a particle in the state given by ψ for which Eq. (1) holds, there is an element of physical reality corresponding to the physical quantity A”.
We can write the above statement, and the concept behind it, in various ways that would be far easier for the common man to understand. We can also give various examples to demonstrate the physical content of the above statement. However, such statements and examples would be difficult to twist and interpret differently when necessary. Putting the concept in an ambiguous format helps in its subsequent manipulation, as explained below, citing from the same example:
“In accordance with quantum mechanics we can only say that the relative probability that a measurement of the coordinate will give a result lying between a and b is
P(a, b) = ∫ₐᵇ ψ̄ψ dx = b − a.
Since this probability is independent of a, but depends only upon the difference b − a, we see that all values of the coordinate are equally probable”.
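For the record, the quoted conclusion can be reproduced symbolically. In this short Python/SymPy sketch (our own illustration), the wave function is the plane-wave momentum eigenfunction of the EPR example, and the integral of ψ̄ψ between a and b indeed depends only on the difference b − a:

import sympy as sp

x, a, b, p0, hbar = sp.symbols('x a b p_0 hbar', real=True)
psi = sp.exp(sp.I * p0 * x / hbar)               # plane-wave momentum eigenfunction

density = sp.simplify(sp.conjugate(psi) * psi)   # |psi|^2 = 1 everywhere
prob = sp.integrate(density, (x, a, b))          # relative probability between a and b
print(density, prob)                             # prints 1 and b - a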
The above conclusion has been arrived at based on the following logic: “More generally, it is shown in quantum mechanics that, if the operators corresponding to two physical quantities, say A and B, do not commute, that is, if AB ≠ BA, then the precise knowledge of one of them precludes such a knowledge of the other. Furthermore, any attempt to determine the latter experimentally will alter the state of the system in such a way as to destroy the knowledge of the first”.
The above statement is highly misleading. The law of commutation is a special case of non-linear accumulation, as explained below. All interactions involve the application of force, which leads to accumulation and corresponding reduction. Where such accumulation is between similars, it is linear accumulation, and its mathematics is called addition. If such accumulation is not fully between similars, but between the partially similar (and partially dissimilar), it is non-linear accumulation, and its mathematics is called multiplication. For example, 10 cars and another 10 cars are twenty cars through addition. But if there are 10 cars in a row and there are two rows of cars, then “cars in rows” is common to both statements, while one statement gives the number of cars in a row and the other gives the number of rows. Because of this partial dissimilarity, the mathematics has to be multiplication: 10 × 2 or 2 × 10. We are free to use either of the two sequences and the result will be the same. This is the law of commutation. However, no multiplication is possible if the two factors are not partially similar. In such cases, the two factors are said to be non-commutable. If the two terms are mutually exclusive, i.e., one of the terms is always zero, the result of their multiplication will always be zero. Hence they may be said to be not commutable, though in reality they are commutable; it is only that the result of their multiplication is always zero. This implies that the knowledge of one precludes the knowledge of the other. Commutability or otherwise depends on the nature of the quantities – whether or not they are partially related and partially unrelated to each other.
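To illustrate the distinction being drawn here, the following Python/NumPy sketch (with two illustrative matrices that are our own choices and are not meant to represent position and momentum) shows that ordinary multiplication of numbers commutes, while the product of two matrices in general does not:

import numpy as np

# Numbers commute: 10 cars per row times 2 rows equals 2 rows times 10 cars per row.
print(10 * 2 == 2 * 10)               # True

# Two illustrative matrices that do not commute: AB differs from BA.
A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0, 0],
              [1, 0]])
print(np.array_equal(A @ B, B @ A))   # False
print(A @ B - B @ A)                  # the non-zero "commutator" of A and B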
Position is a fixed co-ordinate in a specific frame of reference. Momentum is a mobile co-ordinate in the same frame of reference. Fixity and mobility are mutually exclusive. If a particle has a fixed position, its momentum is zero. If it has momentum, it does not have a fixed position. Since “particle” is common to both of the above statements, i.e., since both are related to the particle, they can be multiplied, hence they are commutable. But since one or the other factor is always zero, the result will always be zero, and the equation AB ≠ BA does not hold. In other words, while uncertainty is established for other reasons, the equation Δx·Δp ≥ h is a mathematically wrong statement, as mathematically the answer will always be zero. The validity of a physical statement is judged by its correspondence to reality or, as Einstein and others put it, “by the degree of agreement between the conclusions of the theory and human experience”. Since in this case the degree of agreement between the conclusions of the theory and human experience is zero, it cannot be a valid physical statement either. Hence it is no wonder that Heisenberg’s uncertainty relation is still a hypothesis and not proven. In later pages we discuss this issue elaborately.
In modern science there is a tendency towards generalization, or extension of one principle to others. For example, the Schrödinger equation in the so-called one dimension (it actually contains a second-order term, hence it cannot be an equation in one dimension) is “generalized” to three dimensions by adding two more terms for the y and z dimensions, which mathematically and physically is a wrong procedure. We discuss this in later pages. While position and momentum are specific quantities, the generalizations are done by replacing these quantities with A and B. When a particular statement is changed to a general statement by following algebraic principles, the relationship between the quantities of the particular statement is not changed. However, physicists often bypass or overlook this mathematical rule. A and B could be any set of two quantities. Since they are not specified, it is easy to use them in any way one wants. Even if the two quantities are commutable, since they are not precisely described, one has the freedom to manipulate by claiming that they are not commutable, and vice versa. Modern science is full of such manipulations.
Here we give another example to show that physics and modern mathematics are not always compatible. Bell’s inequality is one of the important equations used by all quantum physicists. We will discuss it repeatedly for different purposes. Briefly, the theorem holds that if a system consists of an ensemble of particle pairs having three Boolean properties A, B and C, and there is a reciprocal relationship between the values obtained by measuring A on the two particles, and the same type of relationship exists between the particles with respect to the quantity B, then if the value measured on one particle is found to be a and the value measured on the other is found to be b, the first particle must have started in the state (A = a, B = b). In that event, the theorem says that P(A, C) ≤ P(A, B) + P(B, C). In the case of classical particles, the theorem appears to be correct.
Quantum mechanically, P(A, C) = ½ sin²(θ), where θ is the angle between the analyzers. Let an apparatus emit entangled photons that pass through separate polarization analysers. Let A, B and C be the events that a single photon will pass through analyzers with axes set at 0°, 22.5°, and 45° to the vertical respectively. It can be proved that C → C.
Thus, according to Bell’s theorem: P(A, C) ≤ P(A, B) + P(B, C),
or ½ sin²(45°) ≤ ½ sin²(22.5°) + ½ sin²(22.5°),
or 0.25 ≤ 0.1464, which is clearly absurd.
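The numbers quoted above are easy to verify. A short Python sketch (our own check of the arithmetic) computes the quantum-mechanical prediction P = ½ sin²(θ) for the three analyzer settings and confirms that it violates the inequality:

import math

def p(theta_deg):
    # Quantum-mechanical coincidence probability, P = 1/2 * sin^2(theta).
    return 0.5 * math.sin(math.radians(theta_deg)) ** 2

lhs = p(45.0)             # P(A, C): analyzers 45 degrees apart
rhs = p(22.5) + p(22.5)   # P(A, B) + P(B, C): each pair 22.5 degrees apart

print(round(lhs, 4), round(rhs, 4))   # 0.25 and 0.1464
print(lhs <= rhs)                     # False: the quantum prediction violates the inequality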
This inequality has been used by quantum physicists to prove entanglement and distinguish quantum phenomena from classical phenomena. We will discuss it in detail to show that the above interpretation is wrong and the same set of mathematics is applicable to both macro and the micro world. The real reason for such deviation from common sense is that because of the nature of measurement, measuring one quantity affects the measurement of another. The order of measurement becomes important in such cases. Even in the macro world, the order of measurement leads to different results. However, the real implication of Bell’s original mathematics is much deeper and points to one underlying truth that will be discussed later.
A wave function is said to describe all possible states in which a particle may be found. To describe probability, some people give the example of a large, irregular thundercloud that fills up the sky. The darker the thundercloud, the greater the concentration of water vapor and dust at that point. Thus, by simply looking at a thundercloud, we can rapidly estimate the probability of finding large concentrations of water and dust in certain parts of the sky. The thundercloud may be compared to a single electron’s wave function. Like a thundercloud, it fills up all space. Likewise, the greater its value at a point, the greater the probability of finding the electron there. Similarly, wave functions can be associated with large objects, like people. As one sits in his chair, he has a Schrödinger probability wave function. If we could somehow see his wave function, it would resemble a cloud very much in the shape of his body. However, some of the cloud would spread out all over space, out to Mars and even beyond the solar system, although it would be vanishingly small there. This means that there is a very large likelihood that he is, in fact, sitting here in his chair and not on the planet Mars. Although part of his wave function has spread even beyond the Milky Way galaxy, there is only an infinitesimal chance that he is sitting in another galaxy. This description is highly misleading.
The mathematics behind the above assumption is funny. Suppose we choose a fixed point A and walk in the north-eastern direction by 5 steps. We mark that point as B. There are an infinite number of ways of reaching the point B from A. For example, we can walk 4 steps to the north of A and then walk 3 steps to the east, and we will reach B. Similarly, we can walk 6 steps in the northern direction, 3 steps in the eastern direction and 2 steps in the southern direction, and we will reach B. Alternatively, we can walk 8 steps in the northern direction, 6 steps in the eastern direction and 5 steps in the south-eastern direction, and we will reach B. It is presumed that, since the vector addition or “superposition” of these paths, which are of a different sort from the straight path, leads to the same point, the point B could be thought of as a superposition of paths of different sorts from A. Since we are free to choose any of these paths, at any instant we could be “here” or “there”. This description is highly misleading.
To put the above statement mathematically, we take a vector V which can be resolved into two vectors V1 and V2 along the directions 1 and 2, so that we can write V = V1 + V2. If a unit displacement along direction 1 is represented by the unit vector 1, then V1 = V₁·1, where V₁ denotes the magnitude of the displacement V1. Similarly, V2 = V₂·2. Therefore:
V = V1 + V2 = V₁·1 + V₂·2. [The unit vectors 1 and 2 are also denoted as (1,0) and (0,1) respectively.]
This equation is also written as V = λ₁·1 + λ₂·2, where the λ’s are treated as the magnitudes of the displacements. Here V is treated as a superposition of the standard vectors (1,0) and (0,1) with coefficients given by the ordered pair (V₁, V₂). This is the concept of a vector space. Here the vector has been represented in two dimensions. For three dimensions, the equation is written as V = λ₁·1 + λ₂·2 + λ₃·3. For an n-tuple in n dimensions, it is written as V = λ₁·1 + λ₂·2 + λ₃·3 + … + λₙ·n.
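The point about different paths adding up to the same displacement, and about resolving a vector along standard directions, can be illustrated with a short Python/NumPy sketch (the step vectors below are our own illustrative choices):

import numpy as np

# Unit steps along the cardinal directions.
north, east, south = np.array([0, 1]), np.array([1, 0]), np.array([0, -1])

# Two different walks from A that end at the same point B.
paths = {
    "4 north + 3 east":           [4 * north, 3 * east],
    "6 north + 3 east + 2 south": [6 * north, 3 * east, 2 * south],
}
for name, steps in paths.items():
    print(name, "->", sum(steps))      # both end at the displacement [3 4]

# Resolving the resulting vector V along the standard directions (1,0) and (0,1).
V = np.array([3, 4])
V1, V2 = V @ east, V @ north           # magnitudes of the two components
print(V1 * east + V2 * north)          # reconstructs V = V1*(1,0) + V2*(0,1)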
It is said that the choice of dimensions appropriate to a quantum-mechanical problem depends on the number of independent possibilities the system possesses. In the case of the polarization of light, there are only two possibilities. The same is true for electrons, though in their case it is not a matter of dimensions but of spin. If we choose a direction and look at the electron’s spin in relation to that direction, then either its axis of rotation points along that direction or it points wholly in the reverse direction. Thus, electron spin is described as “up” or “down”. Scientists describe the spin of the electron as something like that of a top, but different from it. In reality, it is something like the nodes of the Moon: at one node the Moon always appears to be going in the northern direction, and at the other node it always appears to be going in the southern direction. It is said that the values “up” and “down” for an electron’s spin are valid irrespective of the direction we may choose. There is no contradiction here, as direction is not important in the case of nodes; it is only the layout of the two intersecting planes that is relevant. In many problems, the number of possibilities is said to be unbounded. Thus, scientists use infinite-dimensional spaces to represent them. For this they use something called Hilbert space. We will discuss these later.
Any intelligent reader would have seen through the fallacy of the vector space; still, we describe it again. Firstly, as we describe in the discussion of wave phenomena in later pages, superposition is a merger of two waves, which lose their own identity to create something different. What we see is the net effect, which is different from the individual effects. There are many ways in which it could occur at one point. But not all waves stay in superposition; the superposition is momentary, as the waves submit themselves to the local dynamics. Thus, merely because there is a probability of two waves joining to cancel each other’s effects and merging to give a different picture, we cannot formulate a general principle such as the equation V = λ₁·1 + λ₂·2 to cover all cases, because the resultant wave or flat surface is also transitory.
Secondly, the generalization of the equation V = λ₁·1 + λ₂·2 to V = λ₁·1 + λ₂·2 + λ₃·3 + … + λₙ·n is mathematically wrong, as explained below. Even though initially we mentioned 1 and 2 as directions, they are essentially dimensions, because they are perpendicular to each other. Direction is the information contained in the relative position of one point with respect to another point, without the distance information. Directions may be either relative to some indicated reference (the violins in a full orchestra are typically seated to the left of the conductor) or absolute according to some previously agreed-upon frame of reference (Kolkata lies due north-east of Puri). Direction is often indicated manually by an extended index finger or written as an arrow. On a vertically oriented sign representing a horizontal plane, such as a road sign, “forward” is usually indicated by an upward arrow. Mathematically, direction may be uniquely specified by a unit vector in a given basis, or equivalently by the angles made by the most direct path with respect to a specified set of axes. These angles can have any value, and their inter-relationship can take an infinite number of values. But in the case of dimensions, they have to be at right angles to each other, and this remains invariant under mutual transformation.
According to Vishwakarma, the perception that arises from length is the same as that which arises from the perception of breadth and height – thus they belong to the same class, so that the shape of a particle remains invariant under directional transformations. There is no fixed rule as to which of the three spreads constitutes length, breadth or height. They are exchangeable on re-arrangement. Hence, they are treated as belonging to one class. These three directions have to be mutually perpendicular on considerations of the equilibrium of forces (for example, the electric field and the corresponding magnetic field) and of symmetry. Thus, these three directions are equated with “forward-backward”, “right-left”, and “up-down”, which remain invariant under mutual exchange of position. Thus, dimension is defined as the spread of an object in mutually perpendicular directions, which remains invariant under directional transformations. This definition leads to only three spatial dimensions with ten variants. For this reason, the general equation in three dimensions uses x, y, and z (and/or c) co-ordinates, or at least third-order terms (such as a³ + 3a²b + 3ab² + b³), which implies that with regard to any frame of reference they are not arbitrary directions, but fixed frames at right angles to one another, making them dimensions. A one-dimensional geometric shape is impossible. A point has imperceptible dimension, but not zero dimensions. The modern definition of a one-dimensional sphere or “one-sphere” is not in conformity with this view. It cannot be exhibited physically, as anything other than a point or a straight line has a minimum of two dimensions.
While mathematicians insist that a point has existence but no dimensions, theoretical physicists insist that the minimum perceptible dimension is the Planck length. Thus, they differ from the mathematicians over the dimension of a point. For a straight line, the modern mathematician uses the first-order equation ax + by + c = 0, which uses two co-ordinates besides a constant. A second-order equation always implies area in two dimensions. A three-dimensional structure has volume, which can be expressed only by an equation of the third order. This is the reason why Born had to use the term “d³r” to describe the differential volume element in his equations.
The Schrödinger equation was devised to find the probability of finding the particle in the narrow region between x and x + dx, which is denoted by P(x) dx. The function P(x) is the probability distribution function or probability density, which is found from the wave function ψ(x) through the equation P(x) = [ψ(x)]². The wave function is determined by solving Schrödinger’s differential equation: d²ψ/dx² + (8π²m/h²)[E − V(x)]ψ = 0, where E is the total energy of the system and V(x) is the potential energy of the system. By using a suitable energy operator term, the equation is written as Hψ = Eψ. The equation is also written as iħ ∂/∂t│ψ> = H│ψ>, where the left-hand side represents iħ times the rate of change with time of a state vector. The right-hand side equates this with the effect of an operator, the Hamiltonian, which is the observable corresponding to the energy of the system under consideration. The symbol │ψ> indicates that it is a generalization of Schrödinger’s wave-function. The equation appears to be an equation in one dimension, but in reality it is a second-order equation signifying a two-dimensional field, as the original equation and the energy operator contain a term x². A third-order equation implies volume. Three areas cannot be added to create volume. Thus, the Schrödinger equation described above is an equation not in one, but in two dimensions. The method of generalizing the said Schrödinger equation to the three spatial dimensions does not stand mathematical scrutiny.
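For comparison, the standard textbook use of the one-coordinate equation can be reproduced numerically. The following Python/NumPy sketch (our own illustration, in units where ħ = m = 1, with a harmonic potential chosen purely for convenience) discretizes d²ψ/dx² on a grid, diagonalizes the resulting Hamiltonian, and then uses the squared wave function as a probability density P(x) dx:

import numpy as np

# Solve -(1/2) d^2psi/dx^2 + V(x) psi = E psi by finite differences
# (units with hbar = m = 1), for the harmonic potential V(x) = x^2 / 2.
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2

# Tridiagonal Hamiltonian from the second-difference approximation of d^2/dx^2.
diag = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
E, psi = np.linalg.eigh(H)

print(np.round(E[:4], 3))    # ~[0.5, 1.5, 2.5, 3.5], the oscillator levels (n + 1/2)

# P(x) dx: probability of finding the ground-state particle between a and b.
p0 = psi[:, 0] / np.sqrt(dx)             # normalise so that sum(|p0|^2) * dx = 1
a, b = -1.0, 1.0
mask = (x > a) & (x < b)
print(round(np.sum(np.abs(p0[mask])**2) * dx, 3))   # ~0.843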
Three areas cannot be added to create volume; any simple mathematical model will show this. Hence, the Schrödinger equation could not be solved exactly for any atom other than hydrogen. For many-electron atoms, the so-called solutions simply treat them as a collection of one-electron atoms, ignoring the electrostatic energy of repulsion between the electrons and treating the electrons as point charges frozen at some instantaneous position. Even then, the problem remains unsolved. The first ionization potential of helium is theorized to be 20.42 eV, against the experimental value of 24.58 eV. Further, the atomic spectra show that for every series of lines (Lyman, Balmer, etc.) found for hydrogen, there is a corresponding series found at shorter wavelengths for helium, as predicted by theory. But in the spectrum of helium there are two series of lines observed for every single series of lines observed for hydrogen. Not only does helium possess the normal Balmer series, but it also has a second “Balmer” series starting at λ = 3889 Å. This shows that, for the helium atom, the whole series repeats at shorter wavelengths.
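The statement about corresponding series can be explored with the hydrogen-like Rydberg formula, 1/λ = R Z² (1/n₁² − 1/n₂²). The short Python sketch below (our own illustration, ignoring reduced-mass corrections) lists the Balmer series of hydrogen, the corresponding ionized-helium series at one quarter of those wavelengths, and the n₁ = 4 ionized-helium series, whose alternate lines coincide with the Balmer lines while the lines in between give the appearance of a second, repeated series:

R = 1.0973731568e7   # Rydberg constant in 1/m (infinite nuclear mass)

def wavelength_nm(Z, n1, n2):
    # Hydrogen-like transition wavelength from the Rydberg formula.
    inv_lambda = R * Z**2 * (1.0 / n1**2 - 1.0 / n2**2)
    return round(1e9 / inv_lambda, 1)

balmer_H     = [wavelength_nm(1, 2, n) for n in range(3, 8)]
he_plus_n1_2 = [wavelength_nm(2, 2, n) for n in range(3, 8)]
he_plus_n1_4 = [wavelength_nm(2, 4, n) for n in range(5, 11)]

print("H  Balmer (Z=1, n1=2):", balmer_H)       # 656.1, 486.0, 434.0, ... nm
print("He+ (Z=2, n1=2):      ", he_plus_n1_2)   # one quarter of the Balmer wavelengths, i.e. shorter
print("He+ (Z=2, n1=4):      ", he_plus_n1_4)   # every second line coincides with a Balmer line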
For the lithium atom it is even worse, as the total energy of repulsion between the electrons is more complex. Here it is assumed that, as in the case of hydrogen and helium, the most stable energy of the lithium atom will be obtained when all three electrons are placed in the 1s atomic orbital, giving the electronic configuration 1s³, even though this is contradicted by experimental observation. Following the same basis as for helium, the first ionization potential of lithium is theorized to be 20.4 eV, against the experimental values of 202.5 eV to remove all three electrons and only 5.4 eV to remove one electron from lithium. Experimentally, it requires less energy to ionize lithium than it does to ionize hydrogen, but the theory predicts an ionization energy one and a half times larger. More serious than this is the fact that the theory should never predict the system to be more stable than it actually is. The method should always predict an energy less negative than is actually observed. If this is not found to be the case, then it means that an incorrect assumption has been made or that some physical principle has been ignored.
Further, it contradicts the principle of periodicity, as the calculation places each succeeding electron in the 1s orbital as the nuclear charge increases by unity. It must be remembered that, with every increase in n, all the preceding values of l are repeated and a new l value is introduced. The reason why more than two electrons cannot be placed in the 1s orbital has not been explained. Thus, the mathematical formulations are contrary to the physical conditions based on observation. To overcome this problem, scientists take the help of operators. An operator is something which turns one vector into another. Scientists often describe robbery as an operator that transforms a state of wealth into a state of penury for the robbed, and vice versa for the robber. Another example of an operator often given is the operation that rotates a frame clockwise or anticlockwise, changing motion in the northern direction to motion in the eastern or western direction. The act of passing light through a polarizer is called an operator, as it changes the physical state of the photon’s polarization. Thus, the use of a polarizer is described as a measurement of polarization, since the transmitted beam has its polarization along the axis of the polarizer. We will come back to operators later.
The probability does not refer (as is commonly believed) to whether the particle will or will not be observed at a specific position at a specific time. Similarly, the description of different probabilities of finding the particle at different points of space is misleading. A particle will be observed only at a particular position at a particular time and nowhere else. Since a mobile particle does not have a fixed position, the probability actually refers to the state in which the particle is likely to be observed. This is because all the forces acting on it, and their dynamics, which influence the state of the particle, may not be known to us. Hence we cannot predict with certainty whether the particle will be found here or elsewhere. After measurement, the particle is said to acquire a time-invariant “fixed state” by “wave-function collapse”. This is referred to as the result of measurement, which is an arbitrarily frozen, time-invariant, non-real state (since in reality it continues to change). This is because the actual state, with all influences on the particle, has been measured at “here-now”, which is a perpetually changing state. Since all mechanical devices are subject to time variance in their operational capacities, they have to be “operated” by a “conscious agent” – directly or indirectly – because, as will be shown later, only consciousness is time-invariant. This transition from a time-variant initial state to a time-invariant hypothetical “fixed state” through “now” or “here-now” is the dividing line between quantum physics and classical physics, as well as between conscious actions and mechanical actions. To prove the above statement, we examine in later pages what “information” is, because only conscious agents can cognize information and use it to achieve desired objects. However, before that we will briefly discuss the chaos prevailing in this area among scientists.
Modern science fails to answer the question “why” on many occasions. In fact, it avoids such inconvenient questions. Here we may quote an interesting anecdote from the lives of two prominent persons. Once, Arthur Eddington was explaining the theory of the expanding universe to Bertrand Russell. Eddington told Russell that the expansion was so rapid and powerful that even the most powerful dictator would not be able to control the entire universe. He explained that even if the orders were sent with the speed of light, they would not reach the farthest parts of the universe. Bertrand Russell asked, “If that is so, how does God supervise what is going on in those parts?” Eddington looked keenly at Russell and replied, “That, dear Bertrand, does not lie in the province of the physicists.” This begs the question: what is physics? We cannot take the stand that the role of physics is not to explain, but only to describe reality. Description is also an explanation; otherwise, why and to whom do you describe? If the validity of a physical statement is judged by its correspondence to reality, we cannot hide behind the veil of reductionism, but must explain scientifically the theory behind the seeming “acts of God”.
There is a general belief that we can understand all physical phenomena if we can relate them to the interactions of atoms and molecules. After all, the Universe is made up of these particles only. Their interactions – in different combinations – create everything in the Universe. This is called a reductionist approach, because it is claimed that everything else can be reduced to this supposedly more fundamental level. But this approach runs into problems with thermodynamics and its arrow of time. In the microscopic world, no such arrow of time is apparent, irrespective of whether it is being described by Newtonian, relativistic or quantum mechanics. One consequence of this description is that there can be no state of microscopic equilibrium. Time-symmetric laws do not single out a special end-state where all potential for change is reduced to zero, since all instants in time are treated as equivalent.
The apparent time-reversibility of motion within the atomic and molecular regimes, in direct contradiction to the irreversibility of thermodynamic processes, constitutes the celebrated irreversibility paradox put forward in 1876 by Loschmidt, among others (L. Boltzmann: Lectures on Gas Theory, University of California Press, 1964, page 9). The paradox suggests that the two great edifices – thermodynamics and mechanics – are at best incomplete. It represents a very clear problem in need of an explanation, which should not be swept under the carpet. As Lord Kelvin put it: if the motion of every particle of matter in the Universe were precisely reversed at any instant, the course of Nature would be simply reversed for ever after. The bursting bubble of foam at the foot of a waterfall would reunite and descend into the water. The thermal motions would reconcentrate energy and throw the mass up the fall in drops, re-forming into a close column of ascending water. Living creatures would grow backwards – from old age to infancy till they are unborn again – with conscious knowledge of the future but no memory of the past. We will resolve this paradox in later pages.
The modern view of reductionism is faulty. Reductionism is based on the concept of differentiation. When an object is perceived as a composite that can be reduced to different components having perceptibly different properties, which can be differentiated from one another and from the composite as a whole, the process of such differentiation is called reductionism. Some objects may generate a similar perception of some properties, or the opposite perception of some properties, within a group of substances. In such cases the objects with similar properties are grouped together and the objects with opposite properties are grouped together. The only universally perceived aspect common to all objects is physical existence in space and time, as the radiation emitted by, or the field set up by, all objects creates a perturbation in our sense organs in identical ways. Since intermediate particles exhibit some properties similar to those of other particles, and are perceived in a similar way and not fully differentiated from them, reductionism properly applies only to the fundamental particles. This principle is violated in most modern classifications.
To give one example, x-rays and γ-rays exhibit exclusive characteristics that are not shared by other rays of the electromagnetic spectrum or between themselves – such as the place of their origin. Yet they are clubbed under one category. If wave-like propagation is the criterion for such categorisation, then sound waves, which travel through a medium such as air or other gases in addition to liquids and solids of all kinds, should also have been added to the classification. Then there are mechanical waves, such as the waves that travel through a vibrating string or other mechanical object or surface, and waves that travel through a fluid or along the surface of a fluid, such as water waves. If electromagnetic properties are the criteria for such categorisation, then it is not scientific, as these rays do not interact with electromagnetic fields. If they have been clubbed together on the ground that theoretically they do not require any medium for their propagation, then firstly there is no true vacuum, and secondly they are known to travel through various media such as glass. There are many such examples of wrong classification due to reductionism and developmental history.
The cults of incomprehensibility and reductionism have led to another deficiency. Both cosmology and elementary particle physics share the same theory of plasma and radiation. These have an independent existence that is seemingly eternal and may be cyclic. Their combinations lead to the sub-atomic particles that belong to the micro world of quantum physics. Atoms are a class by themselves, whose different combinations lead to the perceivable particles and bodies that belong to the macro world of so-called classical physics. The two worlds merge in the stars, which contain the plasma of the micro world and the planetary systems of the macro world. Thus, the study of the evolution of stars can reveal the transition from the micro world to the macro world. For example, the internal structures of the planet Jupiter and of the proton are identical, and, like protons, Jupiter-like bodies are abundant among the stars. Yet, instead of a unification of all branches of science, cosmology and nuclear physics have been fragmented into several “specialized” branches.
Here we are reminded of an anecdote related to Lord Chaitanya. During his southern sojourn, a debate was arranged between him and a great scholar of yore. The scholar went on explaining many complex doctrines while Lord Chaitanya sat quietly and listened with rapt attention, without any response. Finally the scholar told Lord Chaitanya that he was not responding at all to his discourse. Was it too complex for him? The scholar was sure from the look on Lord Chaitanya’s face that he had not understood anything. To this, Lord Chaitanya replied: “I fully understand what you are talking about. But I was wondering why you are making simple things look so complicated.” Then he explained the same theories in plain language, after which the scholar fell at his feet.
There have been very few attempts to distil the essence of all branches and develop “one” science. Each branch has its huge data bank, with specialized technical terms that glorify some person at the cost of a scientific nomenclature, thereby enhancing incomprehensibility. Even if we read the descriptions of all six proverbial blind men repeatedly, one who has not seen an elephant cannot visualize it. This leaves the students with little opportunity to get a macro view of all theories and evaluate their inter-relationships. The educational system, with its examination method emphasizing “memorization and reproduction at a specific instant”, compounds the problem. Thus, the students have to accept many statements and theories as “given”, without questioning them even in the face of ambiguities. Further, we have never come across any book on science which does not glorify the discoveries in superlative terms, while leaving out the uncomfortable and ambiguous aspects, often with an assurance that they are correct and should be accepted as such. This creates an impression on the minds of young students that the theories are to be accepted unquestioningly, making them superstitious. Thus, whenever some deficiencies have been noticed in any theory, there is an attempt at patchwork within the broad parameters of the same theory. There have been few attempts to review the theories ab initio. Thus, the scientists cannot relate the tempest in a distant land to the flapping of the wings of the butterfly elsewhere.
Till now, scientists do not know “what” electrons, photons, and the other subatomic objects that have made the amazing technological revolution possible actually are. Even the modern description of the nucleus and the nucleons leaves many aspects unexplained. The photo-electric effect, for which Einstein got his Nobel Prize, deals with electrons and photons, but it does not clarify “what” these particles are. The scientists who framed the current theories were not gifted with the benefit of the presently available data. Thus, without undermining their efforts, it is necessary to re-formulate the theories ab initio on the basis of the presently available data. Only in this way can we develop a theory whose conclusions correspond to reality. Here is an attempt in this regard from a different perspective. Like the child revealing the secret of the Emperor’s clothes, we, a novice in this field, are attempting to point the lamp in the direction of the Sun.
Thousands of papers are read every year in various forums on as yet undiscovered particles. This reminds us of the saying: after taking a bath in the water of the mirage, wearing the flower of the sky on his head, holding a bow made of the horns of a rabbit, here goes the son of the barren woman! Modern scientists are making precisely similar statements. This is a sheer waste not only of valuable time but also of public money worth trillions, for the pleasure of a few. In addition, it amounts to misguiding the general public for generations. This is unacceptable, because a scientific theory must stand up to experimental scrutiny within a reasonable time period. Till it is proved or disproved, it cannot be accepted, though it need not be rejected either. We cannot continue for three quarters of a century and more to develop “theories” based on such unproven postulates in the hope that we may succeed someday – maybe after a couple of centuries! We cannot continue research on the properties of the “flowers of the sky” on the ground that someday they may be discovered.
Experiments with subatomic phenomena show effects that have not been reconciled with our normal view of an objective world. Yet they cannot be treated separately. This implies the existence of two different states – classical and quantum – with different dynamics, but linked to each other in some fundamentally similar manner. Since the validity of a physical statement is judged by its correspondence to reality, there is a big question mark over the direction in which theoretical physics is moving. Technology has acquired a pre-eminent position in the global epistemic order. However, engineers and technologists, who progress by trial-and-error methods, have projected themselves as experimental scientists. Their search for new technology has been touted as the progress of science, and questioning its legitimacy is treated as sacrilege. Thus, everything that exposes the hollowness or deficiencies of science is consigned to defenestration. The time has come to seriously consider the role, the ends and the methods of scientific research. If we are to believe that the sole objective of the scientists is to make their impressions mutually consistent, then we lose all motivation for theoretical physics. These impressions are not of a kind that occurs in our daily life; they are extremely special, and are produced at great cost in time and effort. Hence it is doubtful whether the mere pleasure their harmony gives to a selected few can justify the huge public spending on such “scientific research”.
A report published in the October 2005 issue of the Notices of the American Mathematical Society shows that the theory of dynamical systems, which is used for calculating the trajectories of space flights, and the theory of transition states for chemical reactions share the same mathematics. This is proof of the universally true statement that the microcosm and the macrocosm replicate each other; the only problem is to find the exact correlations. For example, as we have repeatedly pointed out, the internal structure of a proton and that of the planet Jupiter are identical. We will frequently use this and other similarities between the microcosm and the macrocosm (from astrophysics) in this presentation to prove the above statement. We will also frequently refer to the definitions of technical terms as defined precisely in our book “Vaidic Theory of Numbers”.