Wednesday, September 07, 2011

REINVENTING THE THEORY OF GRAVITATION


ABSTRACT.
The Pioneer anomaly, the unexplained change of direction of Voyager 2 beyond Saturn’s orbit, the fly-by anomaly, the divergence of precisely measured values of G from the accepted value, and many more unexplained observational results all point to defects in the modern theory of gravitation. Modified Newtonian Dynamics (MOND), which was proposed to answer some of these anomalies, is also not satisfactory. The problem lies in the cumulative nature of scientific theories, where all subsequent theories are built essentially upon earlier ones. Since the presently available data were not known when the earlier theories were formulated, the developments built upon them may not rest on solid foundations and hence should not be accepted without proper re-examination. Reductionism, which restricts holistic vision, compounds the problem. We propose to review the earlier theories in the light of the presently available data and reformulate the theory of gravitation from those modified theories.

We hold that gravitation is not an attractive force, but a stabilizing force with two different functions: structure formation (we call it vyuhana – literally stitching) and displacement (we call it prerana) - moving bodies to an equilibrium position with respect to the field that holds them. Confined bodies under the effect of gravitation act somewhat like a heterosexually polygamous population, where each member interacts with the field but not with every other member or body. Since all other fundamental forces of Nature interact with bodies (and not the field) in a one-to-one correspondence, like monogamous couples, based on a combination of proximal and distance variables, those interactions are a class apart and could not be unified with the theory of gravitation. Structure formation due to gravity can be explained only if we understand its basic nature, including the causes and the fractional nature of the spin (1/3 of ½ = 1/6) of elementary particles. This makes gravity a composite force of 6+1=7 for macro (classical) as well as micro (quantum) bodies, without involving the hypothetical graviton. It also makes ‘G’ a variable - not as proposed by Dirac in 1937, but like ‘g’, the acceleration due to gravity, which varies with height, and ‘g’, the magnetic moment of the electron, which relates the size of the electron’s magnetism to its intrinsic spin. Displacement can be of 5 x 2 + 1 = 11 different types in positive and negative directions, which leads to action. This allows a maximum of 11 x 11 + 1 = 122 different possible types of action.

WHY DO WE NEED A NEW THEORY?

There are some observational results that are not adequately accounted for by conventional theories of gravity. Some of these are listed below:
• Pioneer anomaly: The two Pioneer spacecraft seem to be slowing down in a way which has yet to be explained.
• Flyby anomaly: Various spacecraft have experienced greater accelerations during slingshot maneuvers than expected.
• Extra fast stars: Stars in galaxies follow a distribution of velocities where stars on the outskirts are moving faster than they should according to the observed distributions of normal matter. Galaxies within galaxy clusters show a similar pattern. The postulated Dark Matter (DM), which would interact gravitationally but not electromagnetically, may account for the discrepancy if it exists. Various modifications to Newtonian dynamics have been proposed, which eliminate DM, but introduce extra fields through the backdoor.
• Accelerating expansion: The metric expansion of space seems to be speeding up. Dark energy has been proposed to explain this.
• Anomalous increase of the AU: Recent measurements indicate that planetary orbits are expanding faster than they should if this were solely due to the Sun losing mass by radiating energy. The observed changes to Earth’s orbit show an increase in the astronomical unit of between 5 and 9 cm per year. While the distance of closest approach of the Earth to the Sun decreases in each cycle, the distance taken to be the semi-major axis increases.
• Extra energetic photons: Photons traveling through galaxy clusters should gain energy and then lose it again on their way out. The accelerating expansion of the universe should stop the photons returning all the energy. But even taking this factor into account, photons from the cosmic microwave background radiation gain twice as much energy as expected. This may indicate that gravity falls off faster than inverse-squared at certain distance scales.
• Dark flow: Surveys of galaxy motions have detected a mystery dark flow towards an unseen mass. According to current models, such a mass is too large to have accumulated since the Big Bang and may indicate that gravity falls off slower than inverse-squared at certain distance scales.
• Extra massive hydrogen clouds: The spectral lines of the Lyman-alpha forest suggest that hydrogen clouds are more clumped together at certain scales than expected and, like dark flow, may indicate that gravity falls off slower than inverse-squared at certain distance scales.
• If gravitation is a consequence of space-time pressure on matter due to mutual antagonism, then gravitational phenomena are not linear but volume dependent. Instead of an inverse square law, gravitation should require an inverse cube law: i.e., gravity at A = gravity at B × (distance B / distance A)³ (see the sketch after this list).
• When a plumb line is set up near a mountain range, it is attracted from the vertical towards the mountains but by far less than would be expected from calculations. This is called negative gravity anomaly. If the plumb bob is attracted more than expected from calculation, it is called positive gravity anomaly.
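
To make the proposed scaling concrete, here is a minimal numerical sketch in Python comparing the standard inverse-square fall-off with the inverse-cube relation suggested above (all values illustrative; this is our toy comparison, not an established result):

def inverse_square(g_b, d_b, d_a):
    # Standard Newtonian scaling: g_A = g_B * (d_B / d_A)**2
    return g_b * (d_b / d_a) ** 2

def inverse_cube(g_b, d_b, d_a):
    # Scaling proposed in the bullet above: g_A = g_B * (d_B / d_A)**3
    return g_b * (d_b / d_a) ** 3

g_b, d_b = 9.81, 1.0  # illustrative reference field strength and distance
for d_a in (2.0, 5.0, 10.0):
    print(d_a, inverse_square(g_b, d_b, d_a), inverse_cube(g_b, d_b, d_a))

At ten times the reference distance the two laws already differ by a factor of ten, so the proposal is, at least in principle, testable.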

PIONEER ANOMALY: As indicated by their radio-metric data, reconstructions of the Pioneer 10 and 11 spacecraft’s orbits were limited by a small, anomalous, constant, blue-shifted Doppler frequency drift of approximately 6 × 10⁻⁹ Hz/s. The drift can be interpreted as a constant Sun-ward acceleration of aP = (8.74 ± 1.33) × 10⁻¹⁰ m/s² (about 5000 km per annum). The accumulated anomaly at present is about 400,000 kilometers. This interpretation has come to be known as the Pioneer anomaly. A similar anomaly has been experienced by the Cassini probe. The trajectory data from the Voyagers cannot be used to examine the Pioneer anomaly because they are three-axis-stabilized spacecraft that fire thrusters to maintain the correct orientation with respect to their target objects. Those thruster firings introduce uncertainties in the tracking data that would mask the effects of the Pioneer anomaly. The Pioneer probes are spin-stabilized, so their orbital trajectories can be calculated far more precisely.
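
For scale, a rough check of the figures quoted above: integrating the tiny constant acceleration over roughly 30 years of flight (an assumed round figure for the mission duration) does accumulate to the quoted ~400,000 km:

a_p = 8.74e-10            # anomalous Sun-ward acceleration, m/s^2 (from above)
year = 3.156e7            # seconds in a year
t = 30 * year             # assumed elapsed flight time: ~30 years
drift = 0.5 * a_p * t**2  # displacement under a constant acceleration
print(drift / 1000.0)     # ~ 4e5 km, the figure quoted above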

The distribution of satellite galaxies that orbit the Milky Way also presents a direct challenge to the Newtonian theory of gravitation, as the galaxies are not where they should be. According to Standard Cosmological Models, they should be uniformly arranged around the Milky Way. But that is not the case. Some of the possible reasons for such anomalous behavior being considered at present are:
1). The effect of Dark matter.
2). A defect in the modern theory of gravitation, i.e., Newtonian laws of gravitation, special and general relativity are wrong, and,
3). Some other exotic theory or a novel phenomenon that is still unknown.

DARK MATTER: DM is a hypothetical proposition that has not been directly verified under laboratory conditions. Hence postulates involving it cannot be treated as a valid theory. Basically, dark matter is invoked not on the basis of empirical evidence, but to save the Standard Model (SM). It has been observed that stars in galaxies are moving faster than predicted, for some as yet unknown reason; the DM concept was introduced to explain this phenomenon. But if we accept the effect of dark matter in galactic arrangements, it presents a paradox. Consider the brightest dwarf galaxies, which lie more or less in the same plane, like a disk, and revolve in the same direction around the Milky Way, like the planets in the Solar system. According to current cosmological models, their arrangement could be explained only if we accept the possibility of colliding galaxies. But this introduces the paradox: calculations suggest that if the dwarf satellite galaxies were created in this way, they cannot contain any dark matter. Moreover, all “proofs” of DM involve very distant events over an extremely insignificant period on the cosmic scale, together with assumptions about how we should interpret those events. All of the more local-scale observations, including those of our galaxy, have provided null results for possible dark-matter attributes.

Some scientists suggest that our knowledge of galactic formation is not complete; hence we come across such paradoxes. In cosmology, the standard equations generally work for planets and stars, but at extreme proximities even minor deviations from spherical symmetry in their distribution of mass introduce significant estimation errors. How accurate are the mass estimates with which the people working in this field actually operate? People came up with dark matter (and dark energy?) to plug the holes in the equations simply because they never thought to question how accurate the estimates of mass on which they were relying actually are, or to start trying to solve the problem by tweaking the equations. Others say that the stipulation of the SM - that gravity is constant – may not be correct. If we take away dark matter and accept that gravity is not constant in all states with all matter, this would explain the observed results.

All observations for which the requirement for dark matter was determined by applying the inverse-square law to two relatively proximal objects, at least one of which is a virtual object composed of many discrete objects of mass, are technically in error and should be re-evaluated. This is especially true for the initial reports of observations describing the Galaxy Rotation Problem, which lent general credibility to the dark-matter hypothesis.

MOND: The possibility of a defect in the modern theory of gravitation gave rise to Modified Newtonian Dynamics (MOND). It is a mathematical treatment that models stellar flow in many situations, but it falls short of describing all celestial objects. According to this view, over large spatial scales gravity deviates from the inverse-square law that characterizes the Newtonian law of gravitation. MOND tries to explain the arrangements of galactic clusters and flat rotation curves without DM. Since general relativity is also an inverse-square theory, if MOND is correct, GR would also need modification. But for these modified versions to work, some sort of unseen or “dark” presence is a must, which looks a lot like dark matter. It won’t be described by particles in the way that dark matter is described - it may take a more wave-like or field-like form. In other words, MOND can do away with dark matter but cannot describe the universe simply as the product of a tweaked Einsteinian gravity acting on the mass we can see. It modifies gravity, but through the backdoor it introduces extra fields, which means that the distinction between dark matter and modified gravity isn’t very clear. In the paper “No Evidence for a Dark Matter Disk within 4 kpc From the Galactic Plane” (http://arxiv.org/abs/1011.1289), the authors note that their findings directly contradict the predictions of MOND.

DEFLECTION OF VOYAGER 2: An unexplained but little-discussed phenomenon is the sudden change in the direction of the Voyager 2 spacecraft after it crossed the orbit of Saturn. It was not so apparent for Voyager 1, but that could have been due to its orientation: because its trajectory was designed to fly close to Saturn’s large moon Titan, Voyager 1’s path was bent northward by Saturn’s gravity. That sent the spacecraft out of the Solar System’s ecliptic plane - the plane in which all the planets, except Pluto, orbit the Sun. The Voyager spacecraft are guided by tiny thrusters whose firings overwhelm the signal, while the Pioneers float freely and are pointed using gyroscopes. Since the Pioneer and Voyager probes have no main engines, they are constantly slowing down as gravity tries to pull them back (the Pioneer anomaly?). However, they are moving fast enough to overcome the Sun’s gravitational field and eventually enter interstellar space. Voyager 2 is at −55.32° declination and 19.785 h right ascension, placing it in the constellation Telescopium as observed from Earth.

FLY-BY ANOMALY: Scientists often use the gravitational fields of planets or moons to save fuel and travel much further through the solar system than would otherwise be possible. According to reports (New Scientist, 20 September 2008, p. 38), in December 1990 NASA’s Galileo spacecraft passed like a slingshot around the Earth on its roundabout route to Jupiter. As the probe raced away from Earth, it was traveling 3.9 millimeters per second faster than it should have been according to NASA’s calculations. More such incidents have been reported since then. This is called the fly-by anomaly. The biggest such discrepancy recorded, in 1998, affected NASA’s NEAR Shoemaker spacecraft, whose speed was boosted by an additional 13.5 millimeters per second. Rosetta had already had such a boost: in 2005, it sped up by about 1.8 millimeters per second more than expected as it slingshot around Earth. Nothing in known physics predicts this acceleration. Some relate the probes’ incoming and outgoing trajectory angles and Earth’s rotational velocity to the extra acceleration experienced by the spacecraft as they swing by us; the smallest anomalies arise when the incoming and outgoing trajectories are symmetrical with respect to Earth’s equator. There is no explanation from standard, accepted physics. Proposed explanations include dark matter, modifications to relativity, imbalances in the Earth’s gravitational field, or something unknown to do with inertia or the nature of light.

CHANGING VALUE OF ‘G’: The value of G calculated by Newton was terribly wrong, because he did not have the benefit of sophisticated equipment. He assumed the mean distance of the Moon from the Earth to be 60.27 times the radius of the Earth, and his estimates of the masses of the two bodies were also imprecise. Einstein wanted to eliminate arbitrary constants like G from the expression of physical laws, but in GTR it is still required to determine the proportionality between mass and curvature empirically. Thus G remains, and there is no formula to calculate its value theoretically; it can only be measured experimentally. The measured value of G has been found to differ with each successive measurement. This has put a question mark over the value and the nature of G. Though Dirac suggested a changing value of G, his concept was faulty and has rightly been discarded. Since most of the mathematics used in physics would crumble if a changing value of G were incorporated, the more accurate values have been swept under the carpet. But this temporary respite is causing long-term damage to physics.
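
Incidentally, Newton’s 60.27 figure can be checked with his classic “moon test”: surface gravity diluted by the inverse-square law should match the Moon’s centripetal acceleration. A quick sketch (the sidereal month of 27.32 days and an Earth radius of 6.371 × 10⁶ m are our assumed inputs):

import math

g = 9.81                     # surface gravity, m/s^2
R_earth = 6.371e6            # Earth radius, m (assumed input)
r_moon = 60.27 * R_earth     # Newton's assumed lunar distance
T = 27.32 * 86400            # sidereal month, s (assumed input)

a_inv_sq = g / 60.27 ** 2                     # g diluted by (1/60.27)^2
a_orbit = 4 * math.pi ** 2 * r_moon / T ** 2  # centripetal acceleration
print(a_inv_sq, a_orbit)                      # both ~2.7e-3 m/s^2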

SUPERCONDUCTING LEVITATION has been suggested by some to couple to a gravity-wave effect. Superconducting fields levitate objects. Several years ago, some researchers using a semiconductor plate formed in a high-gauss magnetic field found that at superconducting temperatures, when current was applied opposite to the plate’s magnetic alignment, a levitation ‘beam’ effect could be created. The beam apparently propagated indefinitely and also levitated non-magnetically aligned matter. When the current was applied in the opposite direction, matter in the beam became heavier. Since superconductivity at high temperatures in the Earth’s core is a possibility, this view gains some credence.

COLD FUSION in the outer planets has been suggested by some, because unexplainable electromagnetic field discharges have been seen on Jupiter and Saturn. But the true relation between gravitational and electromagnetic forces is to be found only through an understanding of why the elementary particles exist with just certain masses and not others, and of the relation between their masses and their electric and magnetic properties. One theoretical physicist has argued that gravitational attraction could be the result of the way information about material objects is organized in space. On this view, gravity is a phenomenon emerging from the fundamental properties of space and time. An analogy is fluidity in water: individual molecules have no fluidity, but collectively they do. Similarly, the force of gravity is not something ingrained in matter itself; it is an extra physical effect, emerging from the interplay of mass, time and space. His idea of gravity as an “entropic force” is based on the first principles of thermodynamics, but works within an exotic description of space-time called holography.

MORE REASONS: Some others have suggested two sorts of gravity: one that attracts towards the centers of bodies and another that attracts towards outer space, like centripetal and centrifugal forces. In this view, the edge of the Universe is being attracted into the region that it does not yet occupy. Maybe some particles are subject to it and have an affinity with the boundary. This could be an effect of the spin of the original quantum fluctuation that is supposed to have created the Universe.

These contradictions are not the only reasons for reviewing the theory of gravitation and other related theories. There is much more! First, if gravity is actually an energy force (debatable in M-Theory) and is propagated in waves, then do competing gravitational waves reinforce, cancel, or otherwise interfere with one another? Second, if gravity is energy, and energy has mass, then does gravitational energy itself have mass and therefore generate more gravity? And if so, could the density of gravitational fields potentially warp or affect the cosmological constant on a local level so as to produce the effects usually attributed to the elusive and potentially nonexistent “dark matter”? Further, there are many unreconciled problems of Quantum Mechanics (QM) and General Relativity (GR). As one scientist pointed out, QM and GR face serious unresolved problems such as:

Why does QED fail to get within 4% of the proton radius (in muonic hydrogen)? (See the check after this list.)
Why is the core of the neutron negative? (QCD predicts positive.)
Why are there three particle families?
Why does the decay into 3?
Why do the 6 quarks in deuterium not collapse to a sphere, but maintain a ‘cigar’ shape?
Why does an electron exist in a non-dispersing (classical) orbit?
Why can’t QED derive the ‘fine structure constant’?
What explains the relative mass order of the electron and the up and down quarks?
Why are halo neutrons stable beyond the range of the strong force?
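
As promised above, the 4% figure in the first question can be reproduced from the two commonly quoted proton-radius values (taken from the literature as given, not derived here):

r_codata = 0.8768   # fm, CODATA (electronic hydrogen) proton charge radius
r_muonic = 0.84184  # fm, muonic-hydrogen result (Pohl et al., 2010)
print((r_codata - r_muonic) / r_codata)  # ~0.04, i.e. the quoted 4%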

The same dilemma comes up in many guises: Why do photons from the Sun travel in directions that are not parallel to the direction of Earth’s gravitational acceleration toward the Sun? Why do total eclipses of the Sun by the Moon reach maximum eclipse about 40 seconds before the Sun and Moon’s gravitational forces align? How do binary pulsars anticipate each other’s future position, velocity, and acceleration faster than the light time between them would allow? How can black holes have gravity when nothing can get out because escape speed is greater than the speed of light?

Standard experimental techniques exist to determine the propagation speed of forces. When we apply these techniques to gravity, they all yield propagation speeds too great to measure, substantially faster than light speed. This is because gravity, in contrast to light, has no detectable aberration or propagation delay for its action, even for cases (such as binary pulsars) where the sources of gravity accelerate significantly during the light time from source to target. By contrast, the finite propagation speed of light causes radiation-pressure forces to have a non-radial component, causing orbits to decay (the “Poynting-Robertson effect”); but gravity has no counterpart force proportional to v/c to first order. GR explains these features by suggesting that gravitation (unlike electromagnetic forces) is a pure geometric effect of curved space-time, not a force of nature that propagates. Gravitational radiation, which surely does propagate at light speed but is a fifth-order effect in v/c, is too small to play a role in explaining this difference in behavior between gravity and ordinary forces of nature.

It is widely accepted, even though less widely known, that the speed of gravity in Newton’s Universal Law is unconditionally infinite (e.g., Misner et al., 1973, p. 177). This contradicts the statement that GR reduces to Newtonian gravity in the low-velocity, weak-field limit. The contradiction is obvious: if the propagation speed of gravity in one model equals the speed of light (the limiting velocity), how can it be infinite in the other?

Quantum mechanics deals with quantum particles. Yet we have not come across any satisfactory definition of what constitutes a quantum particle. The nearest definition we could find is that it is the smallest discrete quantity of some physical property that a system can possess. This definition is not satisfactory, as “particle” implies the smallest possible discrete quantity of a physical entity, which includes a part thereof. “Particle” also implies an “action”, “actor” or “activity” that can seemingly exist of its own accord anywhere and everywhere. In mathematics, apparently, there are “quantum quantities”, as well as other realities such as “infinities”, which have a different meaning from the number sequences associated with particles.

Till now there is no single cohesive theory that can be called Quantum Mechanics. There are a large number of different approaches to, or formulations of, the foundations of Quantum Mechanics: Heisenberg’s Matrix Formulation, Schrödinger’s Wave-function Formulation, Feynman’s Path Integral Formulation, the Second Quantization Formulation, Wigner’s Phase Space Formulation, the Density Matrix Formulation, Schwinger’s Variational Formulation, the de Broglie-Bohm Pilot Wave Formulation, the Hamilton-Jacobi Formulation, etc. There are several quantum mechanical pictures based on the placement of time-dependence: the Schrödinger Picture (time-dependent wave-functions), the Heisenberg Picture (time-dependent operators) and the Interaction Picture (time-dependence split). The different approaches are, in fact, modifications of the theory. Each one introduces some prominent new theoretical aspect with new equations, which needs to be interpreted or explained. Thus, there are many different interpretations of Quantum Mechanics, which are very difficult to characterize. Prominent among them are: the Realistic Interpretation (the wave-function describes reality), the Positivistic Interpretation (the wave-function contains only information about reality), and the famous Copenhagen Interpretation, which is the orthodox interpretation. Then there are Bohm’s Causal Interpretation, Everett’s Many Worlds Interpretation, Mermin’s Ithaca Interpretation, etc. With so many contradictory views, quantum physics is not a coherent theory, but is truly weird.

String theory, which was developed with a view to harmonizing General Relativity with Quantum theory, is said to be a higher-order theory in which other models, such as supergravity and quantum gravity, appear as approximations. Unlike supergravity, string theory is said to be a consistent and well-defined theory of quantum gravity, and therefore calculating the value of the cosmological constant from it should, at least in principle, be possible. On the other hand, the number of vacuum states associated with it seems to be quite large, and none of these features three large spatial dimensions, broken super-symmetry, and a small cosmological constant together. The features of string theory which are at least potentially testable - such as the existence of super-symmetry and cosmic strings - are not specific to string theory. In addition, the features that are specific to string theory - the existence of strings - either do not lead to precise predictions or lead to predictions that are impossible to test with current levels of technology.

There are many unexplained questions relating to strings. For example, given the measurement problem of quantum mechanics, what happens when a string is measured? Does the uncertainty principle apply to the whole string, or only to the section of the string being measured? Does string theory modify the uncertainty principle? If we measure its position, do we get only the average position of the string? If the position of a string is measured with arbitrarily high accuracy, what happens to its momentum? Does the momentum become undefined, as opposed to simply unknown? What about the location of an end-point? If the measurement returns an end-point, then which end-point? Does the measurement return the position of some point along the string? (The string is said to be a one-dimensional object extended in space; hence its position cannot be described by a finite set of numbers and thus cannot be determined by a finite set of measurements.) How do Bell’s inequalities apply to string theory? We must get answers to these questions before we probe further and spend (waste!) more money on such research. These questions should not be swept under the carpet as inconvenient, or on the ground that some day we will find the answers. That “some day” has been a very long period indeed!

General relativity breaks down when gravity is very strong: for example, when describing the big bang or the heart of a black hole. And the standard model has to be stretched to breaking point to account for the masses of the universe’s fundamental particles, which are regularly revised. The two main theories of the last century, quantum theory and relativity, are also incompatible, having entirely different notions of, for example, the concept of time. This incompatibility has made it difficult to unite the two in a single “Theory of Everything”. There is an almost infinite number of candidate “Theories of Everything” or “Grand Unified Theories”, but none of them is free from contradictions. There is a vertical split between those pursuing the superstrings route and others who follow the little Higgs route.

Thus, before thinking about the amalgamation of QM and GR, we should understand these issues that have a bearing on the theory of gravitation. A fundamental lesson of general relativity is that there is no fixed space-time background, as found in Newtonian mechanics and special relativity. The space-time geometry is dynamic. To a certain extent, general relativity can be seen to be a relational theory, in which the only physically relevant information is the relationship between different events in space-time. On the other hand, quantum mechanics has depended since its inception on a fixed background (non-dynamical) structure. In the case of quantum mechanics, it is time that is given and not dynamic, just as in Newtonian classical mechanics. In relativistic quantum field theory, just as in classical field theory, Minkowski’s space-time is the fixed background of the theory.

Quantum gravity theories are based on two underlying assumptions:
1) Classical gravitation can be quantized and consistently treated as a quantum field.
2) Its effects become perceivable at some large energy defining the unification scale.
But are these two assumptions correct? This question arises because:
1) There is no direct evidence for gravitational waves, let alone for gravitons. We are not talking here about indirect evidence from binary stars, but about direct observations from LIGO and similar detectors.
2) There is no direct evidence that gravitation survives as an interaction field below an experimental limit of around 50 µm. It is only inferred from what we know today that this must be the case. But is this hypothesis correct? Physicists have only just started to explore physics at the TeV scale, and from where they stand today, it seems that the two assumptions listed above do not have strong experimental support. The LHC has surprised physicists and cosmologists with evidence that the early universe was a ‘perfect fluid’, not the ‘explosion of gases’ that is the basis of all current theories. This may have implications for the theory of gravitation.

Gravity is completely different from the other forces described by the standard model. When one does calculations about small gravitational interactions, one gets absurd answers: the “mathematics” simply doesn’t work. All these anomalies and many more problems indicate that the present attempts to unite QM and Relativity are futile. We must look at the entire issue with a clean slate and develop a totally revised theory of gravitation from scratch.

METHODOLOGY FOR RE-BUILDING A NEW THEORY.

There is an unreasonable over-emphasis on mathematics for describing physical laws. The validity of a physical statement is judged by its correspondence with reality: the physical description must correspond to what we perceive through our sense organs directly or infer indirectly. The particles that interact in Nature are bound by their properties, which puts severe restrictions on their degrees of freedom. The validity of a mathematical statement, by contrast, is judged by its logical consistency, irrespective of our perception: given the new input, every step must logically follow from the previous step independent of other factors. The new input must be either an accumulation of similars or a reduction of dissimilars in available numbers; this is the only restriction on mathematical operations. Thus mathematics, which is related to accumulation or reduction in numbers, describes how a system works, but not what it is or where it comes from.

For example, the mathematical descriptions of the laws of gravitation propounded by Newton and Einstein only explain how much (in quantity or magnitude) the mass of one system affects the motion of another system due to gravity. They do not explain what gravity is, where it comes from, or when and how it begins to operate. In fact, with the concept of anti-gravity gaining ground due to the profound similarity between gravitational theories and the laws that govern the interactions of electric charges and magnetic poles, the theories do not even explain with what gravity acts or does not act!

Secondly, much of the so-called “mathematics” used by physicists fails the test of logical consistency and other basic canons of mathematics. One example is renormalization, which violates the canonical principle of mathematics that all operations involving infinity are void. Another is the brute-force approach, where several parameters are arbitrarily set to unity to get some result, without considering the effect of such an operation on the whole system. Similarly, the Schrödinger equation in the so-called one dimension (it involves a second-order derivative) is extended to three dimensions by adding two more terms. Mathematically, one dimension represents length (curved lines are at least two-dimensional), two dimensions area and three dimensions volume, which arise through multiplication (not addition) by one dimension only. For these reasons, we will rebuild the theory from observational data and apply mathematics only to test its authenticity, by comparing theoretical predictions, as the inputs are varied, with actual results.
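
To show exactly which textbook step the preceding paragraph objects to, here is the standard 1-D kinetic term and the 3-D form obtained by adding the y and z second derivatives, written out symbolically (our sketch, using sympy):

import sympy as sp

x, y, z = sp.symbols('x y z')
hbar, m = sp.symbols('hbar m', positive=True)
psi1 = sp.Function('psi')(x)
psi3 = sp.Function('psi')(x, y, z)

# 1-D kinetic term: second order in x
T_1d = -hbar**2 / (2 * m) * sp.diff(psi1, x, 2)
# Standard 3-D form: the y and z second derivatives are *added* (the Laplacian)
T_3d = -hbar**2 / (2 * m) * (sp.diff(psi3, x, 2)
                             + sp.diff(psi3, y, 2)
                             + sp.diff(psi3, z, 2))
print(T_1d)
print(T_3d)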

A report published in the Notices of the American Mathematical Society (October 2005 issue) shows that the Theory of Dynamical Systems, which is used for calculating the trajectories of space flights, and the Theory of Transition States for chemical reactions share the same mathematics. It is a time-honored scientific tradition that the same equations have the same solutions. This implies that the same chaotic trajectories that govern the motions of comets, asteroids and spacecraft are traversed on the atomic scale by highly excited Rydberg electrons. This supports the universally true statement that the microcosm and the macrocosm replicate each other; we only have to identify the exact correlations. For example, as we have repeatedly pointed out, the internal structure of a proton and that of the planet Jupiter are identical. We will frequently use this and other similarities between the microcosm and the macrocosm (from astrophysics and several other fields) in this presentation to prove the above statement.

FALLACIES HISTORICALLY BUILT INTO THE THEORY OF GRAVITATION:

Galileo’s experiments with falling bodies showed that, at any angle of inclination, the speed of fall increases in direct proportion to time (counted from the moment of release) and that the distance covered increases in proportion to the square of the time. He also observed that a massive iron ball and a much lighter wooden ball roll down side by side if released simultaneously from the same height on the same inclined plane. Galileo inferred that in free fall all material bodies, light or heavy, also move in exactly the same way. This relation - velocity increasing in proportion to time and distance in proportion to the square of the time - both proves and points to the inadequacies of Newton’s second law.
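
Galileo’s two proportionalities are easy to check numerically; the sketch below uses g = 9.81 m/s² near the Earth’s surface, and the mass does not appear at all:

g = 9.81  # m/s^2 near the Earth's surface; the mass plays no role
for t in (1.0, 2.0, 3.0):
    v = g * t            # velocity grows in proportion to time
    d = 0.5 * g * t**2   # distance grows as the square of the time
    print(t, v, d, d / t**2)  # d / t^2 stays constant (= g/2)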

In Newton’s second law, f = ma, the term ‘f’ has not been qualified. Once an externally applied force acts on a body, the body is displaced. Thereafter, the force either loses contact with the body and ceases to act on it, or moves with the body (which is possible only if it is moving in the same direction as the body, but at a higher velocity). Newton did not take this factor into account. If the force ceases to act on the body then, assuming no other force acts on it, the body should move only due to inertia, which is constant. Thus the body should move at constant velocity and the equation should be:

f = mv and not f = ma.

If some externally applied force acts continuously on the body - like a train engine pulling it – the body will accelerate (the deceleration caused by friction with the tracks being overcome by such acceleration). But in the case of a falling body, where is the source of the acceleration? Once the body is released, there is no externally applied force acting on it. While ‘g’ (the acceleration due to gravity), ‘d’ (the distance from Earth’s surface) and ‘m1’ (the mass of the Earth) remain constant, ‘m2’ (the mass of the other body) changes from body to body. Thus, according to the Newtonian law of gravitation, different masses should accelerate differently. This is contrary to observation. Thus, the force that accelerates the bodies cannot be gravity. One possibility is that mass (which confines energy) provides this force for acceleration. But the masses of the bodies are different, so the question remains: what is the mechanism by which different masses provide equal acceleration? Newton’s second law is silent on this.

Here we must note the relationship between force, energy and action or work. Energy is linearly additive, which means that similar types of energy add up, while different types of energy co-exist in a non-linear relationship. There are plenty of examples where charged particles do not interact with other fields, though both co-exist. We will discuss this in detail later. Force is confined energy. Since it is confined, such energy, though available, cannot interact with other bodies; this provides the potential energy of objects. Unleashed or applied force is energy proper, which moves objects; this provides the kinetic energy of objects, through the addition of the applied energy to the energy of the field that contains the body. The effect on a body after the applied energy has ceased to operate is the action of, or work done by, the body.

Newton thought that the Earth, the apple and their intervening space are stationary, and that the Earth attracts the apple through gravity by an action-at-a-distance principle. According to his third law, while the Earth and the apple exert an equal but oppositely directed force on each other, the bodies accelerate differently due to their difference in mass; hence the apple moves towards the Earth and not vice-versa. But there is no proof to substantiate this postulate. As we have explained above, potential energy is the net confined energy in a body, and it will differ between the two bodies depending upon their mass and density. Due to conservation laws, every application of force activates a resistance (both in the medium and in the body) due to inertia. This resistance is equal in magnitude and oppositely directed. This appears to validate Newton’s third law.

But the resistance may not act linearly; in such cases the effect will appear different. When we kick a ball, it leads to two actions. First, the release of force by our leg puts kinetic energy into the ball, which is displaced; the ball continues to move due to inertia, displacing the air and meeting further resistance (we will discuss the effect of gravity separately). Secondly, as our leg hits the ball, it meets the impedance of the leg with its attachment to the body, of the air, and then of the ball, which is proportional to the force applied and the relative density of the interacting objects. The final result is that while the ball moves far away, our leg only recoils. These two actions are neither equal (in the magnitude of movement of the leg and the ball) nor opposite (in the direction of movement of the leg and the ball). Thus, the proper reading of Newton’s third law should be: “Every application of force generates inertia and a corresponding impedance, which are equal and oppositely directed only in their totality”.
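
Our “equal and oppositely directed only in their totality” reading can be illustrated with ordinary momentum bookkeeping; the masses and the ball’s launch speed below are assumed figures:

m_ball = 0.43        # kg, a football (assumed)
m_kicker = 75.0      # kg, the leg anchored to the body (assumed)
v_ball = 25.0        # m/s imparted to the ball (assumed)

p = m_ball * v_ball          # momentum given to the ball
v_recoil = -p / m_kicker     # equal and opposite momentum change
print(p, v_recoil)           # the ball flies off; the leg barely recoils

The exchanged momenta are equal and opposite in total, yet the visible movements of leg and ball differ by the mass ratio.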

The falling-body system is rather like people traveling in a train. To an observer standing on the same frame of reference as the train (i.e., the platform or the Earth), all passengers without additional movement of their own move at the same velocity. This shows that the space between the point from which the weights were dropped and the Earth acts like a moving train with reference to an observer standing at another point on Earth. In other words, there is actual movement of space. But this is different from space-time curvature (we will discuss it later). Newton missed this point.

Newton’s equation for estimating gravitational effects is mathematically correct only for pairs of discrete point-masses: either spherically symmetrical objects of mass, or objects whose separation distance is so great in proportion to their spatial dimensions that the error of the point-mass idealization is practically insignificant. Newton’s proof of the mathematical correctness of his equation describing gravitation was based on his Shell Theorem, which explains how spherically symmetrical distributions of mass can be represented by a point-mass. Relatively proximal non-spherical distributions of mass do not qualify as point-masses.
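
A minimal numerical sketch of this last point: a proximal, non-spherical distribution (here, a thin rod of discrete point masses) produces a force noticeably different from that of a single point-mass at its center. The rod length, mass and test distance are arbitrary illustrative choices:

G = 6.674e-11
N, length, total_mass = 101, 2.0, 1000.0   # thin rod along the x-axis
rod = [(i * length / (N - 1) - length / 2, total_mass / N) for i in range(N)]

d = 1.5  # test point on the x-axis, close relative to the rod's size
exact = sum(G * m / (d - x) ** 2 for x, m in rod)  # sum over discrete masses
point = G * total_mass / d ** 2                    # point-mass idealization
print(exact, point, exact / point)                 # the ratio is far from 1

At this proximity the discrete sum exceeds the point-mass estimate by tens of percent; at separations large compared with the rod, the ratio tends to 1, as the Shell Theorem discussion above would lead one to expect.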

FALLACIES IN THE THEORY OF RELATIVITY – SPECIAL & GENERAL:

Now we will examine the fallacies inherent in the Special and General Theories of Relativity. In his paper of 30 June 1905, Einstein discussed the principle of relativity in what came to be known as the Special Theory of Relativity. We quote excerpts from it, along with our comments, to show the conceptual contradictions that have generally been overlooked.

Einstein: “Let there be given a stationary rigid rod; and let its length be l as measured by a measuring-rod which is also stationary. We now imagine the axis of the rod lying along the axis of x of the stationary system of co-ordinates, and that a uniform motion of parallel translation with velocity v along the axis of x in the direction of increasing x is then imparted to the rod. We now inquire as to the length of the moving rod, and imagine its length to be ascertained by the following two operations:-

(a) The observer moves together with the given measuring-rod and the rod to be measured, and measures the length of the rod directly by superposing the measuring-rod, in just the same way as if all three were at rest.

(b) By means of stationary clocks set up in the stationary system and synchronizing in accordance with §1, the observer ascertains at what points of the stationary system the two ends of the rod to be measured are located at a definite time. The distance between these two points, measured by the measuring-rod already employed, which in this case is at rest, is also a length which may be designated “the length of the rod”.

In accordance with the principle of relativity the length to be discovered by the operation (a) - we will call it the length of the rod in the moving system - must be equal to the length l of the stationary rod.

The length to be discovered by the operation (b) we will call “the length of the (moving) rod in the stationary system”. This we shall determine on the basis of our two principles, and we shall find that it differs from l.”

Our comments: The method described at (b) is self-contradictory and impossible to carry out accurately by the principles described by Einstein himself. He has described two frames: one fixed and one moving along it. First, the length of the moving rod is measured in the stationary system against the backdrop of the fixed frame, and then the length is measured at a different epoch in a similar way in units of the velocity of light. We can do this in only two ways, one of which is the same as (a). Alternatively, following the method described in (b), we take a photograph of the moving rod against the backdrop of the fixed frame and then measure its length in units of the velocity of light or any other unit. But the picture will not give a correct reading, for the following reasons:

• If the length of the rod is small or the velocity is small, then the length contraction will not be perceptible according to the formula given by Einstein himself (see the numerical sketch after this list).
• If the rod is big or its velocity is comparable to that of light, then light from different points of the rod will take different times to reach the camera at the same instant, and we get a picture distorted by the different Doppler shifts of different points of the rod at any one instant. Thus, there is only one way of measuring the length of the rod: as in (a). But there are problems in this case also.
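
The numbers behind the two bullets above can be sketched as follows (rod length and speeds are illustrative assumptions): the Lorentz factor is utterly negligible at everyday speeds, while a rod moving near light speed travels a large fraction of its own length during one light-crossing time, distorting any single-instant photograph:

import math

c = 2.998e8  # m/s
L = 1.0      # rest length of the rod, m (assumed)
for v in (30.0, 0.9 * c):
    factor = math.sqrt(1 - (v / c) ** 2)  # moving length = L * factor
    crossing = L / c                      # light-travel time across the rod
    print(v, factor, v * crossing)        # distance the rod moves meanwhile

At 30 m/s the contraction factor differs from 1 only in the fifteenth decimal place; at 0.9c the rod moves 0.9 m while light crosses its one-metre length.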

Here we are reminded of an anecdote related to a famous scientist. Once he directed two of his students to measure the wave-length of sodium light precisely. Both students returned with different results – one resembling the accepted value and the other different. Upon enquiry, the second student replied that he had also come up with the same result as the other, but since everything, including the Earth and the scale on it, is moving, he had applied length contraction to the scale, treating Betelgeuse as a privileged frame of reference. This changed the result. The scientist told them to follow the operation as at (a) above and recalculate the wave-length of light without reference to Betelgeuse or any other privileged frame of reference. After some time, both students returned to report that the wave-length of light is infinite. To the surprised scientist they explained that since the scale is moving with the light, its length would shrink to zero; hence it would require an infinite number of scales to measure the wave-length of light!

Some scientists try to overcome the problem of length contraction in the above example by pointing out that length contraction occurs only in the direction of travel: if we hold the rod transverse to the direction of travel, there will be no length contraction of the rod. But we fail to understand how the length can be measured by holding the rod transverse to the direction of travel. If the light path is also transverse to the direction of motion, then the terms c+v and c−v vanish from the subsequent equations, making the entire theory redundant. And if the observer moves together with the given measuring-rod and the rod to be measured, and measures the length of the rod directly by superposing the measuring-rod while moving with it, he will not find any difference whatsoever. Thus, the entire paper rests on a wrong premise. But this is not the only fallacy; there are many more in the concepts and in the “mathematics”.

Special Relativity focuses on “inertial frames” and “inertial observers”, which are in special states of motion with no relative acceleration, hence no general relativistic effects. This means that the observers and frames have constant relative velocity. Thus, according to the Principle of Relativity, the velocity between the related bodies is un-measurable, as observers on them cannot notice it; and since the principle does not apply to non-related bodies, the velocity between those is also un-measurable. This provides another contradiction!

What all of these arguments miss is the common reference frame in which all of them are moving, like passengers in a train. To a person standing on the platform, all of them move with equal velocity. Inside the train, one passenger can stand still while others pass by him, and he can measure their relative velocity at that point accurately. The relative velocity of the train with respect to the observer standing outside does not affect the former. Light behaved like that while the students measured the wave-length of the passing light. Thus, the views of Einstein are contrary to observation, and since the rest of Einstein’s arguments follow this wrong notion, his entire theory of SR is wrong!

Further, in relativity, there is no place for a privileged frame of reference. But in the very same paper on SR, Einstein uses a privileged frame of reference to prove his theory! We quote from the paper with our comments:

Einstein: “If at the point A of space there is a clock, an observer at A can determine the time values of events in the immediate proximity of A by finding the positions of the hands which are simultaneous with these events. If there is at the point B of space another clock in all respects resembling the one at A, it is possible for an observer at B to determine the time values of events in the immediate neighborhood of B. But it is not possible without further assumption to compare, in respect of time, an event at A with an event at B. We have so far defined only an “A time” and a “B time”. We have not defined a common “time” for A and B, for the latter cannot be defined at all unless we establish by definition that the “time” required by light to travel from A to B equals the “time” it requires to travel from B to A. Let a ray of light start at the “A time” tA from A towards B, let it at the “B time” tB be reflected at B in the direction of A, and arrive again at A at the “A time” t’A.

In accordance with definition the two clocks synchronize if: tB – tA = t’A - tB.”

Our comments: Einstein has not defined time precisely, but has only given an operational definition of time, which he subsequently manipulated to suit his convenience. He specifically mentions in the paper: “Now we must bear carefully in mind that a mathematical description of this kind has no physical meaning unless we are quite clear as to what we understand by ‘time’. We have to take into account that all our judgments in which time plays a part are always judgments of simultaneous events. If, for instance, I say, ‘That train arrives here at 7 o’clock’, I mean something like this: ‘The pointing of the small hand of my watch to 7 and the arrival of the train are simultaneous events”. “It might appear possible to overcome all the difficulties attending the definition of ‘time’ by substituting ‘the position of the small hand of my watch’ for ‘time’.”

Both space and time are perceived through our notions of priority and posterity. Time is the interval between events, just as space is the interval between objects. Space relates to intervals between objects, which are associated with numbers; time relates to the interval between two successive temporal states, which are related to evolutionary changes in objects. Measurement is a process of comparison between similars. We measure these intervals individually with reference to a common yardstick and compare the results of such measurements to find the time in terms of multiples (or fractions) of the interval. Thus, the common yardstick is a privileged frame of reference, negating the very principles of relativity. Ever-changing processes cannot be measured except in time, which compares the interval between two successive temporal states with another interval, between two events, that is fairly repetitive and easily intelligible, like the movement of the hands of a clock. Since we observe the state and not the process during measurement, objects under ideal conditions evolve independently of being perceived; what we see reflects only a temporal state of their evolution. Any mechanical or other functional defect in one clock will not affect the time evolution of the other, but will give a different reading. Thus, a mechanical defect in the clock will not delay the arrival of the train, but only show the arrival at a different time, which obviously is a wrong reading.

Light leaving A and reaching B are two different events with some interval between them. Similarly, light leaving B and reaching A are two different events with some interval. Since the distance between points A and B and the velocity of light are assumed to be constant, all the equation tB – tA = t’A - tB means is that the “clock correction” needed between the identical clocks located at A and B, otherwise synchronized with reference to a third clock (explained by Einstein in the passage below), is zero. It does not define the “A time” t’A or the “B time” t’B, or a common “time” for both, other than their synchronization with a common reference frame (which obviously is a privileged frame of reference, negating relativity). Thus, the inference drawn by Einstein is wrong, as it does not take into account the possibility of a clock malfunctioning, whether for mechanical reasons or under different field conditions.

The constant speed of light only means that light covers equal distances in equal time units. Using this, or a multiple or fraction of it, as the unit, the fixed (uniformly accelerating) distance between A and B can be measured by way of length comparison. But this is not time measurement, as A and B are not time-variant events but time-invariant positions. Of course, we have the choice of taking the interval between the events when light leaves A and reaches B as the unit, and comparing other intervals with it to measure time. But light travels at different velocities in different media, and the interval needed to cross the same distance in various media will not be the same. This puts severe restrictions on the proposition, which cannot be used universally. For example, if there were a very hot or very cold cloud of gas between points A and B, not equidistant from both, the results would differ, as is evident from absorption and emission spectra: some of the wave-lengths are absorbed by the gas cloud, and since the cloud is not at the center, this happens at different intervals for the two directions of motion.

Einstein: “We assume that this definition of synchronism is free from contradictions, and possible for any number of points; and that the following relations are universally valid:
1. If the clock at B synchronizes with the clock at A, the clock at A synchronizes with the clock at B.
2. If the clock at A synchronizes with the clock at B and also with the clock at C, the clocks at B and C also synchronize with each other.
Thus with the help of certain imaginary physical experiments we have settled what is to be understood by synchronous stationary clocks located at different places, and have evidently obtained a definition of “simultaneous”, or “synchronous”, and of “time”. The “time” of an event is that which is given simultaneously with the event by a stationary clock located at the place of the event, this clock being synchronous, and indeed synchronous for all time determinations, with a specified stationary clock.” (Italics and boldness marked by us.)

Our comments: Einstein sets out in the introductory part of his paper: “…the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest. They suggest rather that, as has already been shown to the first order of small quantities, the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good. We will raise this conjecture (the purport of which will hereafter be called the “Principle of Relativity”) to the status of a postulate…” The “Principle of Relativity” is restricted to comparison of the motion of one frame of reference relative to another. Introduction of another or a privileged frame of reference collapses the equations as it no longer remains relativistic. Thus, what Einstein defines as the “Principle of Relativity” is different from what he tries to prove. The clock at A has been taken as a privileged frame of reference for comparison of other frames of reference, i.e., clocks at B and C. If privileged frames of reference are acceptable for time measurement, then the same should be applicable for space measurement also, which invalidates the rest of the paper. Thus, SR is a wrong concept.

Even the “mathematics” used by Einstein is wrong. We quote from the same paper with our comments:

Einstein: Let us in “stationary” space take two systems of co-ordinates, i.e. two systems, each of three rigid material lines, perpendicular to one another, and issuing from a point. Let the axes of X of the two systems coincide, and their axes of Y and Z respectively be parallel. Let each system be provided with a rigid measuring-rod and a number of clocks, and let the two measuring-rods, and likewise all the clocks of the two systems, be in all respects alike.

Now to the origin of one of the two systems (k) let a constant velocity v be imparted in the direction of the increasing x of the other stationary system (K), and let this velocity be communicated to the axes of the co-ordinates, the relevant measuring-rod, and the clocks. To any time of the stationary system K there then will correspond a definite position of the axes of the moving system, and from reasons of symmetry we are entitled to assume that the motion of k may be such that the axes of the moving system are at the time t (this “t” always denotes a time of the stationary system) parallel to the axes of the stationary system.

We now imagine space to be measured from the stationary system K by means of the stationary measuring-rod, and also from the moving system k by means of the measuring-rod moving with it; and that we thus obtain the co-ordinates x, y, z, and ξ, η, ζ respectively. Further, let the time t of the stationary system be determined for all points thereof at which there are clocks by means of light signals in the manner indicated in §1; similarly let the time τ of the moving system be determined for all points of the moving system at which there are clocks at rest relatively to that system by applying the method, given in §1, of light signals between the points at which the latter clocks are located.

To any system of values x, y, z, t, which completely defines the place and time of an event in the stationary system, there belongs a system of values ξ, η, ζ, τ, determining that event relatively to the system k, and our task is now to find the system of equations connecting these quantities.

In the first place it is clear that the equations must be linear on account of the properties of homogeneity which we attribute to space and time.

If we place x’=x-vt, it is clear that a point at rest in the system k must have a system of values x’, y, z, independent of time. We first define τ as a function of x’, y, z, and t. To do this we have to express in equations that τ is nothing else than the summary of the data of clocks at rest in system k, which have been synchronized according to the rule given in §1.

From the origin of system k let a ray be emitted at the time τ₀ along the X-axis to x’, and at the time τ₁ be reflected thence to the origin of the co-ordinates, arriving there at the time τ₂; we then must have ½(τ₀ + τ₂) = τ₁, or, by inserting the arguments of the function τ and applying the principle of the constancy of the velocity of light in the stationary system:

½ [τ(0,0,0,t) + τ(0,0,0, t + x’/(c−v) + x’/(c+v))] = τ(x’,0,0, t + x’/(c−v))

Hence, if x' be chosen infinitesimally small,
½ {1/(c-v) + 1/(c+v)}(∂ τ / ∂ t) = (∂ τ / ∂ x’) + {1/ (c-v)} (∂ τ / ∂ t’)
or (∂ τ / ∂ x’) + {v / (c2 – v2)} (∂ τ / ∂ t) = 0.

It is to be noted that instead of the origin of the co-ordinates we might have chosen any other point for the point of origin of the ray, and the equation just obtained is therefore valid for all values of x’, y, z.

An analogous consideration - applied to the axes of Y and Z - it being borne in mind that light is always propagated along these axes, when viewed from the stationary system, with the velocity √(c² − v²), gives us:
∂τ/∂y = 0, and ∂τ/∂z = 0.

Since τ is a linear function, it follows from these equations that
τ = α [t − {v/(c² − v²)} x’]
where α is a function φ(v) at present unknown, and where for brevity it is assumed that at the origin of k, τ = 0, when t=0.

Our comment: Apart from the fallacies with the measuring devices discussed earlier, although Einstein creates a time function (or “Funktion” as he calls it) as noted in his original manuscript, he incorrectly treats it as an equation in the remainder of his derivation, as follows. According to his original manuscript (Einstein has used V for the velocity of light):
“Aus diesen Gleichungen folgt, da τ eine lineare Funktion ist:

τ = α [t − {υ/(V² − υ²)} x’]

wobei α eine vorläufig unbekannte Funktion φ(υ) ist und der Kürze halber angenommen ist, daß im Anfangspunkte von k für t = 0, τ = 0 sei.”

The problem occurs because Einstein wrote the function as well as the equation informally. The difference between the informal and the formal versions can be exemplified as follows:
While the informal equation is:

τ = α [t − {υx’/(V² − υ²)}]

The formal equation would be:

τ (x’, y, z, t) = α [t − {υx’/(V² − υ²)}]
Functions must be invoked before they are used. They can be explicitly or implicitly invoked. In computer science, functions are typically explicitly invoked due to the specific way in which instructions are communicated to the computer. This is a fundamental difference between the human brain and the computer, and the reason why computers can never become “alive”. In other disciplines, which are basically functions of the brain, functions are more often implicitly invoked and treated as equations. Generally, this does not create a problem unless the arguments invoked are complex. In order to illustrate the problem, the function must be explicitly invoked. By presenting the derivation in this manner, it is easy to show that Einstein actually uses two time equations: one as the stand-alone equation and the other to produce the x-axis transformation equation. The time function:

τ (x’, y, z, t) = α [t − {υx’/(V² − υ²)}] is invoked as τ {x − vt, 0, 0, x’/(V − v)} = τ {x − vt, 0, 0, t}

By invoking the function explicitly, it becomes evident that Einstein implicitly invoked the function twice: once to produce the stand-alone time equation and once to create the x-axis transformation equation. Einstein performed the second, implicit invocation by replacing t with x’/(V − v) in creating the x-axis transformation. However, he did not use the same complex argument when producing the stand-alone time equation. The result is an invalid system of equations, which is discovered and validated using the “if a = bc, then b = a/c” rule. Earlier, we had shown that replacing t with x’/(c−v) or x’/(c+v) is erroneous.
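The distinction between the two invocations can be made concrete in code. Below is a minimal sketch in Python, with made-up numerical values for υ, V and α (the actual values are immaterial to the point), showing that the same time function returns different results when invoked with t and with x’/(V − v) as its fourth argument:

V = 3.0e8          # Einstein's symbol for the velocity of light
v = 1.0e8          # relative velocity of frame k (arbitrary choice)
alpha = 1.0        # the as-yet-unknown factor, set to 1 for illustration

def tau(xp, y, z, t):
    # The time function tau(x', y, z, t) = alpha*(t - v*x'/(V**2 - v**2))
    return alpha * (t - v * xp / (V**2 - v**2))

xp, t = 100.0, 2.0e-6
print(tau(xp, 0, 0, t))             # invoked with the argument t
print(tau(xp, 0, 0, xp / (V - v)))  # invoked with the argument x'/(V - v)

The two print statements return different numbers, which is the sense in which two distinct invocations of the same function have been treated as one equation.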

Further, Einstein says that “instead of the origin of the co-ordinates we might have chosen any other point for the point of origin of the ray, and the equation just obtained is therefore valid for all values of x’, y, z.” This statement is not correct: he has assumed the light pulse to spread out spherically, and while there is no valid equation for a sphere, the equation for a circle with its center at the origin is different from that of another circle whose center is not at the origin. Thus the mathematics is wrong.

Einstein: With the help of this result we easily determine the quantities ξ, η, ζ, by expressing in equations that light (as required by the principle of the constancy of the velocity of light, in combination with the principle of relativity) is also propagated with velocity c when measured in the moving system.

For a ray of light emitted at the time τ = 0 in the direction of the increasing ξ,

ξ = cτ or ξ = αc [t − {v/(c² − v²)} x’]

But the ray moves relatively to the initial point of k, when measured in the stationary system, with the velocity c-v, so that: x’/(c-v) = t.

If we insert this value of t in the equation for ξ, we obtain: ξ = α {c²/(c² − v²)} x’.

In an analogous manner we find, by considering rays moving along the two other axes, that: η = cτ = αc [t − {v/(c² − v²)} x’], when y/√(c² − v²) = t and x’ = 0.

Thus: η = α {c/√(c² − v²)} y and ζ = α {c/√(c² − v²)} z.

Substituting for x' its value, we obtain:
τ = φ(v) β {t − (υx/c²)}
ξ = φ(v) β (x − υt)
η = φ(v) y
ζ = φ(v) z,
where,
β = 1/√{1 − (υ²/c²)}
and φ is an as yet unknown function of v. If no assumption whatever be made as to the initial position of the moving system and as to the zero point of τ, an additive constant is to be placed on the right side of each of these equations.

Our comments: As per the mathematical rule, if a = bc, then b = a/c. However, Einstein’s formulation above fails the test of this rule. By following Einstein’s computation, we find that:

ξ = cτ, as also ξ = φ(v) β (x − υt) and τ = φ(v) β {t − (υx/c²)}

Now, we find further that generally ξ/c ≠ τ, because ξ indicates a certain position on the x-axis at a certain time τ from a designated epoch.

or, φ(v) β (x − υt)/c ≠ φ(v) β {t − (υx/c²)}, or, (x − υt)/c ≠ t − (υx/c²)
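The comparison can be checked symbolically. A minimal sympy sketch (with the common factor φ(v)β cancelled from both sides, as above):

# Symbolic check of the comparison above: after cancelling the common
# factor phi(v)*beta, does (x - v*t)/c equal t - v*x/c**2 in general?
import sympy as sp

x, t, v, c = sp.symbols('x t v c', positive=True)
diff = (x - v*t)/c - (t - v*x/c**2)

# The difference factors as (c + v)*(x - c*t)/c**2, which is non-zero
# for a general point (x, t); it vanishes only where x = c*t.
print(sp.factor(diff))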

This represents a mathematical fallacy that needs to be corrected. This mistake invalidates the remainder of Einstein’s derivation of the Special Theory of Relativity.

Einstein: We now have to prove that any ray of light, measured in the moving system, is propagated with the velocity c, if, as we have assumed, this is the case in the stationary system; for we have not as yet furnished the proof that the principle of the constancy of the velocity of light is compatible with the principle of relativity.

At the time t = τ = 0, when the origin of the co-ordinates is common to the two systems, let a spherical wave be emitted there from, and be propagated with the velocity c in system K. If (x, y, z) be a point just attained by this wave, then
x² + y² + z² = c²t².

Transforming this equation with the aid of our equations of transformation, we obtain after a simple calculation: ξ² + η² + ζ² = c²τ²

The wave under consideration is therefore no less a spherical wave with velocity of propagation c when viewed in the moving system. This shows that our two fundamental principles are compatible.

Our comments: Einstein has allowed the time variable “t” to behave as an independent variable in the final equations, although in each derivation the time variable “t” begins (in some manner) as a dependent variable. For example, he has used the equations x² + y² + z² − c²t² = 0 and ξ² + η² + ζ² − c²τ² = 0 to describe the two spheres that the observers see of the evolution of the same light pulse. The above equation of the sphere is mathematically wrong. It describes a sphere with its center at the origin whose z-axis is zero, i.e., not a sphere, but a circle. This is because the term c²t² represents the square of the radius of the sphere. For a circle with its center at the origin, the general equation is x² + y² − r² = 0, where r represents the radius; hence r² equals c²t² in the above example, as the radius of the sphere is equal to the radius of the circle it contains. Thus, we can write x² + y² − c²t² = 0. Since x² + y² + z² − c²t² = 0 and x² + y² − c²t² = 0, the obvious possibility is z² = 0, or z = 0. Thus, the equation x² + y² + z² − c²t² = 0 represents a circle with its center at the origin.

It also shows how Einstein treats time differently. Since the general equation of a sphere is supposed to be x² + y² + z² + Dx + Ey + Fz + G = 0 (which is not mathematically valid), both the equations can at best describe two spheres centered at the points (x, y, z) and (ξ, η, ζ) respectively. Since the second person is moving away from the origin, the second equation is not applicable in his case. Assuming he sees the same sphere, he must know its origin (because he has already seen it; otherwise he would not know that it is the same light pulse, and there would be no way to correlate the two pulses) and its present location. In other words, he will measure the same radius as the other person, implying:

c²t² = c²τ², or t = τ.

Again, if x² + y² + z² − c²t² = ξ² + η² + ζ² − c²τ², then t ≠ τ.

This creates a contradiction, which invalidates his mathematics. There are many more examples like these that conclusively prove SR even mathematically invalid.

In 2005, researchers at the MAGIC gamma-ray telescope on La Palma in the Canary Islands were studying gamma-ray bursts emitted by the black hole in the centre of the Markarian 501 galaxy about half a billion light years away from us. The burst’s high-energy gamma rays arrived at the telescope 4 minutes later than the lower energy rays. Both parts of the spectrum should have been emitted at the same time. Questions have been raised regarding the reasons for the time lag. Is it due to the high-energy radiation traveling slower through space? That would contravene one of the central tenets of special relativity: that all electromagnetic radiation always travels through vacuum at the speed of light. The energy of the radiation should be absolutely irrelevant. The MAGIC result suggests that Special Relativity is only an approximation of how things really work. The mystery has only deepened with the launch of NASA’s Fermi gamma-ray space telescope. It has observed high-energy photons arriving up to 20 minutes behind zippier low-energy ones from a source 12 billion light years away.

In General Relativity (GR), mass is taken as the agency that causes space-time curvature. But neither space nor time is mass, as they represent the interval between objects and their temporal evolutionary stages. There cannot be any geometry of space, as space is the apparent “void” between objects. Since this apparent “void” cannot be perceived by us, we use alternative symbolism (we call it vikalpana) - the geometry of the objects that define the particular space - to describe the contours of space. Thus, the geometry belongs to the objects and not to the space. If we point out Mr. X as the “person wearing the red hat”, all persons wearing red hats cannot be described as Mr. X, nor does the red hat become a defining characteristic of Mr. X.

Particles are said to follow the shortest paths between points (if unaffected by other forces). The definition of “distance” is changed so that “shortest path” makes sense for geodesics. But this glosses over the difference between distance and shortest path, which may not be the same for two objects placed on a flat surface and on a curved surface, i.e., a geodesic. While an object may cover the shortest path on a flat surface without any hindrance, the same is not true for a curved surface.

It is generally not realized that Einstein laid the foundations for the principle of equivalence in his SR paper dated 30-06-1905, though in a different context. He substituted “the position of the small hand of my watch” as an equivalent for “time”. He wrote:

Einstein: “It might appear possible to overcome all the difficulties attending the definition of “time” by substituting “the position of the small hand of my watch” for “time”. And in fact such a definition is satisfactory when we are concerned with defining a time exclusively for the place where the watch is located; but it is no longer satisfactory when we have to connect in time series of events occurring at different places, or - what comes to the same thing - to evaluate the times of events occurring at places remote from the watch.

We might, of course, content ourselves with time values determined by an observer stationed together with the watch at the origin of the co-ordinates, and coordinating the corresponding positions of the hands with light signals, given out by every event to be timed, and reaching him through empty space. But this co-ordination has the disadvantage that it is not independent of the standpoint of the observer with the watch or clock, as we know from experience. We arrive at a much more practical determination along the following line of thought”.

Our comments: The line of thought referred to above has already been discussed. Through it, Einstein laid the foundations for the principle of equivalence, by equating two different actions and their intervals: the fairly repetitive and easily intelligible interval between successive risings of the Sun (or successive positions of the small hand of the watch) starting from an epoch (here zero hours), and the interval between the same epoch and any other event, such as the arrival of the train at the platform. Einstein says that since these two events synchronize, the intervals of these events from any epoch will be equivalent - hence indistinguishable - so that we can take either of the pair of events as the correct interval. If we leave aside the epoch, the two intervals or events cannot be linked. If we accept the epoch, it becomes a special frame of reference, negating relativity. Simultaneously, it introduces the concept of equivalence by giving us a choice to use either “time” as the interval between successive repetitive and easily intelligible natural events, like one Sun rise to the next, or “the position of the small hand of my watch”, which is a mechanically determined event.

Thus, “when we have to connect in time series of events occurring at different places, or - what comes to the same thing - to evaluate the times of events occurring at places remote from the watch”, we must refer to a common reference point for time measurement. This means we have to apply “clock corrections” to individual clocks with reference to a common clock at the time of measurement, which will make the readings of all clocks over the same interval identical. (Einstein has also done this with the clocks at A, B and C, when he defines synchronization.) This implies that to accurately measure time by some clocks, we must depend upon a preferred clock, whose time has to be fixed with reference to the earlier set of clocks whose time is to be accurately measured. Otherwise, we will land with a set of unrelated events, like the cawing of a crow and the falling of a ripe date-palm fruit occurring simultaneously. We cannot declare that whenever a crow caws, a date-palm fruit falls! A stationary clock and a clock in a moving frame do not experience similar forces acting on them. If the forces acting on them affect the material of the clocks, the readings of the clocks cannot be treated as time measurements, because in that case we will land with different time units not related to a repetitive natural event - in other words, they are like individual elements, not the members of a set. Hence the readings cannot be compared to see whether they match or differ. The readings of such clocks can be compared only after applying a clock correction to the moving clock. This clock correction has nothing to do with time dilation, but only with the mechanical malfunction of the clock.

There is nothing like empty space. Space, and the universe, is not empty, but full of the so-called Cosmic Microwave Background Radiation from the Big Bang - as it is generally referred to. In addition to this, space would also seem to be full of a lot of other wavelengths of electromagnetic radiation, from low radio frequencies to gamma rays. This can be shown by the fact that we are able to observe this radiation across the gaps between galaxies and even across the “voids” that have been identified. Since the universe is regarded as being homogeneous in all directions, it follows that any point in space will have radiation passing through it from every direction, bearing in mind Olbers’ paradox about infinite quantities etc. The “rips” in space-time that Feynman and others have written about are not currently a scientifically defined phenomenon. They are just a hypothetical concept - something that has not been observed or known to exist. Thus, “light signals, given out by every event to be timed, and reaching him through empty space” would be affected by these radiations and get distorted.

According to GR, which espouses the Principle of Equivalence, acceleration is the same as gravity. Einstein also assumed that the Earth and the apple are stationary. But he further assumed that their intervening space moves or “curves” in time, so that the “space-time geometry” changes towards the higher mass and the apple touches the Earth. We have earlier shown that the concept of “space-time geometry” is fallacious. In reality, it is the geometry of the objects whose interval describes the designated space that is erroneously called the “space-time geometry”. Thus, the terms used by Einstein are highly misleading.

Einstein’s Law of Gravity is based on the postulate that no observations made inside an enclosed chamber can answer the question of whether the chamber is at rest or moving along a straight line at constant speed. But the deviation from uniform motion is all too apparent. In order to deal with the problem of non-uniform motion, Einstein imagined a laboratory in a spaceship located far from any large gravitating masses. When he advocated his thought experiment, no one had any idea about the conditions in space such as we have now. He postulated that if the space vehicle is at rest, or in uniform motion with respect to the distant stars, the observers inside, and all their instruments that are not secured to the walls, will float freely without any up and down direction, which is correct. When the ship accelerates, he postulated that the instruments and the people would be pressed to the wall opposite to the direction of motion, which is not true. According to his postulate, this wall would appear as the floor and the opposite wall as the ceiling. He thought that if the acceleration is equal to the acceleration of gravity on the surface of the earth, the passengers may well believe that their ship is still standing on its launching pad.

Then he assumed that one of the passengers simultaneously releases two spheres, one of iron and one of wood, which he has been holding next to each other in his hands. What, according to him, “actually” happens is that both the spheres were undergoing accelerated motion along with the observer and the whole ship, and now move side by side. The ship itself, however, continuously gains speed, so that the “floor” of the ship quickly overtakes the two spheres and hits them simultaneously (which, as Einstein could not have known at that time, is not true in space). Einstein assumed that to the observer inside the ship the experiment will look different: the balls drop and hit the “floor” at the same time. Recalling Galileo’s demonstration from the leaning tower of Pisa, Einstein claimed that the observer will think that a (gravitational) field exists in his space laboratory.

Einstein thought that the so-called principle of equivalence is quite general and holds also for optical and other electromagnetic phenomena, for example, for a beam of light propagating across the space laboratory in a “horizontal” direction. If its path can be traced by means of a series of vertical fluorescent glass plates spaced at equal distances, to an observer inside the chamber it will look like a parabola bending toward the floor. If he considers acceleration phenomena as being caused by gravity, he will say that a light ray is bent when propagating through a (gravitational) field. But this is a wrong description of facts.

The cornerstone of GR is this principle of equivalence. It has been generally accepted without much questioning. But if we analyze the concept scientifically, we find a situation akin to Russell’s paradox in set theory, as described below. The general principle is this: in one there cannot be many, implying that there cannot be a set of one element, or that a set of one element is superfluous. And there cannot be many without one, meaning there cannot be many elements if there is no set - they would be individual members unrelated to each other, whereas relation is a necessary condition of a set. Thus, in the ultimate analysis, a collection of objects is either a set with its elements or a collection of individual objects that are not the elements of a set.

Let us examine set theory and consider the property p(x): x ∉ x, which means the defining property p(x) of any element x is that it does not belong to x. Nothing appears unusual about such a property. Many sets have this property. A library [p(x)] is a collection of books. But a book is not a library [x ∉ x]. Now, suppose this property defines the set R = {x : x ∉ x}. It must be possible to determine whether R ∈ R or R ∉ R. However, if R ∈ R, then the defining property of R implies that R ∉ R, which contradicts the supposition that R ∈ R. Similarly, the supposition R ∉ R confers on R the right to be an element of R, again leading to a contradiction. The only possible conclusion is that the property “x ∉ x” cannot define a set. This idea is also known as the Axiom of Separation in Zermelo-Fraenkel set theory, which postulates that “objects can only be composed of other objects” or “objects shall not contain themselves”.
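The contradiction can be played out in a few lines of code. The sketch below (a toy illustration in Python, using a list that is made to contain itself) shows that the predicate “x does not belong to x” flips its own verdict either way:

def p(x):
    # The Russell property: "x is not a member of x"
    return x not in x

R = []           # stand-in for R = {x : x not in x}
R.append(R)      # force R to contain itself
print(p(R))      # False: R is in R, so by its defining property R should NOT be in R
R.remove(R)      # now R does not contain itself
print(p(R))      # True: R is not in R, so by its defining property R SHOULD be in R

Whichever membership status we assign to R, the defining property demands the opposite, which is exactly the paradox described above.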

In order to avoid this paradox, it has to be ensured that a set is not a member of itself. It is convenient to choose a “largest” set in any given context, called the universal set, and confine the study to the elements of that universal set only. This set may vary in different contexts, but in a given set-up the universal set should be so specified that no occasion ever arises to digress from it. Otherwise, there is every danger of colliding with paradoxes such as the Russell paradox, which asks: “S is the set of all sets which do not have themselves as a member. Is S a member of itself?” Or, as it is put in everyday language: “A man of Seville is shaved by the Barber of Seville if and only if the man does not shave himself.” Such is the problem with the theory of General Relativity and the principle of equivalence.

Inside a spacecraft in deep space, objects behave like suspended particles in a fluid, or like the asteroids in the asteroid belt. Usually, they are stationary in the medium that moves, unless some other force acts upon them. This is because the distribution of mass inside the spacecraft and its volume determine the average density of the spacecraft, and the average density of the local medium of space is factored into this calculation (we will explain this calculation later while theorizing the value of G). The passengers will notice this difference once they leave the confines of the Earth. Similarly, the light ray can be related to the spacecraft only if we consider the bigger frame of reference of the space containing the spacecraft. If we do not consider outside space (if the ray is emitted within the spacecraft), the ray will move straight. If we consider outside space, the reasons for the curvature become apparent. In either case, Einstein’s description is faulty. Thus, both SR and GR, including the principle of equivalence, are wrong descriptions of facts. Hence all mathematical derivations built upon these wrong descriptions are wrong. We will explain all so-called experimental verifications of SR and GR by alternative mechanisms or other verifiable explanations.

Problems with the causality principle also exist for GR in this connection, such as explaining how the external fields between binary black holes manage to continually update without benefit of communication with the masses hidden behind event horizons. These causality problems would be solved without any change to the mathematical formalism of GR, but only to its interpretation, if gravity is once again taken to be a propagating force of nature in flat space-time, with the propagation speed indicated by observational evidence and experiments: not less than 2 × 10¹⁰ c. Such a change of perspective requires no change in the assumed character of gravitational radiation or its light-speed propagation. Although faster-than-light force propagation speeds do violate Einstein’s special relativity (SR), they are in accord with Lorentzian relativity, which has never been experimentally distinguished from SR - at least, not in favor of SR. Indeed, far from upsetting much of current physics, the main changes induced by this new perspective are beneficial to areas where physics has been struggling, such as explaining experimental evidence for non-locality in quantum physics, the dark matter issue in cosmology, and the possible unification of forces. Recognition of a faster-than-light propagation of gravity, as indicated by all existing experimental evidence, may be the key to taking conventional physics to the next plateau.

FALLACIES OF RELATIVISTIC MASS, TIME-DILATION & VELOCITY OF LIGHT:

The special theory of relativity is a quintessential example of mathematical physics at odds with reality. Assuming the theory is correct, if we try to interpret its mathematical results in terms of everyday experience, it leads to strange anomalies. This has led some people to postulate that we should disregard what our sense organs perceive as reality. As a result, theoretical physics has gone awry, engaging in flights of fancy without constraint so long as they are supported by esoteric “mathematics”. It is time to rectify the misinterpretations of Einstein and others. The areas that need clarification in this regard are Time Dilation, Relativistic Mass, and the velocity of light as a limit (the denial of superluminal velocities). These three are all tied together, one dependent upon the other. The principle of mass-energy equivalence also needs clarification, as the present understanding of mass and energy, which exhibit opposing characteristics, is not correct.

The Dual Velocity Theory of Relativity negates one of Einstein’s postulates: that the velocity of light is the limiting velocity. In fact, it proves the existence of superluminal velocities and the non-existence of relativistic mass, as described below. Let us assume that the “observed” longitudinal length in an observed frame decreases with an increase in relative velocity. We can treat longitudinal length as the same thing as “distance”, because longitudinal length is also a distance. Further, velocity is distance covered per unit time. This leads to the conclusion: since longitudinal length reduces with an increase in velocity, the “observed” velocity reduces with an increase in the velocity of the moving frame. This can be restated as: the velocity of the moving frame is greater than the observed velocity. Thus we have two velocities - the observed velocity, and the true velocity of the moving frame. The two velocities must be Lorentz variant, i.e., the observed velocity equals the true velocity multiplied by the Lorentz transformation factor. Thus: V × R = v, where V is the velocity of the moving frame, v the observed velocity, and R the Lorentz transformation factor √(1 − v²/c²). Thus, according to the formulations of Einstein, there are two velocities: that which is extant, and that which is observed. It can be seen that: V² = v²c²/(c² − v²), or V = vc/√(c² − v²).

As V tends to infinity, (c² − v²) tends to zero, so that v tends to c. Therefore we can conclude that superluminal velocities exist. We may also conclude that in p = mv/R, the R applies to the velocity v and not to the mass m. Hence there is no such thing as relativistic mass. The mass can be treated as invariable, and v is the observation of V, which tends to infinity as R tends to zero.
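Inverting V = vc/√(c² − v²) gives v = Vc/√(c² + V²), so the relation can be tabulated numerically. A minimal sketch in Python (the sample multiples of c are arbitrary):

import math

c = 299_792_458.0   # metres per second

def observed(V):
    # Observed velocity corresponding to a true (Newtonian) velocity V,
    # from v = V*c/sqrt(c**2 + V**2), the inverse of V = v*c/sqrt(c**2 - v**2)
    return V * c / math.sqrt(c**2 + V**2)

for multiple in (0.5, 1.0, 12.0, 1000.0):
    V = multiple * c
    print(f"V = {multiple:>6} c  ->  v = {observed(V) / c:.6f} c")

However large the true velocity V is made, the observed velocity v stays below c, which is the sense in which c is the limiting observed velocity rather than the limiting velocity.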

Another oversight by Einstein was that he did not make perfectly clear that mass increase and longitudinal foreshortening were “hypothetical” – observed transformations only and not real. His flawed mathematics showed infinite requirements for momentum and energy as a body approached c. Since longitudinal length seemed to contract only to the observer, the velocity along that path must also be accepted as contracted only for the observer and not for the moving system itself. In that case it could be assumed that Newtonian infinite velocity was possible. As that velocity approached infinity, its “observed measurement” approached c. This would account for the inordinate energy and momentum that accompany the “measured” velocities. It can be seen that those parameters fit exactly the corresponding Newtonian velocities. This would settle the “relativistic mass” controversy because that was created to explain the excess momentum.

Einstein’s original math showed that:
τ = β {t − (υ/V²) x}
This implies that as the velocity of a body approaches c, its transit time over a longitudinal distance approaches zero. The transit time approaching zero was described as time dilation. He gave no cause-and-effect reason for the mass increase and time dilation. Before we proceed further, it is necessary to examine “time” and its measurement in detail. Time is a perceived aspect of the Universe which orders the sequence of events. A designated instant in that sequence is called the time of day, or technically an epoch. Time is the extraordinary cause for the notions of sequence, i.e., the arrangements of priority and posterity of events (for a detailed discussion, please refer to our book: Vaidic Theory of Numbers). Alternatively, time is the interval between events. Spatial dimensions, volume, mass, etc., are used to differentiate objects from each other. Events are used to differentiate time, and number is used to differentiate everything from everything else.

Time does not have a physical existence; it is only a mental construct, described using alternative symbolism for the intervals between successively occurring events and compared with a perceivable and repetitive natural phenomenon. We perceive objects as they are “now”, which is a perpetually moving state. Thus, along with the object as it is at “present”, we also observe the evolutionary changes by comparing the changing state with the previous state in our memory. This changing state creates an impression of a succession of states, which may not be uniform. This perception creates the impression of an interval between the different states from an epoch - usually called “now”. When we observe such evolutionary changes in other objects from the same epoch, we perceive the interval to be different from the previous interval. Thus, to compare the two intervals with each other, we use a standard interval obtained by observing a repetitive natural phenomenon and subdividing it suitably to compare the unit with these intervals. This is exactly what Einstein has done in a primitive way. His clock at A is nothing but the standard time unit. His clocks at B and C are nothing but two action-sequence measuring devices that measure the intervals between the two events. Without the standard unit (the clock at A in Einstein’s 1905 paper), the intervals (readings of the clocks at B and C) cannot be compared - hence they do not make any sense. Yet, in his subsequent mathematics, Einstein has ignored the clock at A and tried to correlate the clocks at B and C (or rather A and B), thereby creating various paradoxes, including a situation resembling Russell’s paradox discussed earlier. Both BA and BC form two equivalent sets. But A or C alone cannot be a member of a set.

Further, the observation of an event by an external observer is not relevant, as the external observer is not the cause of the progression of the action sequence of an independently observed event. The place of the observer in quantum physics has been described separately to show that the above view is correct (please refer to our essay “Is Reality Digital or Analog”, published in the FQXi Forum). For the moment, it will suffice to point out that the discovery of “un-collapse” or “collapse reversal” has shaken the earlier beliefs regarding observer-created reality. Thus, an observer may be aware of the “existence” of an action sequence at any given point of time (the arbitrarily frozen time “now”), but his awareness or otherwise does not affect the evolution of the action sequence in time, as it is a continuing process. This is very important, as the division of time into past, present and future depends on the moment of observation, whereas the existence of an object is related to the present only. The velocity of light, though constant, cannot be used as a time unit, because it cannot be measured by counting repetitions: it is not a recurring phenomenon, and there is no sensible interval between successive recurrences. Further, the velocity of light varies according to the density of the medium. Thus, at best, it can be used as a measuring tape, since it covers equal distances over equal time intervals.

Since measurement is a process of comparison between similars, time is measured first by counting the repetitions of a recurring phenomenon and, if the interval between successive recurrences is sensible, by sub-dividing it. A time interval is measured as the duration between two known epochs or by counting from an arbitrary starting point (as in stop watches). Thus, determination of time is synonymous with establishing an epoch, which may require “clock correction” - the correction that should be applied to the readings of a clock at a specified epoch. Time units are the intervals between successive recurring phenomena, such as the period of rotation of the Earth subdivided into hours, minutes and seconds. The time units are then compared with the interval of other events starting from an epoch, the result of which gives the time for that event. Several phenomena are used as the time base to be divided into hours, etc. For astronomical purposes, sidereal time is used; for terrestrial purposes, solar time is used; and so on. Thus, the hands of the clock are not the measure of time per se, but their rotation becomes an hour if they rotate 24 times during one rotation of the Earth, and so on. But if there is any mechanical, metallurgical or other factor that makes the ticks irregular, “the position of the small hand of my watch” cannot be a measure of “time”.

Further, without linking to a natural unit, such as the Earth’s rotation, the ticks or the position of the hand of a clock cannot be a unit for time. While measurement is a process of comparison between similars, the unit has to be described specifically by a defining characteristic. Einstein followed this principle (described earlier) while comparing the clocks at B and C with the clock at A, which he treated as a preferred frame of reference for synchronization. Thus, only those clocks, the multiples of whose ticks synchronize with a natural repetitive event, qualify to be used as time-measurement devices. In other words, the readings of a clock which is not synchronized with the sub-divisions of a repetitive natural event (a preferred frame of reference) cannot be accepted as giving a correct reading of time. The repetitive natural event becomes a preferred frame of reference, and all clocks must synchronize with it. This is true even for atomic clocks, whose second has been determined by synchronizing the number of successive oscillations of the cesium atom to the natural second. The readings of a clock that shows a different reading cannot be considered for time measurement, and its different reading has to be explained by the conditions affecting the material of the clock, and not by time itself. Otherwise, the readings of the clocks will be treated as unsynchronized, separate readings, and we will land in a problem like Russell’s paradox, as has been described elsewhere.

The absolute length of a rod cannot change merely because it is measured from a different frame, because the scale also undergoes a similar observed dilation. This principle applies equally to an absolute time interval and an absolute mass: they do not change when measured in different frames. However, an absolute length, time interval or mass can be described using different parameters (e.g., different units, as in the c-g-s, m-k-s or f-p-s systems). One must conclude that lengths, time intervals and masses are absolute and exist independently of the observer. They never change as long as they remain within one constant frame. However, they appear to change with respect to an observer in a different frame, because they are then compared with new (observed) units located in a different frame. When a rod changes frames, the change of its length may be real, as the changed gravitational potential changes the Bohr radius. But the same happens to the scale also. When the observer changes frames but not the unit, the unit does not change in length, area or volume, or changes in proportion to the others. There is a change in the reading of the measured unit only if the reference unit is changed, such as from the inch to the centimeter. But the rod has not changed.

The usual definition of the meter is 1/299 792 458 of the distance traveled by light during one second. This definition takes the value of the velocity of light in so-called empty space. Since no place in the Cosmos is truly empty, and the velocity of light changes with the change of density in various media, what the above definition means is the velocity of light in the “least dense medium”. Thus, the term “least dense medium” needs a scientific definition, which is absent. There is no direct way to reproduce an absolute meter within a randomly chosen frame. Carrying a scale (a piece of solid matter) from one frame to another (in which the potential or kinetic energy is different) leads to a change in the Bohr radius of the atoms constituting it, consequently changing the dimensions of the scale. Some may suggest that a local meter can be reproduced in any other frame using a solid meter previously calibrated in outer space (the so-called “least dense medium”) and brought to the local frame. Though the absolute length of that local meter in the new frame will not be equal to its absolute length when it was in outer space, because the potential and kinetic energies may change from frame to frame, the change will not be perceptible, because the readings will match.

The local clock (suitable divisions of the Earth’s rotation around its own axis and its revolution around the Sun) is used to determine the second. In atomic clocks, the reference changes from the Earth to the successive oscillations of the cesium atom (which itself is not constant). This definition is not absolute, because we do not measure time or distance by using the velocity of light itself, but depend on the motion of the clock as the unit for time measurement and on a fraction of the predetermined value of the velocity of light as the unit for distance measurement. This definition of the second, which is a function of the local clock rate, which in turn changes with changes in the potential and kinetic energies of the frame of reference, is not scientific. A clock on Earth will not function at the same rate on a neutron star. Events on a neutron star cannot be measured by using a clock on Earth.

Mass contains internal energy, which is an aspect of potential energy. Due to the principle of mass-energy conservation, clocks synchronized with each other run at different rates in different gravitational potentials and show different readings between two events which would otherwise coincide with each other. Thus, from the angle of objective reality, “time” does not elapse more slowly; rather, the atoms and molecules in the clock in a different frame of reference appear to function at a slower rate, since, with a change of the gravitational potential, the Bohr radius is larger when the so-called electron mass is smaller. This is in conformity with quantum mechanics. When one says that an atomic clock runs more slowly, it means that, for that atomic clock, it takes more “time” to complete one full cycle than for an atomic clock in the initial frame, where the so-called electron has a larger mass. That slower rate can only be measured by comparing the duration of a cycle in the initial frame with the duration of a cycle in the new frame. It is the time rate measured in the initial frame at rest that can be considered the “reference time rate”. All observations are compatible with this unchanging “reference time rate”.

The change of clock rate is not unique to atomic clocks. Quantum mechanics shows that the intermolecular distances in molecules and in crystals are proportional to the Bohr radius. Consequently, due to velocity, the length of a mechanical pendulum will change. Therefore it can be shown that the period of oscillation of all clocks (electronic or mechanical) will also change with velocity. It cannot be said that “time” flows at the rate at which all clocks run because not all clocks run at the same rate. However, a coherent measure of time must always refer to the reference time rate. That reference rate corresponds to the one given by a reference clock for which all conditions are fully described. It never changes. However, all matter around us (including our own body) is influenced by a change of so-called electron mass so that we are deeply tied to the rate of clocks running in our frame. Since our body and all experiments in our frame are closely synchronized with local clocks, it is much more convenient to describe the results of experiments as a function of the clock rate in our own frame. This is what is called the “apparent time”.

In his clock gedanken experiment, Einstein described his moving clock as running “behind” the rest clock (based on his wrong concepts and wrong mathematics) and concluded that the moving clock ran slower. This is a wrong interpretation. The moving clock can be assumed to have traveled faster than the velocity measured in the fixed frame (as explained earlier) and therefore to have made its transit faster than experienced by the inertial clock, but it would then have covered a proportionately longer distance. Therefore, both clocks will be keeping the same rate. Thus, the picture of the moving clock keeping proper time yet ending its journey behind the inertial clock is not correct. There is no actual slowing of the clock. This shows that time dilation, as proposed by Einstein, does not exist.

Time dilation, as it is understood in relativity, does not really exist, because it creates a reductio ad absurdum known as the Twin Paradox. Both twins evolve equally in their local time. Any comparison between them has to be based on a reference clock, which is a preferred frame of reference. Introduction of a third frame destroys the equations of relativity, as relativity is confined to two frames related to each other, and not to all frames with reference to a preferred frame of reference. Such a condition indicates that something is wrong with the theory and that it has to be reworked.

In the Twin Paradox, Einstein discusses twins with two identical clocks, one of whom goes on a journey at a very high velocity and returns to the stationary one. According to Einstein, the traveling twin notes that the clock that journeyed with him is “behind” the stationary clock, i.e., shows less time. Einstein assumed, without any justifiable reason, that the moving clock had kept slower time. Hence he concluded that the moving twin was younger than his brother. Since the moving clock showed less time when it returned, it must be assumed that the loss of time was real and not an aberration of observation. That in turn means that the clock had to run slower in its own coordinate system - and that in turn is a contradiction, for special relativity maintains that clocks keep proper time in their respective coordinate systems. Einstein should have known that and worked on the solution. Further, he assumed without any justification that biological growth or longevity is regulated only by the state of motion.

We generally refer to the clock rate of our organism believing that we are referring to “real time”. What appears as a “time interval” for our organism is in fact the difference between two “clock displays”: one of a reference clock and the other of the clock located in our own frame. Of course, clocks are instruments for measuring time, but over the same time interval there is a difference, by a factor of proportionality, between the “differences of clock displays” of different frames. Thus, the common notion of “time” is an apparent interval between two events corresponding to the difference of clock displays in a given frame when no correction has been made to compare it with the reference time. Since all our clocks and biological mechanisms depend on the electron’s so-called mass and energy, humans feel nothing unusual when going to a new frame. However, the time measured by the observer in that new frame is an apparent time, and it must be corrected before being compared with a time interval in the fundamental reference frame. For example, the life cycles of all species are absolutely identical: from being they become becoming, transmute with food intake, grow, become old and finally die. Yet each cycle runs differently for each species. If we compare the life cycle of a horse with the human life cycle, we will find the same cycle progressing differently; but if we look at the proportionate growth, it will be more or less identical. This is the real meaning of time dilation.

Time dilation arises primarily because the law, universal and external to our internal evolution, is correlated with events that are internal to our physio-chemical evolution - they affect us internally. This has been known for ages. Aristotle noted the swelling of the ovaries of sea urchins at full moon. Hippocrates observed daily fluctuations in the symptoms of some of his patients and thought that regularity was a sign of good health. Cicero mentioned that the flesh of oysters waxed and waned with the moon. These are internal processes rather than direct responses to external factors. In 1729, it was noticed that the leaves and stems of the plant Mimosa pudica closed and contracted when darkness approached, but the cycles continued even when the plants were kept in darkness. Many examples of such 24-hour rhythms or body-clocks have been noticed since then. These rhythms are now called circadian rhythms (circa means “approximately” and dies means “day”). These clocks are synchronized with the environment and have had plenty of time to cope with the changes wrought by the gradual slowing of the Earth’s rotation.

Circadian rhythms are not exceptions, but appear to be the rule in the world. In protozoa and algae, there are 24-hour periods regulating photosynthesis, cell division, movement and luminescence. In some fungi, there is a fixed daily hour for the discharge of spores. Other plants possess diurnal rhythms governing the movements of leaves and the opening and closing of flowers. Birds navigate not only relying on the position of the Sun, but also supplement it using an internal body clock. Bees use their body clock to help them follow a “bee line” back to the hive and anticipate when plants will release pollen. Circadian clocks in more complex organisms consist of not a single clock, but many clocks. Our dominant rhythm does not follow a 24 hour cycle, but a 25 hour cycle. This innate tendency to let schedules drift later and later is restrained each day as our circadian body clock is reset to the right time by the cycle of day and night through a process known as entrainment. The internal body clocks of plants and animals vary from 22 hours to 28 hours. Typical of circadian rhythms, the mechanism that determines these periods is cunningly designed by Nature to be independent of temperature unlike most other physiological processes that can double their rate with a 10 degree Celsius rise in temperature. This ensures that biological time-pieces remain reliable under a wide range of climatic changes.

The result of measurement does not describe only the time-invariant fixed state frozen in hypothetical time called “now” or in space-time called “here-now”. It is the culmination of the changes in its state up to the time labeled “now” or “here-now”. Since the result of measurement describes the “evolutionary state as of now” or “as of here-now”, there is no “uncertainty” about the past evolutionary state, though the future evolutionary state continues to remain uncertain. Both these states (the ever-changing past and future) are unreal, as they are only mathematical (hypothetical) structures without any physical presence. “Now” or “here-now” are the only real states, which are transitory in nature and continuously get converted into “past”, indicating time’s arrow. Since we describe particles or events at a different time, and possibly a different place, our description of them cannot be exact, but has to be probabilistic.

The “past” (and also the future) is not a moment (unit) of time, but a basket of moments bunched together in a mathematical (hypothetical) manner. A particle has its mass concentrated at or around a single point (position) at any given time. This describes its state in a unique manner. When we describe the state of the particle over a basket of moments bunched together, we place all the states in the same “position”, because the basic structure (extent) of the particle, which is responsible for its position, has not changed. Thus, it can be called a “superposition of states”. The particles exist in a “superposition of states” always, except at the time known as “now” or “here-now”. Since measurements are taken only at “now” or “here-now”, it is said that particles do not have a fixed state, but exist in a “superposition of states when not being observed”. This does not mean that particles cease to exist when not being observed; it only indicates the continually changing state of the particles at the quantum level. Between two observations, the particle continues to change intrinsically. Hence, the earlier observed state no longer exists, though this may not be apparent. This has been interpreted by some people to say that the particle exists only when being observed. In the macro world, the changes with time are not always evident. But since events usually occur in cycles, and the cycles are clearly evident, manifest time is regarded as cyclic.

Einstein’s thesis is entitled “On the Electrodynamics of Moving Bodies”. It actually deals with the observation of fast-moving systems. Here the operative word is observation. There is no proof to show that a rod in the system really contracts along the line of motion. Actually, it does not contract. Similarly, the clock does not change the rate of its ticking. The mass of any object in the system does not change. Everything in the system stays the same. As we have shown earlier, only the observation is distorted. The aberration is due to the fact that light - the message carrier - appears to travel at an invariable velocity (c). Light creates two aberrations: the aberration caused by refraction and the aberration caused by the constancy of velocity. In the first instance, the fish in the river pool is not where it appears to be; in the second, the items in the moving system do not actually undergo the changes they appear to. One of the errors assumed from the theory is that c is the limiting velocity of the universe. The proper conclusion is that c is the limiting observed velocity in the universe - because that is the limit of the message carried through the medium of observed space. The message carrier can be “tricked” into revealing superluminal velocities, as explained below.

Let us assume two points A and B, one light year apart, and an observer equidistant from both these points. Next, we have an object expend sufficient energy to attain a velocity of 12c, so that it will transit one light year in one month. The observer, being equidistant from both A and B, can disregard the transit times for light to carry the messages of departure and arrival, since they are equal. He will receive these signals one month apart. Thus he will measure a superluminal velocity of 12c. However, an observer at A will measure the velocity given by Einstein, because for A, B lies along the x-axis, so the Lorentz contraction will apply. Thus there are two velocities - the velocity extant, and the velocity observed. It is just like the actual location of the fish and where it appears to be. Now, if we calculate the kinetic energy and momentum per unit mass, we will discover that they match the Newtonian values for 12c. In other words, the parameters for the Newtonian velocity are found accompanying the relativistic velocity.
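The arithmetic of the equidistant observer can be laid out explicitly. A short sketch in Python, working in units of light years and years, with the numbers used above:

LIGHT_SPEED = 1.0            # one light year per year
distance_AB = 1.0            # A and B are one light year apart
transit_time = 1.0 / 12.0    # the object covers the distance in one month

# The observer sits midway, so both signals travel the same distance
signal_delay = (distance_AB / 2) / LIGHT_SPEED
t_depart_seen = 0.0 + signal_delay
t_arrive_seen = transit_time + signal_delay

# The equal delays cancel in the difference of arrival times
measured_speed = distance_AB / (t_arrive_seen - t_depart_seen)
print(measured_speed)        # 12.0, i.e., 12c, as argued above

Because the two signal delays are identical, they drop out of the difference, and the midway observer infers the full 12c.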

Let us examine Einstein’s thought experiment of riding the crest of a light wave. If we pursue a beam of light, our velocity has to be relative to the emitter. As we gain in velocity the frequency reduces. As we attain the velocity v = c the frequency of the emitted light is reduced to zero. This implies that there will be no “spatial oscillation”. Also, at that velocity v = c, the energy of the photon is zero. Thus the momentum is also zero. Zero frequency, zero energy, zero momentum implies that there is nothing to observe. But how can that be? The photon must have consisted of something. The only conclusion is that what we observe are static electric and static magnetic fields. They are at rest to us and have no energy, momentum or frequency.

The above position can be disputed on the ground that the speed of light is constant in all frames. When we state that we pursue a beam of light, what it means is that the beam of light will “always” precede us at the constant velocity c - thus we can never attain a velocity equal to that of the photon. So even as the frequency becomes zero (and the energy and momentum too become zero!), the beam is preceding us at c. This would imply that the velocity of the photon is 2c, again contradicting SR. The real interpretation of this phenomenon of the constant velocity of light can be understood only if we treat the velocity of light like that of a bullet fired from the rifle of a person moving with any velocity. Once the bullet is fired, all observers will observe it with the same velocity, irrespective of the velocities with which they are moving with respect to one another. But in that case, the velocity of the bullet cannot be used to “measure” anything.

The equation E = mv²/2 is only good for low velocities. The correct equation for all velocities is E = mv²/(R + R²), which degenerates to mv²/2 at low velocities. It will be found that this is exactly equal to Einstein’s E = (1/R − 1) mc². Examining this dual velocity further, we see it dissolves the Twin Paradox. The Earth twin measures the relativistic velocity. The astronaut twin experiences the Newtonian velocity. Since the Newtonian velocity is faster than the relativistic one, the astronaut accomplishes the journey in less time than that observed on Earth by the Earth twin.
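The algebraic identity claimed here can be verified symbolically. A minimal sympy sketch (assuming, as above, R = √(1 − v²/c²)):

import sympy as sp

v, c, m = sp.symbols('v c m', positive=True)
R = sp.sqrt(1 - v**2 / c**2)

E_dual = m * v**2 / (R + R**2)       # the form E = m*v**2/(R + R**2)
E_einstein = (1 / R - 1) * m * c**2  # Einstein's E = (1/R - 1)*m*c**2

# The difference simplifies to zero, confirming the two forms agree
print(sp.simplify(E_dual - E_einstein))   # -> 0

# The low-velocity expansion recovers the Newtonian m*v**2/2
print(sp.series(E_dual, v, 0, 4))         # -> m*v**2/2 + O(v**4)

Both checks go through: the two expressions are one and the same function of v, and its leading term at small v is the Newtonian kinetic energy.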

The longitudinal length of a moving system “appears” to contract due to the assumption that the measurement can be made by the use of light which has an invariant velocity of c. Had Einstein concluded that as the apparent length of a moving system contracted, its velocity itself underwent a foreshortening, it would have resulted in another contradiction. As has been shown earlier, as the velocity of a body approached infinity, the foreshortened “measured” velocity would approach c - the maximum velocity of the measuring tool. As a matter of fact, if one does the arithmetic, he will find that the energy requirements, the momentum and the transit time of a body do not fit the relative velocity but do fit the corresponding actual (Newtonian) velocity. Thus we see the parameters of the Newtonian velocity in the company of the measured corresponding relative velocity.

Then there is the general misinterpretation of the energy-momentum 4-vector equation. This equation is nothing more or less than the combination of Einstein’s equations for the energy and momentum of moving “bodies”, not radiation. The general interpretation of the equation E² = (mc²)² + (pc)² is as follows: “If we set the m in the first right-hand term to zero, then we get E = pc, which we know is true. This shows that the mass of the photon is zero.” Now let us examine this interpretation. The first term is the square of mc². It should be remembered that mc² is rest energy and m is rest mass. A photon brought to rest (by absorption) is no longer a photon. Hence its rest mass is zero. The second right-hand term in the above equation is pc, the energy of a photon in flight. The p is the momentum of the photon in flight. Every equation for momentum contains mass and motion. Therefore E = pc should mean that the photon in flight has mass. When faced with that fact, some try to maintain their position by declaring a new physics whereby there exists momentum with no mass. As Lord Kelvin said: “When you can measure what you are speaking about and express it in numbers, you know something about it”. But the problem is that the proponents of the new physics cannot substantiate it.
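For reference, the quoted relation itself is easy to put in executable form. A small sketch (the sample masses and momenta are arbitrary illustrative numbers):

import math

def energy(m, p, c=299_792_458.0):
    # E**2 = (m*c**2)**2 + (p*c)**2, i.e., E = hypot(m*c**2, p*c)
    return math.hypot(m * c**2, p * c)

p = 1.0e-27                      # kg m/s, an arbitrary momentum
print(energy(0.0, p))            # m = 0: reduces to E = p*c
print(energy(9.109e-31, p))      # an electron-scale rest mass for contrast

Setting m = 0 reproduces E = pc, the case whose interpretation is disputed above.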

Let us consider the spatial contraction aspect. At the velocity v = c, the space up-front is apparently contracted to zero. This can be interpreted to mean that the photon is everywhere up-front at once. That is why time in the photon’s frame is considered to have stopped: it can cover any distance with no lapse of time! With a little imagination one realizes that this fits the description of infinite velocity. Since space does not really contract, we are drawn to the conclusion that the velocity of light can be infinite. Thus, in section 4 of his famous paper Einstein states: “… the velocity of light in our theory plays the part, physically, of an infinitely great velocity”. The true explanation of this phenomenon is that the photon is not a particle at all. It is just a wave. Hence it has no mass and its spin is zero. Like a wave, it only involves momentum transfer disturbing the medium (thereby revealing itself). What is seen as the particle is the disturbance of the medium due to such momentum transfer. Thus the velocity of light changes with the change in density of the medium. We will discuss this further when we examine the double-slit experiment and the diffraction experiment. This also explains the zero frequency, zero energy, zero net momentum and zero time described above. Time is the interval between two events, which involve displacement of mass. Since there is no transfer of mass in the movement of the photon, there is no event to be observed. Hence there is no time for the rider of the light crest!

This declaration of slower time led physicists, great and small, to recount that if one were to observe a spaceship at high velocities, one would observe the on-board clock to run at the dilated time t' = t√(1 - v^2/c^2). This we know is not so. Any known constant emitter is a clock. Astronomers observe moving emitters, i.e., astral bodies that act like clocks, every night. What they see is Doppler time. The cesium atom is an arbitrary choice. The variation of its frequency is the variation of “observed” time. In recession, the apparent time is slower than normal; in approach, the apparent time is faster than normal. Following this precept, the Twin Paradox never appears. The correct explanation of the “behind” clock is that it traveled at Newtonian velocity and thus completed the course in less time than measured by the inertial clock, which measured the velocity as relativistic, i.e., slower. Now let us look at some of the experiments that are cited as proof for the validity of the theory.
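The difference between what is predicted and what is recorded can be made concrete. A minimal sketch, using the standard radial Doppler formula (the velocity is an assumed value):

import math

def doppler_rate(beta):
    # Observed tick rate of a clock receding at v = beta*c (beta > 0)
    # or approaching (beta < 0), relative to an identical clock at rest.
    return math.sqrt((1.0 - beta) / (1.0 + beta))

def dilation_rate(beta):
    # Tick rate from pure time dilation, t' = t*sqrt(1 - v^2/c^2).
    return math.sqrt(1.0 - beta**2)

beta = 0.1                  # v = 0.1c, an assumed value
print(doppler_rate(beta))   # ~0.905: receding clock appears slow
print(doppler_rate(-beta))  # ~1.106: approaching clock appears fast
print(dilation_rate(beta))  # ~0.995: the same in both directions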

Hafele and Keating (1972) carried out experiments that purported to confirm the Special Theory of Relativity and Time Dilation. The evidence provided was derived from the differences in time recorded by cesium clocks transported in aeroplanes. The clocks were first sent eastwards for 65.4 hours with 13 landings and take-offs, and then westwards for 80.3 hours with 15 landings and take-offs. The entire process took over 26 days. To minimize the effect of the Earth’s magnetic field, the clocks were triple-shielded. The clocks had serial numbers 120, 361, 408, and 447. The average of their times was used to lessen the effect of changes in individual drift patterns relative to the standard clock station at Washington DC, whose time scale was derived by averaging 16 selected large cesium-beam clocks. Clocks were replaced if their performance deteriorated. The standard deviation of the mean of the assembly was given as 2 ns to 4 ns when tested every 3 hours over several 5-day periods. In that station, the clocks were housed in six vaults, free from vibration, with controlled temperature and humidity, elaborate power supplies, vacuum systems and signal sources, and a fixed magnetic field. It has been established that the accuracy of small portable clocks is worse by a factor of 2 than that of large stationary clocks. Variations in magnetic fields are among the influences that contribute to the inaccuracy of cesium clocks. The records of the US Naval Observatory (USNO) show the following results:
Movement of clocks        Clock No.120   Clock No.361   Clock No.408   Clock No.447
Before Eastward test          -4.50          +2.66          -1.78          -7.16
During Eastward test          -4.39          +1.72          +5.00          -1.25
After Eastward test           -8.89          +4.38          +3.22          -8.41
Before Westward test          -8.88          +6.89          +4.84          -7.17
During Westward test          +4.31          -2.93          -2.68          -2.25
After Westward test           -4.56          +3.97          +2.16          -9.42

The individual portable clocks should have displayed a steady drift rate relative to the ground clock station, not the results shown above. These figures were used by Hafele and Keating in the test report published a month after the test. The report says: “Portable cesium clocks cannot be expected to perform as well under traveling conditions as they do in the laboratory. Our results show that changes as large as 120 nsec/day may occur during trips with clocks that have shown considerably better performance in the laboratory”. Hafele himself reported, “Most people (myself included) would be reluctant to agree that the time gained by any one of these clocks is indicative of anything” and “the difference between theory and measurement is disturbing”. Yet, after 4 months, they submitted a paper for publication with altered figures, reversing their earlier report, to prove the Special Theory of Relativity and Time Dilation right!

The first attempt by Hafele and Keating to bring the results closer to the theoretical forecasts was to take the average of the drift rates before and after a flight and assume that this average was the drift rate that applied throughout the flight. This is equivalent to assuming that one single sudden change in drift rate occurred midway. Such an assumption would have some credence had the alteration in drift rate been very small, e.g., a change from +3.34 to +3.35 ns/h, which would not significantly affect the end result. The actual drift rates, however, doubled, halved or reversed.
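How much the averaging assumption can distort the prediction is easy to see with a toy calculation (hypothetical drift rates in ns/h, not the actual USNO figures):

def predicted_gain(drift_before, drift_after, flight_hours):
    # The averaging assumption: a single sudden change midway,
    # i.e. a constant drift equal to the mean of the two rates.
    return (drift_before + drift_after) / 2.0 * flight_hours

# A tiny change, as in the example above (+3.34 to +3.35 ns/h, 65.4 h):
print(predicted_gain(3.34, 3.35, 65.4))   # ~218.8 ns - nearly unaffected
# A doubled drift rate, as actually happened:
print(predicted_gain(3.34, 6.68, 65.4))   # ~327.7 ns, against 218.4 ns if the
# change came at the end, or 436.9 ns if at the start - a large uncertainty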

Some people believe that the above anomaly could be explained by examining how the GPS satellites stay synchronized. But this view is misleading. The GPS satellites are adjusted according to the Sagnac effect and the gravitational correction demonstrated by Pound-Rebka. The contributions of the Sagnac and gravitational effects depend entirely on the orientation of the clocks with regard to their travel and the Earth’s gravity. Since the clocks were kept on passenger seats in commercial flights around the globe, they must have been oriented parallel to the surface of the Earth, so that the Sagnac effect would be the primary concern, though there would still be a small gravitational effect due to the transverse Doppler effect. However, both of these effects do not require relativity and have nothing to do with time dilation. Thus it is not correct to infer that the atomic clock experiment has proved time dilation.

Further, the experiment was based on a third reference point called “proper time”, which actually takes relativity out of the equation, as it is analogous to adding a Universal reference frame. There is no room for a Universal reference frame in relativity, which is confined to the relationship between two inertial frames of reference. Adding a Universal frame of reference negates the very concept of relativity. Any argument of relativity that includes a third frame of reference other than the emitter and the detector is implying a Universal reference frame.

FALLACIES IN COULOMB’S LAW:

The peculiar behaviors of charge interactions – opposite charges seemingly attract and same charges seemingly repel each other – led Coulomb to formulate his famous law. Coulomb’s law states that the electrical force between two charged objects is directly proportional to the product of the quantity of charge on the objects and is inversely proportional to the square of the distance between the centers of the two objects. The interaction between charged objects is a non-contact force which acts over some distance of separation. In equation form, Coulomb’s law can be stated as:

F = k • Q1 • Q2 / d^2

where Q1 represents the quantity of charge on one object in coulombs, Q2 represents the quantity of charge on the other object in coulombs, and d represents the distance between the centers of the two objects in meters. The symbol k is the proportionality constant known as the Coulomb’s law constant. The value of this constant depends upon the medium in which the charged objects are immersed. In the case of air, the value of k is approximately 9.0 x 10^9 N • m^2 / C^2. If the charged objects are present in water, the value of k can be reduced by as much as a factor of 80. Mathematically, the force value is found to be positive when Q1 and Q2 are of like charge - either both “+” or both “-” - and negative when Q1 and Q2 are of opposite charge - one “+” and the other “-”. This is consistent with the widely accepted concepts that oppositely charged objects have an attractive interaction and like-charged objects have a repulsive interaction.
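In code, the law as stated reads as follows (a minimal sketch; the charges and separation are arbitrary illustrative values):

k_air = 9.0e9   # Coulomb's law constant in air, N*m^2/C^2

def coulomb_force(q1, q2, d, k=k_air):
    # Force in newtons between point charges q1 and q2 (coulombs)
    # separated by d metres; positive = repulsion, negative = attraction.
    return k * q1 * q2 / d**2

print(coulomb_force(1.0e-6, 1.0e-6, 0.01))               # like charges: +90 N
print(coulomb_force(1.0e-6, -1.0e-6, 0.01))              # unlike charges: -90 N
print(coulomb_force(1.0e-6, -1.0e-6, 0.01, k_air / 80))  # in water: ~80x weaker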

The Coulomb’s law equation provides a description of the force between two objects whenever the objects act as point charges. A charged conducting sphere interacts with other charged objects as though all of its charge were located at its center. While the charge is uniformly spread across the surface of the sphere, the center of charge can be considered to be the center of the sphere. The sphere acts as a point charge with its excess charge located at its center. Since Coulomb’s law applies to point charges, the distance d in the equation is the distance between the centers of charge for both objects and not the distance between their nearest surfaces.

According to the Coulomb’s law equation, interaction between a charged particle and a neutral object (where either Q1 or Q2 = 0) is impossible, as in that case the equation becomes meaningless. But this goes against everyday experience. Any charged object - whether positively or negatively charged - has an attractive interaction with a neutral object. Positively charged objects and neutral objects attract each other, and negatively charged objects and neutral objects attract each other. Alternatively, this shows that there is no charge-neutral object: the so-called charge-neutral objects are really objects in which the positive and the negative charges are balanced. This puts a question mark on the charge of neutrinos and neutrons. If neutrinos are objects whose positive and negative charges are balanced, they must have an internal structure, as neutrons do. Within the atom, the positively charged protons co-exist with each other and interact with neutrons. Two similarly charged quarks exist within the proton and within the neutron. Particles and their anti-particles with the same mass exist; these opposite charges do not attract, but annihilate each other. Yet quark-antiquark pairs co-exist as mesons. All these point to the deficiencies of Coulomb’s law. Various explanations, which are well known, have been offered for these problems. Most of them are questionable. Often Newton’s laws of motion are invoked to explain charge interactions on the ground that charge interactions are forces.

For example, the electrical force is said to be a non-contact force, a “push” or “pull” exerted upon an object as a result of interaction with another object. It exists despite the fact that the interacting objects are not in physical contact with each other. The two objects can act over a separation distance and exert an influence upon each other. Since the interaction is the result of electrical charges, it is called an electrical force. Yet this cannot explain “pull”, which is physically impossible. What we describe as “pull” is in reality a “push” from the opposite direction. Forces always scatter and never bind. All forces only “push out” and never “pull in”. Hence the concept of “binding energy” is not scientific, as no one has ever seen energy “binding” two particles - it is always either zero net internal energy or a push from the opposite direction by an external field that keeps particles together. As will be shown later, only this can explain the co-existence of protons and neutrons and their interaction in a scientific manner. This will also explain the internal structure of the neutron and why electrons orbit protons in classical orbits. The apparent attractive force can be explained by fully discarding Coulomb’s law and rewriting it, as explained below.

The reason for the wrong notion that led to Coulomb’s law is easy to understand. Different forces co-exist, but do not couple with each other. Since the forces cannot be observed directly, but only through their effects, this leads to the mistaken notion about their interaction. We will explain it with an example. Suppose we are crossing a river 60 meters wide in a motor boat heading from west to east with a velocity of 4 m/s directly across the river, and suppose that the river is moving with a velocity of 3 m/s towards the north. How much time would it take to cross the river? We may say the answer is 60 ÷ 4 = 15 seconds. But is that the right answer?

The boat would not reach the shore directly across from its starting point. The river current influences the motion of the boat and carries it downstream. The motor boat may be moving with a velocity of 4 m/s directly across the river, yet the resultant velocity of the boat will be greater than 4 m/s and at an angle in the downstream direction. While the speedometer of the boat may read 4 m/s, its speed with respect to an observer on the shore will be greater than 4 m/s. The resultant velocity of the motor boat can be determined in the same manner as is done for a plane. The resultant velocity of the boat is the vector sum of the boat velocity and the river velocity. Since the boat heads straight across the river and since the current is always directed straight downstream, the two vectors are at right angles to each other. Thus, the magnitude of the resultant can be found as follows:

(4.0 m/s)^2 + (3.0 m/s)^2 = R^2
or 16 m^2/s^2 + 9 m^2/s^2 = R^2
or 25 m^2/s^2 = R^2, so R = 5 m/s.

The direction of the resultant is the counterclockwise angle of rotation which the resultant vector makes with due East. This angle can be determined using a trigonometric function: tan (θ) = (3/4). Hence θ = 36.9 degrees.

With the above data, let us calculate how much time it takes the boat to travel shore to shore, and how far downstream the boat is when it reaches the opposite shore. The river is 60 meters wide; that is, the distance from shore to shore as measured straight across the river is 60 meters. The time to cross this 60-meter-wide river can be determined by rearranging and substituting into the average speed equation:
Time = distance / (average speed)

The distance of 60 m can be substituted into the numerator. But what about the denominator? What value should be used for average speed? Should 3 m/s (the current velocity), 4 m/s (the boat velocity) or 5 m/s (the resultant velocity) be used as the average speed for covering the 60 meters? With what average speed is the boat traversing the 60-meter-wide river? Most people want to use the resultant velocity in the equation, since that is the actual velocity of the boat with respect to the shore. Yet the value of 5 m/s is the speed at which the boat covers the diagonal dimension of the river, and the diagonal distance across the river is not known in this case. If one knew the distance from the initial position to where the boat reaches the opposite shore downstream, then the resultant speed of 5 m/s could be used to calculate the time to reach the opposite shore. Similarly, if one knew the distance from the position directly across from the initial position to where the boat reaches the opposite shore downstream, then the river speed of 3 m/s could be used to calculate the time to reach the opposite shore. And finally, if we consider the river width of 60 m, then the boat speed of 4 m/s can be used to calculate the time to reach the opposite shore.

In the above problem, the river width is 60 m. Hence the average speed of 4 m/s (average speed in the direction straight across the river) should be substituted into the equation to determine the time. Time = 60m/ (4m/s) = 15 seconds.

It requires 15 seconds for the boat to travel across the river. During these 15 seconds, the boat also drifts downstream. How far downstream is the boat when it reaches the opposite shore? The same equation can be used to calculate this downstream distance.
Distance = time x (average speed) = 15 seconds x 3 m/s = 45 m, or
time = 45 m / (3 m/s) = 15 seconds.

The boat is carried 45 meters downstream from the point opposite the initial point during the 15 seconds it takes to cross the river. Now, to calculate the distance the boat actually traveled to reach the opposite shore, we apply the simple formula: (60 m)^2 + (45 m)^2 = (75 m)^2.
Hence the answer is 75 m. The time taken is: time = 75 m / (5 m/s) = 15 seconds.

Thus, whichever way we calculate, we come to the same conclusion. If we change the speed of the river current to 5 m/s, so that the current is faster than the boat, we come to the same conclusion about the time taken; only the distance downstream from the point opposite the initial point changes to 75 m, and the total distance actually traveled changes to about 96 m. If we use other speeds, the result remains similar. This means that an across-the-river variable is independent of (i.e., not affected by) a downstream variable. The time to cross the river depends upon the velocity at which the boat crosses the river. It is only the component of motion directed across the river (i.e., the boat velocity) which affects the time to travel the distance directly across the river. The component of motion perpendicular to this direction - the current velocity - only affects the distance which the boat travels down the river.
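The whole worked example can be verified in a few lines (a sketch reproducing the numbers above):

import math

def crossing(width, boat_speed, current_speed):
    # Boat heads straight across; only the across-river component
    # sets the crossing time, only the current sets the drift.
    t = width / boat_speed
    drift = current_speed * t
    path = math.hypot(width, drift)           # actual diagonal distance
    resultant = math.hypot(boat_speed, current_speed)
    return t, drift, path, path / resultant   # last entry re-derives t

print(crossing(60.0, 4.0, 3.0))   # (15.0, 45.0, 75.0, 15.0)
print(crossing(60.0, 4.0, 5.0))   # (15.0, 75.0, ~96.05, 15.0)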

The motion of the river boat can be divided into two simultaneous parts - a motion in the direction straight across the river and a motion in the downstream direction. These two parts (or components) of the motion occur simultaneously for the same time duration (which was 15 seconds in the above problem). The boat’s motor is what carries the boat across the river, so any calculation involving the distance between the shores must involve the boat speed relative to the water in the direction of the opposite shore. Similarly, it is the current of the river which carries the boat downstream, so any calculation involving the downstream distance must involve the river speed in the direction of the flow of the river. The directions of the two motions may or may not be perpendicular to each other, in which case we cannot find the net result as easily as has been shown above. The forces behind these two motions co-exist and are independent of each other. However, what we observe is the total effect of these two forces. Together, these two parts (or components) add up to give the observable resulting motion of the boat. That is, the across-the-river component of displacement adds to the downstream displacement to equal the resulting displacement, even though the two forces do not add up. The boat velocity (across the river) appears to add up with the river velocity (down the river) to equal the observed velocity, even though in reality they do not add up. If we could properly analyze the motion of the boat, it would appear as a step-by-step, staircase-like progression of small advances across and downstream.

Each force acts independently of the others. But since we experience the net effect of the totality of all forces, we presume the law of the parallelogram of forces, which is only apparent, not real. This implies that the relationship between different forces is one of co-existence (we call it sahachara) and not coupling (we call it granthi vandhana). Co-existence implies coupling with similar forces to attain stability or equilibrium (we call it pratisthaa), leading to the seeming attraction between similar charges. This is because of the inherent nature of forces, which has been misunderstood due to the wrong classification of energy as it developed incrementally with new discoveries.

Coming back to charge interactions, these can be of four types. All particles have both positive and negative charges (we call these male and female charges) in different proportions, and the deficient part (we call this nyoona - literally deficient) always tries to be complete (we call this poorna - literally full) due to electron affinity (we call this ashanaayaa - literally hunger), which refers to the relative amount of attraction which a material has for electrons. Hence there can be two types of interaction between the oppositely dominating charges, as described below:

The first type of positive-negative interaction is the total interaction (we call it samudaaya), where the positive charge component of one body interacts with the positively charged component of the other and the negative charge component interacts with the negatively charged component of the other. This type of interaction leads to an increase (or decrease, in the opposite case) in the atomic number (we call it pustikara - literally rejuvenative), and the product remains in the same class of particle (atoms). For example, if we bombard aluminum atoms with alpha particles, we have: ²⁷₁₃Al + ⁴₂He → ³⁰₁₄Si + ¹₁H

The base product and the end product belong to the same class (atoms), only with changed atomic number. The reaction can also produce ³⁰₁₅P and a neutron. But ³⁰₁₅P does not exist naturally and quickly breaks down by emitting a positron, which changes the ³⁰₁₅P to ³⁰₁₄Si by converting a proton to a neutron. Thus, the effect is the same. In the case of radioactivity the opposite effect is seen, but the end product again remains in the same class (atoms), only with changed atomic number.

The second type of positive-negative interaction is the partial or ionic interaction (we call it avayava), where ions interact with oppositely charged ions. Such interactions create a new class of particles with properties distinctly different from their components (we call this sristikara - literally creative). For example: Na⁺ + Cl⁻ → NaCl. Here the positively charged component of the sodium atom interacts with the negatively charged component of the chlorine atom to form common salt, which belongs to a different type of particle. Here we must distinguish between the two types of reactions and their products, because atoms are products of nucleons by fusion and have limited variety, whereas the other products, such as common salt or water, are compounds of the atoms and have infinite variety.

The third type of interaction is the interaction between two positively charged particles only. As is well known, it leads to fusion (we call it visphotaka - literally explosive). This process provides energy to stars and galaxies. Within the proton, the two up quarks with positive charge explode to become down quarks. In this process, they disturb the balance of the down quark, converting it to an up quark. This process keeps the protons and neutrons together within the nucleus without requiring any further binding energy. The mutual annihilation of particle and antiparticle will be explained after explaining “what” electrons and photons are.

The fourth is the interaction between two negatively charged particles only. Contrary to general belief, it leads to no reaction by itself (we call it nirarthaka - literally useless). The observed repulsion between negatively charged particles can be explained differently. As has already been explained, negative charge always flows towards positive charge and confines it. Hence where there is already negative charge, the positive charge has already been confined. Thus the negative charge searches in the other direction for positive charges to flow towards and confine. This appears as repulsion between negatively charged particles. In the case of electricity, the negatively charged electrons flow in unison; they do not repel each other and fly off in opposite directions.

FALLACIES IN THE EXPANDING UNIVERSE CONCEPT:

It is now well established that it was a “Big Bounce” (we call it spanda) and not a “Big Bang” that started the creation. The reason for the “Big Bounce” is the fundamental nature of the Universe. Everything is made out of the same stuff of uniform density - we call it tattwa, literally meaning “that-ness”, because it cannot be described the way we describe objects or ideas with alternative symbolism (we call it vikalpana), and nothing is ever like it. It has only two characteristics: force (we call it the effect of vala), which moves objects and generates inertia, and impedance (we call it the effect of rasa), which arises once the velocity of the moving particle exceeds that of the medium in which it is moving (this gives rise to the “bow shock” effect).

Some will be quick to discard this view based on the observed acceleration of the galactic clusters. But this inference is misleading, as our data span too short a period to have any meaningful impact on cosmic scales. If we observe the motion of the planets around the Sun as they appear from Earth, we will find some planets moving away from us while others are coming closer. Neither is a true description of the phenomena. The planets are orbiting the Sun at their own pace and only appear to speed away, slow down, or come closer to us. We call it dolana - literally swinging. Similarly, the so-called speeding galaxies (we call it atichaara - literally accelerated motion) are a temporary phenomenon on the cosmic scale. Actually, all the galactic clusters are circling the galactic center, which is a universal phenomenon. An analysis of the present view in this regard will prove our point.

In the 1930s, Edwin Hubble obtained a series of observations that indicated our Universe began with a creation event. Observations since the 1930s show that clusters and super-clusters of galaxies, at distances of 100-300 megaparsec (Mpc), are moving away from each other. Hubble discovered that all galaxies have a positive red-shift. Registering the light from distant galaxies, it has been established that the spectral lines in their radiation are shifted to the red part of the spectrum. The farther the galaxy, the greater the red-shift! Thus, the farther the galaxy, the greater its velocity of recession, creating the illusion that we are right at the center of the Universe. In other words, all galaxies appear to be receding from the Milky Way.

By the Copernican principle (we are not at a special place in the Universe), cosmologists deduce that all galaxies are receding from each other, i.e., that we live in a dynamic, expanding Universe. The expansion of the Universe is described by a very simple equation called Hubble’s law: the velocity of recession v of a galaxy is equal to a constant H times its distance d (v = Hd), where H, called Hubble’s constant, relates distance to velocity and has units of inverse time (conventionally expressed in km/s per Mpc). Some cosmologists think that there is no such thing as the center of the Universe. Every observer, irrespective of the galaxy he or she is located in, will see all the galaxies moving away with a speed progressively increasing with distance, and this is what is meant by the term “expansion of the universe”. An observer in the Milky Way galaxy will see other galaxies moving away just the same way that an observer in an alien galaxy would.
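In code, Hubble’s law is a bare proportionality (a sketch; H0 = 70 km/s per Mpc is an assumed round value):

H0 = 70.0   # Hubble's constant, km/s per Mpc (assumed round value)

def recession_velocity(distance_mpc):
    # Hubble's law, v = H*d: recession velocity in km/s.
    return H0 * distance_mpc

for d in (100.0, 200.0, 300.0):       # the 100-300 Mpc range cited above
    print(d, recession_velocity(d))   # 7000, 14000, 21000 km/s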
Some scientists describe the expanding Universe by the following diagram to show that we are not in any privileged location and everyone in the Universe “sees” the same thing as we see.

[Diagram omitted: galaxies A, B and C shown receding from the Milky Way, with arrows indicating the direction of each galaxy’s apparent motion.]
While the above description is true for the observer at the Milky Way in relation to A, B and C, what about his observation of galaxies in the opposite direction? He will see them approaching him. Unlike the observer at the Milky Way, who will see A, B and C moving away from him, the observer at the alien galaxy C will see A and B approaching him, not moving away from him. This shows that “they” are “not seeing” the same thing as we “see”, as indicated by the arrows in the above diagram. The galaxies at A and B move in one direction for us, but in the opposite direction for the other observer. The two observations are not the same, and both cannot be true.

Moreover, the cosmological expansion is not uniform at all distances, but becomes apparent only at the great length scales of clusters and super-clusters of galaxies. It is not apparent at smaller scales of, say, the solar system, or even the stars within the Galaxy. Yet it is not correct to say that the planets and the stars are not moving away from each other. They are moving away from each other periodically, only to come closer again. We can observe this over a few years in the solar system. It is due to the fact that the planets orbit the Sun, and not to the solar system expanding. Something similar is happening in our Galaxy also: the stars go round the Galaxy and in the interim appear to be moving away from each other. Thus, the only explanation for the receding-galaxies phenomenon is to treat the galactic clusters as spinning around a common center. Spin is a feature common to atoms, stars and galaxies. This will give the appearance of galaxies receding from each other, only to converge at some distant future.

The “Big Bounce” started when the previous process of expansion was reversed by impedance overcoming inertia, creating a boundary from which the oncoming waves rebounded. In our theory, the first moment of creation occurs when the previous creation ends in a process reversal due to impedance overtaking inertia.

Coming back to the big bounce, the reversal of the creation process continued almost until it reached the starting point of uniform density (we call it sama rasa). The inherent instability is the cause of creation (we call it spanda purusha). The detailed mechanism, along with the evolution and classification of forces, has been described elsewhere.

FALLACIES IN THE DARK MATTER AND DARK ENERGY CONCEPTS:

Astrophysical observations point to huge amounts of “dark matter” and “dark energy” needed to explain the observed large-scale structure and cosmic dynamics. The emerging picture is a spatially flat, homogeneous Universe undergoing the presently observed accelerated phase. Despite the good quality of astrophysical surveys, commonly addressed as Precision Cosmology, the nature and the nurture of dark energy and dark matter, which should constitute the bulk of cosmological matter-energy, are still unknown. Furthermore, up to now, no experimental evidence has been found at the fundamental level to explain such mysterious components. Let us examine the necessity for assuming the existence of dark matter and dark energy.

The three Friedmann models of the Universe are described by the following equation:
H^2 = (8πG/3)ρ - kc^2/R^2 + Λ/3

where the three right-hand terms are the matter density, curvature and dark energy terms, and H = Hubble’s constant, ρ = matter density of the Universe, c = velocity of light, k = curvature of the Universe, G = gravitational constant, Λ = cosmological constant, R = radius (scale factor) of the Universe.

In this equation, R represents the scale factor of the Universe, and H is Hubble’s constant, which measures how fast the Universe is expanding. Every factor in this equation is a constant and has to be determined from observations, not derived from fundamental principles. These observables can be broken down into three parts: gravity (which is treated as the same as matter density in relativity), curvature (which is related to, but different from, topology), and the pressure or negative energy given by the cosmological constant that holds back the speeding galaxies. Earlier it was generally assumed that gravity was the only important force in the Universe and that the cosmological constant was zero. Thus, by measuring the density of matter, the curvature of the Universe (and its future history) was derived as a solution to the above equation. New data have indicated that a negative pressure, called dark energy, exists and that the value of the cosmological constant is non-zero. Each of these parameters can close the expansion of the Universe in terms of turn-around and collapse. Instead of treating the various constants as real numbers, scientists prefer the ratio of each parameter to the critical value separating open and closed Universes. For example, if the density of matter exceeds the critical value, the Universe is taken to be closed. These ratios are called Omega (subscript M for matter, Λ for the cosmological constant, k for curvature). For reasons related to the physics of the Big Bang, the sum of the various Omegas is treated as equal to one. Thus: ΩM + ΩΛ + Ωk = 1.
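The Omega bookkeeping, and the critical density it is measured against, can be checked numerically (a sketch; H0 = 70 km/s/Mpc is an assumed value):

import math

omega_m, omega_lambda, omega_k = 0.3, 0.7, 0.0   # observational inputs
print(omega_m + omega_lambda + omega_k)          # must sum to 1

H0 = 70.0 / 3.0857e19                # 70 km/s/Mpc converted to 1/s
G = 6.674e-11                        # gravitational constant, m^3/(kg*s^2)
rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)
print(rho_crit)                      # critical density, ~9.2e-27 kg/m^3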

The three primary methods to measure curvature are luminosity, scale length and number. Luminosity requires an observer to find some standard “candle”, such as the brightest quasars, and follow them out to high red-shifts. Scale length requires that some standard size be used, such as the size of the largest galaxies. Lastly, number counts are used, where one counts the number of galaxies in a box as a function of distance. To date all these methods have been inconclusive, because the brightness, size and number of galaxies change with time in ways that we have not figured out. So far, the measurements are consistent with a flat Universe, which is popular for aesthetic reasons. Thus, the curvature Omega is expected to be zero, allowing the rest to be shared between matter and the cosmological constant.

Measuring the value of the matter density is a much more difficult exercise. The luminous mass of the Universe is tied up in stars. Stars are what we see when we look at a galaxy, and it is fairly easy to estimate the amount of mass tied up in self-luminous bodies like stars, in planets, satellites and assorted rocks that reflect the light of stars, and in gas that reveals itself by the light of stars. This yields an estimate of what is called the baryonic mass of the Universe, i.e., all the stuff made of baryons - protons and neutrons. When these numbers are calculated, it is found that Ω for baryonic mass is only 0.02, which indicates a very open Universe - contradicted, however, by the motion of objects in the Universe. This shows that most of the mass of the Universe is not seen, i.e., is dark matter, which makes the estimate of ΩM much too low. So this dark matter has to be properly accounted for in all estimates: ΩM = Ωbaryons + Ωdark matter.

Gravity is measured indirectly by measuring the motion of bodies and then applying Newton’s law of gravity. The orbital period of the Sun around the Galaxy gives us a mean mass for the amount of material inside the Sun’s orbit. But a detailed plot of the orbital speed within the Galaxy as a function of radius reveals the distribution of mass within the Galaxy. Some scientists describe the simplest type of rotation as wheel-like rotation, such as that shown below.

[Figures omitted: wheel-like rotation with its rotation curve, and planet-like rotation with its rotation curve.]

Rotation following Kepler’s 3rd law is shown above as planet-like or differential rotation; this is called a Keplerian rotation curve. In this type of rotation the orbital speed falls off as one goes to greater radii within the Galaxy. To determine the rotation curve of the Galaxy, stars are not used, due to interstellar extinction. Instead, 21-cm maps of neutral hydrogen are used. When this is done, one finds that the rotation curve of the Galaxy stays flat out to large distances, instead of falling off as in the figure above. This means that the mass of the Galaxy increases with increasing distance from the center.

[Figure omitted: the rotation curve expected from the visible matter vs. the flat curve actually observed.]

The surprising thing is there is very little visible matter beyond the Sun's orbital distance from the center of the Galaxy. So the rotation curve of the Galaxy indicates a great deal of mass, but there is no light out there. In other words, the halo of our Galaxy is filled with a mysterious dark matter of unknown composition and type.
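The inference can be quantified: if the rotation speed v stays flat while the radius r grows, the implied interior mass M = v^2*r/G grows linearly with r. A sketch (220 km/s is an assumed round value for the flat part of the curve):

G = 6.674e-11    # gravitational constant, m^3/(kg*s^2)
KPC = 3.0857e19  # one kiloparsec in metres
M_SUN = 1.989e30 # solar mass, kg

def enclosed_mass(v, r):
    # Mass interior to radius r implied by circular speed v: M = v^2*r/G.
    return v**2 * r / G

v_flat = 220.0e3                 # flat rotation speed, m/s (assumed)
for r_kpc in (5.0, 10.0, 20.0):
    M = enclosed_mass(v_flat, r_kpc * KPC)
    print(r_kpc, M / M_SUN)      # enclosed solar masses - grows with r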

Most galaxies occupy groups or clusters with about 10 to hundreds of members. Each cluster is held together by the gravity of its member galaxies. The velocities of the members are directly related to the total mass of the cluster. This fact can be used to test for the presence of unseen matter. Since the physics of the motions of galaxies is so basic, there is no escaping the conclusion that a majority of the matter in the Universe has not been identified, and that the matter around us is special. The question that remains is whether dark matter is baryonic (normal) or a new substance. All the current postulates on baryonic dark matter candidates suffer from infirmities. Neutron stars and black dwarf stars are ruled out, as the time scale for the initial material to cool and form such objects is too long and the Universe is too young for that. Black holes are ruled out, as the mechanism of their formation from the primordial material cannot be explained. Brown dwarf stars, planets and rocks are ruled out, as there are not enough of them in nearby places to account for the required mass. The current postulates on non-baryonic dark matter candidates also suffer from infirmities. The neutrino mass is too small to account for the required mass. There is no evidence for Weakly Interacting Massive Particles (WIMPs), cosmic strings or modified gravity.

The current estimate is that about 20% of dark matter is probably in the form of massive neutrinos, even though their mass is uncertain, and another 5% to 10% is in the form of stellar remnants and low-mass brown dwarfs. The rest is called CDM (cold dark matter), of unknown origin but probably cold and heavy. The combination of all these mixtures makes up only 20 to 30% of the mass necessary to close the Universe. Thus, the Universe appears to be open, i.e., ΩM is 0.3. With the convergence of our measurement of Hubble’s constant and ΩM, the end appeared to be in sight for the determination of the geometry and age of our Universe. However, all was thrown into turmoil recently with the discovery of dark energy.

Dark energy is implied by the fact that the Universe appears to be accelerating, rather than decelerating, as measured by distant supernovae. This new observation implies that something is missing from our understanding of the dynamics of the Universe - in mathematical terms, that something is missing from Friedmann’s equation. That missing something is supposed to be the cosmological constant, Λ:
H^2 = (8πG/3)ρ - kc^2/R^2 + Λ/3
Einstein first introduced Λ into his original equations to produce a static Universe. However, until the supernova data, there were no data to support its existence in anything other than a mathematical way. The implication here is that there is some sort of pressure in the fabric of the Universe that counteracts gravity and pushes the expansion faster. A pressure is usually associated with some sort of energy; hence this pressure has been named dark energy. As with dark matter, scientists do not know its origin or characteristics. It is only known that it contributes about 0.7 to Ω, called ΩΛ, so that matter plus dark energy gives an Omega of 1, i.e., a flat Universe.

With a cosmological constant, the possible types of Universe are very open: almost any combination - massive or light, open or closed curvature, open or closed history - is possible. Also, with a high Λ, the Universe could race away. However, observations and measurements of Ω constrain the possible models of the Universe. The data give ΩΛ = 0.7 and ΩM = 0.3. This results in Ωk = 0, i.e., flat curvature. This is sometimes referred to as the Benchmark Model, which gives an age of the Universe of 12.5 billion years.
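The quoted age follows from integrating the Friedmann equation. A minimal numerical sketch (the 12.5-billion-year figure corresponds to a somewhat larger Hubble constant than the 70 km/s/Mpc assumed here):

import math

def age_gyr(h0_km_s_mpc, omega_m, omega_l):
    # t0 = (1/H0) * integral_0^1 da / (a * sqrt(omega_m/a^3 + omega_l)),
    # evaluated with a simple midpoint rule for a flat Universe.
    h0 = h0_km_s_mpc / 3.0857e19            # convert to 1/s
    n, total = 100000, 0.0
    for i in range(1, n + 1):
        a = (i - 0.5) / n
        total += (1.0 / n) / (a * math.sqrt(omega_m / a**3 + omega_l))
    return total / h0 / (3.156e7 * 1.0e9)   # seconds -> billions of years

print(age_gyr(70.0, 0.3, 0.7))   # ~13.5; ~12.5 requires H0 near 75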

A new theory says that dark matter and dark energy could arise from a single dark fluid that permeates the whole universe, which could mean that Earth-based dark matter searches will come up empty. According to some scientists, a fluid-like dark energy can act like dark matter when its density becomes high enough. They compare this dark fluid to Earth’s atmosphere: atmospheric pressure causes air to expand, but part of the air can collapse to form clouds. In the same way, the dark fluid might generally expand, but it could also collect around galaxies to help hold them together. Their model involves positing a preferred time direction - in some sense a special time frame - thus modifying Einstein’s theory of general relativity. The idea is similar to the “ether”, an invisible medium that physicists once thought light waves traveled through. Einstein’s relativity did away with the need for such a medium, but cosmologists have recently found that an ether-like substance can mimic dark matter. The presence of such a substance changes the way gravity works. This is most noticeable in the distant outskirts of a galaxy, where the galaxy’s gravitational pull would be expected to be small, but the ether makes it much stronger. The ether effectively softens space-time in regions of low [gravitational] acceleration, making it more sensitive to the presence of mass than usual. This approach can match a lot of astronomical data, as reported in a recent article in Astrophysical Journal Letters. The fluid divides itself into a dark energy part and a dark matter part in the same ratio that is seen from observations (dark energy is about 75 percent of the universe’s mass-energy content, while dark matter is about 21 percent and normal matter makes up the last 4 percent). This fluid can keep galaxies from flying apart just as well as dark matter can. Although the fluid is all around us, it does not affect the motion of Earth or the other planets, because data show that our solar system obeys traditional gravity to very high accuracy. But the fluid does affect the speed at which galaxies can rotate. The dark matter model has been tested in the bullet cluster of galaxies, where a massive collision appears to have stripped hot gas from its dark matter envelope. This “naked” dark matter was seen as iron-clad proof of traditional dark matter theories, but the new fluid can reproduce the same effect.

To confuse the issue further, some galaxies have been found to be devoid of dark matter. The current models of galaxy formation hold that galaxies form inside dark matter haloes. In the outer regions of most galaxies, stars orbit the center so fast that they should fly away unless held by dark matter. But in the spiral galaxy NGC 4736, the rotation slows down as we move farther out from the crowded inner reaches of the galaxy. Rotation measurements stretching up to 35,000 light-years from the galactic center show a declining rotation curve, as if there were no extended halo of dark matter. Ordinary luminous stars and gas can account for all the mass in NGC 4736, which suggests that the galaxy does not contain any dark matter. Other galaxies have shown declining rotation curves also, but later observations have always shown that beyond a certain distance they flatten out, which cannot be explained by ordinary gravity from visible stars and gas.

The above problem could be completely reversed if we consider dark matter and dark energy as “shortcomings” of General Relativity in its simplest formulation (a theory linear in the Ricci scalar R, minimally coupled to standard perfect-fluid matter) and devise the “correct” theory of gravity by matching the largest number of observational data, without imposing any theory a priori. More on this will be discussed later. For the present, let us go back to particle physics. But before understanding the Vaidic principles, it is necessary to understand the fundamental concept of creation.

One of the most mysterious objects in the universe is what is known as the black hole. It is said to be the ultimate fate of a super-massive star that has exhausted the fuel that sustained it for millions of years. In such a star, gravity overwhelms all other forces and the star collapses under its own gravity to the size of a pinprick. It is called a black hole as nothing - not even light - can escape it. A black hole has two parts. At its core is a singularity, the infinitesimal point into which all the matter of the star gets crushed. Surrounding the singularity is the region of space from which escape is impossible, the perimeter of which is called the event horizon. Once something enters the event horizon, it loses all hope of exiting.

The known laws of physics are clear that a singularity forms, but they are hazy about the event horizon. What exactly happens at a singularity? Matter is crushed, but what becomes of it then? Most people assume that a horizon must indeed form; the horizon is very appealing as a scientific fig leaf. By hiding the singularity, it isolates the gap in our knowledge: all kinds of processes unknown to science may occur at the singularity, yet they have no effect on the outside world. Astronomers plotting the orbits of planets and stars can safely ignore the uncertainties introduced by singularities and apply the standard laws of physics with confidence. Whatever happens in a black hole stays in a black hole. Yet researchers have found a wide variety of stellar collapse scenarios in which an event horizon does not form, so that the singularity remains exposed to our view. Physicists call it a naked singularity. Matter and radiation can both fall in and come out. Whereas visiting the singularity inside a black hole would be a one-way trip, in principle one could come as close as one likes to a naked singularity and return to tell the tale.

If naked singularities exist, the implications would be enormous and would touch on nearly every aspect of physics. The lack of horizons could mean that the mysterious processes occurring near singularities would impinge on the outside world. Naked singularities might account for unexplained high-energy phenomena that astronomers have seen, and they might offer a laboratory to explore the fabric of space-time on its finest scales. Event horizons were supposed to have been the easy part about black holes. The concepts of the event horizon and the light cone are based on twisted logic. If we take two spatial and one time dimension, the time evolution of a pulse of light will be represented by a chain of concentric circles. If we add the third spatial dimension, it would be a chain of concentric spheres, not cones. Some books show the time evolution in two spatial and one time dimensions as a plane moving in the direction of time, which is wrong. In the said example, it is the pulse of light that is evolving, not the plane or the space. It cannot be assumed that the light pulse carries the space with it; that is contrary to experience. The light pulse only illuminates different regions of space in its time evolution.

Singularities are mysterious. They are places where the strength of gravity becomes infinite and the known laws of physics break down. According to the current understanding of gravity, encapsulated in Einstein’s general theory of relativity, singularities inevitably arise during the collapse of a giant star. General relativity does not account for the quantum effects that become important for microscopic objects. Maybe those effects intervene to prevent the strength of gravity from becoming truly infinite! Scientists are still struggling to develop the quantum theory of gravity that could explain singularities.

What happens to the region of space-time around the singularity appears to be rather straightforward. Stellar event horizons are many kilometers in size, far larger than the typical scale of quantum effects. Assuming that no new forces of nature intervene, event horizons should be governed purely by general relativity. But applying the theory to stellar collapse is still a formidable task. Einstein’s equations of gravity are notoriously complex, and solving them requires physicists to make simplifying assumptions. To simplify the equations, some scientists considered only perfectly spherical stars: they assumed that the stars consisted of gas of homogeneous (uniform) density and that the gas pressure was negligible. They found that as this idealized star collapses, the gravity at its surface intensifies and eventually becomes strong enough to trap all light and matter, thereby forming an event horizon. The star becomes invisible to outside observers and soon thereafter collapses all the way down to a singularity.

But real stars are more complicated. Their density is inhomogeneous, the gas in them exerts pressure, and they can assume other shapes. Does every sufficiently massive collapsing star turn into a black hole? Some scientists suggested that the answer is yes. They conjectured that the formation of a singularity during stellar collapse necessarily entails the formation of an event horizon; Nature thus forbids us from ever seeing a singularity, because a horizon always cloaks it. This conjecture is termed the cosmic censorship hypothesis.

In 1973 some scientists found that layers of in-falling matter could intersect to create momentary singularities that were not covered by horizons. Although the density at one location became infinite, the strength of gravity did not. Thus, the singularity did not crush matter and in-falling objects to an infinitesimal pinprick, general relativity never broke down, and matter continued to move through this location rather than meeting its end. Subsequently, a numerical simulation was performed of a star with a realistic density profile - highest at its center and slowly decreasing toward the surface. The studies found that the star shrank to zero size and that a naked singularity resulted. But the model still neglected pressure, and it was shown that the singularity was gravitationally weak.

Further studies showed that in a wide variety of situations, collapse ends in a naked singularity and that most naked singularities are stable to small variations of the initial setup. These counterexamples to Penrose’s conjecture suggested that cosmic censorship is not a general rule. Some scenarios lead to a black hole and others to a naked singularity. In some models, the singularity is visible only temporarily, and an event horizon eventually forms to cloak it. In others, the singularity remains visible forever. Typically the naked singularity develops in the geometric center of collapse, but it does not always do so, and even when it does, it can also spread to other regions. Nakedness also comes in degrees: an event horizon might hide the singularity from the prying eyes of faraway observers, whereas observers who fell through the event horizon could see the singularity prior to hitting it.

According to Einstein’s theory, gravity is a complex phenomenon involving not only a force of attraction but also effects such as shearing, in which different layers of material are shifted laterally in opposite directions. If the density of a collapsing star is very high - so high that by all rights it should trap light - but also inhomogeneous, those other effects may create escape routes. Shearing of material close to a singularity, for example, can set off powerful shock waves that eject matter and light - in essence, a gravitational typhoon that disrupts the formation of an event horizon. To be specific, consider a homogeneous star, neglecting gas pressure (pressure alters the details but not the broad outlines of what happens). As the star collapses, gravity increases in strength and bends the paths of moving objects ever more severely. Light rays, too, become bent, and there comes a time when the bending is so severe that light can no longer propagate away from the star. The region where light becomes trapped starts off small, grows, and eventually reaches a stable size proportional to the star’s mass. Meanwhile, because the star’s density is uniform in space and varies only in time, the entire star is crushed to a point simultaneously. The trapping of light occurs well before this moment, so the singularity remains hidden. Now consider the same situation, except that the density decreases with distance from the center.

In effect, the star has an onion-like structure of concentric shells of matter. The strength of gravity acting on each shell depends on the average density of the matter interior to that shell. Because the denser inner shells feel a stronger pull of gravity, they collapse faster than the outer ones; the entire star does not collapse to a singularity simultaneously. The innermost shells collapse first, and then the outer shells pile on, one by one. The resulting delay can postpone the formation of an event horizon. If a horizon can form anywhere, it will form in the dense inner shells. But if the density decreases too rapidly with distance, those shells may not contain enough mass to trap light, and the singularity, when it forms, will be naked. There is therefore a threshold: if the degree of inhomogeneity is below a critical limit, a black hole will form; with sufficient inhomogeneity, a naked singularity arises.
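The staggered collapse can be illustrated with the Newtonian free-fall time, which shortens as the mean interior density rises (a sketch with purely illustrative densities; this is the standard free-fall formula, not a general-relativistic calculation):

import math

G = 6.674e-11   # gravitational constant, m^3/(kg*s^2)

def freefall_time(mean_density):
    # Newtonian free-fall time for a shell enclosing matter of the
    # given mean interior density: t = sqrt(3*pi / (32*G*rho)).
    return math.sqrt(3.0 * math.pi / (32.0 * G * mean_density))

for rho in (1.0e5, 1.0e4, 1.0e3):    # density decreasing outward, kg/m^3
    print(rho, freefall_time(rho))   # denser inner shells collapse first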

In other scenarios, the salient issue is the rapidity of collapse. This effect comes out very clearly in models where the stellar gas has converted fully to radiation - in effect, the star becomes a giant fireball. Again there is a threshold: slowly collapsing fireballs become black holes, but if a fireball collapses rapidly enough, light does not become trapped and the singularity is naked. One reason it has taken so long for physicists to accept the possibility of naked singularities is that they raise a number of conceptual puzzles. A commonly cited concern is that such singularities would make nature inherently unpredictable. Because general relativity breaks down at singularities, it cannot predict what those singularities will do. As long as singularities remain safely ensconced within event horizons, this randomness remains contained and general relativity is a fully predictive theory, at least outside the horizon. But if singularities can be naked, their unpredictability would infect the rest of the universe.

Unpredictability is actually common in general relativity and not always directly related to censorship violation. The theory permits time travel, which could produce causal loops with unforeseeable outcomes, and even ordinary black holes can become unpredictable. For example, if we drop an electric charge into an uncharged black hole, the shape of space-time around the hole radically changes and is no longer predictable. A similar situation holds when the black hole is rotating. Specifically, what happens is that space-time no longer neatly separates into space and time, so physicists cannot consider how the black hole evolves from some initial time into the future. Only the purest of pure black holes, with no charge or rotation at all, is fully predictable. The loss of predictability and other problems with black holes actually stem from the occurrence of singularities; it does not matter whether they are hidden or not.

NEWTON’S FORMULATIONS:
From the Galilean experiment, Newton introduced the notions of force and of inertial mass. When a force is applied to a material body, it changes the body’s speed or direction of motion or both; Newton concluded that the inertial mass of the body opposes these changes. In the force equation for gravity, the masses that are used are the gravitational masses, mg. The acceleration derived from Newton’s second law is inversely proportional to the inertial mass, mi, which describes a different property of a body: how it reacts to a force to acquire its acceleration. It has been found experimentally that all bodies have the same ratio mg/mi. This comes out of the Eötvös experiment, especially in its more modern incarnations. The measurements of Roll, Krotkov and Dicke in the early 1960s (Misner, Thorne & Wheeler, pages 14-17) showed that the variation in mg/mi over all bodies is less than 10^-12, i.e., the gravitational acceleration is independent of the mass of the body being accelerated. For no other force is the magnitude of the acceleration independent of the “charge” (e.g., electric charge, or color). A higher electric charge on a particle causes it to accelerate faster in an electric field; a larger color charge causes stronger interactions in the strong force. But for gravitation, doubling the mass of a body has no effect on its acceleration.
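The point is easiest to see in code: once mg = mi, the test mass cancels and the computed acceleration is the same for every body (a sketch with standard values for the Earth):

G = 6.674e-11        # gravitational constant, m^3/(kg*s^2)
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def acceleration(test_mass):
    # F = G*M*m/r^2 and a = F/m: the test mass divides back out.
    force = G * M_EARTH * test_mass / R_EARTH**2
    return force / test_mass

for m in (0.001, 1.0, 1000.0):
    print(m, acceleration(m))    # ~9.82 m/s^2 for every test mass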

Since acceleration in a gravitational field is independent of mass, and independent of what a body is made of, some scientists absorb the ratio mg/mi into the definition of G and write:
mg = mi = m.
They say that gravitation causes an acceleration of a body which is a function of all other bodies’ locations and histories. The concept of inertial mass has been accepted without further examination; to date its existence has not been proved. All such controversies can be removed if we consider the macro example of a group of persons traveling in a train. Irrespective of their individual activities, which belong to a different class of motion, the train carrying them travels at constant velocity. When they get down, their individual activities may cause them to alight in a slightly different order and position than those in which they boarded. Similarly, the falling bodies behaved as if they were “boarding a train”. This implies that everything in the Universe – the falling bodies included – is moving through some field or the other. Some call this the C-field or the gravito-magnetic field. Newtonian gravitation gives no clue as to why inertial and gravitational mass are so accurately proportional - which means that we can take the overall constant of proportionality into the gravitational constant, G, and call them equal. Alternatively, we can assume that there is nothing called inertial mass.

One important point that has been missed in the Galilean experiment is related to penetrating power. The body dropped from a height falls to the ground where it remains static. If it falls on water, it goes to the bottom. If it falls on mud, it goes down till it finds a solid surface of higher density than the object. On the contrary, if the body is less dense, it floats on water. This shows a relationship between density and penetrating power of an object. Density is related to mass and binding energy. Thus, solids have more penetrating power than fluids (liquids and gases), which in turn have more penetrating power than plasma. Similarly, strong nuclear interaction (we call it dhruva) is stronger than electroweak interaction (we call it dhartra), which in turn is stronger than radioactive disintegration (we call it dharuna). We call this characteristic of penetration vishtambhakatwa, which literally means breaking confinement. The idea is that all objects (and not only quarks) are confined bodies. A body cannot interact with any other body without breaking its confinement. Such interaction takes place only after the relative densities exceed some critical value. Once this value is exceeded, the denser body moves through the less dense medium at varying velocities proportionate to the relative densities between them. This has given rise to the theory of transition states in chemical reactions.

The apple fell not because the Earth pulled it due to gravity. Gravity was present all along. But the apple fell only after it ripened. Ripening led to softening or reduction of the relative density of the apple and its stem. While the stem remained dense, the apple became less dense. The positive and negative charges of the apple and the Earth attracted each other to attain equilibrium. When this attraction exceeded the binding energy of the stem and the apple, the apple fell.

The above description shows that the falling bodies accelerate not due to gravity, but because of the additional force supplied by mass due to vishtambhakatwa, as air is much less dense than the falling bodies, whose confinement remains above the critical value. But then force is experienced only in a field. A medium or a field is said to be a substance or material which carries the wave. It is a region of space characterized by a physical property having a determinable value at every point in the region. This means that if we put something appropriate in a field, we can then notice “something else” out of that field, which makes the body interact with other objects put in that field in some specific ways that can be measured or calculated. This “something else” is a type of force. Depending upon the nature of that force, the scientists categorize the field as gravity field, electric field, magnetic field, electromagnetic field, etc. The laws of modern physics suggest that fields represent more than the possibility of the forces being observed. They can also transmit energy and momentum. A light wave is a phenomenon that is completely defined by fields. Here it is important to remember that like a particle, the field also has a boundary, but unlike a particle, it is not a rigid boundary. A particle interacts with its environment as a stable system - as a whole. Its equilibrium is within. It is always rigidly confined till it breaks up due to some external or internal effect. A field, on the other hand, interacts with its environment to bring in uniform density – to bring in equilibrium with the environment. These are the distinguishing characteristics that are revealed in fermions and bosons and explain superposition of states.

While the concept of fields as mediators for the transmission of forces is intuitively helpful, the standard definition of a field is somewhat different. A field is generally defined as a system with an infinite number of degrees of freedom for which certain field equations must hold. A point particle, in contrast, can be described by its position x(t) which changes as the time t progresses so that, in a three-dimensional space, there are three degrees of freedom for the motion of a point particle corresponding to the three coordinates of the particle’s position. In the case of a field the description is more complex since one needs a specification of a field value φ for each point x in space where this specification can change as the time t progresses. A field is therefore specified by φ(x,t), i.e., a (time-dependent) mapping from each point of space to a field value. Whereas the general intuitive notion of a field is that it is something transient and fundamentally different from matter, it is perfectly normal in physics to ascribe energy and even momentum to a pure field where no particles are present. This surprising feature shows how gradual the distinction between fields and matter can be.
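
This definition can be illustrated with a toy computation. The minimal sketch below (a one-dimensional discretized scalar field obeying the ordinary wave equation; the grid size, time step and initial profile are illustrative assumptions) stores one value φ(x, t) per grid point and shows that the field carries energy even though no particle is present:

import math

# assumed illustrative parameters: grid points, spacings, wave speed
N, dx, dt, c = 200, 0.01, 0.004, 1.0
phi      = [math.exp(-((i*dx - 1.0)/0.1)**2) for i in range(N)]  # initial bump
phi_prev = phi[:]                                                # field at rest

for step in range(200):
    phi_next = phi[:]
    for i in range(1, N - 1):              # one value per point x: infinitely
        lap = (phi[i+1] - 2*phi[i] + phi[i-1])/dx**2  # many DOF in the limit
        phi_next[i] = 2*phi[i] - phi_prev[i] + (c*dt)**2 * lap
    phi_prev, phi = phi, phi_next

# Even with no particles present, the field carries energy:
energy = sum(0.5*((phi[i] - phi_prev[i])/dt)**2
             + 0.5*c**2*((phi[i+1] - phi[i])/dx)**2
             for i in range(N - 1)) * dx
print(f"field energy after 200 steps: {energy:.4f} (arbitrary units)")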

From the above analysis, it is clear that since the falling bodies in the Galilean experiment experienced a force that accelerated them equally, they were moving in a field whose density was less than that of the bodies. Since they were falling at the same rate irrespective of their mass, this field acted like a train carrying passengers, where the velocity of all passengers with respect to the Earth remains constant (the same as that of the train) irrespective of their individual mass. Hence the bodies were moving with the velocity of the field, whereas vishtambhakatwa was providing the additional force for the bodies to accelerate with time, just as passengers moving ahead inside the train may alight ahead of others. This has been wrongly interpreted as inertial mass.

TRANSITION STATES:

Transition states are surfaces (manifolds) in the so-called many-dimensional phase space (see The Ten Dimensions below in the footnotes) – the set of all possible positions and momenta that particles can attain – that regulate mass transport through bottlenecks in that phase space. The transition rates are then computed using a statistical approach developed in chemical dynamics. The rate of intra-molecular energy redistribution is first related to the reaction rate, which is then expressed as the ratio of the flux across the transition state divided by the total volume of phase space associated with the reactants.
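
As a concrete illustration of a rate computed this way, the following minimal sketch evaluates the standard Eyring transition-state expression k = (kBT/h) exp(-ΔE/kBT), i.e. the flux across the dividing surface divided by the reactant population, for an assumed barrier height:

import math

kB = 1.380649e-23   # Boltzmann constant, J/K
h  = 6.62607015e-34 # Planck constant, J s
dE = 8.0e-20        # assumed barrier height, J (about 0.5 eV)

for T in (300.0, 500.0, 1000.0):
    k = (kB*T/h) * math.exp(-dE/(kB*T))   # Eyring transition-state rate
    print(f"T = {T:6.1f} K -> rate k = {k:.3e} 1/s")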

It was found, during a study of the potential energy surface of the collinear chemical reaction between a hydrogen atom and a hydrogen molecule in which one atom changes partners, that the surface contains a minimum associated with the reactants and another minimum for the products. They are separated by a barrier that must be crossed for the chemical reaction to take place. The path of steepest ascent from the barrier’s saddle point is called the “transition state” of the surface. Once the transition state is crossed, it can be re-crossed due to dynamical effects arising from coupling terms in the kinetic energy.

It is already known that the surface of minimum flux, corresponding to the transition state, must be an unstable periodic orbit whose projections onto configuration space connect the two branches of the relevant equipotentials. As a result, these surfaces of minimum flux are called “periodic orbit dividing surfaces” (PODS). Since this construction describes how a set of “reactants” in a Hamiltonian dynamical system evolves into a set of “products”, it can be used to study “reaction rates” in a diverse array of physical situations. In such a system, each position–momentum pair constitutes one of the system’s “degrees of freedom” (DOF). The partitioning of phase space into separate regions corresponding to reactants and products can then be studied with the tools of dynamical systems theory.

For two-DOF Hamiltonian systems, the stable and unstable manifolds of the orbit provide an invariant partition of the system’s energy shell into reactive and non-reactive dynamics. The defining periodic orbit also bounds a surface in the energy shell (on which the Hamiltonian is constant), partitioning it into reactant and product regions. This defines a surface of no return and yields an unambiguous measure of the flux between the reactants and products.

LIBRATION POINTS AND THE ROCHE LIMIT:

A mechanical system with three objects, say the Earth, Moon and Sun, constitutes a three-body problem. The three-body problem is famous in both mathematics and physics circles, and mathematicians eventually proved that no general closed-form solution exists. However, approximate solutions can be very useful, particularly when the masses of the three objects differ greatly. One of Lagrange’s observations from the potential contours was that there were five points at which the third body could be at equilibrium, points which are now referred to as Libration points or Lagrange points. The Libration points L1, L2, and L3 are unstable equilibrium points. Like standing a pencil on its point, it is possible to achieve equilibrium, but any displacement away from that equilibrium would lead to forces that take it further away from equilibrium. Remarkably, the Libration points L4 and L5 are stable equilibrium points for the small mass in the three-body system, and this three-body geometry can be maintained as M2 orbits about M1.
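
The location of an unstable point such as L1 is easy to find numerically. The minimal sketch below (a standard Newtonian force balance in the co-rotating frame, with rounded Earth–Moon parameters) bisects for the radius between the two bodies at which gravity from both and the centrifugal term cancel:

G  = 6.674e-11       # gravitational constant
M1 = 5.972e24        # Earth, kg
M2 = 7.342e22        # Moon, kg
d  = 3.844e8         # Earth-Moon separation, m
w2 = G*(M1 + M2)/d**3          # omega^2 from Kepler's third law
xc = d*M2/(M1 + M2)            # barycentre distance from Earth

def net_accel(x):
    """Net radial acceleration in the rotating frame at distance x from Earth."""
    return G*M1/x**2 - G*M2/(d - x)**2 - w2*(x - xc)

lo, hi = 0.5*d, 0.99*d         # L1 lies somewhere between the two bodies
for _ in range(60):            # bisection on the sign change
    mid = 0.5*(lo + hi)
    if net_accel(mid) > 0:
        lo = mid
    else:
        hi = mid
print(f"L1 is about {lo/1e3:.0f} km from Earth ({lo/d:.3f} of the separation)")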

Originally, Lagrange had set out to discover a way to easily calculate the gravitational interaction between an arbitrary number of bodies in a system, because Newtonian mechanics concludes that such a system results in the bodies orbiting chaotically until there is a collision, or a body is thrown out of the system so that equilibrium can be achieved. The logic behind this conclusion is that a system with one body is trivial, as it is merely static relative to itself; a system with two bodies is the relatively simple two-body problem, with the bodies orbiting around their common center of mass. However, once more than two bodies are introduced, the mathematical calculations become very complicated: it becomes necessary to calculate the gravitational interaction between every pair of objects at every point along their trajectories.

Lagrange, however, wanted to make this simpler. He did so with a simple hypothesis: the trajectory of an object is determined by finding the path that minimizes the action over time, the action being the time integral of the kinetic energy minus the potential energy. A stationary particle is a confined object whose internal energy is balanced by the energy of its external field. This is called the potential energy. When the position of the object is disturbed without affecting its internal energy distribution, it moves on inertia at a constant velocity (assuming other forces are absent). This means its internal energy continues to be balanced by the energy of its external field. This energy of motion, which is called the kinetic energy, is ½mv².
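
The hypothesis can be checked directly. The minimal sketch below (a toy one-dimensional problem in uniform gravity; the mass, duration and perturbation sizes are illustrative assumptions) discretizes the action and shows that the true free-fall path gives a smaller action than neighbouring trial paths sharing the same endpoints:

import math

m, g, T, N = 1.0, 9.81, 1.0, 1000   # assumed mass, gravity, duration, steps
dt = T/N

def action(path):
    """Discretized action: sum of (kinetic - potential) * dt over segments."""
    S = 0.0
    for i in range(N):
        v = (path[i+1] - path[i])/dt          # velocity on this segment
        x = 0.5*(path[i+1] + path[i])         # midpoint height
        S += (0.5*m*v*v - m*g*x)*dt
    return S

t = [i*dt for i in range(N + 1)]
true_path = [0.5*g*T*ti - 0.5*g*ti*ti for ti in t]   # free fall, x(0)=x(T)=0
for eps in (0.0, 0.1, 0.2):                          # bulge the path upward
    trial = [x + eps*math.sin(math.pi*ti/T) for x, ti in zip(true_path, t)]
    print(f"perturbation {eps:.1f}: action S = {action(trial):.5f} J*s")

The printed action is smallest at zero perturbation and grows as the trial path is deformed, which is exactly the stationarity Lagrange exploited.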

With this way of thinking, Lagrange re-formulated the classical Newtonian mechanics to give rise to Lagrangian mechanics. With his new system of calculations, Lagrange’s work led him to hypothesize how a third body of negligible mass would orbit around two larger bodies which were already in a near-circular orbit. In a frame of reference that rotates with the larger bodies, he found five specific fixed points where the third body experiences zero net force as it follows the circular orbit of its host bodies (planets). These points were named “Lagrangian points” in Lagrange’s honor. It took over a hundred years before his mathematical theory was validated by the discovery of the Trojan asteroids at the L4 and L5 Lagrange points of the Sun–Jupiter system in 1906.

In the more general case of elliptical orbits, there are no longer stationary points in the same sense: it becomes more of a Lagrangian “area”. The Lagrangian points constructed at each point in time, as in the circular case, form stationary elliptical orbits which are similar to the orbits of the massive bodies. This is due to Newton’s second law (F = dp/dt, where p = mv is the momentum, m the mass, and v the velocity), which is invariant if force and position are scaled by the same factor. A body at a Lagrangian point orbits with the same period as the two massive bodies in the circular case, implying that it has the same ratio of gravitational force to radial distance as they do. This fact is independent of the circularity of the orbits, and it implies that the elliptical orbits traced by the Lagrangian points are solutions of the equation of motion of the third body.

One of the contributions of Lagrange was to plot contours of equal gravitational potential energy for systems where the third mass was very small compared to the other two, such as the Earth-Moon system. The equipotential contour that makes a figure-8 around both masses is important in assessing scenarios where one partner loses mass to the other. These equipotential loops form the basis for the concept of the Roche lobe.

The Roche lobe is the region of space around a star within which orbiting material is gravitationally bound to that star. If the star expands past its Roche lobe, then the material can escape the gravitational pull of the star. If the star is in a binary system then the material will fall in through the inner Lagrangian point. It is an approximately tear-drop shaped region bounded by a critical gravitational equipotential, with the apex of the tear-drop pointing towards the other star (and the apex is at the L1 Lagrangian point of the system). It is different from the Roche limit which is the distance at which an object held together only by gravity begins to break up due to tidal force. It is different from the Roche sphere which approximates the gravitational sphere of influence of one astronomical body in the face of perturbations from another heavier body around which it orbits.

In a binary system with a circular orbit, it is often useful to describe the system in a coordinate system that rotates along with the objects. In this non-inertial frame, one must consider centrifugal force in addition to gravity. The two together can be described by a potential, so that, for example, the stellar surfaces lie along equipotential surfaces. Close to each star, surfaces of equal gravitational potential are approximately spherical and concentric with the nearer star. Far from the stellar system, the equipotentials are approximately ellipsoidal and elongated parallel to the axis joining the stellar centers. A critical equipotential intersects itself at the L1 Lagrangian point of the system, forming a two-lobed figure-of-eight with one of the two stars at the center of each lobe. This critical equipotential defines the Roche lobes. Where matter moves relative to the co-rotating frame, it will seem to be acted upon by a Coriolis force. This is not derivable from the Roche lobe model, as the Coriolis force is a non-conservative force (i.e. not representable by a scalar potential).

[Figure: a three-dimensional representation of the Roche potential in a binary star with a mass ratio of 2, in the co-rotating frame. The droplet-shaped figures in the equipotential plot at the bottom of the figure are the Roche lobes of each star. L1, L2, and L3 are the Libration points where forces cancel out. Mass can flow through the saddle point L1 from one star to its companion, if the star fills its Roche lobe.]

The Roche Limit is the radius inside which a satellite, held together only by its gravity, will disintegrate under the tidal forces of the body about which it is orbiting. When you draw a set of equipotential curves for the gravitational potential energy for a small test mass in the vicinity of two orbiting stars, there is a critical curve shaped like a figure-8 which can be used to portray the gravitational domain of each star. If you rotate the figure-8 around the line joining the two stars, you produce two lobes known as Roche lobes, after the French mathematician Edouard Roche.

When a star “exceeds its Roche lobe”, its surface extends out beyond its Roche lobe and the material which lies outside the Roche lobe can “fall off” into the other object’s Roche lobe via the first Lagrangian point. In binary evolution this is referred to as mass transfer via Roche-lobe overflow. In principle, mass transfer could lead to the total disintegration of the object, since a reduction of the object’s mass causes its Roche lobe to shrink. However, there are several reasons why this does not happen in general. First, a reduction of the mass of the donor star may cause the donor star to shrink as well, possibly preventing such an outcome. Second, with the transfer of mass between the two binary components, angular momentum is transferred as well. While mass transfer from a more massive donor to a less massive accretor generally leads to a shrinking orbit, the reverse causes the orbit to expand (under the assumption of mass and angular-momentum conservation). The expansion of the binary orbit will lead to a less dramatic shrinkage or even expansion of the donor's Roche lobe, often preventing the destruction of the donor.
To determine the stability of the mass transfer, and hence the exact fate of the donor star, one needs to take into account how the radius of the donor star and that of its Roche lobe react to the mass loss from the donor; if the star expands faster than its Roche lobe or shrinks less rapidly than its Roche lobe for a prolonged time, mass transfer will be unstable and the donor star may disintegrate. If the donor star expands less rapidly or shrinks faster than its Roche lobe, mass transfer will generally be stable and may continue for a long time.
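
This radius-versus-lobe competition can be sketched quantitatively. The following minimal sketch is a standard illustration, not taken from the text above: it assumes conservative mass transfer (total mass and orbital angular momentum fixed, so the separation scales as 1/(M1M2)²), uses Eggleton’s fitting formula for the lobe radius, and takes an assumed mass–radius exponent of 0.8 for a main-sequence donor:

import math

def lobe_fraction(q):
    """Eggleton's fit: Roche-lobe radius / separation for q = Mdonor/Maccretor."""
    return 0.49*q**(2/3)/(0.6*q**(2/3) + math.log(1.0 + q**(1/3)))

def zeta_lobe(mdon, macc, h=1.0e-6):
    """Logarithmic response d ln R_L / d ln Mdonor under conservative transfer."""
    M = mdon + macc                       # total mass fixed
    def ln_RL(m1):
        m2 = M - m1
        a = 1.0/(m1*m2)**2                # separation, up to a constant, at fixed J
        return math.log(a*lobe_fraction(m1/m2))
    return (ln_RL(mdon*(1 + h)) - ln_RL(mdon*(1 - h)))/(2.0*h)

zeta_star = 0.8                           # assumed donor exponent, R ~ M^0.8
for q in (0.5, 1.0, 1.5, 2.0):            # donor-to-accretor mass ratio
    zl = zeta_lobe(q, 1.0)
    print(f"q = {q:3.1f}: zeta_lobe = {zl:+.2f} ->",
          "stable" if zeta_star > zl else "unstable")

In this toy picture, transfer from a donor much more massive than its companion comes out unstable, in line with the qualitative argument above.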

Mass transfer due to Roche-lobe overflow is responsible for a number of astronomical phenomena, including Algol systems, recurrent novae (binary stars consisting of a red giant and a white dwarf that are sufficiently close together that material from the red giant dribbles down onto the white dwarf), X-ray binaries and millisecond pulsars.

The precise shape of the Roche lobe depends on the mass ratio, and must be evaluated numerically. However, for many purposes it is useful to approximate the Roche lobe as a sphere of the same volume. An approximate formula for the radius of this sphere is:
r1/A = 0.38 + 0.2 log10(M1/M2) for 0.3 < M1/M2 < 20, and

r1/A = 0.46224 [M1/(M1 + M2)]⅓ for M1/M2 < 0.8,

where A is the semi-major axis of the system and r1 is the radius of the Roche lobe around mass M1. These formulas are accurate to within about 2%.
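
These approximations are straightforward to evaluate. A minimal sketch (the function name is ours; the formulas and validity ranges are the ones quoted above):

import math

def roche_lobe_radius(q, A=1.0):
    """Volume-equivalent Roche-lobe radius r1 around M1, with q = M1/M2."""
    if q < 0.8:
        return A*0.46224*(q/(1.0 + q))**(1.0/3.0)
    if q < 20.0:
        return A*(0.38 + 0.2*math.log10(q))
    raise ValueError("q outside the quoted ranges")

for q in (0.5, 1.0, 2.0, 10.0):
    print(f"M1/M2 = {q:4.1f} -> r1/A = {roche_lobe_radius(q):.3f}")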

ROCHE LIMIT OR ROCHE RADIUS:

It is the distance within which a celestial body, held together only by its own gravity, will disintegrate due to a second celestial body’s tidal forces exceeding the first body's gravitational self-attraction. Inside the Roche limit, orbiting material will tend to disperse and form rings, while outside the limit, material will tend to coalesce. Typically, the Roche limit applies to a satellite disintegrating due to tidal forces induced by its primary, the body about which it orbits. Some real satellites, both natural and artificial, can orbit within their Roche limits because they are held together by forces other than gravitation. Jupiter’s moon Metis and Saturn’s moon Pan are examples of such satellites, which hold together because of their tensile strength. In extreme cases, objects resting on the surface of such a satellite could actually be lifted away by tidal forces. A weaker satellite, such as a comet, could be broken up when it passes within its Roche limit.

Since tidal forces overwhelm gravity within the Roche limit, no large satellite can coalesce out of smaller particles within that limit. Indeed, almost all known planetary rings are located within their Roche limit, Saturn’s E-ring and Phoebe ring being notable exceptions. They could either be remnants from the planet's proto-planetary accretion disc that failed to coalesce into moonlets, or conversely have formed when a moon passed within its Roche limit and broke apart. It is also worth noting that the Roche limit is not the only factor that causes comets to break apart: splitting by thermal stress, internal gas pressure and rotational forces are more likely ways for a comet to split under stress.

The limiting distance to which a satellite can approach without breaking up depends on the rigidity of the satellite. At one extreme, a completely rigid satellite will maintain its shape until tidal forces break it apart. At the other extreme, a highly fluid satellite gradually deforms under tidal forces, which causes it to elongate, further compounding the tidal stresses and making it break apart more readily. Most real satellites would lie somewhere between these two extremes, with tensile strength rendering the satellite neither perfectly rigid nor perfectly fluid. The Roche limit is also usually calculated for the case of a circular orbit, although it is straightforward to modify the calculation to apply to the case (for example) of a body passing the primary on a parabolic or hyperbolic trajectory.

The rigid-body Roche limit is a simplified calculation for a spherical satellite, where the deformation of the body by tidal effects is neglected. The body is assumed to maintain its spherical shape while being held together only by its own self-gravity. Other effects are also neglected, such as tidal deformation of the primary, the rotation and orbit of the satellite, and its irregular shape. These assumptions, although unrealistic, greatly simplify the Roche limit calculation. The Roche limit for a rigid spherical satellite, excluding orbital effects, is the distance, d, from the primary at which the gravitational force on a test mass at the surface of the satellite is exactly equal to the tidal force pulling that mass away from the satellite:
d = R (2 ρM / ρm)⅓,
where R is the radius of the primary, ρM is the density of the primary, and ρm is the density of the satellite. Note that this does not depend on how large the orbiting object is, but only on the ratio of densities. This is the orbital distance inside of which loose material (e.g., regolith or loose rocks) on the surface of the satellite closest to the primary would be pulled away, and likewise material on the side opposite the primary will also be pulled away from, rather than toward, the satellite. If the satellite is more than twice as dense as the primary, as can easily be the case for a rocky moon orbiting a gas giant, then the Roche limit will be inside the primary and hence not relevant.
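
Plugging in rounded mean densities for the Earth and the Moon gives a feel for the numbers (a minimal sketch of the formula above; the density values are approximate):

R    = 6.371e6       # Earth's radius, m
rhoM = 5514.0        # Earth's mean density, kg/m^3
rhom = 3344.0        # Moon's mean density, kg/m^3

d = R*(2.0*rhoM/rhom)**(1.0/3.0)
print(f"rigid-body Roche limit: {d/1e3:.0f} km (~{d/R:.2f} Earth radii)")
# The Moon orbits at ~384,400 km, far outside this ~9,500 km limit.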
