GRAVITY - A CONCEPTUAL REVIEW.
ABSTRACT:
Everyone knows what gravity does, but no one knows what gravity is. Unlike mass, forces are inferred through their interaction with other bodies. Each application of force generates an entangled pair of equal and opposite interactions that produce impedance and stress in the medium. Within any system, this creates four entangled sets of proximity-distance variables between the bodies (proximity-proximity, proximity-distance, distance-distance, and distance-proximity). These are the four fundamental forces of Nature: the strong, two types of weak, and electromagnetic interactions respectively. They are confined intra-body variables that produce all particles in different combinations and determine dimension. Gravity is the all-pervading force that acts on each body linearly. Due to differential mass, the resultant nonlinear movement appears as an inter-body force.
INTRODUCTION:
This is a conceptual review. Should a concept begin with the language of mathematics? The validity of a physical statement rests on its correspondence to reality; the validity of a mathematical statement rests on its logical consistency. There can be physical theories without first writing down the equations. For example, there is no equation for the Observer, yet it is universally accepted in physics. Our thoughts, emotions and expressions are linguistic, not mathematical. Newton did not write the equations first. Mathematics becomes unavoidable only when we verify the validity of a concept: how much the system changes when a parameter is changed. It does not explain the what, why, when, where and with whom of the system; these are the subject matters of physics. Physics explains what gravity is; mathematics explains what gravity does. The left-hand side of an equation represents free will, as we are free to choose the parameters. The right-hand side represents determinism: once any or all parameters are changed, the right-hand side changes deterministically. The equality or inequality sign prescribes the special conditions (such as a temperature threshold in chemical reactions) or constants (G in the law of gravitation) that must be crossed before the reaction starts and gives deterministic results. Arbitrary changes to, or exchanges of, parameters between the two sides of an equation are not permitted, though this is often wrongly resorted to in mathematical physics.
Modern science is built incrementally on earlier “established theories”. Sometimes this process stretches the original theory to breaking point. Sometimes fiction dominates over physical concepts. For example, ocular perception of form is possible only with electromagnetic radiation, in which the electric and magnetic fields oscillate perpendicular to each other, and both perpendicular to the direction of motion. Hence dimension, which is the perception of differentiation between the internal structural space and the external relational space of any object (thus the ocular perception of form), is described by three mutually perpendicular axes that are invariant under mutual transformation. Yet, even after a century of failure to find extra dimensions, physicists are reluctant to discard the fiction perpetuated by the novel Flatland. Those who superstitiously believe that the “established theories” are sacrosanct need not read further.
Newton codified the then available knowledge into a mathematical format to show how a body moves under gravitation when any of the parameters, such as mass or distance, is changed. Subject to the limitations of precision measurement, it worked well. But his physics (the what, why, when, and with whom of the system) lacked precision. He treated the apple and the Earth as fixed, and explained the falling of the apple by gravity. Just before the fall, the apple had the same mass and distance, hence the same force of gravity. It did not fall earlier because it was bound to its stem by a force that surpassed a threshold signified by the gravitational constant G that permeated the field between them (the continuum). When this bond weakened (due to ripening) to below the threshold, it fell due to the change in density gradient (G has units in mass and distance scales over time). It rested on Earth because the density gradient of the continuum between the contact point and the centers of both the apple and the Earth was well above this threshold.
Einstein also treated the apple and the Earth as fixed, but explained the falling apple by a curvature of the space between them that brought them near. He explained the separation of the apple from its stem by an equal and opposite curvature of space. His explanation of gravity involved two complementary localities: that between the apple and the Earth, and that between the apple and its stem. The changes in curvature of space during that instant (spacetime) in these localities balanced each other due to the equivalence principle (EP). But unlike the Newtonian theory, it does not explain the universal threshold limit G, though he used it in his equations. Now let us examine the EP and GR, because after the recent Black Hole Information Paradox controversy it has become extremely necessary to either confirm or refute these concepts.
EQUIVALENCE PRINCIPLE REVISITED:
The cornerstone of GR is the principle of equivalence of inertial and gravitational masses: m_i = m_G. The EP does not flow from any mathematics. No one has given any mathematical reason (such as a consistency constraint) why all matter fields should couple universally to gravity. This is not the case for the other fundamental forces or the Higgs field (which is why different particles have different masses; the Higgs field is specific as to which particle couples to it, whereas gravity is a universal field, an all-pervading medium, to which every particle in the universe, whether massive or not, couples in the same way). If F = m_i a and universal free fall for all mass types hold, then a = g for every body. This can be explained only if gravity acts like a river current propelling all objects uniformly, but differentially based on their mass, causing differential local density gradients. Galactic and star systems are like a “free vortex” arising out of conflicting currents, in which the tangential velocity v increases as the center line is approached, so that the angular momentum rmv is constant.
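The free-vortex relation can be sketched numerically. This is only an illustrative sketch under the text's own assumption that the angular momentum L = rmv stays constant; the mass, radii and initial speed below are arbitrary, made-up values.

```python
# Illustrative sketch of the "free vortex" relation described in the text:
# with the angular momentum L = m*v*r held constant, the tangential velocity
# v = L/(m*r) grows as the center line is approached.
# All numbers here (mass, outer radius, outer speed) are arbitrary assumptions.

def tangential_velocity(L, m, r):
    """Tangential speed at radius r for a free vortex with angular momentum L."""
    return L / (m * r)

m = 1.0             # test mass (kg), arbitrary
v0, r0 = 2.0, 10.0  # speed (m/s) at the outer radius (m), arbitrary
L = m * v0 * r0     # conserved angular momentum

for r in (10.0, 5.0, 2.0, 1.0):
    print(f"r = {r:5.1f} m  ->  v = {tangential_velocity(L, m, r):5.1f} m/s")
```

Halving the radius doubles the speed, which is the inverse relation the paragraph describes.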
The EP has been generally accepted without much questioning. Actually, GR assumes general covariance, and the equivalence of the two masses follows: general covariance means invariance under diffeomorphisms, which implies the EP, which in turn implies that gravitational and inertial masses are equal. It is not a first principle of physics, but merely an ad hoc metaphysical concept designed to induce the uninitiated to imagine that gravity has magical non-local powers of infinite reach. The appeal to believe in such a miraculous form of gravity is very strong. Virtually everyone accepts the EP as an article of faith even though it has never been positively verified directly by either experimental or observational physics. All indirect experiments show that the equivalence or otherwise of gravitational and inertial masses is only a matter of description.
No one knows why there should be two or more mass terms. In principle there is no reason why m_i = m_G: why should the gravitational charge and the inertial mass be equal? The EP states that the effect of gravity does not depend on the nature or internal structure of a body. The experiments of Galileo, who dropped balls of different masses from the top of the Leaning Tower of Pisa, were confirmed in 1971, when Apollo 15 Commander Dave Scott performed a similar experiment. A heavy object (a 1.32 kg aluminum geological hammer) and a light object (a 0.03 kg falcon feather) were released simultaneously from the same height (approximately 1.6 m) and allowed to fall to the surface. Within the accuracy of the simultaneous release, the objects were observed to undergo the same acceleration and strike the lunar surface simultaneously. Because they were essentially in a vacuum, there was no air resistance, and the feather fell at the same rate as the hammer, proving that all objects released together fall at the same rate regardless of their mass. Thus, like c, the acceleration in free space should not be related to mass, and this independence of acceleration from mass should be built into the gravitational equation. Yet the values of G (a constant, though it might be changing: DOI 10.1103/PhysRevLett.111.101102) and g (a variable) depend on mass, like a steamer powered by its own engine that is driven by free will.
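The hammer-and-feather result can be checked with the elementary free-fall formula t = sqrt(2h/g), which contains no mass term. The lunar surface gravity of about 1.62 m/s^2 is a standard figure assumed here, not a value given in the text.

```python
# Numeric check of the Apollo 15 drop described above: for uniform
# acceleration from rest, the fall time t = sqrt(2h/g) has no mass term,
# so the hammer and the feather land together.

import math

def fall_time(height_m, g):
    """Time for an object released from rest to fall height_m under gravity g."""
    return math.sqrt(2.0 * height_m / g)

g_moon = 1.62                       # m/s^2, approximate lunar surface gravity
t_hammer = fall_time(1.6, g_moon)   # 1.32 kg hammer
t_feather = fall_time(1.6, g_moon)  # 0.03 kg feather: mass never enters

print(f"fall time on the Moon: {t_hammer:.2f} s")  # ~1.41 s for both objects
assert t_hammer == t_feather
```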
The underlying gauge symmetries that describe the fundamental interactions require the fundamental fields to be massless. The Higgs mechanism of spontaneous symmetry breaking supplies the mass that appears in the equation of motion of the field particle, i.e., m_i (in the classical limit). If we put the particle in a gravitational field, it will “feel a force” given by the “gravitational charge” times the gravitational field. This appears as two masses, m_G and m_i, though there is only one mass term associated with each field. The inertial mass is said to measure “inertia”, while the gravitational mass is the coupling strength to the universal gravitational field. The gravitational mass plays the same role as the electric charge for electromagnetic interactions, the color charge for strong interactions, and the particle flavor for weak interactions.
The inertial mass m_i is said to be the mass in Newton’s law F = m_i a. The gravitational mass m_G is said to be the coupling strength in Newton’s law of gravitation: for a source mass M at distance r, F_G = (GM/r^2) x m_G. Thus: m_i a = F_G = (GM/r^2) x m_G. The quantity GM/r^2 is the “gravitational field” (say g) and m_G is the “gravitational charge”, so that one can write m_i x a = m_G x g, just as we write m_i x a = q x E for the electric field. This has nothing to do with the Brout-Englert-Higgs mechanism.
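A minimal numeric sketch of this two-mass bookkeeping, assuming standard values for G and the Earth's mass and radius: if m_i equals the "gravitational charge" m_G, the acceleration a = GM/r^2 comes out the same for every test body.

```python
# Numeric sketch of the bookkeeping above: solving m_i * a = (G*M/r**2) * m_G
# for a. When m_i = m_G, the mass of the test body cancels and every object
# gets the same acceleration. G and the Earth figures are standard values.

G = 6.674e-11       # m^3 kg^-1 s^-2, gravitational constant
M_earth = 5.972e24  # kg
r = 6.371e6         # m, mean Earth radius

def acceleration(m_inertial, m_gravitational):
    """Solve m_i * a = (G*M/r^2) * m_G for a."""
    force = (G * M_earth / r**2) * m_gravitational
    return force / m_inertial

for mass in (0.03, 1.32, 1000.0):  # feather, hammer, a tonne
    print(f"m = {mass:7.2f} kg -> a = {acceleration(mass, mass):.3f} m/s^2")
# every line prints the same a (about 9.82 m/s^2) because m_G / m_i = 1
```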
The gravitational mass m_g is said to produce and respond to gravitational fields; it supplies the mass factor in the inverse square law of gravitation, F = Gm_1m_2/r^2. The inertial mass m_i supplies the mass factor in Newton’s 2nd law, F = ma. If gravitation is proportional to g, say F = kg (because the weight of a particle depends on its gravitational mass m_g), and the acceleration is a, then according to Newton’s law ma = kg. Since according to GR g = a, combining both we get m = k. Here m is the so-called “inertial mass” and k is the “gravitational mass”. The acceleration due to gravity g has the same value for all bodies placed at the same height; hence it is a function of the distance from the center of the Earth. Some think that the EP implies that a test particle travels along a geodesic in the background space-time; this is attributed to the “swirl” in the CMB due to the B-mode polarization pattern. The EP assumes that in all locally Lorentz (inertial) frames the laws of Special Relativity (SR) must hold. From this it is concluded that only the geometric structure of spacetime can define the paths of free bodies. If x is a particle’s world-line, parameterized by proper time, T is its tangent vector, D denotes covariant differentiation along the world-line, and R is the Ricci tensor, then D(T) = 0 and D(T) = R(T) are both tensorial, hence generally covariant; but only one of them describes a geodesic in a general curved space-time.
It is believed that gravity does not couple to the “gravitational mass” but rather to the Ricci tensor, which works only if space-time is flat. The Ricci tensor does not provide a full description in more than three dimensions. The Schwarzschild equation for black holes, where space-time is extremely curved, uses the Riemann tensor. Using the Riemann tensor instead of the Ricci tensor to calculate the energy-momentum tensor in 3+1 dimensions would not lead to any meaningful results, though in most cases the Riemann tensor is needed before one can determine the Ricci tensor. Thus, there is really no relation between “gravitational mass” and “inertial mass” except in Newtonian physics. This is why photons (with zero inertial mass) are affected by gravity. Only manipulations of the Standard Model (SM) to include classical gravity (field theory in curved spacetime) lead to effects like Hawking radiation and the Unruh effect. This is where gravitation and the SM can hypothetically meet.
Gravitation and GR are not included in the SM; hence the SM really cannot say anything about gravitational mass. If any theory conclusively unifies gravitation with the SM, it may be able to explain the equivalence of the inertial and gravitational masses. The Higgs boson and the Higgs field are predictions of the SM, which incorporates SR. The Higgs mechanism is intended to explain the “rest mass” of fundamental particles such as quarks and electrons, which constitute only about 4.9% of the total theorized mass of the universe. This rest mass of fundamental particles comprises only a tiny fraction (~1%) of the “rest mass” of atoms; most of the invariant mass of protons and neutrons is the product of quark kinetic energy and confinement when bound by the strong interaction mediated by gluons, not directly the result of the Higgs mechanism. However, since SR is part of the SM and since E = mc^2, the SM may be said to imply that rest mass from the Higgs mechanism and binding energy from the color force both contribute equivalently to the inertial rest mass of all particles. It is believed that the Higgs field obeys the ordinary theory of GR, implying equivalence of the inertial and gravitational masses. The share of the mass-energy of the universe that Dark Energy is said to represent has been reduced from 72.8% to 68.3%, while Dark Matter has been increased from 22.7% to 26.8%; accordingly, the percentage of ordinary matter has gone up from 4.5% to 4.9%. The constituent particles of these mysterious fields most likely do not couple to the Higgs field at all. So was it imprecise calculation, or is something changing?
EQUIVALENT OR DIFFERENT?
If we think of gravitational and inertial masses outside the context of a generally covariant theory, there is still no evidence that they are equal. They may differ by an arbitrary factor, which may be absorbed into G, or by a variable G. The equivalence of the inertial and gravitational masses has been proved indirectly by the Eötvös experiment and many later experiments. An analysis of the Eötvös experiments on the ratio of gravitational to kinetic mass of a few substances yields the result that this ratio for the hydrogen atom and for the binding energies is equal to that for the neutron with a precision of at least one part in 5×10^5 and 10^4 respectively. No conclusion can be drawn about these ratios for the proton and the electron separately. The Eöt-Wash experiment at the University of Washington tried to measure the difference between the two masses indirectly by considering “charge/mass” ratios. The result can be summarized as: |(m_G/m_i) − 1| ≤ 10^-13.
The Lunar Laser Ranging (LLR) experiment has tested the EP for 35 years, with the Moon, Earth and Sun as the test masses, to determine whether, in accordance with the Einstein EP, these two celestial bodies are falling toward the Sun at the same rate, despite their different masses, compositions, and gravitational self-energies. Analyses of precision laser ranges to the Moon continue to provide increasingly stringent limits on any violation of the equivalence principle. Current LLR solutions give Δ(m_G/m_i)_EP = (-1.0±1.4)×10^-13 for any possible inequality in Δ(m_G/m_i), the ratios of the gravitational and inertial masses for the Earth and Moon. This result, in combination with laboratory experiments on the weak EP, yields a strong equivalence principle (SEP) test of Δ(m_G/m_i)_SEP = (-2.0±2.0)×10^-13. Also, the corresponding SEP violation parameter η is (4.4±4.5)×10^-4, where η = 4β − γ − 3 and both β and γ are post-Newtonian parameters. Using the Cassini γ, the η result yields β − 1 = (1.2±1.1)×10^-4. The geodetic precession test, expressed as a relative deviation from general relativity, is K_gp = -0.0019±0.0064. The time variation of the gravitational constant comes out as Ġ/G = (4±9)×10^-13 yr^-1. Consequently there is no evidence for local (1 AU) scale expansion of the solar system (DOI: 10.1103/PhysRevLett.93.261101). Apart from the technical problems in these indirect methods, and the assumed values of various parameters (including the latest precisely measured value of G) that perpetuate the uncertainty, the measured result that the Moon is moving about 3.8 centimeters higher in its orbit each year shows that these indirect results cannot be fully relied upon.
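The post-Newtonian arithmetic quoted above can be checked directly. Since η = 4β − γ − 3 = 4(β − 1) − (γ − 1), one gets β − 1 = (η + (γ − 1))/4. The Cassini central value γ − 1 ≈ 2.1×10^-5 is a published figure assumed here; uncertainties are ignored in this sketch.

```python
# Check of the quoted LLR/Cassini arithmetic (central values only).
# eta = 4*beta - gamma - 3, so beta - 1 = (eta + (gamma - 1)) / 4.

eta = 4.4e-4            # SEP violation parameter, central value from the text
gamma_minus_1 = 2.1e-5  # Cassini gamma - 1, assumed published central value

beta_minus_1 = (eta + gamma_minus_1) / 4.0
print(f"beta - 1 = {beta_minus_1:.2e}")  # ~1.2e-4 to one significant figure
```

This reproduces the β − 1 ≈ 1.2×10^-4 quoted in the text.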
The indirect methods to prove equivalence or otherwise are questionable. It has been accepted as given that m_i = m_g. This equivalence is faulty because the description F = ma is faulty. Once a force is applied to move a body along any axis and the body moves, the force ceases to act on the body, and the body moves at constant velocity v’ due to inertia (assuming no other forces are present). The relation between the original velocity v (zero if the body is at rest) and v’ is the rate of change. To accelerate the body further, we need another force to be applied to it. Without such a new force, the body cannot be accelerated. What is this new force, and where does it come from? If any other force acts, it has to be introduced into the equation. Further, the new force will change the velocity v’ to v’’ (an action chain, like the continuous change in gravity due to changing distance). Acceleration, or the “rate of change of the rate of change”, means relating v to v’, v’ to v’’, etc. Why should we compare v’’ with v instead of v’?
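The chain v → v' → v'' described above can be written as successive first differences: each acceleration relates one velocity to its immediate neighbour, not to the starting velocity. The sample velocities and time step below are arbitrary assumptions.

```python
# Sketch of the "rate of change of the rate of change" chain in the text:
# velocities v, v', v'' sampled at equal time steps give one first
# difference per adjacent pair, so each acceleration compares v' to v''
# rather than v to v''. The sample values are arbitrary.

dt = 1.0                      # sampling interval (s), arbitrary
velocities = [0.0, 3.0, 8.0]  # v, v', v'' (m/s), arbitrary

# first differences: one acceleration per adjacent pair of velocities
accelerations = [(b - a) / dt for a, b in zip(velocities, velocities[1:])]
print(accelerations)          # [3.0, 5.0]: v->v' and v'->v'' compared separately
```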
When
answering a question, one should first determine the framework. If we assume
nothing then there can be no answer. However, if we take as given that we are
going to formulate theories in terms of Lagrangians then there is essentially
only one mass parameter that can appear, i.e., the coefficient of the quadratic
term. Thus, whatever mass is there, it is only one mass. The Higgs field
clearly modifies the on-shell condition in flat space and general relativity
simply says that anyone whose frame is locally flat should reproduce the same
result. Thus, the Higgs field appears to modify the gravitational mass. It may
also modify the inertial mass by the same amount as can be verified by
analyzing some scattering diagrams. However, knowing that we are working within
the context of a Lagrangian theory, the fact that inertial and gravitational
mass are equal is essentially a foregone conclusion. Are they really different?
Let us examine.
RUSSELL’S PARADOX:
Now we will examine the EP in the light of Russell’s paradox of set theory. Russell’s paradox raises an interesting question: if S is the set of all sets which do not have themselves as a member, is S a member of itself? The general principle is that there cannot be a set without individual elements (example: a library, a collection of books, cannot exist without individual books), and a set of one element is superfluous (example: a single book is not a library). A collection of different objects unrelated to each other would be individual members, as it does not satisfy the condition of a set. Thus a collection of objects is either a set with its elements, or individual objects that are not the elements of a set.

Let us examine the property p(x): x ∉ x, which means the defining property p(x) of any element x is such that x does not belong to x. Nothing appears unusual about such a property, and many sets have it: a library [p(x)] is a collection of books, but a book is not a library [x ∉ x]. Now suppose this property defines the set R = {x : x ∉ x}. It must be possible to determine whether R ∈ R or R ∉ R. However, if R ∈ R, then the defining property of R implies that R ∉ R, which contradicts the supposition that R ∈ R. Similarly, the supposition R ∉ R confers on R the right to be an element of R, again leading to a contradiction. The only possible conclusion is that the property “x ∉ x” cannot define a set. This idea is also known as the Axiom of Separation in Zermelo-Fraenkel set theory, which postulates that “objects can only be composed of other objects” or “objects shall not contain themselves”. To avoid this paradox, it has to be ensured that a set is not a member of itself. It is convenient to choose a “largest” set in any given context, called the universal set, and to confine the study to the elements of that universal set only. This set may vary in different contexts, but in a given setup the universal set should be so specified that no occasion ever arises to digress from it. Otherwise there is every danger of colliding with paradoxes such as Russell’s paradox. And in the case of the EP, we do just that.
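Russell's construction can be mimicked with a predicate. This is a minimal sketch: Python's own sets cannot contain themselves, so membership is modelled as a function call, and the logical contradiction shows up as unbounded recursion.

```python
# Small illustration of the Russell set R = {x : x not in x}, with set
# membership modelled as a predicate (a function returning True/False).

def member(x, s):
    """Is x a member of the 'set' s? Sets are modelled as predicates."""
    return s(x)

# R = {x : x not in x} -- the Russell collection as a predicate
R = lambda x: not member(x, x)

# Asking whether R is a member of itself recurses without a base case:
# member(R, R) -> R(R) -> not member(R, R) -> ... so Python overflows,
# mirroring the contradiction that R can be neither in nor out of R.
try:
    member(R, R)
except RecursionError:
    print("no consistent answer: 'R in R' is contradictory")
```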
THE THOUGHT EXPERIMENTS OF GR AND EP:
There are similar paradoxes in SR, GR and the EP. Let us examine the EP. All objects fall in similar ways under the influence of gravity; hence it is said that locally one cannot tell the difference between an accelerated frame and an un-accelerated frame. But since measurement is a comparison between similars, the frames must be related before they can be compared as equivalent or not. Take the example of a person sitting in an elevator that is falling down a shaft. It is assumed that locally (i.e., during any sufficiently small amount of time or over a sufficiently small space) the person can make no distinction between being in the falling elevator and being stationary in completely empty space. This is a wrong assumption. We have experienced the effect of gravity or free fall in closed elevators. Even otherwise, unless the door opens and we find a different floor in front of us, we cannot relate the motion of the elevator to the un-accelerated structure of the building; hence no equivalence. The moment we relate to the structure beyond the elevator, we can know its relative motion by comparing it against different floors.
Inside a spaceship in deep space, objects behave like suspended particles in a fluid (un-accelerated), or like the asteroids in the asteroid belt. Usually they are relatively stationary (fixed velocity) within the medium unless some other force acts upon them. This is because of the relative distribution of mass and energy inside the spaceship and its dimensional volume, which determines the average density at each point in the medium; the average density of the local medium of space is also factored into this calculation. If a person in a spaceship can see outside objects, then he can know the relative motions by comparing objects at different distances. In a train, if we look only at nearby trees, we may think the trees are moving, but when we compare them with distant objects we realize the truth. If we cannot see outside objects, then we will consider only our position with reference to the spaceship: stationary or floating within a frame. There is no equivalence because there is no other frame for comparison. The same principle works for the other examples.
It is said that a ray of light which moves in a straight line will appear curved to the occupants of the spaceship. The light ray from outside can be related to the spaceship only if we consider the bigger frame of reference containing both the space emitting the light and the spaceship. If the passengers could observe the scene outside the spaceship, they would notice this difference and know that the spaceship is moving; in that case, the reasons for the apparent curvature of the light path would be known. If we consider outside space as a separate frame of reference unrelated to the spaceship, a ray emitted by it cannot be considered inside the spaceship; the consideration will then be restricted to rays emanating from within the spaceship, and such a ray will move straight inside the spaceship. In either case, Einstein's description is faulty. Thus the foundation of GR - the EP - is a wrong description of reality, and all mathematical derivatives built upon this wrong description are also wrong. There is only one type of mass.
The shifting of Mercury’s perihelion that is used to validate GR can be explained by (v/c)^2 radians per revolution, where v is not the escape velocity but the velocity component induced by the Sun’s motion in the galaxy, which drags the planets along as well. Mercury being the smallest planet and the closest to the Sun, the effect on it is most profound. Before Einstein, Gerber had solved the problem differently, without using GR. Eddington’s experiment on gravitational lensing has been questioned repeatedly. The effect is due to contrasting refractive indices of the media, like the time dilation seen in GPS, where light bends and travels a longer path (and also slows down) after entering the denser atmosphere of Earth. Every material that light can travel through has a refractive index, denoted by the letter n. The velocity of light in a vacuum is about 3.0 × 10^8 m/s. The refractive index equals the ratio of the velocity of light in vacuum (c) to that in the medium (v), that is, n = c/v. Light slows down when traveling through a medium, so the refractive index of any medium is greater than one. By definition, the refractive index of vacuum is 1. For air at STP it is 1.000277; for air at 0°C and 1 atm it is 1.000293. This, and not time dilation, slows down light.

The problem with Doppler effects in relativity is that there appears to be a lack of consistency in their cause-and-effect relationship with time dilation. In some cases (within SR, for example) the time dilation itself is the actual cause of the observed frequency shifting, while in other cases (such as specific equivalence principle models) the acceleration-induced frequency shift seems to cause the time dilation. The two are contradictory.
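As a numeric reading of the perihelion claim above: taking v to be the Sun's galactic orbital speed (about 230 km/s, a commonly quoted round figure assumed here, not a value given in the text), the (v/c)^2 radians per revolution accumulate over Mercury's roughly 415 orbits per century.

```python
# Numeric reading of the (v/c)**2-per-revolution claim, using the Sun's
# galactic orbital speed as v. That speed is an assumed round figure,
# not a value stated in the text.

c = 2.998e8     # m/s, speed of light
v_sun = 2.30e5  # m/s, assumed galactic orbital speed of the Sun

orbits_per_century = 100 * 365.25 / 87.97  # Mercury's ~88-day orbital period
rad_to_arcsec = 206265.0

shift_per_orbit = (v_sun / c) ** 2  # radians per revolution, per the text
shift_per_century = shift_per_orbit * orbits_per_century * rad_to_arcsec

print(f"{shift_per_orbit:.2e} rad/orbit, {shift_per_century:.0f} arcsec/century")
```

With these assumed inputs the per-century figure comes out in the tens of arcseconds, the same order of magnitude as the observed anomalous precession; the reading stands or falls with the assumed v.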
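The refractive-index figures quoted above translate into light speeds via v = c/n, a simple check using the text's own values.

```python
# Light speed in each medium from the quoted refractive indices: v = c/n.

c = 2.998e8  # m/s, speed of light in vacuum

for name, n in (("vacuum", 1.0),
                ("air at STP", 1.000277),
                ("air at 0 C, 1 atm", 1.000293)):
    v = c / n  # speed in the medium; slower wherever n > 1
    print(f"{name:18s} n = {n:.6f}  v = {v:.4e} m/s")
```

The slow-down in air is tiny (a few parts in ten thousand), but it is nonzero, which is the point the paragraph makes.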
A BRIEF ANALYSIS OF THE RELATIVISTIC CONCEPT OF TIME:
Before
we discuss time orderings or whether time is Newtonian or Relativistic, let us
define time precisely. In his 1905 paper, Einstein says: “It might appear possible to overcome all the difficulties attending
the definition of ‘time’ by substituting ‘the position of the small hand of my
watch’ for ‘time’. And in fact such a definition is satisfactory when we are
concerned with defining a time exclusively for the place where the watch is
located; but it is no longer satisfactory when we have to connect in time series
of events occurring at different places, or - what comes to the same thing - to
evaluate the times of events occurring at places remote from the watch”.
It is not a precise or scientific definition of time, but a description of the readings of a clock, which is subject to mechanical error in its functioning. Space, time and coordinates have no physical existence like matter. They arise out of orderings or sequences, our notions of priority and posterity. When the orderings are of objects, the interval between them is called space. When they are of transformations in objects (events), the intervals are called time. When we describe the specific nature of the orderings of space (straight line, geodesic, angular, etc.), it is called a coordinate system. Since measurement is a comparison between similars (Einstein uses the fixed distance traveled by light per second to measure distance), we use a similar but easily intelligible and uniformly transforming natural sequence, such as the day or year or its subdivisions, as the unit of time. If a clock stops or functions erratically, time does not stop or become erratic. Now is a fleeting interface between two events. Hence, while at the universal level it is the minimum perceivable interval between two events, in specific cases it can have longer durations, as present continuous or continued existence for that form or system. For example, all life cycles that are created undergo six stages of evolution: transformation from quantum state to macro state (from being to becoming), linear growth due to accumulation of similar particles, non-linear growth or transformation due to interaction with dissimilar particles, transmutation, the reverse process of decomposition, and final disintegration or decay. The total duration is a life cycle, and is continued existence for those individuals or objects. Comparison between two different natural life cycles is the time dilation between them. Hence Einstein’s definition of time is scientifically wrong. His definition of synchronization is also wrong, as shown below.
Einstein uses a
privileged frame of reference to define synchronization between clocks and then
denies the existence of any privileged frame of reference – a universal
“now” - for time. We quote from his 1905 paper:
“We have so far defined only an ‘A time’ and a ‘B
time’. We have not defined a common ‘time’ for A and B, for the latter cannot
be defined at all unless we establish by definition that the ‘time’
required by light to travel from A to B equals the ‘time’ it requires to travel
from B to A. Let a ray of light start at the ‘A time’ tA
from A towards B, let it at the ‘B time’ tB be reflected at B
in the direction of A, and arrive again at A at the ‘A time’ t’A.
In accordance with definition the two clocks synchronize if: t_B - t_A = t'_A - t_B.
We
assume that this definition of synchronism is free from contradictions, and
possible for any number of points; and that the following relations are
universally valid:
- If the clock at B synchronizes with the clock at A, the clock at A synchronizes with the clock at B.
- If the clock at A synchronizes with the clock at B and also with the clock at C, the clocks at B and C also synchronize with each other.”
The concept of relativity is valid only between two objects. Introduction of a third object brings in the concept of a privileged frame of reference, and all the equations of relativity fail. Yet Einstein does exactly this while claiming the very opposite. In the above description, the clock at A is treated as a privileged frame of reference for proving the synchronization of the clocks at B and C; yet he claims it is relative. Thus his conclusion that “there are many quite different but equally valid ways of assigning times to events”, or that different observers moving at constant velocity relative to one another require different notions of time because their clocks run differently, is wrong. Paradoxically, standard formulations of quantum mechanics use the universal “now” frequently.
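Einstein's round-trip criterion quoted above can be modelled with clocks that carry constant offsets and with light given the same one-way travel time in both directions (his own stipulation). The offsets and travel time below are arbitrary assumptions.

```python
# Model of Einstein's synchronization criterion t_B - t_A == t'_A - t_B.
# Clocks are constant offsets from a bookkeeping time; light takes the same
# one-way time in each direction, as the 1905 definition stipulates.

def synchronized(offset_a, offset_b, travel_time):
    """Einstein's criterion t_B - t_A == t'_A - t_B for one round trip."""
    t_a = 0.0 + offset_a                     # emission, read on clock A
    t_b = travel_time + offset_b             # reflection, read on clock B
    t_a_return = 2 * travel_time + offset_a  # return, read on clock A
    return (t_b - t_a) == (t_a_return - t_b)

# Clocks with equal offsets satisfy the criterion; unequal offsets fail it.
print(synchronized(5.0, 5.0, 1.5))  # True
print(synchronized(5.0, 6.0, 1.5))  # False
```

In this model the criterion reduces to equality of the clock offsets, which is why synchronization with A then propagates to every other clock, the role the text describes as a privileged reference.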
An event is defined as a point in three-dimensional space at a single moment of time, characterized uniquely by (t, x, y, z). Since time is ever changing, it represents the time evolution of objects in space. These time evolutions are different for different objects. Their ordered sequence, or the ordered intervals between two such sequences, when measured by a similar but repetitive interval that is easily intelligible, is called time measurement. But this does not justify the conversion factor from time units to space units via the constant speed of light c, for two reasons: first, space here is treated as a vacuum, and there is no true vacuum; secondly, the velocity of light depends on the density of the medium through which it travels, where it bends due to refraction (which causes the time dilation). Einstein later admitted this, and there is plenty of literature on the subject.
After his SR paper of 1905, Einstein frequently held that the speed of light is not constant. In his 1911 paper “ON THE INFLUENCE OF GRAVITATION ON THE PROPAGATION OF LIGHT”, he says:
“For measuring time at a place
which, relatively to the origin of the coordinates, has the gravitation
potential Φ, we must employ a clock which – when removed to the origin of
co-ordinates – goes (1 + Φ/c²) times more slowly than the clock used for
measuring time at the origin of coordinates. If we call the velocity of light
at the origin of coordinates c₀, then the velocity of light c at a place with the gravitation potential Φ will be given by the relation: c = c₀ (1 + Φ/c²) … (3).
The principle of the constancy of the velocity of light holds good
according to this theory in a different form from that which usually underlies
the ordinary theory of relativity (italics ours).
4. Bending of Light-Rays in the
Gravitational Field
FROM the proposition which has just
been proved, that the velocity of light in the gravitational field is a
function of the place, we may easily infer, by means of Huygens’s principle,
that light-rays propagated across a gravitational field undergo deflection”.
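As a quick numerical illustration of the quoted relation c = c₀(1 + Φ/c²), the sketch below evaluates Φ/c² at the Sun's surface. The choice of location is our own illustrative assumption, not part of Einstein's text; the constants are standard values.

```python
# Numerical illustration of the 1911 relation c = c0 * (1 + Phi/c^2).
# Evaluating Phi/c^2 at the Sun's surface is our own illustrative choice;
# the constants below are standard values.
GM_SUN = 1.32712440018e20   # Sun's gravitational parameter GM, m^3/s^2
R_SUN = 6.957e8             # solar radius, m
C = 299_792_458.0           # speed of light in vacuum, m/s

phi = -GM_SUN / R_SUN            # Newtonian potential at the solar surface, J/kg
fractional_change = phi / C**2   # (c - c0)/c0 in the quoted first-order formula

print(f"Phi/c^2 at the Sun's surface = {fractional_change:.3e}")
```

The result is about −2.1 × 10⁻⁶: in this first-order approximation, light grazing the Sun is slower by roughly two parts per million, which is what feeds the Huygens's-principle deflection of section 4 of the quoted paper.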
Now let
us examine Lorentz transformation. The description of the measured state
at a given instant is physics and the use of the magnitude of change at two or
more designated instants to predict the outcome at other times is mathematics.
Measurement is a comparison between similars, of which the constant one is called the unit. The factor v²/c², or (v/c)², is the ratio of two dynamical quantities, where c is the constant – hence a unit of measurement of a dynamic variable. It can be used to measure only comparative dynamical velocities, not changes in mass or dimension, which are possible only through accumulation or reduction of similars. The second-order factor (v/c)² represents the modification of the incoming light signal (third dimension – like electromagnetic radiation) as seen by an observer, without changing any physical characteristic of the observed body that emits the light signal. Thus, the Lorentz transformation is only virtual – not real.
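To put the magnitude of the factor (v/c)² in perspective, here is a minimal sketch with some representative speeds; the speeds themselves are our own illustrative choices.

```python
# Magnitude of the second-order factor (v/c)^2 at representative speeds.
# The three speeds are our own illustrative choices.
C = 299_792_458.0  # speed of light, m/s

examples = [
    ("airliner, ~250 m/s", 250.0),
    ("Earth's orbital speed, ~30 km/s", 3.0e4),
    ("10% of light speed", 0.1 * C),
]
for label, v in examples:
    print(f"{label}: (v/c)^2 = {(v / C) ** 2:.3e}")
```

At everyday speeds (v/c)² lies around 10⁻¹³ to 10⁻⁸, which is why any second-order modification of the light signal is ordinarily imperceptible.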
The
concept of measurement has undergone a big change over the last century. It all
began with the problem of measuring the length of a moving rod. Two
possibilities of measurement suggested by Einstein in his 1905 paper (published
as Zur Elektrodynamik bewegter Körper
in Annalen der Physik 17:891, 1905) were as follows:
(a)
“The observer moves together with the given measuring-rod and the rod to be
measured, and measures the length of the rod directly by superposing the
measuring-rod, in just the same way as if all three were at rest”, or
(b) “By
means of stationary clocks set up in the stationary system and synchronizing
with a clock in the moving frame, the observer ascertains at what points of the
stationary system the two ends of the rod to be measured are located at a
definite time. The distance between these two points, measured by the
measuring-rod already employed, which in this case is at rest, is the length of
the rod”.
The
method described at (b) is
misleading. We can do this only by setting up a measuring device to record the
emissions from both ends of the rod at the designated time, (which is the same
as taking a photograph of the moving rod) and then measure the distance between
the two points on the recording device in units of velocity of light or any
other unit. But the picture will not give a correct reading due to two reasons:
· If the length of the rod is small or the velocity is small, then the length contraction will not be perceptible according to the formula given by Einstein.
· If the length of the rod is big or the velocity is comparable to that of light, then light from different points of the rod will take different times to reach the recording device, and the picture we get will be distorted due to the Doppler shift of different points.
Thus, there is only one way of measuring the length of the rod: as in (a).
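The first bullet can be checked numerically with Einstein's contraction formula L = L₀√(1 − v²/c²); the rod length and speed below are our own illustrative choices.

```python
import math

# Checks the first bullet above: for a small rod at ordinary speed, the
# contraction predicted by L = L0 * sqrt(1 - (v/c)^2) is imperceptible.
# Rod length and speed are our own illustrative choices.
C = 299_792_458.0   # speed of light, m/s
L0 = 1.0            # proper length of the rod, m
v = 300.0           # speed of the rod, m/s (roughly an airliner)

L = L0 * math.sqrt(1.0 - (v / C) ** 2)
print(f"predicted contraction = {L0 - L:.3e} m")
```

The contraction comes out at about 5 × 10⁻¹³ m – a fraction of a picometre, far below any resolution a photograph of the moving rod could offer.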
It is
said that gravity is “curved spacetime”, though Einstein did not use this term
in The Foundation of General Relativity published in 1916. To understand
gravity, we have to see not only what it does, but also what it is
- the cause also, not the effect only. Spacetime is space with motion (evolving
in time) through it. An object passing by a star traces a curved path that can
be compared to a plane tracing a silver streak in the sky. The silver streak is
not a part of the sky. We take a mental snapshot of what we see (the silver
streak) now - an
ever-shifting instant frozen after measurement as a timeless instant. Later we call this non-existent “picture” the path of the plane. It is the same with gravity. If we take the derivative of that curved spacetime, what we get is a
gradient in space traced at a
certain instant, not curved spacetime. The rubber sheet analogy to
explain gravity is circular reasoning – use gravity to present a picture of
gravity! The path of the smaller ball is “bent” toward the larger ball as it rolls by, only to be pulled in toward it.
But that is still using actual gravity (gradient) to move one object toward the
other. This seems to explain the change in direction that an already
moving object experiences as it passes by. But how does general relativity
explain the mechanism behind the “force” pulling on a stationary object (in the
simplest possible terms) causing it to acquire kinetic energy and move toward
the attracting mass? Is something in space (or space itself) constantly being
pulled toward the massive objects, which necessitates the motion to maintain a lower energy state?
THE ALTERNATIVE CONCEPT:
Having
shown the deficiencies in the “established theories”, let us consider an
alternative concept by synchronizing available information. Maxwell’s equations are background
invariant. Transverse waves are always
characterized by particle motion being perpendicular to the wave motion. This
implies the existence of a medium through which the reference wave travels and
with respect to which the transverse wave travels in a perpendicular direction.
In the absence of the reference wave, which is a longitudinal wave, the transverse wave cannot be characterized as such. Transverse waves are background invariant by their very definition. Since light is a transverse wave, it is background invariant. Einstein’s ether-less relativity is supported by neither Maxwell’s equations nor the Lorentz transformations, both of which are medium (aether) based. Thus, the Michelson-Morley
experiments (non-observance of aether drag) cannot serve to ultimately disprove
a universal background structure. We posit that the so-called dark energy is
the universal background structure.
The universe is thought to be expanding because light from distant galaxies stretches towards redder wavelengths. It is thought that over
small distances gravity has reversed the universe’s expansion, so that modest
blue-shifts are common. But according to Lowell Observatory Bulletin No. 58
Vol. II No. 8, not even the local group - the collection of approximately 75
galaxies that includes the Milky Way - expands. In fact, the Local Group’s
largest member, the Andromeda Galaxy, is moving towards us: it has a blue-shift of 300 kilometers per second. Now, astronomers have spotted an object far beyond
the Local Group’s borders (at the star clusters around M87, a giant elliptical
galaxy located at the heart of the Virgo Cluster, 54 million light-years from
Earth) with a blue-shift of 1,026 kilometers per second. We propose that the observed red-shift is due to the amalgamation of the continued emission of the same wavelengths at one instant on the recording device or the photographic plate, just as the landscape below looks different from different heights from a plane. The non-observance of the
expansion at local scales and observation of blue shifts, point out that the
galactic clusters are orbiting a common center like planets around the Sun.
Sometimes some planets appear to move away while at other times they appear to
close in. If it is “dark” because it is
non-interacting, then it cannot be energy, because energy is perceived only
indirectly by its interactions. Merely being smooth and persistent does not make it energy – fluids are also smooth and persistent. Thus, the
concept of dark matter and dark energy needs a review.
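For scale, the quoted blue-shift velocities can be converted to fractional wavelength shifts with the first-order Doppler relation Δλ/λ = −v/c; using the first-order formula is our choice of approximation, adequate since v ≪ c.

```python
# Converts the quoted blue-shift velocities to fractional wavelength shifts
# using the first-order Doppler relation dl/l = -v/c (our choice of
# approximation; adequate since v << c).
C_KM_S = 299_792.458  # speed of light, km/s

for name, v_km_s in [("Andromeda Galaxy", 300.0), ("object near M87", 1026.0)]:
    shift = -v_km_s / C_KM_S
    print(f"{name}: dl/l = {shift:.3e}")
```

Both shifts sit at the 0.1–0.3% level – small, yet comfortably within spectroscopic resolution, which is how such velocities are determined in the first place.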
Here we
start with the creation event and derive the fundamental interactions from it.
The big bang does not imply a single event or an event in a designated epoch.
It could not have happened in vacuum ex nihilo and could not have expanded into
nothing. Whatever was before the big bang, can be answered by the so-called
dark energy. Dark energy appears like a fluid that acts as a background
structure like a river on which boats float. The deterministic floating is
gravitation and the resultant mechanical propulsions are the fundamental
interactions. In arXiv:1402.0290v2 [math.AP] 6 Feb 2014, researchers have shown
that in an alternative abstract universe closely related to the one described
by the Navier-Stokes equations, it is possible for a body of fluid to form a
sort of computer, which can build a self-replicating fluid robot that, like the
“Cat in the Hat”, keeps transferring its energy to smaller and smaller copies
of itself until the fluid “blows up”. This is in sync with our theory.
Recently, researchers created a short-lived cluster of electrons and positively charged “holes”, some 200 nanometers across, that could form a liquid-like quasi-particle, which has been dubbed the Dropleton. A Dropleton is a
new kind of particle cluster in solids, formed inside a tiny correlation bubble
(drops) that lasts only 25 picoseconds. This liquid-like particle droplet is
created by light and its energy has quantized dependency on light intensity. It
acts like a super-sized electron. Oppositely charged electrons and holes tend to form pairs
called excitons. These pairs are used in solar panels, which employ special
materials to separate the electron-hole pairs, freeing up electrons and
generating current. The photons that excite the electrons to form Dropleton
become entangled with individual exciton pairs. Louis de Broglie theorized that all matter has a wave property associated with it. Combining both, we can reformulate the modern notion of wave-particle duality. What we “see” is the radiation emitted by an object, but what we “touch” is mass that emits radiation that is not seen. Matter and wave are separate – not inter-convertible, nor occasionally simultaneously a particle and a wave. The
principle of mass-energy equivalence, which is treated as the corner-stone
principle of all nuclear interactions, binding energies of atoms and nucleons,
etc., enters physics only as a corollary of the transformation equations
between frames of reference in relative motion. The equation E = mc² implies the rate of change in a stable configuration and not mass-energy conversion, as the two have opposite properties. We can define mass as confined energy packets of different density, covering a fixed area. Energy confined around a point generates externally directed pressure that is felt as mass. These are digital entities. The field is analog space. Five-way bonding between particles and fields is perceived as reality. The universe is a closed system that
spins or “swirls” in a “B-mode pattern” – to use the phraseology of the BICEP2
telescope team. Their findings question a few inflationary models, but justify
our theory. Spin is a common feature of all bodies from atoms to stars to
galaxies. It is also a feature of the universe.
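Whatever interpretation one gives the relation E = mc² discussed above, its numerical scale is easy to exhibit; the one-kilogram test mass is our own illustrative choice.

```python
# Numerical scale of E = m c^2, independent of its physical interpretation.
# The one-kilogram test mass is our own illustrative choice.
C = 299_792_458.0  # speed of light, m/s
m = 1.0            # mass, kg

E = m * C ** 2
print(f"E = {E:.4e} J")
```

About 9 × 10¹⁶ J per kilogram: the enormous proportionality constant c² is why even tiny differences in stable configurations correspond to large energies in nuclear interactions.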
The dark
energy to total matter ratio in the universe (68.3% to 31.7% subject to
precision measurement) is about the same as the ratio of sea to land area on
Earth. Further, the total mass of the constituent quarks forms a small fraction
of the mass of protons and neutrons, just like ordinary matter forms a small
fraction of the cosmos. The neutrinos are the equivalent of falling apples (no
one knows whether the neutrinos and the anti-neutrinos are the same or
different). Thus, these can be used as a model to represent the universe. The standard
model of particle physics says that matter is made of quarks and leptons while
the various forces in the universe, such as the strong and weak nuclear forces,
and electromagnetism act through “mediator” particles: gluons, Z⁰, W±
and photons. In theory, these mediators are all massless, and so all the
fundamental forces should act over infinite distances. But in reality, they do
not - the forces have a limited range, and the mediator particles have mass.
Further, while the strong and electromagnetic forces have only one “mediator”
each, the weak force has three “mediators”. This indicates that the weak force
behaves differently from the others, i.e., in more than one way.
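The connection noted above between mediator mass and limited range is conventionally estimated through the mediator's Compton wavelength, ħ/(mc). The sketch below applies it to the W boson; this is the standard order-of-magnitude estimate, not the author's derivation, and the constants are standard values.

```python
# The connection between mediator mass and force range noted above is
# conventionally estimated via the Compton wavelength: range ~ hbar/(m*c).
# Applying it to the W boson is the standard order-of-magnitude estimate,
# not the author's derivation; the constants are standard values.
HBAR_C_MEV_FM = 197.3269804  # hbar * c in MeV * fm
M_W_MEV = 80_377.0           # W boson mass, MeV/c^2 (approximate measured value)

range_fm = HBAR_C_MEV_FM / M_W_MEV  # femtometres (1 fm = 1e-15 m)
print(f"weak-force range ~ {range_fm:.2e} fm")
```

The result, about 2.5 × 10⁻³ fm (2.5 × 10⁻¹⁸ m), is far smaller than a proton, matching the observation that the weak interaction acts only at sub-nuclear distances, while the massless photon gives electromagnetism an unlimited range.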
Physicists
believe that the source of mass is something called the Higgs field that fills
the universe and is mediated by a particle known as the Higgs boson. These
bosons are thought to exist in a “condensed” state that excludes the mediator
particles such as gluons in the same way that a superconductor’s entangled
electrons exclude the photons of a magnetic field. This exclusion by the Higgs
field is what gives the mediator particles an effective mass, and also limits
their range of influence. But no one has shown exactly how they exclude, say,
gluons. The condensation of the Higgs bosons and exclusion of the mediators
requires entanglement between the Higgs bosons. Entanglement may be linked to
the mass of not just the mediator particles, but all fundamental particles.
Different particles would interact differently with the entangled Higgs bosons,
providing different “effective masses” for each particle. But there must be a
connection between entanglement and mass. The laws of physics do allow energy to be converted into
matter, but require that almost equal quantities of antimatter be produced in
the process. These two are entangled. Thus, entanglement and confinement are
related aspects. Entanglement is a function of charge. Hence it can be of two types: one like a fluid in a container and the other like a planet held by its star. The former is also of two types: inside the container, adjusting to its surface (or like a simple thermostat exploiting the difference in the thermal expansion of two metal strips to sense temperature changes and switch the heating or cooling system on or off as needed), or falling out of it and separated from the rest. The fundamental interactions
behave in this manner.
Each
application of force generates an entangled couple of equal and opposite
interactions due to laws of conservation (or inertia of restoration) that tries
to retain the state at the instant of interaction t, and inertia of motion,
that tries to conserve the state after t. These generate impedance and stress
respectively in the background field that may be experienced by other bodies
entering it - either linearly or non-linearly (when other effects exist). The
intensity of interaction depends on the average density of the field
encompassing the bodies, the nature of composition of the bodies (internal
mass-energy density ratio vis-à-vis the field density: that generates
momentum), and distance between the bodies (or their boundaries or orbits). The
local density gradient of the field determines the resultant motion – apparent
attraction or repulsion, which is described as the curvature of spacetime. Within
the body or the system, this creates four entangled sets of proximity-distance
variables between the bodies (proximity-proximity, proximity-distance,
distance-distance and distance-proximity). These are the four fundamental
forces of Nature – strong interaction, two types of weak interaction, and
electromagnetic interactions respectively. These are intra-body variables that
produce all particles in different combinations and determine dimensions - thus
invariant under the Lorentz transformation.
Gravity is
an all pervading force that acts on each body linearly. Due to differential
mass, the resultant nonlinear movement appears as an inter-body force. In
relation to the parts of a body, it resolves into the other four interactions. What
is this mechanism? If we look at the mass-energy interaction spectrum, we find
that chemical properties begin with molecules that are mixtures of atoms. Atoms
can be thought of as compounds of protons and neutrons, but their stability in
any combination depends upon several factors. Also different stable
combinations produce different elements and isotopes; just like different
combinations of quarks form protons or neutrons. They are held together by the
n-p chain, which, in turn, depends on quark conversion. This is mediated by
release of neutrinos by one and its absorption by the other. This is why energy seemed to disappear when one
atomic nucleus decayed into another nucleus plus an electron. The laws of
physics do allow energy to be converted into matter, but require that almost
equal quantities of antimatter are produced in the process. In a separate paper
we will show that in a chain of different mechanisms, the big bang leads to
generation of spin or “swirls” in “E-mode and B-mode patterns”, which, contrary to popular belief, is not associated with inflation,
but disinflation or deflation that slowed down the initial expansion rate. It
also leads to generation of charge, so that the structures could be created. Charge
leads to generation of entanglement, which leads to the interactions leading to
confinement. Everything rests in the all-encompassing field, whose response to all forces is called
gravity! This makes G variable between different systems. When we interact with
it (apply freewill), we feel entangled “conduction”, “convection” and
“radiation” currents differently – in 5, 7, 11, 49 or 122 ways, which explains
all motions. We will discuss it later.