PHYSICS BEYOND THE STANDARD MODEL
mbasudeba@gmail.com
INTRODUCTORY:
Notices of the
American Mathematical Society (Volume 52, Number 9) published a paper in which Mr.
Mason A. Porter and Mr. Predrag Cvitanovic showed that the theory of
dynamical systems used to design trajectories of space flights and the theory
of transition states in chemical reactions share the same set of mathematics.
We posit that this is a universal phenomenon and every quantum system and phenomenon
including superposition, entanglement and spin has macro equivalents. This will
be proved, inter alia, by deriving bare mass and bare charge (subjects of
Quantum Electrodynamics and Field Theory) without renormalization and without
using a counter term, and linking it to dark matter and dark energy (subjects
of cosmology). In the process we will give a simple conceptual mechanism for
deriving all forces starting from a single source. We also posit that physics
has been deliberately made incomprehensible with a preponderance of
“mathematical modeling” to match experimental and observational data through
the back door. Most of the “mathematics” in physics does not conform to
mathematical principles.
In a paper “Is
Reality Digital or Analogue”, published by the FQXi Community on Dec. 29, 2010,
we showed that uncertainty is not a law of Nature. It is the result of
natural laws relating to measurement that reveal a kind of granularity at
certain levels of existence that is related to causality. The left hand side of
all valid equations or inequalities represents freewill, as we are free to
choose (or vary within certain constraints) the individual parameters. The
right hand side represents determinism, as the outcome is based on the input in
predictable ways. The equality (or inequality) sign prescribes the special conditions to be observed or matched
to achieve the desired result. These special conditions, which cannot be always
predetermined with certainty or chosen by us arbitrarily, introduce the element
of uncertainty in measurements.
When Mr. Heisenberg proposed his conjecture in 1927, Mr. Earle Kennard independently derived a
different formulation, which was later generalized by Mr. Howard Robertson as: σ(q)σ(p)
≥ h/4π. This inequality says that one cannot simultaneously suppress the quantum
fluctuations of both position σ(q) and momentum σ(p)
below a certain limit. The fluctuation exists regardless of
whether it is measured or not implying the existence of a universal field. The
inequality does not say anything about what happens when a measurement is
performed. Mr. Kennard’s
formulation is therefore totally different from Mr. Heisenberg’s. However, because of the
similarities in format and terminology of the two inequalities, most physicists
have assumed that both formulations describe virtually the same phenomenon. Modern
physicists actually use Mr. Kennard’s
formulation in everyday research but mistakenly call it Mr. Heisenberg’s uncertainty principle. “Spontaneous”
creation and annihilation of virtual particles in vacuum is possible only in Mr.
Kennard’s formulation and not in Mr.
Heisenberg’s formulation, as otherwise it
would violate conservation laws. If the inequality were violated experimentally, the whole
of quantum mechanics would break down.
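Mr. Kennard's bound σ(q)σ(p) ≥ h/4π (i.e., ħ/2) can be checked numerically for a Gaussian wave packet, which saturates it. The following sketch (in Python with NumPy, working in natural units with ħ = 1) is illustrative only; the packet width σ is an arbitrary assumption:

```python
import numpy as np

hbar = 1.0          # work in natural units
sigma = 0.7         # arbitrary packet width (assumption)

x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]

# normalized Gaussian wave packet psi(x)
psi = (2.0 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4.0 * sigma**2))

# position fluctuation sigma(q): sqrt of <x^2> (the mean position is zero)
sigma_q = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

# momentum fluctuation sigma(p): <p^2> = hbar^2 * integral of |dpsi/dx|^2 dx
dpsi = np.gradient(psi, dx)
sigma_p = hbar * np.sqrt(np.sum(np.abs(dpsi)**2) * dx)

print(sigma_q * sigma_p)   # ~ hbar/2 = 0.5, the Kennard bound
```

Any other (non-Gaussian) packet gives a strictly larger product, which is why the Gaussian is the natural test case.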
The uncertainty
relation of Mr. Heisenberg was
reformulated in terms of standard deviations, where the focus was exclusively
on the indeterminacy of predictions, whereas the unavoidable disturbance in
measurement process had been ignored. A correct formulation of the
error–disturbance uncertainty relation, taking the perturbation into account,
was essential for a deeper understanding of the uncertainty principle. In 2003,
Mr. Masanao Ozawa derived a formulation that accounts for the error and disturbance
as well as the fluctuations, later tested by directly measuring errors and disturbances in the
observation of spin components: ε(q)η(p) + σ(q)η(p) + σ(p)ε(q) ≥ h/4π.
Mr. Ozawa’s inequality suggests that suppression
of fluctuations is not the only way to reduce error, but it can be achieved by allowing
a system to have larger fluctuations. Nature Physics (2012) (doi:10.1038/nphys2194)
describes a neutron-optical experiment that records the error of a
spin-component measurement as well as the disturbance caused on another
spin-component. The results confirm that both error and disturbance obey the
new relation but violate the old one over a wide range of experimental parameters.
Even when either the source of error or
disturbance is held to nearly zero, the other remains finite. Our description
of uncertainty follows this revised formulation.
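The difference between the two relations can be shown with sample numbers. The values below are hypothetical, chosen only to illustrate that a nearly error-free measurement can violate the old product form ε(q)η(p) ≥ h/4π while still satisfying Mr. Ozawa's three-term bound:

```python
hbar = 1.0
bound = hbar / 2          # h/4pi in natural units

# hypothetical values: a nearly error-free position measurement
eps_q = 0.01              # error in q
eta_p = 0.50              # disturbance on p
sig_q = 1.00              # intrinsic fluctuation of q
sig_p = 2.00              # intrinsic fluctuation of p (sig_q * sig_p >= bound holds)

old_lhs   = eps_q * eta_p                                   # Heisenberg-style product
ozawa_lhs = eps_q * eta_p + sig_q * eta_p + sig_p * eps_q   # Ozawa's three-term sum

print(old_lhs < bound)     # True: the old relation is violated
print(ozawa_lhs >= bound)  # True: the new relation still holds
```

The fluctuation terms σ(q)η(p) and σ(p)ε(q) carry the bound even as the error ε(q) is driven toward zero, which is the point made by the neutron experiment.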
While the
particles and bodies are constantly changing their alignment within their
confinement, these changes are not always externally apparent. Various circulatory
systems work within our body that affect its internal dynamics, polarizing it
differently at different times; this becomes apparent only during our
interaction with other bodies. Similarly, the interactions of subatomic
particles are not always apparent. The elementary particles have intrinsic spin
and angular momentum, which continually change their state internally. The time
evolution of all systems takes place in a continuous chain of discrete steps.
Each particle/body acts as one indivisible dimensional system. This is a
universal phenomenon that creates the uncertainty because the internal dynamics
of the fields that create the perturbations are not always known to us. We may
quote an example.
Imagine an
observer and a system to be observed. Between the two let us assume two interaction boundaries. Where the
dimensions of one medium end and those of another medium begin, the interface of
the two media is called the boundary. Thus there will be one boundary at the
interface between the observer and the field and another at the interface of
the field and the system to be observed. In a simple diagram, the situation can
be schematically represented as shown below:

O →|        field        |← S
Here O
represents the observer and S the system to be observed. The vertical lines
represent the interaction boundaries. The two boundaries may or may not be
locally similar (have different local density gradients). The arrows represent
the effect of O and S on the medium that leads to the information exchange that
is cognized as observation.
All information
requires an initial perturbation involving release of energy, as perception is
possible only through interaction (exchange of force). Such release of energy
is preceded by freewill or a choice of the observer to know about some aspect of the system through a known mechanism. The mechanism is deterministic – it
functions in predictable ways (hence known). To measure the state of the
system, the observer must cause at least one quantum of information (energy,
momentum, spin, etc) to pass from him through the boundary to the system to
bounce back for comparison. Alternatively, he can measure the perturbation
created by the other body across the information boundary.
The quantum of
information (seeking) or initial perturbation, relayed through an impulse
(the effect of energy, etc.) after traveling through (and possibly being modified by) the
partition and the field, is absorbed by the system to be observed or measured
(or it might be reflected back, or both), and the system is thereby perturbed.
The second perturbation (release or effect of energy) passes back through the
boundaries to the observer (among others), and is translated after
measurement at a specific instant as the quantum of information. The observation is the observer’s subjective response on receiving this information. The
result of measurement will depend on the totality of the forces acting on the
systems and not only on the perturbation created by the observer. The “other
influences” affecting the outcome of the information exchange give rise to an
inescapable uncertainty in observations.
The system being
observed is subject to various potential (internal) and kinetic (external) forces
which act in specified ways
independent of observation. For example, chemical reactions take place only after a certain temperature threshold
is reached. A body changes its state of motion only after an external force acts on it. Observation doesn’t affect
these. We generally measure the outcome – not the process. The
process is always deterministic. Otherwise there cannot be any theory. We
“learn” the process by different means – observation, experiment, hypothesis,
teaching, etc., and develop these into cognizable theory. Heisenberg was right that “everything
observed is a selection from a plenitude of possibilities and a limitation on
what is possible in the future”. But his logic and the mathematical format of the uncertainty
principle: ε(q)η(p) ≥ h/4π are
wrong.
The observer
observes the state at the instant of second perturbation –
neither the state before nor after it. This is because only this state, with or
without modification by the field, is relayed back to him while the object continues
to evolve in time. Observation records only this temporal state and freezes it
as the result of observation (measurement). Its truly evolved state at any
other time is not evident through such observation. With this, the forces
acting on it also remain unknown – hence uncertain. Quantum theory takes these
uncertainties into account. If ∑ represents the state of the system before and ∑ ± ¶∑ represents the state at
the instant of perturbation, then the difference linking the transformations in
both states (treating other effects as constant) is minimum, if ¶∑<<∑. If I
is the impulse selected by the observer to send across the interaction
boundary, then ¶∑
must be a function of I: i.e., ¶∑ = f (I).
Thus, the observation is also affected by the choices made by the observer.
The inequality ε(q)η(p) ≥ h/4π, or as it is commonly written, δx·δp
≥ ħ, permits simultaneous
determination of position along the x-axis and momentum along the y-axis; i.e., δx·δp_{y}
= 0. Hence the statement that position and momentum cannot be measured
simultaneously is not universally valid. Further, position has fixed coordinates
and the axes are fixed arbitrarily. The dimensions remain invariant under
mutual transformation. Position along the x-axis and momentum along the y-axis can only
be related with reference to a fixed origin (0, 0). If one has a non-zero
value, the other has an indeterminate (or relatively zero) value (if it has
position, say x = 5 and y = 7, then it implies that it has zero relative momentum;
otherwise either x or y or both would not be constant, but would have extension).
Multiplying both, the result will always be zero. Thus no mathematics is
possible between position (fixed coordinates) and momentum (mobile coordinates),
as they are mutually exclusive in space and time. They do not commute. Hence, δx·δp_{y}
= 0.
Uncertainty is not a law of Nature. We
can’t create a molecule from any
combination of atoms as it has to follow certain “special conditions”. The conditions may be different, like the restrictions
on the initial perturbation sending the signal out or the second perturbation
leading to the reception of the signal back for comparison, because the inputs
may be different, like c+v and c−v, or there may be other inhibiting
factors, like a threshold limit for interaction. These “special conditions” and external influences, which regulate and influence
all actions and are unique by themselves, and not the process of measurement, create uncertainty. The disturbances
arising out of the process of measurement are operational (technological) in
nature and not existential for the particles.
Number is a property of all substances
by which we differentiate between similars. If there are no similars, it is one.
If there are similars, the number is many. Depending upon the sequence
of perception of “ones”, many can be 2, 3, 4…n, etc. Mathematics
is accumulation and reduction of similars, i.e., numbers of the same class of
objects (like atomic number or mass number), which describes the changes in the
physical phenomena or object when the numbers of any of the parameters are
changed.
Mathematics is related to the
result of measurement. Measurement is a conscious process of comparison
between two similar quantities, one of which is called the scaling constant (unit).
The cognition part induces the action leading to comparison, the reaction of
which is again cognized as information. There is a threshold limit for such
cognition. Hence Nature is mathematical in some perceptible ways. This has been
proved by the German physiologist Mr. Ernst Heinrich Weber, who measured human response to various physical
stimuli. Carrying out experiments with lifting increasing weights, he devised
the formula: ds = k (dW / W), where ds is the threshold increase
in response (the smallest increase still discernible), dW the corresponding
increase in weight, W the weight already present and k the proportionality
constant. This has been developed as the Weber-Fechner law. This shows that the
conscious response follows a somewhat logarithmic law. This has been successfully
applied to a wide range of physiological responses.
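Integrating ds = k (dW / W) gives the logarithmic response S = k ln(W / W₀). A minimal sketch of this integrated form follows; the constant k and the reference weight W₀ are arbitrary assumptions for illustration:

```python
import math

def response(W, W0=1.0, k=1.0):
    """Integrated Weber-Fechner response: S = k * ln(W / W0)."""
    return k * math.log(W / W0)

# equal stimulus *ratios* produce equal response increments:
inc_small = response(2.0) - response(1.0)      # doubling a light weight
inc_large = response(200.0) - response(100.0)  # doubling a heavy weight
print(inc_small, inc_large)                    # both equal k * ln 2
```

This is the sense in which the conscious response is "somewhat logarithmic": what matters to perception is the ratio dW/W, not the absolute increment dW.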
Measurement is not the action of putting a
scale to a rod, which is a mechanical action. Measurement is a conscious
process of reaching an inference based on the action of comparison of
something with an appropriate unit at “here-now”. The readings of a particular
aspect, which indicate a specific state of the object at a designated instant (out
of an infinite set of temporally evolving states), are frozen for use at
other times and are known as the “result of measurement”. The states relating to
that aspect at all “other times”, which cannot be measured and hence remain unknown,
are clubbed together and are collectively referred to as the “superposition
of states” (we call it adhyaasa). This concept has not only been
misunderstood, but also unnecessarily glamorized and made incomprehensible in
the “undead” Schrödinger’s cat and other examples. The normal time evolution of
the cat (its existential aspect) and the effect of its exposure to poisonous
gas (the operational aspect) are two different unrelated aspects of its history.
Yet these unrelated aspects have been coupled to bring in a state of
coupled superposition (we call it aadhyaasika taadaatmya), which is
mathematically, physically and conceptually void.
Mathematics is related to accumulation and
reduction of numbers. Since measurements are comparison between similar
quantities, mathematics is possible only between similars (linear) or partly
similars (nonlinear) but never between the dissimilars. We cannot add or
multiply 3 protons and 3 neutrons. They can be added only by taking their
common property of mass to give the mass number. This accumulation and reduction
of numbers is expressed as the result of measurement after comparison with a scaling
constant (standard unit) having similar characteristics (such as length
compared with unit length, area with unit area, volume with unit volume, density
with unit density, interval with unit interval, etc.). The results of
measurements are always pure numbers, i.e., scalar quantities, because the
dimensions of the scaling constants are the same for both the measuring device and
the object being measured and measurement is only the operation of scaling up
or down the unit for an appropriate number of times. Thus, mathematics explains
only “how much” one quantity accumulates or reduces in an interaction
involving similar or partly similar quantities and not “what”, “why”,
“when”, “where”, or “with whom” about the objects involved
in such interactions. These are the subject matters of physics. We will show
repeatedly that in modern physics there is a mismatch and mixup between the
data, the mathematics and the physical theory.
Quantum physics implied
that physical quantities usually have no values until they are observed. Therefore,
the observer must be intrinsically involved in the physics being observed. This
has been wrongly interpreted to mean that there might be no real world in the
absence of an observer! When we measure a particular quantity, we come up with
a specific value. This value is “known” only after the conscious or sentient
content is added to the measurement. Thus, it is reasonable to believe that
when we do not measure or perceive, we do not “know” the value – there is no operation
of the conscious or sentient content, which remains inert – and not that the quantity does
not have any existential value. Here the failure of the physicists to find the
correct “mathematics” to support their “theory” has been put forth as a pretext
for denying reality. Mathematics
is an expression of Nature, not its sole language. Though the observer has a
central role in quantum theories, its true nature and mechanism have eluded
scientists. There cannot be an
equation to describe the observer, the glory of the rising sun, the grandeur of
the towering mountain, the numbing expanse of the night sky, the enchanting fragrance
of the wild flower or the endearing smile on the lips of the beloved. It is not
the same as any physical or chemical reaction or curvature of lips.
Mathematics is often manipulated to spread
the cult of incomprehensibility. The electroweak theory is extremely
speculative and uses questionable mathematics as a cover for opacity to predict
an elusive Higgs mechanism. Yet, tens of millions of meaningless papers have
been read out in millions of seminars worldwide based on such unverified myth
for half a century and more, wasting enormous amounts of resources that could
otherwise have been used to make the Earth a better place to live. The
physicists use data from the excellent work done by experimental scientists to
develop theories based on reverse calculation to match the result. It is
nothing but politics of physics – claim credit for bringing in water in the
river when it rains. Experiment without the backing of theory is blind. It can
lead to disaster. Rain also brings floods. Experiments guided by economic and
military considerations have brought havoc to our lives.
We don’t see the
earlier equations in their original format because all verified inverse square
laws are valid only in spherically symmetric emission fields, which rule out
virtual photons, messenger photons, etc. Density is a relative term and
relative density is related to volume, which is related to diameter. Scaling up
or down the diameter brings in corresponding changes in relative density. This
gives rise to inverse square laws in a real emission field. The quanta cannot spontaneously
emit other quanta without violating conservation laws. This contradicts the
postulates of QED and QFT. The modern physicists are afraid of reality. To
cover up for their inadequacies, the equations have been rewritten using different
unphysical notations to make it incomprehensible for even those making a career
out of it. Reductionism, superstitious belief in the validity of “accepted
theories” and total reliance on them, and the race for getting recognition at
the earliest by any means, compound the problem. Thus, while the “intellectual supremacy (?)” of the
“establishment scientists” is reinforced before “outsiders”, it goes
unchallenged by even their own community.
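The geometric origin of an inverse-square law in a real, spherically symmetric emission field can be sketched in a few lines: the same emitted power spreads over a sphere whose surface area grows as r², so intensity falls as 1/r². The power value below is an arbitrary assumption:

```python
import math

def intensity(P, r):
    """Power P spread uniformly over a sphere of radius r (area 4*pi*r^2)."""
    return P / (4.0 * math.pi * r**2)

P = 100.0                                      # arbitrary emitted power
print(intensity(P, 2.0) / intensity(P, 1.0))   # 0.25: doubling r quarters the intensity
```

No mediating particle is needed to derive this scaling; it follows from the geometry of the sphere alone, which is the point being made above.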
The modern physicists disregard even reality. Example: in “Reviews
of Modern Physics”, Volume 77, July 2005, p. 839, Mr. Gell-Mann says: “In order
to obtain such relations that we conjecture to be true, we use the method of
abstraction from a Lagrangian fieldtheory model. In other words, we construct a mathematical theory of the
strongly interacting particles, which may or may not have anything to do with
reality, find suitable algebraic relations that hold in the model, postulate their validity, and then throw away the model. We may compare
this process to a method sometimes employed in French cuisine: a piece of
pheasant meat is cooked between two slices of veal, which are then discarded”.
Is it physics? Thankfully, he has not differentiated between the six different
categories of veal: Prime, Choice, Good, Standard, Utility and Cull linking it
to the six quarks. Veal is used in the cuisine because of its lack of natural
fat, delicate flavor and fine texture. These qualities creep into the pheasant
meat even after the veal is discarded. But what Mr. Gell-Mann proposes is: use
A to prove B. Then throw away A! B cannot stand without A. It is the ground for B.
A complete theory must have elements of the
theory corresponding to every element of reality over and above those implicit in the so-called wave function. Mr. David Hilbert argues:
“Mathematical existence is merely freedom from contradiction”. This implies
that mathematical structures simply do not exist unless they are logically
consistent. The validity of a
mathematical statement is judged by its logical consistency. The validity of a
physical statement is judged by its correspondence to reality. Russell’s
paradox and the responses to it – such as Zermelo-Fraenkel set theory, which
avoids Russell’s paradox – point out that mathematics on its own does not
lead to a sensible universe. We must apply constraints in order to obtain
consistent physical reality from mathematics. Unrestricted axioms lead to
Russell’s paradox. Manipulation of
mathematics to explain physics has violated the principle of logical
consistency in most cases. One example is renormalization or elimination of
infinities using a “counter term”, which is logically not consistent, as
mathematically all operations involving infinity are void. Some describe it as
divergence linking it to the concept of limit. We will show that the problem
with infinities can be solved in mathematically consistent ways without using a
“counter term” by re-examining the concept of limit.
Similarly, Mr. Feynman’s sum-over-histories is the “sum of the
particle’s histories” in imaginary time rather than in real time. Feynman had
to do the sum in imaginary time because he was following Mr. Minkowski, who assigned time to the imaginary
axis. That is the four-vector formalism of relativity. Mr. Minkowski assigned time to that axis to make the field symmetrical. It
was a convenience for him, not a physical necessity or reality. But once it was
done, it continued to denormalize everything. Mr. Feynman was not using imaginary time; he was
using real time, but assigned it to the imaginary axis. The theory gets the
correct answer up to a certain limit not because it is correct, but because it
had been proposed through back calculation from experimental results. The gaps
and the greater technical difficulties of trying to sum these in real time are
avoided through technical jargon. These greater technical difficulties are also
considered as a form of renormalization, but they require infinite
renormalization, which is mathematically not valid. Mr. Feynman’s renormalization is heuristics: “mathematics”
specially designed to explain a limited set of data.
Mathematics is also related to the
measurement of time evolution of the state of something. These time evolutions
depict rate of change. When such change is related to motion; like velocity,
acceleration, etc, it implies total displacement from the position occupied by
the body and moving to the adjacent position. This process is repeated due to
inertia till it is modified by the introduction of other forces. Thus, these are
discrete steps that can be related to three-dimensional structures only. Mathematics
measures only the numbers of these steps, the distances involved, including
amplitude, wavelength, etc., and the quanta of energy applied. Mathematics
is also related to the measurement of areas or curves on a graph – the so-called
mathematical structures, which are two-dimensional structures. Thus, the
basic assumptions of all topologies, including symplectic topology, linear and
vector algebra and the tensor calculus, all representations of vector spaces,
whether they are abstract or physical, real or complex, composed of whatever
combination of scalars, vectors, quaternions, or tensors, and the current
definition of the point, line, and derivative are necessarily at least one
dimension less than physical space.
The
graph may represent space, but it is not space itself. The drawings of a circle,
a square, a vector or any other physical representation, are similar abstractions.
The circle represents only a two-dimensional cross-section of a three-dimensional
sphere. The square represents a surface of a cube. Without the cube
or similar structure (including the paper), it has no physical existence. An
ellipse may represent an orbit, but it is not the dynamical orbit itself. The
vector is a fixed representation of velocity; it is not the dynamical velocity
itself, and so on. The so-called simplification or scaling up or down of the
drawing does not make it abstract. The basic abstraction is due to the fact
that the mathematics that is applied to solve physical problems actually applies
to the two-dimensional diagram, and not to the three-dimensional space. The
numbers are assigned to points on the piece of paper or in the Cartesian graph,
and not to points in space. If one assigns a number to a point in space, what
one really means is that it is at a certain distance from an arbitrarily chosen
origin. Thus, by assigning a number to a point in space, what one really does
is assign an origin, which is another point in space leading to a contradiction.
The point in space can exist by itself as the equilibrium position of various
forces. But a point on a paper exists only with reference to the arbitrarily
assigned origin. If additional force is applied, the locus of the point in space
resolves into two equal but oppositely directed field lines. But the locus of a
point on a graph is always unidirectional and depicts distance – linear or
nonlinear, but not force. Thus,
a physical structure is different from its mathematical representation.
The
word vacuum has always been used to mean “the thing that is not material or
particulate”. By definition, the vacuum is supposed to be nothing, but often it
is used to mean something. This is a contradiction because it begs the paradox
of Parmenides: If the vacuum is composed of virtual particle pairs, then it no
longer is the vacuum: it is matter. If everything is matter, then we have a
plenum in which motion is impossible. Calling this matter “virtual” is
camouflage. When required to be transparent, treat it as nothing and when it is
required to have physical characteristics (like polarity), treat it as something!
Defining something as both x and non-x is not physics.
There
is no surprise that the equations of QCD remain unsolved at energy scales
relevant for describing atomic nuclei! The various terms of QCD like “color”,
“flavor”, the strangeness number (S) and the baryon number (B) etc, are not
precisely defined and cannot be mechanically assigned. Even spin cannot be mechanically
assigned for quarks except assigning a number. The quantum spin is said to be
not real since quarks are point-like and cannot spin. If quarks cannot spin, how
do chirality and symmetry apply to them at this level? How can a point
express chirality and how can a point be either symmetrical or nonsymmetrical?
If W bosons that fleetingly mediate particles have been claimed to leave their
footprints, quarks should be more stable! But single quarks have never been seen
in bubble chambers, ionization chambers, or any other experiments. We will
explain the mechanism of spin (1/6 for quarks) to show that it has macro
equivalents and that spin zero means absence of spin – which implies only massless
energy transfer.
Objects
in three-dimensional space evolve in time. Mathematical structures in two
dimensions do not evolve in time – they only get mechanically scaled up or down.
Hawking and others were either confused or trying to fool others when they
suggested “time cone” and “event horizon” by manipulating a two dimensional
structure and suggesting a time evolution and then converting it to a three
dimensional structure. Time, unlike distance that is treated as space in a
graph, is an independent variable. We cannot plot or regulate time. We can only
measure time or at best accommodate our actions in time. A light pulse in a
two-dimensional field evolves in time as an expanding circle and not as a conic
section. In three dimensions, it will be an expanding sphere and not a cone.
The reverse direction will not create a reverse cone, but a smaller sphere.
Thus, their concept of the time cone is not even a valid mathematical representation
of physical reality. Researchers have found a wide variety of stellar collapse
scenarios in which an event horizon does not form, so that the singularity
remains exposed to our view. Physicists call it a “naked singularity”. In such
a case, matter and radiation can both fall in and come out, whereas matter falling
into the singularity inside a black hole would be on a one-way trip. Thus, the “naked
singularity” proves the concept of the “event horizon” wrong.
The description of the measured state at a
given instant is physics and the use of the magnitude of change at two or more designated
instants to predict the outcome at other times is mathematics. But the concept
of measurement has undergone a big change over the last century leading to
changes in “mathematics of physics”. It all began with the problem of measuring
the length of a moving rod. Two possibilities of measurement suggested by Mr.
Einstein in his 1905 paper were:
(a)
“The observer moves together with the given measuring-rod and the rod to be
measured, and measures the length of the rod directly by superposing the
measuring-rod, in just the same way as if all three were at rest”, or
(b)
“By means of stationary clocks set up in the stationary system and
synchronizing with a clock in the moving frame, the observer ascertains at what
points of the stationary system the two ends of the rod to be measured are
located at a definite time. The distance between these two points, measured by
the measuring-rod already employed, which in this case is at rest, is the
length of the rod.”
The method described at (b) is misleading.
We can do this only by setting up a measuring device to record the emissions
from both ends of the rod at the designated time, (which is the same as taking
a photograph of the moving rod) and then measure the distance between the two
points on the recording device in units of velocity of light or any other unit.
But the picture will not give a correct reading due to two reasons:
·       If the length of the rod is small or the velocity is small, then length contraction
will not be perceptible according to the formula given by Einstein.
·       If the length of the rod is big or the velocity is comparable to that of light, then
light from different points of the rod will take different times to reach the recording
device, and the picture we get will be distorted due to different Doppler shifts.
Thus, there is only one way of measuring the length of the rod as in (a).
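The first point can be quantified with the standard contraction factor √(1 − v²/c²). The sketch below is illustrative only; the rod length and the two speeds are arbitrary assumptions:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def contracted_length(L0, v):
    """Length of a rod of rest length L0 moving at speed v, per the standard formula."""
    return L0 * math.sqrt(1.0 - (v / c)**2)

rod = 1.0                                # a 1 m rod
print(contracted_length(rod, 300.0))     # everyday speed: contraction ~ 5e-13 m, imperceptible
print(contracted_length(rod, 0.9 * c))   # at 0.9c the rod would measure ~ 0.436 m
```

At everyday speeds the predicted contraction is many orders of magnitude below anything a photograph could resolve, while at speeds where it would be large, the Doppler distortion described in the second point takes over.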
Here also we are reminded of an anecdote relating
to a famous scientist, who once directed two of his students to precisely measure
the wavelength of sodium light. Both students returned with different results
– one resembling the normally accepted value and the other a different value.
Upon enquiry, the other student replied that he had also come up with the same
result as the accepted value, but since everything including the Earth and the
scale on it is moving, for precision measurement he applied length contraction
to the scale treating the star Betelgeuse as a reference point. This changed
the result. The scientist told him to treat the scale and the object to be
measured as moving with the same velocity and recalculate the wavelength of light
again without any reference to Betelgeuse. After some time, both the students
returned to tell that the wavelength of sodium light is infinite. To a
surprised scientist, they explained that since the scale is moving with light,
its length would shrink to zero. Hence it will require an infinite number of
scales to measure the wavelength of sodium light!
Some scientists we have come across try to
overcome this difficulty by pointing out that length contraction occurs only
in the direction of motion. They claim that if we hold the rod in a transverse
direction to the direction of motion, then there will be no length contraction.
But we fail to understand how the length can be measured by holding the rod in
a transverse direction. If the light path is also transverse to the direction
of motion, then the terms c+v and c−v vanish from the equation, making the
entire theory redundant. If the observer moves together with the given
measuring-rod and the rod to be measured, and measures the length of the rod
directly by superposing the measuring-rod while moving with it, he will not
find any difference, because the length contraction, if real, will be in the
same proportion for both.
The fallacy in the above description is that if one treats the situation “as if
all three were at rest”, one cannot measure velocity or momentum, as the object
will be relatively at rest, which means zero relative velocity. Either Mr.
Einstein missed this point or he was clever enough to camouflage it when, in
his 1905 paper, he said: “Now to the origin of one of the two systems (k) let a
constant velocity v be imparted in the direction of the increasing x of the
other stationary system (K), and let this velocity be communicated to the axes
of the coordinates, the relevant measuring-rod, and the clocks”. But is this
the velocity of k as measured from k, or is it the velocity as measured from K?
This question is extremely crucial. K and k each have their own clocks and
measuring rods, which are not treated as equivalent by Mr. Einstein. Therefore,
according to his theory, each will measure the velocity of k differently. But
Mr. Einstein does not assign the velocity specifically to either system.
Everyone missed this, and all have been misled. His spinning-disk example in GR
fails for the same reason.
Mr. Einstein uses a privileged frame of
reference to define synchronization and then denies the existence of any
privileged frame of reference. We quote from his 1905 paper on the definition
of synchronization: “Let a ray of light start at the “A time” t_{A}
from A towards B, let it at the “B time” t_{B} be reflected at B
in the direction of A, and arrive again at A at the “A time” t’_{A}.
In accordance with definition the two clocks synchronize if:
t_{B} − t_{A} = t’_{A} − t_{B}.”
“We assume that this definition of
synchronism is free from contradictions, and possible for any number of points;
and that the following relations are universally valid:—
1. If the clock at B synchronizes with the clock at A, the clock at A synchronizes with the clock at B.
2. If the clock at A synchronizes with the clock at B and also with the clock at C, the clocks at B and C also synchronize with each other.”
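The synchronization condition quoted above, t_B − t_A = t′_A − t_B, can be written as a small check; the timestamp values below are hypothetical:

```python
# Einstein's synchronization convention: clocks A and B synchronize when the
# out-leg light time t_B - t_A equals the return-leg time t_A_return - t_B.
def einstein_synchronized(t_A, t_B, t_A_return, tol=1e-12):
    return abs((t_B - t_A) - (t_A_return - t_B)) < tol

# Hypothetical timestamps: light leaves A at 0, reflects at B at 5, returns at 10.
symmetric = einstein_synchronized(0.0, 5.0, 10.0)    # equal legs
asymmetric = einstein_synchronized(0.0, 4.0, 10.0)   # unequal legs
```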
The concept of relativity is valid only
between two objects. Introduction of a third object brings in the concept of
privileged frame of reference, and all equations of relativity fail. Yet Mr.
Einstein does precisely the same while claiming the very opposite. In the above
description, the clock at A is treated as a privileged frame of reference for
proving synchronization of the clocks at B and C. Yet, he claims it is
relative!
The cornerstone of GR is the principle of
equivalence. It has been generally accepted without much questioning. But if we
analyze the concept scientifically, we find a situation akin to Russell’s
paradox of set theory, which raises an interesting question: if S is the set of
all sets which do not have themselves as a member, is S a member of itself? The
general principle (discussed in our book Vaidic Theory of Numbers) is that
there cannot be many without one, meaning there cannot be a set without
individual elements (example: a library, a collection of books, cannot exist
without individual books). In one there cannot be many, implying that
there cannot be a set of one element, or that a set of one element is
superfluous (example: a book is not a library); such objects would be
individual members unrelated to each other, whereas relation among elements is
a necessary condition of a set. Thus, in the ultimate analysis, a collection
of objects is either a set with its elements, or individual objects that are
not the elements of a set.
Let us examine set theory and consider the
property p(x): x ∉ x, which means the defining property p(x)
of any element x is such that x does not belong to x. Nothing appears unusual
about such a property. Many sets have this property. A library [p(x)] is a
collection of books. But a book is not a library [x ∉ x]. Now, suppose this
property defines the set R = {x : x ∉ x}. It must be possible to determine if
R ∈ R or R ∉ R. However, if R ∈ R, then the defining property of R implies
that R ∉ R, which contradicts the supposition that R ∈ R. Similarly, the
supposition R ∉ R confers on R the right to be an element of R, again leading
to a contradiction. The only possible conclusion is that the property “x ∉ x”
cannot define a set. This idea is also known as the Axiom of Separation in
Zermelo-Fraenkel set theory, which postulates that “objects can only be
composed of other objects” or “objects shall not contain themselves”. This
concept has been explained in detail with examples in the chapter on motion in
the ancient treatise “Padaartha Dharma Samgraha” – Compendium on Properties of
Matter – written by Aachaarya Prashastapaada.
In order to avoid this paradox, it has to be
ensured that a set is not a member of itself. It is convenient to choose a
“largest” set in any given context, called the universal set, and confine the
study to the elements of that universal set only. This set may vary in
different contexts, but in a given setup the universal set should be so
specified that no occasion ever arises to digress from it. Otherwise, there is
every danger of colliding with paradoxes such as Russell’s paradox, or, as
it is put in everyday language: “A man of Seville is shaved by the Barber
of Seville if and only if the man does not shave himself.”
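The barber formulation above can be rendered as a small truth-table check; a minimal sketch, encoding only the rule as stated:

```python
# The rule: the barber shaves a man if and only if the man does not shave
# himself. Applying the rule to the barber himself contradicts BOTH possible
# answers (hypothetical encoding of the statement above).
def rule_is_consistent(barber_shaves_himself):
    """Return True if the assumption survives the rule applied to the barber."""
    required = not barber_shaves_himself   # what the rule demands of the barber
    return required == barber_shaves_himself

# Neither assumption is consistent, so the rule defines no such barber:
results = [rule_is_consistent(True), rule_is_consistent(False)]
```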
There is a similar problem
in the theory of General Relativity and the principle of equivalence. Inside a
spacecraft in deep space, objects behave like suspended particles in a fluid or
like the asteroids in the asteroid belt. Usually, they are relatively stationary
in the medium unless some other force acts upon them. This is because the
relative distribution of mass inside the spacecraft and its dimensional volume
determine the average density at each point inside the spacecraft. Further,
the average density of the local medium of space is factored into this
calculation. The light ray from outside can be related to the spacecraft only
if we consider the bigger frame of reference containing both the space emitting
light and the spacecraft. If the passengers could observe the scene outside the
spacecraft, they would notice this difference and know that the spacecraft is
moving. In that case, the reasons for the apparent curvature would be known. If
we consider outside space as a separate frame of reference unrelated to the
spacecraft, the ray emitted by it cannot be considered inside the spacecraft
(we call this praagaabhaava). The emission of rays will then be restricted to
those emanating from within the spacecraft. In that case, the ray will move
straight inside the spacecraft. In either case, Mr. Einstein’s description is
faulty. Thus, both SR and GR, including the principle of equivalence, are wrong
descriptions of reality. Hence all mathematical derivatives built upon these
wrong descriptions are also wrong. We will explain all so-called experimental
verifications of SR and GR by alternative mechanisms or other verifiable
explanations.
Relativity is an
operational concept, but not an existential concept. The equations apply to
data and not to particles. When we approach a mountain from a distance, its
volume appears to increase. What this means is that the visual perception of
volume (scaling up of the angle of incoming radiation) changes at a particular
rate. But locally, there is no such impact on the mountain. It exists as it
was. The same principle applies to the perception of objects with high velocities.
The changing volume is perceived at different times depending upon our relative
velocity. If we move fast, it appears earlier. If we move slowly, it appears
later. Our differential perception is related to changing angles of radiation
and not the changing states of the object. It does not apply to locality.
Einstein has also admitted this. But the Standard Model treats these as
absolute changes that alter not only the perceptions but the particle itself!
The above description points to some very
important concepts. If the only way to measure is to move with the object of
measurement or allow it to pass between two points at two instants (and measure
the time and distance for comparison), it implies that all measurements can be
done only at “herenow”. Since “herenow” is ever changing, how do we describe
the result? We cut out an easily perceived and fairly repetitive segment of it
and freeze it or its subdivisions for future reference as the scaling constant
(unit). We compare all future states (also past states, where they had been
measured) with this constant and call the result of such comparison the
“result of measurement”. The operations involving such measurements are called
mathematics.
Since the result of measurement can only be scalar quantities, i.e., numbers, mathematics
is the science of numbers. Since numbers are always discrete units, and the
objects they represent are bound by different degrees of freedom, mathematics
must follow these principles. But in most of the “mathematics” used by the
physicists, these principles are totally ignored.
Let us take the example of complex numbers. Imaginary
numbers are abstract descriptions and illusions that can never be
embodied in the “phenomena”, because they do not conform to the verifiable laws
of the phenomena in nature. Conversely, only the real can be embodied in
verifiable phenomena. A negative
sign assigned to a number points to the “deficiency of a physical
characteristic” at “herenow”. Because of conservation laws, the negative sign
must include a corresponding positive sign “elsewhere”. While the deficiency is
at “herenow”, the corresponding positive part is not at “herenow”. They seek
each other out, which can happen only in “other times”.
Let
us take the example of an atom. Generally, we never talk about the total charge
of a particle; we describe only the net charge. Thus, when we describe a
positively or negatively charged ion, we mean that the particle has both the
charges, but the magnitude of one category of charge is more than that of the
other. The positively charged proton is deficient in negative charge, i.e., it
has a charge of –(–1) in electron charge units. This double negative appears as
the positive charge (actually, the charge of proton is slightly deficient from
+1). We posit that the negative potential is the real and the only charge.
Positive potential is perceived due to relative deficiency (we call it nyoona)
of negative potential. We will discuss this statement while explaining what an
electron is. The proton tries to fulfill its relative deficiency by uniting
with an electron to become a neutron (or a hydrogen atom, which is also
unstable because of the deficiency). The proton-neutron interaction is
dependent upon neutrinos-antineutrinos. Thus, there is a deficiency of
neutrinos-antineutrinos, and the neutron and the proton-electron pairs search
for it. This process goes on. At every stage, there is an addition, which
leads to a corresponding “release” leading to fresh deficiency in a linear
mechanism. Thus, the nuclei weigh less than their constituents.
This deficiency is known as the mass defect, which represents the energy
released when the nucleus is formed. The deficiency generates the charge that is
the cause for all other forces and nonlinear interactions.
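The mass defect mentioned above can be illustrated with the simplest compound nucleus; the nucleon and deuteron masses below are approximate textbook values (in MeV/c²), quoted here only as an illustration:

```python
# Approximate rest masses in MeV/c^2 (textbook values, stated as assumptions).
M_PROTON   = 938.272
M_NEUTRON  = 939.565
M_DEUTERON = 1875.613   # the bound proton-neutron system

# The bound nucleus weighs less than its free constituents; the difference
# (about 2.22 MeV) is the binding energy released when the nucleus forms.
mass_defect = M_PROTON + M_NEUTRON - M_DEUTERON
```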
The operation of deficiency leads to linear
addition with corresponding subtraction. This is universally true for
everything and we can prove it. Hence a deficiency cannot be reduced in a nonlinear
manner. This is because both positive and negative potentials do not separately
exist at “herenow”, where the mathematics is done. They must be separated in
space or exist as net charge. For this reason, negative numbers (–1) cannot be
reduced nonlinearly (√–1). Also, why stop only at the square root? Why not work
out fourth, eighth, etc., roots ad infinitum? For numbers other than 1, they
will not give the same result. This means complex numbers are restricted to (√–1). Since
(–1) does not exist at “herenow”, no mathematics is possible with it. Thus, the
complex numbers are neither physical nor mathematical. This is proved by the
fact that complex numbers cannot be used in computer programming, which mimics
conscious processes of measurement. Since mathematics is done by conscious
beings, there cannot be mathematics involving unphysical complex numbers.
To say that complex
numbers are “complete”, because they “include real numbers and more” is like
saying dreams are “complete”, because they “include what we perceive in wakeful
state and more”. Inertia is a universal law of Nature that arises after all
actions. Thought is the inertia of mind, which is our continued response to
initial external stimuli. During wakeful state, the “conscious actions” involve
perception through sense organs, which are nothing but measurement of the fields
set up by the objects by the corresponding fields set up by our respective sense
organs at “herenow”. Thus, any inertia they generate is bound by not only the existential
physical characteristics of the objects of perception, but also the intervening
field. During dreams, the ocular interaction with external fields ceases, but
their memory causes inertia of mind due to specific tactile perception during
sleep. Thus, we dream of only whatever we have seen in our wakeful state. Since
memory is a frozen state (saakshee) like a scaling constant and is free
from the restrictions imposed by the time evolving external field, dreams are also
free from these restrictions. We have seen horses that run and birds that fly.
In dream, we can generate operational images of flying horses. This is not
possible in the existential wakeful state. This is not the way of Nature. This is
not physics. This is not mathematics either.
Mr.
Dirac
proposed a procedure for transferring the characteristic quantum phenomenon of
discreteness of physical quantities from the quantum mechanical treatment of
particles to a corresponding treatment of fields. Conceptually, such treatment
is void, as by definition, a particle is discrete whereas a field is analog. A
digitized field is an oxymoron. Digits are always discrete units. What we
actually mean by a digitized field is that we measure it in discrete steps unit
by unit. Employing the quantum mechanical theory of the harmonic oscillator, Mr.
Dirac gave
a theoretical description of how photons appear in the quantization of the
electromagnetic radiation field. Later, Mr. Dirac’s procedure became
a model for the quantization of other fields as well. But the fallacy here is
evident. There are some potential
ingredients of the particle concept which are explicitly opposed to the
corresponding (and therefore opposite) features of the field concept.
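The harmonic-oscillator quantization that Mr. Dirac employed rests on the textbook spectrum E_n = ħω(n + ½); a minimal sketch, with the energy quantum set to an arbitrary unit:

```python
# Discrete spectrum of the quantum harmonic oscillator: E_n = hbar*omega*(n + 1/2).
HBAR_OMEGA = 1.0  # energy quantum of the mode, in arbitrary units (assumption)

def oscillator_level(n, hbar_omega=HBAR_OMEGA):
    """Energy of the n-th level; adjacent levels differ by one quantum."""
    return hbar_omega * (n + 0.5)

levels = [oscillator_level(n) for n in range(4)]
gaps = [b - a for a, b in zip(levels, levels[1:])]  # each gap is one 'photon'
```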
A core
characteristic of a field is supposed to be that it is a system with an
infinite number of degrees of freedom, whereas the very opposite holds
true for particles. What this really means is that the field interacts with
many particles simultaneously, whereas a particle is placed in and interacts
with other particles only through one field (with its subfields like
electrical or magnetic fields). A particle can be referred to by the
specification of the coordinates x(t) that pertains to the
time evolution of its center of mass as representative of the particle (presupposing
its dimensional impenetrability). However, the operator-valuedness of quantum
fields generally means that to each space-time point x(t), a field value
φ(x(t)) is assigned, which is called an operator. Operators are generally
treated as mathematical entities which are defined by how they act on
something. They do not represent definite values of quantities; rather, they
specify what can be measured. This is a fundamental difference between
classical fields and quantum fields, because an operator-valued quantum field
φ(x(t)) does not by itself correspond to definite values of a physical
quantity like the strength of an electromagnetic field. Quantum fields are
determinables, as they are described by mappings from space-time points to
operators. This description is true but interpreted wrongly. Left to itself, a
particle will continue to be in its state infinitely. It evolves in time
because of its interaction with the field due to differential density that
appears as charge. Unlike particles, where the density is protected by its
dimensions, universal fields (where energy fields are subfields called “jaala – literally net”) act like fluids.
Hence its density is constantly fluctuating and cannot be precisely defined.
Thus, it continuously strives to change the state of the particle, which is its
time evolution. The pace of this time evolution is the time dilation for that
particle. There is no such thing as universal time dilation. Hence we call time
“vastu patita”, literally meaning “based on changes in objects”. Thus, it can
be called an operator.
Another feature
of the particle concept is explicitly in opposition to the field concept. In
pure particle ontology, the interaction between remote particles can only be
understood as an action at a distance. In contrast to that, in field
ontology, or a combined ontology of particles and fields, local action
is implemented by mediating fields. Further, classical particles are massive
and impenetrable, again in contrast to classical fields. The concept of
particles has been evolving throughout the history of science in accordance
with the latest scientific theories. Therefore, a particle interpretation of
QFT is a very difficult proposition.
Mr.
Wigner’s
famous analysis of the Poincaré group is often assumed to provide a definition
of elementary particles. Although Mr. Wigner has found a classification of
particles, his analysis does not contribute very much to the question “what a
particle is” and whether a given theory can be interpreted in terms of
particles. What Mr. Wigner has given is rather a conditional answer. If
relativistic quantum mechanics can be interpreted in terms of particles, then
the possible types of particles correspond to irreducible unitary
representations of the Poincaré group. However, the question of whether, and if
so in what sense, relativistic quantum mechanics can be interpreted as a
particle theory at all has not been addressed in Mr. Wigner’s analysis. For
this reason, the discussion of the particle interpretation of QFT is not closed
with Mr. Wigner’s analysis. For example, the pivotal question of the
localizability of particle states is still open. Quantum physics has generated
many more questions than it has solved.
Each measurable parameter in a physical
system is said to be associated with a quantum mechanical operator. Part of the
development of quantum mechanics is the establishment of the operators
associated with the parameters needed to describe the system. The operator
associated with the system energy is called the Hamiltonian. The word operator can in principle
be applied to any function. However in practice, it is most often applied to
functions that operate on mathematical
entities of higher complexity than real numbers, such as vectors, random
variables, or other “mathematical expressions”. The differential and integral
operators, for example, have domains and codomains whose elements are “mathematical
expressions of indefinite complexity”.
In contrast, functions with vectorvalued domains but scalar ranges are called “functionals”
and “forms”. In general, if either the
domain or codomain (or both) of a function contains elements significantly
more complex than real numbers, that function is referred to as an operator.
Conversely, if neither the domain nor the codomain of a function contains
elements more complex than real numbers, that function is referred to simply as
a function. Trigonometric functions such as sine, cosine, etc., are examples of
the latter case. Thus, the operators or the Hamiltonian are not mathematical,
as they do not accumulate or reduce particles by themselves. These are
illegitimate manipulations in the name of mathematics.
The Hamiltonian is said to contain the
operations associated with both kinetic and potential energies. Kinetic energy
is related to motion of the particle – hence uses binomial terms associated
with energy and fields. This is involved in interaction with the external field
while retaining the identity of the body, with its internal energy, separate
from the external field. Potential energy is said to be related to the position
of the particle. But it remains confined to the particle even while the body is
in motion. The example of the pendulum, where potential energy and kinetic
energy are shown as interchangeable, is a wrong description, as there is no
change in the potential energy of the pendulum between when it is in motion and
when it is at rest.
The motion of the pendulum is due only to
inertia. It starts with application of force to disturb the equilibrium
position. Then both inertia of motion and inertia of restoration take over. Inertia
of motion is generated when the body is fully displaced. Inertia of restoration
takes over when the body is partially displaced, like in the pendulum, which
remains attached to the clock. This is one of the parameters that cause wave
and sound generation through transfer of momentum. As the pendulum swings to
one side due to inertia of motion, the inertia of restoration tries to pull it
back to its equilibrium position. This determines the speed and direction of
motion of the pendulum. Hence the frequency and amplitude depend on the length
of the cord (this determines the area of the cross-section) and the weight of
the pendulum (this determines the momentum). After reaching the equilibrium
position, the pendulum continues to move due to inertia of motion or
restoration. This process is repeated. If the motion is sought to be explained
by exchange of PE and KE, then we must account for the initial force that
started the motion. Though it ceases to exist, its inertia continues. But the
current theories ignore it. The only verifiable explanation is; kinetic energy,
which is determined by factors extraneous to the body, does not interfere with
the potential energy.
In a Hamiltonian, the potential energy is
shown as a function of position such as x
or the potential V(x). The spectrum of the Hamiltonian is said to be the set of
all possible outcomes when one measures the total energy of a system. A body
possessing kinetic energy has momentum. Since position and momentum do not
commute, the functions of position and momentum cannot commute. Thus,
Hamiltonian cannot represent total energy of the system. Since potential energy
remains unchanged even in motion, what the Hamiltonian actually depicts is the
kinetic energy only. It is part
of the basic structure of quantum mechanics that functions of position are
unchanged in the Schrödinger equation, while momenta take the form of spatial
derivatives. The Hamiltonian operator contains both time and space derivatives.
The Hamiltonian operator for a class of velocitydependent potentials shows
that the Hamiltonian and the energy of the system are not simply related, and while
the former is a constant of motion and does not depend on time explicitly, the
latter quantity is timedependent, and the Heisenberg equation of motion is not
satisfied.
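The non-commutation of position and momentum invoked above can be checked numerically: with momentum represented as a spatial derivative (as in the Schrödinger equation), the commutator [d/dx, x] applied to any function returns that function. The test function, step size, and evaluation point below are arbitrary:

```python
# Momentum as a spatial derivative (central difference); position as
# multiplication by x. Their commutator applied to f returns f itself:
# (d/dx)(x*f) - x*(d/dx f) = f.
H = 1e-6  # finite-difference step (arbitrary small value)

def d_dx(f):
    return lambda x: (f(x + H) - f(x - H)) / (2.0 * H)

def x_op(f):
    return lambda x: x * f(x)

f = lambda x: x ** 3          # arbitrary test function
x0 = 2.0                      # arbitrary evaluation point
commutator = d_dx(x_op(f))(x0) - x_op(d_dx(f))(x0)  # expect f(x0) = 8
```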
The spectrum of the
Hamiltonian is said to be decomposed via its spectral measures, into a) pure
point, b) absolutely continuous, and c) singular parts. The pure point spectrum
can be associated with eigenvectors, which in turn are the bound states of the
system – hence discrete. The absolutely continuous spectrum corresponds to the
so-called free states. The singular spectrum comprises physically impossible
outcomes. For example, the finite potential well admits bound states with
discrete negative energies and free states with continuous positive energies.
When we include unphysical parameters, only such outcomes are expected. Since
all three decompositions come out of the same Hamiltonian, they must arise
through different mechanisms. Hence a Hamiltonian cannot be used without
referring to the specific mechanism that causes the decompositions.
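The finite-well example above can be made concrete. This sketch counts the even-parity bound states from the standard transcendental condition z·tan(z) = √(z0² − z²); the well-strength parameter z0 is an illustrative value, and odd-parity states would be counted analogously:

```python
import math

# Even-parity bound states of a finite square well satisfy
# z*tan(z) = sqrt(z0^2 - z^2), with z0 a dimensionless well strength.
def even_parity_mismatch(z, z0):
    return z * math.tan(z) - math.sqrt(z0 * z0 - z * z)

def count_even_bound_states(z0, steps=20000):
    """Count roots in (0, z0) by sign changes, skipping the poles of tan."""
    count, prev = 0, None
    for i in range(1, steps):
        z = z0 * i / steps
        if abs(math.cos(z)) < 1e-3:   # too near a pole of tan(z): reset
            prev = None
            continue
        val = even_parity_mismatch(z, z0)
        if prev is not None and prev < 0.0 < val:
            count += 1
        prev = val
    return count

# For z0 = 4 the well holds two even-parity bound states at discrete
# negative energies; above the well the spectrum is continuous.
n_even = count_even_bound_states(4.0)
```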
Function is a relationship between two sets of numbers or other
mathematical objects where each member of the first set is paired with only one
member of the second set. It is an equation for which any x that can be
plugged in will yield exactly one y – a pairing of each input with a single
output – hence discreteness. Functions can be used to understand how one
quantity varies in relation to (is a function of) changes in a second
quantity. Since no change is possible without energy, which is said to be
quantized, such changes should also be quantized, which implies discreteness
involving numbers.
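The pairing described above can be modelled as a finite mapping; the numbers below are arbitrary illustrations:

```python
# A function pairs each x in the domain with exactly one y (here y = 2x).
f = {1: 2, 2: 4, 3: 6}

def apply(mapping, x):
    """Each admissible x yields exactly one y - the defining property."""
    return mapping[x]

ys = [apply(f, x) for x in sorted(f)]  # discrete inputs, discrete outputs
```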
The Lagrangian is used both in celestial mechanics and quantum mechanics.
In quantum mechanics, the Lagrangian has been extended into the Hamiltonian. Although
Lagrange only sought to describe classical mechanics, the action principle that
is used to derive the Lagrange equation is now recognized to be applicable to
quantum mechanics. In celestial mechanics, the gravitational field causes both
the kinetic energy and the potential energy. In quantum mechanics, charge
causes both the kinetic energy and the potential. The potential is the energy
contained in a body when it is not in motion. The kinetic energy is the energy
contained by the same body when it is put to motion. The motions of celestial
bodies are governed by gravitational fields and the potential is said to be gravitational
potential. Thus, originally the Lagrangian must have been a single field differential.
At its simplest, the Lagrangian is the kinetic energy of a system T minus
its potential energy V. In other words, one has to subtract the gravitational
potential energy from the gravitational kinetic energy! Is that possible?
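The definition “T minus V” referred to above reads, in the simplest one-particle case, as follows; the mass, speed, and height are arbitrary illustrative values:

```python
# Simplest textbook Lagrangian: L = T - V for one particle in gravity.
def lagrangian(m, v, h, g=9.8):
    T = 0.5 * m * v * v   # kinetic energy
    V = m * g * h         # gravitational potential energy
    return T - V

L_example = lagrangian(2.0, 3.0, 0.0)  # T = 9.0, V = 0.0
```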
Mr. Newton thought that the kinetic energy and
the potential energy of a single particle would sum to zero. He solved many
problems of the timevarying constraint force required to keep a body (like a
pendulum) in a fixed path by equating the two. But in that case, the Lagrangian
L = T – V will always be zero
or 2T. In both cases it is of no use. To overcome the problem, it has
been suggested that Lagrangian only considers the path and chooses a set
of independent generalized coordinates that characterize the possible
motion. But in that case, we will know about the path, but not about the force.
Despite
its much publicized predictive successes, quantum mechanics has been plagued by
conceptual difficulties since its inception. No one is really clear about what
quantum mechanics is or what it describes. Since it is widely agreed that any
quantum mechanical system is completely described by its wave function, it
might seem that quantum mechanics is fundamentally about the behavior of wave
functions. Quite naturally, all physicists, starting with Mr.
Erwin
Schrödinger, the father of the wave function, wanted this to be true. However, Mr.
Schrödinger
ultimately found it impossible to believe. His difficulty was not so much with
the novelty of the wave function: “That it is an abstract, unintuitive mathematical
construct is a scruple that almost always surfaces against new aids to thought
and that carries no great message”. Rather, it was that the “blurring”
suggested by the spread out character of the wave function “affects
macroscopically tangible and visible things, for which the term ‘blurring’
seems simply wrong” (Schrödinger 1935).
For
example, in the same paper Mr. Schrödinger noted that it may happen in
radioactive decay that “the emerging particle is described ... as a spherical
wave ... that impinges continuously on a surrounding luminescent screen over
its full expanse. The screen however does not show a more or less constant
uniform surface glow, but rather lights up at one instant at one
spot ....”. He observed that one can easily arrange, for example by including a
cat in the system, “quite ridiculous cases” with the ψfunction of the entire
system having in it the living and the dead cat mixed or smeared out in equal
parts. Thus it is because of the “measurement problem” of macroscopic
superposition that Schrödinger found it difficult to regard the wave function
as “representing reality”. But then what does reality represent? With evident
disapproval, Schrödinger describes how the reigning doctrine rescues itself by
having recourse to epistemology. We are told that no distinction is to be made
between the state of a natural object and what we know about it, or perhaps
better, what we can know about it. Actually – it is said – there is
intrinsically only awareness, observation, measurement.
One of the assumptions of quantum mechanics is that any state of a physical
system and its time evolution is represented by the wavefunction, obtained by
the solution of timedependent Schrödinger equation. Secondly, it is assumed
that any physical state is represented by a vector in Hilbert space being
spanned on one set of Hamiltonian eigenfunctions and all states are bound
together with the help of superposition principle. However, if applied to a
physical system, these two assumptions exhibit a mutual contradiction. It is
said that any superposition of two solutions of the Schrödinger equation is
also a solution of the same equation. However, this statement can have
physical meaning only if the two solutions correspond to the same initial
conditions.
By superposing solutions belonging to different initial conditions, we obtain
solutions corresponding to fully different initial conditions, which imply that
significantly different physical states have been combined in a manner that is
not allowed. The linear differential equations that hold for general
mathematical superposition principles have nothing to do with physical reality,
as actual physical states and their evolution is uniquely defined by
corresponding initial conditions. These initial conditions characterize
individual solutions of Schrödinger equation. They correspond to different
properties of a physical system, some of which are conserved during the entire
evolution.
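The purely mathematical linearity at issue here, as distinct from any physical superposition, can be checked numerically: if y1 and y2 satisfy y″ = −y, so does any linear combination. A minimal sketch using a central-difference second derivative, with an arbitrary combination and evaluation point:

```python
import math

# Central-difference second derivative (step size is an arbitrary choice).
def second_derivative(f, x, h=1e-4):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# sin and cos both satisfy y'' = -y; check an arbitrary linear combination.
combo = lambda x: 2.0 * math.sin(x) + 3.0 * math.cos(x)
x0 = 0.7                                             # arbitrary point
residual = second_derivative(combo, x0) + combo(x0)  # ~0 for a solution
```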
The physical superposition principle has been deduced from the linearity
of Schrödinger differential equation without any justification. This arbitrary
assumption has been introduced into physics without any proof. The solutions
belonging to diametrically different initial conditions have been arbitrarily
superposed. Statements like “quantum mechanics, including the superposition
rules, has been experimentally verified” are absolutely wrong. All tests
hitherto have concerned only consequences following from the Schrödinger
equation.
The measurement problem in quantum physics is really not a problem, but
the result of wrong assumptions. As has been described earlier, measurement is
done only at “herenow”. It depicts the state only at “herenow” – neither
before nor after it. Since all other states are unknown, they are clubbed
together and described as a superposition of states. This does not create a bizarre
state of an “undead” cat (the living and the dead cat mixed, or smeared out in equal parts) at
all other times. As has already been pointed out, the normal time evolution of the cat and the effect of its exposure to
poisonous gas are two different unrelated aspects. The state at
“herenow” is a culmination of the earlier states that are time evolution of
the object. This is true “wave function collapse”, where the unknown collapses
to become transitorily known (since the object continues to evolve in time).
The collapse does not fix the state of the object for ever after. It
describes the state only at “herenow”.
How much one quantity is changing in response to changes in some other
quantity is called its derivative. Contrary to general perception, derivative
is a constant differential over a subinterval, not a diminishing differential
as one approaches zero. There cannot be any approach to zero in calculus because
then there will be no change – hence no derivative. The interval of the
derivative is a real interval. In any particular problem, one can find the time
that passes during the subinterval of the derivative. Thus, nothing in calculus
is instantaneous.
Derivatives are of two types.
Geometrical derivatives presuppose that the function is continuous. At points
of discontinuity, a function does not have a derivative. Physical derivatives
are always discrete. Since numbers are always discrete quantities, a continuous
function cannot represent numbers universally. While fields and charges are continuous,
particles and mass are discrete. The differentiating characteristic between
these two is dimension. Dimension is the characteristic of objects by which we
differentiate the “inner space” of an object from its “outer space”. In the
case of mass, it is discrete and relatively stable. In the case of fluids, it
is continuous but unstable. Thus, the term derivative has to be used carefully.
We will discuss its limitations by using some physical phenomena. We will deal
with dimension, gravity and singularity cursorily and spin and entanglement
separately. Here we focus on bare mass and bare charge that will also explain black
holes, dark matter and dark energy. We will also explain “what is an
electron” and review Coulomb’s law.
Even modern
mathematicians and physicists do not agree on many concepts. Mathematicians
insist that zero has existence but no dimension, whereas physicists insist
that, since the minimum possible length is the Planck scale, the concept of zero
has vanished! The Lie algebra corresponding to SU(n) is a real and not a complex Lie algebra. The physicists
introduce the imaginary unit i, to
make it complex. This is different from the convention of the mathematicians. Often
the physicists apply the “brute force approach”, in which many parameters are
arbitrarily reduced to zero or unity to get the desired result. One example is
the mathematics for solving the equations for the libration points. But such
arbitrary reduction changes the nature of the system under examination (The modern
values are slightly different from our computation). This aspect is overlooked
by the physicists. We can cite many such instances, where the conventions of
mathematicians are different from those of physicists. The famous Cambridge coconut puzzle
is a clear representation of the differences between physics and mathematics.
Yet, the physicists insist that unless a theory is presented in a mathematical
form, they will not even look at it. We do not accept that the laws of physics
break down at singularity. At singularity only the rules of the game change and
the mathematics of infinities takes over.
The mathematics
for a multibody system like a lithium or higher atom is done by treating the
atom as a number of two-body systems. Similarly, the Schrödinger equation in
so-called one dimension (it is a second-order equation, as it contains a term x²,
which is in two dimensions and mathematically implies area) is converted to
three dimensions by adding two similar factors for the y and z axes. Three
dimensions mathematically imply volume. Addition of three (two-dimensional) areas
does not generate (three-dimensional) volume, and x² + y² + z²
≠ (x·y·z). Similarly, mathematically all operations involving infinity are
void. Hence renormalization is not mathematical. Thus, the so-called
mathematics of modern physicists is not mathematical at all!
Unlike Quantum
physicists, we will not use complex terminology and undefined terms; will not first
write everything as integrals and/or partial derivatives. We will not use
Hamiltonians, covariant four-vectors and contravariant tensors of the second
rank, Hermitian operators, Hilbert
spaces, spinors, Lagrangians, various forms of matrices, action, gauge
fields, complex operators, Calabi-Yau shapes, 3-branes, orbifolding and so on
to make it incomprehensible. We will not use “advanced mathematics”, such as
the Abelian, non-Abelian, and Affine models etc., based on mere imagery at the
axiomatic level. We will describe physics as it is perceived. We will use
mathematics only to determine “how much” a system changes when some input parameters
are changed and then explain the changed output, as it is perceived.
HISTORICAL BACKGROUND:
Lorentz force
law deals with what happens when charges are in motion. This is a standard law
with wide applications including designing TV Picture Tubes. Thus, its
authenticity is beyond doubt. When parallel currents are run next to one
another, they are attracted when the currents run in the same direction and
repulsed when the currents run in opposite directions. The attractive or
repulsive force is proportional to the currents and points in a direction
perpendicular to the velocity. Observations and measurements demonstrate that
there is an additional field that acts only on moving charges. This force is
called the Lorentz force. This happens even when the wires are completely charge
neutral. If we put a stationary test charge near the wires, it feels no force.
Consider a long
wire that carries a current I and generates
a corresponding magnetic field. Suppose that a charge moves parallel to this
wire with velocity v. The magnetic field of the wire leads to an attractive
force between the charge and the wire. With reference to the wire frame, there
is no contradiction. But the problem arises when we apply the first postulate
of Special Relativity that laws of physics are the same for all frames of
reference. With reference to the charge frame, the charge is stationary. Hence there
cannot be any magnetic force. Further, a charged particle can gain (or lose)
energy from an electric field, but not from a magnetic field. This is because
the magnetic force is always perpendicular to the particle’s direction of
motion. Hence, it does no work on the particle. (For this reason, in particle
accelerators, magnetic fields are often used to guide particle motion, e.g.,
in a circle, but the actual acceleration is performed by the electric fields.) Apparently,
the only solution to the above contradiction is to assume some attractive force
in the charge frame. The only attractive force in the charge frame must be an attractive
electric field. In other words, apparently, a force is generated by the charge on
itself while moving, i.e., back reaction, so that the total force on the charge
is the back reaction and the applied force.
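The standard textbook arithmetic here can be sketched numerically. In the sketch below, the wire current, the distance and the charge speed are illustrative assumptions, not values from the text:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A (SI)

def wire_field(current, r):
    """Magnitude of B at distance r from a long straight wire: mu0*I/(2*pi*r)."""
    return MU0 * current / (2 * math.pi * r)

def lorentz_force(q, v, B):
    """|F| = q*v*B when the velocity is perpendicular to B."""
    return q * v * B

# Illustrative numbers: a 10 A wire, an electron 1 cm away
# moving at 1e6 m/s parallel to the wire.
B = wire_field(10.0, 0.01)            # 2e-4 T
F = lorentz_force(1.602e-19, 1e6, B)  # attractive force toward the wire
print(B, F)
```

In the wire frame this force is entirely magnetic; a stationary test charge at the same spot feels nothing, which is the asymmetry discussed above.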
There
is something fundamentally wrong in the above description. A charge must move
in a medium. No one has ever seen the evidence for “bare charge” just like no
one has ever seen the evidence for “bare mass”. Thus, “a charge moves parallel
to this wire” must mean either that a charged body is passing by the wire or an
electric current is flowing at a particular rate. In both cases, it would
generate a magnetic field. Thus, the law of physics in both frames of reference
is the same. Only the wrong assumption of the charge as stationary with
reference to itself brings in the consequentially wrong conclusion of back
reaction. This denormalization was
sought to be renormalized.
Classical
physics gives simple rules for calculating this force. An electron at rest is
surrounded by an electrostatic field, whose value at a distance r is given by:
ε (r) = e/r^{2}. …………………………………………………………………(1)
If we consider a
cell of unit volume at a distance r,
the energy content of the cell is: (1/8π)ε^{2}(r).
(2)
The total
electrostatic energy E is therefore obtained by integrating this energy over
the whole of space. This raises the question about the range of integration.
Since electromagnetic forces are involved, the upper limit is taken as
infinity. The lower limit could depend upon the size of the electron. When Mr. Lorentz
developed his theory of the electron, he assumed the electron to be a sphere of
radius a. With this assumption, he
arrived at:
E = e^{2}/2a. …………………………………………………………………… (3)
The trouble
started when attempts were made to calculate this energy from first principles.
When a, the radius of the electron, approaches
zero for a point charge, the denominator in equation (3) becomes zero implying total
energy diverges to infinity:
E → ∞. ……………………………………………………………………… (4)
As Mr. Feynman
puts it; “What’s wrong with an infinite energy? If the energy can’t get out,
but must stay there forever, is there any real difficulty with an infinite
energy? Of course, a quantity that comes out as infinite may be annoying, but
what matters is only whether there are any observable
physical effects. To answer this question, we must turn to something else
besides the energy. Suppose we ask how the energy changes when we move the
charge. Then, if the changes are
infinite, we will be in trouble”.
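The divergence follows directly from equation (3); the sketch below assumes Gaussian units, with the electron charge in statcoulombs:

```python
def self_energy(e, a):
    """Electrostatic self-energy of a charge e confined to radius a,
    per equation (3): E = e^2 / (2a), in Gaussian units."""
    return e**2 / (2 * a)

e = 4.803e-10  # electron charge in statcoulombs (assumed standard value)
for a in (1e-13, 1e-16, 1e-19):
    print(a, self_energy(e, a))  # the energy grows without bound as a -> 0
```

Every factor-of-1000 shrinkage of the radius multiplies the self-energy by 1000, which is the point-charge divergence of equation (4).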
Electrodynamics
suggests that mass is the effect of charged particles moving, though there can
be other possible sources of origin of mass. We can take mass broadly of two
types: mechanical or bare mass that we denote as m_{0} and mass of
electromagnetic origin that we denote as m_{em}. Total mass is a
combination of both. In the case of electron, we have a mass experimentally
observed, which must be equal to:
m_{exp} = m_{0} +
m_{em}, …………………………………………………………….(5)
i.e., experimental mass = bare
mass + electromagnetic mass.
This raises the
question, what is mass? We will
explain this and the mechanism of generation of mass without Higgs mechanism
separately. For the present, it would suffice to note that the implication of
equation (1) can be understood only through a confined field. The density of a
confined field varies inversely with radius or diameter. If this density is
affected at one point, the effect travels all along the field to affect other
particles within the field. This is the only way to explain the seemingly
action at a distance. The interaction of the field is fully mechanical. Though
this fact is generally accepted, there is a tendency among scientists to treat
the field as not a kind of matter and treat all discussions about the nature of
the field as philosophical or metaphysical. For the present, we posit that mass
is “field confined (which increases density beyond a threshold limit)”. Energy
is “mass unleashed”. We will prove it later. Now, let us consider a paradox! The
nucleus of an atom, where most of its mass is concentrated, consists of
neutrons and protons. Since the neutron is thought of as a particle without any
charge, its mass should be purely mechanical or bare mass. The mass of the
charged proton should consist of m_{0} + m_{em}. Hence, the
mass of proton should have been higher than that of neutron, which, actually,
is the opposite. We will explain this apparent contradiction later.
When the
electron is moved with a uniform velocity v,
the electric field generated by the electron’s motion acquires a momentum, i.e.,
mass x velocity. It would appear that the electromagnetic field acts as if the
electron had a mass purely of electromagnetic origin. Calculations show that
this mass m_{em} is given by the equation:
m_{em} = (2/3)(e^{2}/ac^{2}), ……………………………………………………….(6) or
a = (2/3)(e^{2}/m_{em}c^{2}), ………………………….………..….…..………..(7)
where a defines the radius of the electron.
Again we land in
problem, because if we treat a = 0,
then equation (6) tells us that m_{em} = ∞.
……………………………………………………………………….(8)
Further, if we
treat the bare mass of electron m_{0} = 0 for a point particle, then
the mass is of purely electromagnetic in origin. In that case:
m_{em} = m_{exp}
= observed mass = 9.10938188 × 10^{−31}
kilograms.………….…... (9),
which contradicts equation (8).
Putting the
value of eq.9 in eq.7, we get: a =
(2/3) (e^{2}/ m_{exp} c^{2})..….… (10),
as the radius of the electron.
But we know that the classical electron radius:
r_{0} = e^{2}/(m_{exp}c^{2}) ≈ 2.82 × 10^{−13} cm. ..……………..…………. (11).
The
factor 2/3 in a depends on how the
electric charge is actually distributed in the sphere of radius a. We will discuss it later. The r_{0}
is the nominal radius. According to the modern quantum mechanical understanding
of the hydrogen atom, the average distance between electron and proton is ≈1.5a_{0}, somewhat different than
the value in the Bohr model (≈ a_{0}), but certainly the same order
of magnitude. The value 1.5a_{0}
is approximate, not exact, because it neglects reduced mass, fine structure
effects (such as relativistic corrections), and other such small effects.
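The same arithmetic can be sketched in SI units; the numerical constants below are standard assumed values, not taken from the text:

```python
K = 8.9875517923e9       # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.10938188e-31     # observed electron mass, kg (the value quoted above)
C = 2.99792458e8         # speed of light, m/s

# Classical electron radius r0 = k*e^2 / (m*c^2), the SI form of eq. (11)
r0 = K * E_CHARGE**2 / (M_E * C**2)
# The radius of eq. (10) carries the extra 2/3 factor
a = (2.0 / 3.0) * r0
print(r0, a)  # r0 is about 2.82e-15 m, i.e. 2.82e-13 cm
```

The 2/3 factor makes a smaller than the nominal radius r_{0}, which is the distribution-dependence mentioned above.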
If the electron
is a charged sphere, then, since it contains charge of the same sign throughout, it should normally explode.
However, if it is a point charge where a
= 0, it will not explode – since zero has existence but no dimensions. Thus, if
we treat the radius of electron as nonzero, we land at instability. If we
treat the radius of electron as zero, we land at “division of a number by
zero”. It is treated as infinity. Hence equation (6) shows m_{em}
as infinite, which contradicts equation (9), which has been physically
verified. Further, due to the mass-energy equation E = m_{0}c^{2},
mass is associated with an energy. This energy is known as selfenergy. If mass
diverges, selfenergy also diverges. For infinite mass, the selfenergy also
becomes infinite. This problem has not been satisfactorily solved till date. According
to standard quantum mechanics, if E
is the energy of a free particle, its wavefunction changes in time as:
Ψ(t) = e^{−iEt/ħ} Ψ(0). …………………………………………………………… (12)
Thus,
effectively, time evolution adds a phase factor e^{−iEt/ħ}. Thus, the “dressing up” only changes the
value of E to (E+ ΔE). Hence, it can be
said that as the mass of the particle changes from m_{0}, the value
appropriate to a bare particle, to (m_{0} + Δm), the value appropriate to the dressed-up or physically
observable “isolated” or “free” particle, the energy changes from E to (E + ΔE). Now, the value of (m_{0} + Δm), which is the observed mass, is
known to be 9.10938188 × 10^{−31}
kilograms. But Δm, which is the same as m_{em}, is infinite.
Hence again we are stuck with an infinity.
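That a phase factor alone cannot change any observable magnitude is easy to check directly; E and t below are arbitrary illustrative values, and the usual sign convention e^{−iEt/ħ} is assumed:

```python
import cmath

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def evolve(psi0, E, t):
    """Free-particle time evolution per eq. (12): psi(t) = exp(-i*E*t/hbar)*psi(0)."""
    return cmath.exp(-1j * E * t / HBAR) * psi0

psi0 = 1.0 + 0.0j
psi_t = evolve(psi0, 1.0e-19, 1.0e-15)  # arbitrary energy and time
# The phase factor has unit modulus, so |psi(t)|^2 = |psi(0)|^2.
print(abs(psi_t), abs(psi0))
```

Shifting E to (E + ΔE) only changes how fast the phase rotates; the modulus, and hence the probability, is untouched.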
Mr. Tomonaga, Mr.
Schwinger and Mr. Feynman independently tried to solve the problem. They argued
that what we experimentally observe is not the bare electron, which cannot be directly
observed because it is always interacting with its own field. In other words,
they said that experimental results must be wrong because something, which
cannot be experimentally verified, is changing it! And only after something else
is subtracted from the experimental results, it would give the correct figures!
It must be magic or voodoo! There is no experimental proof till date to justify
the inertial increase of mass. Energy does affect volume which affects density,
but it does not affect mass. Further, they have not defined “what is an
electron”. Hence they can assign any property to it as long as the figures
match. This gives them lot of liberty to play with the experimental value to
match their “theories”. What they say effectively means: if one measures a
quantity and gets the result as x, it must be the wrong answer. The correct
answer should be x’ – Δx, so that the
result is x. Since we cannot experimentally observe Δx, we cannot get x’. But that is irrelevant. You must believe that
what the scientists say is the only truth. And they get Nobel Prize for that
“theory”!
It is this hypothetical
interaction Δm that “dresses up” the
electron by radiative corrections to denormalize it. Thereafter, they started
the “mathematical” magic of renormalization. Since Δm was supposed to be ∞, they tried to “nullify” or “kill” the infinity
by using a counter term. They began with the hydrogen atom. They assumed the
mass of the electron as m_{0} + Δm
and switched on both coulombic and radiative interactions. However, the
Hamiltonian for the interaction was written not as H_{i}, but H_{i}
– Δm. Thereafter, they cancelled + Δm by – Δm. This operation is mathematically not legitimate, as in
mathematics, all operations involving infinity are void. Apart from the wrong
assumptions, the whole problem has arisen primarily because of the mathematics
involving division by zero, which has been assumed to be infinite. Hence let us
examine this closely. First the traditional view.
DIVISION BY ZERO:
Division of two numbers a
and b is the reduction of dividend a by the divisor b or taking
the ratio a/b to get the result (quotient). Cutting or separating an
object into two or more parts is also called division. It is the inverse
operation of multiplication. If: a x b = c, then a
can be recovered as a = c/b as long as b ≠ 0. Division by
zero is the operation of taking the quotient of any number c and 0,
i.e., c/0. The uniqueness of division breaks down when dividing by b
= 0, since the product a x 0 = 0 is the same for any value of a. Hence
a cannot be recovered by inverting the process of multiplication (a
= c/b). Zero is the only number with this property and, as a result,
division by zero is undefined for real numbers and can produce a fatal
condition called a “division by zero error” in computer programs. Even in fields
other than the real numbers, division by zero is never allowed.
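In floating-point arithmetic this breakdown shows up exactly as described; a minimal sketch (the function name is ours, for illustration):

```python
def recover_factor(c, b):
    """Invert a * b = c to recover a = c / b; undefined when b == 0,
    since a * 0 == 0 for every a."""
    if b == 0:
        raise ZeroDivisionError("a cannot be recovered: every a satisfies a*0 == 0")
    return c / b

print(recover_factor(15.0, 3.0))  # a = 5.0 is recovered uniquely
try:
    recover_factor(15.0, 0.0)     # the fatal "division by zero error"
except ZeroDivisionError as err:
    print("error:", err)
```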
Now
let us evaluate (1+1/n)^{n} for any number n. As n increases, 1/n
reduces. For very large values of n, 1/n becomes almost negligible. Thus, for
all practical purposes, (1+1/n) = 1. Since any power of 1 is also 1, the result
is unchanged for any value of n. This position holds when n is very small and
is negligible. Because in that case we can treat it as zero and any number
raised to the power of zero is unity. There is a fatal flaw in this argument,
because n may approach ∞ or 0, but it never “becomes” ∞ or 0.
On
the other hand, whatever be the value of 1/n, it will always be more than zero,
even for large values of n. Hence, (1+1/n) will always be greater than 1. When
a number greater than zero is raised to increasing powers, the result becomes
larger and larger. Since (1+1/n) will always be greater than 1, for very large
values of n, the result of (1+1/n)^{n} will also be ever bigger. But
what happens when n is very small and comparable to zero? This leads to the
problem of “division by zero”. The contradicting result shown above was sought
to be resolved by the concept of limit, which is at the heart of calculus. The generally
accepted concept of limit led to the result: as n approaches 0, 1/n approaches
∞. Since that created all problems, let us examine this aspect closely.
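A direct computation shows what actually happens between the two naive arguments: the values of (1+1/n)^n neither stay at 1 nor grow without bound, but climb toward e:

```python
import math

def compound(n):
    """(1 + 1/n) ** n for positive integer n."""
    return (1.0 + 1.0 / n) ** n

for n in (1, 10, 1000, 1_000_000):
    print(n, compound(n))
# The sequence increases but is bounded above by e = 2.718281828...
```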
LIMIT – GEOMETRICAL (ANALOG) VS PHYSICAL (DIGITAL):
In Europe, the concept of limit goes back to Mr. Archimedes.
His method was to inscribe a number of regular polygons inside a circle. In a
regular polygon, all sides are equal in length and each angle is equal with the
adjacent angles. If the polygon is inscribed in the circle, its area will be
less than the circle. However, as the number of sides in a polygon increases,
its area approaches the area of the circle. Similarly by circumscribing the
polygon over the circle, as the number of its sides goes up, its circumference
and area would be approaching those of the circle. Hence, the value of π can
be easily found out by dividing the circumference by the diameter. If we take
polygons of increasingly many sides and repeat the process, the true value of
π can be “squeezed” between a lower and an upper boundary. His value for π was
within the limits of 3 10/71 and 3 1/7 (i.e., 3.1408… < π < 3.1429…).
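Archimedes' squeeze can be reproduced with the classical side-doubling recurrence for polygon perimeters; the sketch below, on a circle of diameter 1 starting from hexagons, is illustrative:

```python
import math

# a = perimeter of the circumscribed n-gon, b = perimeter of the inscribed n-gon,
# on a circle of diameter 1 (so each perimeter directly bounds pi).
a = 2 * math.sqrt(3)   # circumscribed hexagon, n = 6
b = 3.0                # inscribed hexagon, n = 6
for _ in range(4):     # double the sides: 6 -> 12 -> 24 -> 48 -> 96
    a = 2 * a * b / (a + b)   # harmonic mean gives the new circumscribed perimeter
    b = math.sqrt(a * b)      # geometric mean gives the new inscribed perimeter
print(b, a)  # the 96-gon squeeze: roughly 3.1410 below and 3.1427 above
```

Four doublings reach the 96-sided polygons at which Archimedes stopped, and the gap between the bounds is already below 0.002.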
Long before Mr. Archimedes, the idea was known in India
and was used in the Shulba Sootras,
world’s first mathematical works. For example, one of the formulae prevalent in
ancient India for determining the length of each side of a polygon with 3,4,…9
sides inscribed inside a circle was as follows: Multiply the diameter of the
circle by 103923, 84853, 70534, 60000, 52055, 45922, 41031, for polygons having
3 to 9 sides respectively. Divide the products by 120000. The result is the
length of each side of the polygon. This formula can be extended further to any
number of sides of the polygon.
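The tabulated multipliers closely track 120000·sin(π/n), which is the exact side of an inscribed n-gon when the diameter is scaled to 120000. A sketch verifying this (the 0.03% tolerance covers the least accurate entries, those for 7 and 9 sides):

```python
import math

# Side of a regular n-gon inscribed in a circle of diameter d: d * sin(pi/n).
# The ancient rule: multiply d by the tabulated coefficient, divide by 120000.
coefficients = {3: 103923, 4: 84853, 5: 70534, 6: 60000,
                7: 52055, 8: 45922, 9: 41031}

for n, k in coefficients.items():
    exact = 120000 * math.sin(math.pi / n)
    # every tabulated value agrees with the exact chord to within ~0.03%
    assert abs(exact - k) / k < 3e-4, (n, k, exact)
print("all coefficients verified")
```

The hexagon entry (60000, i.e. half the diameter) is exact, as the side of an inscribed hexagon equals the radius.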
Aachaarya Brahmagupta (591 AD)
solved indeterminate equations of the second order in his books “Brahmasphoota
Siddhaanta”, which came to be known in Europe as
Pell’s equations after about 1000 years. His lemmas to the above solution were
rediscovered by Mr. Euler (1764 AD), and Mr. Lagrange (1768 AD). He enunciated
a formula for the rational cyclic quadrilateral. Chhandas is a Vedic metric system, which was methodically discussed
first by Aachaarya Pingala Naaga of antiquity. His work was developed by
subsequent generations, particularly, Aachaarya Halaayudha during the 10^{th}
Century AD. Using chhandas, Aachaarya
Halaayudha postulated a triangular array for determining the type of
combinations of n syllables of long and short sounds for metrical chanting
called “Chityuttara”. He developed it mathematically
into a pyramidal expansion of numbers. The ancient treatise on medicine –
Kashyapa Samhita uses “Chityuttara”
for classifying chemical
compositions and diseases and used it for treatment. Much later, it appeared in
Europe as the Pascal’s triangle. Based on this, (1+1/n)^{n} has been evaluated as the limit:
e =
2.71828182845904523536028747135266249775724709369995….
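Halaayudha's triangular array can be sketched as follows; the function name is ours, chosen for illustration:

```python
def chityuttara(rows):
    """Halaayudha's triangular array (Pascal's triangle): each interior
    entry is the sum of the two entries above it."""
    triangle = [[1]]
    for _ in range(rows - 1):
        prev = triangle[-1]
        triangle.append([1] + [prev[i] + prev[i + 1]
                               for i in range(len(prev) - 1)] + [1])
    return triangle

for row in chityuttara(5):
    print(row)
# Row n lists the counts of combinations of n long/short syllables: C(n, k).
```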
Aachaarya Bhaaskaraachaarya
– II (1114 AD), in his algebraic treatise “Veeja Ganitam”, had used the “chakravaala” (cyclic) method for solving
the indeterminate equations of the second order, which has been hailed by the
German mathematician Mr. Henkel as “the finest thing achieved in the theory of
numbers before Lagrange”. He used basic calculus based on “Aasannamoola” (limit), “chityuttara”
(matrix) and “circling the square” methods several hundreds of years before Mr.
Newton and Mr. Leibniz. “Aasannamoola”
literally means “approaching a limit” and has been used in India since
antiquity. Surya Siddhanta, Mahaa Siddhanta and other ancient treatises on
astronomy used this principle. The later work, as appears from internal
evidence, was written around 3100 BC. However, there is a fundamental
difference between these methods and the method later adopted in Europe. The concepts of limit and calculus have been
tested for their accuracy and must be valid. But while the Indian
mathematicians held that they have limited application in physics, the
Europeans held that they are universally applicable. We will discuss this
elaborately.
Both
Mr. Newton and Mr. Leibniz evolved calculus from charts prepared
from the power series, based on the binomial expansion. The binomial expansion is
supposed to be an infinite series expansion of a complex differential that
approached zero. But this
involved the problems of the tangent to the curve and the area of the
quadrature. In Lemma VII in Principia, Mr. Newton states that at the limit (when the
interval between two points goes to zero), the arc, the chord and the tangent
are all equal. But if this is true, then both his diagonal and the versine must
be zero. In that case, he is talking about a point with no spatial dimensions. In
case it is a line, then they are all equal. In that case, neither the versine
equation nor the Pythagorean Theorem applies. Hence it cannot be used in
calculus for summing up an area with spatial dimensions.
Mr. Newton and Mr. Leibniz found the solution to the calculus
while studying the “chityuttara” principle
or the so-called Pascal’s differential triangle. To solve the problem of the
tangent, this triangle must be made smaller and smaller. We must move from x to Δx. But can it be mathematically represented? No point on any
possible graph can stand for a point in space or an instant in time. A point on
a graph stands for two distances from the origin on the two axes. To graph a straight
line in space, only one axis is needed. For a point in space, zero axes are needed.
Either you perceive it directly without reference to any origin or it is
nonexistent. Only during measurement, some reference is needed.
While
number is a universal property of all substances, there is a difference between
its application to objects and quantities. Number is related to the object
proper that exist as a class or an element of a set in a permanent manner,
i.e., at not only “herenow”, but also at other times. Quantity is related to
the objects only during measurement at “herenow” and is liable to change from
time to time. For example, protons and electrons as separate classes can be
assigned class numbers 1 and 2 or any other permanent class number. But their
quantity, i.e., the number of protons or electrons as seen during measurement of
a sample, can change. The difference between these two categories is a temporal
one. While the description “class” is time invariant, the description quantity
is time variant, because it can only be measured at “herenow” and may
subsequently change. The class does not change. This is important for defining
zero, as zero is related to quantity, i.e., the absence of a class of
substances that was perceived by us earlier (otherwise we would not perceive
its absence), but does not exist at “herenow”. Zero is not a very small quantity, because even a very small quantity is still present at “herenow”.
Thus, the expression: lim_{n → ∞}1/n = 0 does not mean that 1/n will
ever be equal to zero.
Infinity, like one, is without similars. But while the dimensions of
“one” are fully perceptible; those for infinity are not perceptible. Thus,
space and time, which are perceived as without similars, but whose dimensions
cannot be measured fully, are infinite. Infinity
is not a very big number. We use
arbitrary segments of it that are fully perceptible and label it differently
for our purpose. Ever-changing processes cannot be measured other than in time – their time
evolution. Since we observe the state and not the process of change during
measurement (which is instantaneous), objects under ideal conditions are as they evolve
independent of being perceived. What we measure reflects only a temporal state
of their evolution. Since these are similar for all perceptions of
objects and events, we can do mathematics with it. The same concept is
applicable to space also. A single object in void cannot be perceived, as it
requires at least a different backdrop and an observer to perceive it. Space
provides the backdrop to describe the changing interval between objects. In outer
space, we do not see colors. It is either darkness or the luminous bodies –
black or white. The rest about space are like time.
There are functions like a_{n} = (2n +1) / (3n + 4), which
hover around values that are close to 2/3 for all values of n. Even though objects are always
discrete, it is not necessary that this discreteness must be perceived after
direct measurement. If we measure a sample and infer the total quantity from
such direct measurement, the result can be perceived equally precisely and it
is a valid method of measurement – though within the constraints of the
mechanism for precision measurement. However, since physical particles are
always discrete, the indeterminacy is terminated at a desired accuracy level
that is perceptible. This is the concept behind “Aasannamoola” or digital limit. Thus, the value of π is accepted as
3.141... Similarly, the ratio between the circumference and diameter of astral
bodies, which are spheroids, is taken as √10 or 3.16.... We have discussed these
in our book “Vaidic Theory of Number”. This also conforms to the modern
definition of function, according to which, every x plugged into the equation will yield exactly one y
out of the equation – a discrete quantity. This also conforms to the physical
Hamiltonian, which is basically a function, hence discrete.
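The behaviour of this sequence is easy to check numerically:

```python
def a(n):
    """a_n = (2n + 1) / (3n + 4), which hovers ever closer to 2/3 as n grows."""
    return (2 * n + 1) / (3 * n + 4)

for n in (1, 10, 100, 10_000):
    print(n, a(n))
# a(10000) = 20001/30004, already within about 0.01% of 2/3.
```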
Now, let us take a different
example: a_{n} = (2n^{2}
+1) / (3n + 4). Here n^{2} represents a two dimensional object, which
represents area or a graph. Areas or graphs are nothing but a set of continuous
points in two dimensions. Thus, it is a field that varies smoothly without breaks
or jumps and cannot propagate in true vacuum. Unlike a particle, it is not
discrete, but continuous. For n = 1, 2, 3, …, the value of a_{n} diverges as 3/7, 9/10, 19/13, …. For every value
of n, the value for n+1 grows bigger than the earlier rate of divergence. This
is because the term n^{2} in the numerator grows at a faster rate than
the denominator. This is not done in physical accumulation or reduction. In
division, the quotient always increases or decreases at a fixed rate in
proportion to the changes in either the dividend
or the divisor or both.
For example, 40/5 = 8 and 40/4 = 10.
The ratio of change of the quotient from 8 to 10 is the same as the inverse of
the ratio of change of the divisor from 5 to 4. But in the case of our example:
a_{n} = (2n^{2} +1) /
(3n + 4), the ratio of change from n = 2 to n = 3 is from 9/10 to 19/13, which
is different from 2/3 or 3/2. Thus, the statement:
lim_{n→∞}
a_{n} = {(2n^{2} +1)
/ (3n + 4)} → ∞,
is neither
mathematically correct (as the value for n+1 is always greater than that for n,
and the ratio n/(n+1) is never fixed) nor applicable to discrete particles
(since it is indeterminate). According to relativity, wherever a speed comparable
to that of light is involved, as for a free electron or photon, the Lorentz
factor invariably comes in to limit the output. There is always a length, mass or
time correction. But there is no such correcting or limiting factor in the above
example. Thus, the present concept of limit violates the principle of
relativistic invariance for high velocities and cannot be used in physics.
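Both the quoted terms and the contrast with plain division can be checked exactly using rational arithmetic:

```python
from fractions import Fraction

def b(n):
    """b_n = (2n^2 + 1) / (3n + 4): the n^2 term outruns the denominator."""
    return Fraction(2 * n * n + 1, 3 * n + 4)

# The first terms quoted in the text:
assert [b(n) for n in (1, 2, 3)] == [Fraction(3, 7), Fraction(9, 10), Fraction(19, 13)]

# Plain division, by contrast, changes the quotient in fixed proportion
# to the divisor: 40/5 = 8 and 40/4 = 10, and 10/8 equals 5/4 inverted.
assert 40 / 5 == 8 and 40 / 4 == 10
assert Fraction(10, 8) == Fraction(5, 4)
print("sequence and ratio checks pass")
```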
All measurements are done at
“herenow”. The state at “herenow” is frozen for future reference as the
result of measurement. All other unknown states are combined together as the
superposition of states. Since zero represents a class of object that is
nonexistent at “herenow”, it cannot be used in mathematics except by way of
multiplication (explained below). Similarly, infinity goes beyond “herenow”.
Hence it can’t be used like other numbers. These violate superposition
principle as measurement is sought to be done with something nonexistent at
“herenow”. For this reason, Indian mathematicians treated division by zero in
geometry differently from that in physics.
Aachaarya Bhaaskaraachaarya
(1114 AD) followed the geometrical method and termed the result of division by
zero as “khahara”, which is broadly
the same as renormalization except for the fact that he has considered nonlinear
multiplication and division only, whereas renormalization considers linear addition
and subtraction by the counter term. He visualized it as something of a class
that is taken out completely from the field under consideration. However, even
he had described that if a number is first divided and then multiplied by zero,
the number remains unchanged. Aachaarya Mahaavira (about 850 AD), who followed
the physical method in his book “Ganita Saara Samgraha”, holds that a number multiplied
by zero is zero and remains unchanged when it is divided by, combined with or
diminished by zero. The justification for the same is as follows:
Numbers
accumulate or reduce in two different ways. Linear accumulations and reductions
are addition and subtraction. Nonlinear accumulation and reduction are
multiplication and division. Since mathematics is possible only between
similars, in the case of nonlinear accumulation and reduction, first only the
similar part is accumulated or reduced. Then the mathematics is redone between
the two parts. For example, two areas or volumes can only be linearly
accumulated or reduced, but cannot be multiplied or divided. But areas or
volumes can be multiplied or divided by a scalar quantity, i.e., a number. Suppose
the length of a field is 5 meters and breadth 3 meters. Both these quantities are
partially similar as they describe the same field. Yet, they are dissimilar as
they describe different spreads of the same field. Hence we can multiply these.
The area is 15 sq. m. If we multiply the field by 2, it means that either we
are increasing the length or the breadth by a factor of two. The result 15 x 2
= 30 sq. m can be arrived at by first multiplying either 5 or 3 with 2 and then
multiplying the result with the other quantity: (10 x 3 or 5 x 6). Of course,
we can scale up or down both length and breadth. In that case, the linear
accumulation has to be done twice separately before we multiply them.
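The field example above can be checked numerically. A minimal sketch (the 5 m x 3 m field is the text's own illustration):

```python
# The 5 m x 3 m field from the text: scaling either side by 2
# doubles the area, whichever factor the scaling is applied to first.
length, breadth = 5.0, 3.0        # metres
area = length * breadth           # 15 sq. m

scaled_via_length = (length * 2) * breadth    # 10 x 3
scaled_via_breadth = length * (breadth * 2)   # 5 x 6
```

Both routes give 30 sq. m, as the text states; only when both length and breadth are scaled must the linear accumulation be done twice before multiplying.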
Since zero does
not exist at “herenow” where the numbers representing the objects are
perceived, it does not affect addition or subtraction. During multiplication by
zero, one nonlinear component of the quantity is increased to zero, i.e., moves
away from “herenow” to a superposition of states. Thus, the result becomes
zero for the total component, as we cannot have a Schrödinger’s “undead” cat before
measurement in real life. In division by zero, the “nonexistent” part is sought
to be reduced from the quantity (which is an operation akin to “collapse
reversal” in quantum mechanics), leaving the quantity unchanged. Thus,
physically, division by zero leaves the number unchanged.
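The rule the text attributes to Aachaarya Mahaavira can be written as a toy function. This is purely illustrative of the text's convention, not standard arithmetic, and the name `mahavira_div` is ours:

```python
def mahavira_div(a, b):
    """Toy model of the rule the text attributes to Aachaarya Mahaavira:
    division by zero leaves the number unchanged; otherwise divide as usual."""
    return a if b == 0 else a / b
```

Under this convention `mahavira_div(7, 0)` returns 7, while ordinary division, `mahavira_div(10, 2)`, is unaffected.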
This has important implications for
many established concepts of physics. One example is the effect on mass, length
and time of a body traveling at the velocity of light. According to the accepted
view, these are contracted infinitely. Earlier we had shown the fallacies
inherent in this view. According to the view of Aachaarya Mahaavira, there is
no change in such cases. Thus, length and time contractions are not real but
apparent. Hence treating it as real is bad mathematics. But its effect on point
mass is most dramatic. We have shown in later pages that all fermions (we call
these asthanwaa – literally meaning
something with a fixed structure) are three dimensional structures (we call
these tryaanuka and the description tribrit) and all mesons (we call these anasthaa – literally meaning something
without a fixed structure) are two dimensional structures (we call the
description atri – literally meaning
not three). Both of these are confined particles (we call these dwaanuka – literally meaning “coupled point
masses” and the description agasti 
literally meaning created in confinement). We treat the different energies that
operate locally and fall off with distance as subfields (we call these jaala – literally a net) in the
universal field. This agrees with Mr. Kennard’s
formulation of uncertainty relation discussed earlier. By definition, a
point has no dimension. Hence each point in space cannot be discerned from any
other. Thus, a point mass (we call it anu)
is not perceptible. The mass has been reduced to one dimension making it effectively
massless. Since after confinement in higher dimensions, it leads to generation
of massive structures, it is not massless either.
When Mr. Fermi wrote the three-part Hamiltonian: H = H_{A} + H_{R}
+ H_{I}, where H_{A} was the Hamiltonian for the atom, H_{R}
the Hamiltonian for radiation and H_{I} the Hamiltonian for
interaction, he was somewhat right. He should rather have written that H is the
Hamiltonian for the atom and H_{A} the Hamiltonian for the nucleus.
We call these three (H_{A}, H_{R}, H_{I}) “Vaya”, “Vayuna” and “Vayonaadha”
respectively. Of these, the first has fixed dimension (we call it akhanda), the second both fixed and
variable dimensions depending upon its nature of interaction (we call it khandaakhanda) and the third variable
dimensions (we call it sakhanda). The
third represents energy that “binds” the other two. This can be verified by
analyzing the physics of sand dunes. Many experiments have been conducted on
this subject in the recent past. The water binds the sand in ideal conditions
when the ratio between them is 1:8. More on this has been discussed separately.
Different forces cannot be linearly additive but can only coexist. Since the
three parts of the Hamiltonians do not belong to the same class, they can only
coexist, but cannot accumulate or reduce through interchange.
When Mr. Dirac wrote H_{I} as H_{I} – Δm, so that Δm, which was
thought to be infinite, could be cancelled by –Δm, he was clearly wrong. There is no experimental proof to date
to justify the inertial increase of mass. It is only a postulate that has been
accepted by generations since Mr. Lorentz. Addition of energy in some cases may
lead to a change in dimension with consequential change in density. Volume and
density are inversely proportional. Change in one does lead to change in the
other, which is an operational aspect. But it does not change the mass, which
is related to existential aspect. Mr. Feynman got his Nobel Prize for
renormalizing the so-called bare mass. As has been shown later, it is
one of the innumerable errors committed by the Nobel Committee. The award was more
for his stature and clever “mathematical” manipulation to match the observed
values than for his experiment or verifiable theory.
A similar “mathematical” manipulation was done by Mr. Lev Landau, who
developed a famous equation to find the so-called Landau pole, which is the
energy at which the force (the coupling constant) becomes infinite. Mr. Landau
found this pole or limit or asymptote by subtracting the inverse square of the bare
electric charge e from that of the renormalized or effective electric charge e_{R}:
1/e_{R}^{2} – 1/e^{2} = (N/6π^{2}) ln(Λ/m_{R})
Here momentum
has been represented by Λ instead of the normal “p” for unexplained reasons – maybe to introduce
incomprehensibility or to assign magical properties to it later. Treating the renormalized
variable e_{R} as constant,
one can calculate where the bare charge becomes singular. Mr. Landau
interpreted this to mean that the coupling constant had become infinite at that
value. He called this energy the Landau pole.
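Treating e_{R} and m_{R} as given, the scale at which the bare coupling diverges follows from setting 1/e^{2} to zero in the equation above. A sketch (the parameter values below are placeholders, not measured inputs):

```python
import math

def landau_pole(e_R_squared, m_R, N=1):
    """Momentum scale Lambda at which 1/e**2 vanishes in
    1/e_R**2 - 1/e**2 = (N / (6 * pi**2)) * ln(Lambda / m_R)."""
    return m_R * math.exp(6 * math.pi**2 / (N * e_R_squared))

# Illustrative only: with e_R**2 of order 0.1 the exponent is ~592,
# so the pole lies at a fantastically large multiple of m_R.
```

Note that the smaller the renormalized coupling, the more remote the pole, which is why it was treated as a formal rather than accessible scale.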
In any given experiment, the
electron shows one and only one charge value so that either e or e_{R}
must be incorrect. Thus, either the original mathematical value e or the
renormalized mathematical value e_{R}
must be wrong. If two values are different, both cannot be used as correct in
the same equation. Thus what Mr. Landau does effectively is: add or subtract
an incorrect value from a correct value, to achieve “real physical information”!
And he got his Nobel Prize for this achievement! In the late 1990s, there was
a well-known “Landau pole problem” that was discussed in several journals. In
one of them, the physicists claimed that: “A detailed study of the relation
between bare and renormalized quantities reveals that the Landau pole lies in a
region of parameter space which is made inaccessible by spontaneous chiral
symmetry breaking”. We are not discussing it.
Some may argue that the effective
charge and the bare charge are both experimental values: the effective
charge being charge as experienced from some distance and the bare charge being
the charge experienced on the point particle. In a way, the bare charge comes
from 19^{th} century experiments and the effective charge comes from 20^{th}
century experiments with the changing notion of field. This is the current
interpretation, but it is factually incorrect. The difference must tell us
something about the field. But there is no such indication. According to the present
theory, the bare charge on the electron must contain a negative infinite term,
just as the bare mass of the electron has an infinite term. To get a usable figure,
both have to be renormalized. Only if we hold that the division by zero
leaves the number unchanged, then the infinities vanish without renormalization
and the problem can be easily solved.
Interaction is the effect of energy on mass and it is not always the same
as mass or its increase/decrease by a fixed rule. This can be proved by
examining the mass of quarks. Since in the quark model the proton has three
quarks, the masses of the “Up”
and “Down” quarks were thought to be
about ⅓ the mass of a proton. But this view has since been discarded. The
quoted masses of quarks are now model dependent, and the mass of the bottom
quark is quoted for two different models. In other combinations they contribute
different masses. In the pion, an “up” and an “anti-down” quark yield a
particle of only 139.6 MeV of mass energy, while in the rho vector meson, the same combination of quarks has a mass energy of
770 MeV. The difference between a pion
and a rho is the spin alignment of
the quarks. We will show separately that these spin arrangements arise out of
different bonding within the confinement. The pion is a pseudoscalar meson
with zero angular momentum. The values for these masses have been obtained by
dividing the observed energy by c^{2}. Thus, it is evident that different spin alignment in the “inner
space” of the particle generates different pressure on the “outer space” of the
particle, which is expressed as different mass.
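The division of observed energy by c^{2} mentioned above is straightforward. A sketch using standard values for the unit conversions:

```python
# Converting the quoted rest energies to kilograms via m = E / c**2.
C = 2.99792458e8                   # speed of light, m/s
MEV_IN_J = 1.602176634e-13         # 1 MeV in joules

m_pion = 139.6 * MEV_IN_J / C**2   # pseudoscalar pion, ~2.5e-28 kg
m_rho = 770.0 * MEV_IN_J / C**2    # rho vector meson, ~1.4e-27 kg
```

The same quark pair thus corresponds to more than five times the mass in the rho as in the pion, which is the contrast the text is drawing.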
When
a particle is reduced to point mass, it loses its confinement, as confinement implies
dimension and a point has no dimension. Thus, it becomes not only
indiscernible, but also becomes one with the universal field implied in Mr. Kennard’s formulation that has been validated
repeatedly. Only this way are the “virtual interactions” possible. Mr. Einstein’s
etherless relativity is supported neither by Mr. Maxwell’s equations nor by the
Lorentz transformations, both of which are medium (aether) based. We will
discuss it elaborately later. Any number, including and above one, requires
extension (1 from 0 and n from n–1). Since points by definition cannot have
extensions, number and point must be mutually exclusive. Thus, the point mass
behaves like a part of the field. The photon is one such example. It is not a light
quantum – as that would make it mechanical, which would require it to have mass
and diameter. Light is not “the appearance of a photon”, but “momentary uncovering
of the universal field due to the movement of energy through it”. Hence it is
never stationary and varies with the density of the medium. There have been recent
reports of bringing light to a stop. But the phenomenon has other explanations. Reduction
of mass to this stage has been described as “khahara” by Aachaarya Bhaaskaraachaarya and others. The reverse
process restores mass to its original confined value. Hence if a number is
first divided and then multiplied by zero, the number remains unchanged.
This
shows the role of dimension and also proves that mass is confined field and charge is mass unleashed. This also
explains why neutron is heavier than the proton. According to our calculation, neutron
has a net negative charge of –1/11, which means it contains +10/11 (proton)
and –1 (electron) charge. It searches out for a complementary charge for
attaining equilibrium. Since negative charge confines the center of mass, the
neutron generates pressure on a larger area on the outer space of the atom than
the confined proton. This is revealed as the higher mass. Thus, the very
concept of a fixed Δm to cancel an
equivalent –Δm is erroneous.
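The charge bookkeeping claimed in this paragraph (our calculation, not a standard result) at least closes arithmetically. A check with exact fractions:

```python
from fractions import Fraction

# The text's decomposition: +10/11 (proton part) plus -1 (electron part)
# should give the claimed net charge of -1/11.
proton_part = Fraction(10, 11)
electron_part = Fraction(-1)
net_charge = proton_part + electron_part
```

Exact rational arithmetic avoids the rounding noise a float sum would introduce here.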
Viewed from the
above aspect, the “mass gap” and the Yang-Mills theory to describe the strong
interactions of elementary particles need to be reviewed. We have briefly
discussed it in later pages. Since massive particles have dimensions, and
interactions with other particles are possible only after the dimensions are broken through, let us examine
dimension.
DIMENSION DEFINED:
It can be generally said that the electrons determine atomic size, i.e.,
its dimensions. There are different types of atomic radii, such as the van der
Waals radius, ionic radius, covalent radius, metallic radius, Bohr radius,
etc. The Bohr radius is the radius of the lowest-energy electron orbit predicted by
the Bohr model of the atom in 1913. It defines the dimensional boundary of
single-electron atoms such as hydrogen. Although the model itself is now treated as obsolete,
the Bohr radius for the hydrogen atom is still regarded as an important
physical constant. Unless this radius is overtaken (dimensional boundary is
broken), no other atoms, molecules or compounds can be formed, i.e., the atom
cannot take part in any chemical interaction. Thus, Mr. Bohr’s equations are
valid only for the hydrogen atom and not for higher atoms.
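Although the Bohr model is obsolete, the Bohr radius itself is easy to reproduce from standard constants. A sketch:

```python
import math

# a0 = 4*pi*eps0*hbar**2 / (m_e * e**2), with 2018 CODATA values.
EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
HBAR = 1.054571817e-34     # reduced Planck constant, J s
M_E = 9.1093837015e-31     # electron mass, kg
E_CH = 1.602176634e-19     # elementary charge, C

a0 = 4 * math.pi * EPS0 * HBAR**2 / (M_E * E_CH**2)  # ~5.29e-11 m
```

This is the "important physical constant" referred to above, about 0.529 angstroms.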
Most of quantum physics
dealing with extra large or compact dimensions have not defined dimension
precisely. In fact in most cases, as in the description of a
phase-space portrait, the term dimension has been used for vector quantities
interchangeably with direction. Similarly, the M theory, which requires 11 undefined
dimensions, defines strings as one-dimensional loops. Dimension is the
differential perception of the “inner space” of an object (we call it aayatana) from its “outer space”. In a
helium atom with two protons, the electron orbit determines this boundary. In a
hydrogen molecule with two similar protons, the individual inner spaces are
partially shared. When the relation between the “inner space” of an object remains
fixed for all “outer space”, i.e., irrespective of orientation, the object is
called a particle with characteristic discreteness. In other cases, it behaves
like a field with characteristic continuity.
For perception of the spread of the object, the electromagnetic radiation
emitted by the object must interact with that of our eyes. Since electric and
magnetic fields move perpendicular to each other and both are perpendicular to
the direction of motion, we can perceive the spread of any object only in these
three directions. Measuring the spread uniquely is essentially measuring the
invariant space occupied by any two points on it. This measurement can be done
only with reference to some external frame of reference. For the above reason,
we arbitrarily choose a point that we call origin and use axes that are
perpendicular to each other (analogous to e.m. waves) and term these as x-y-z
coordinates (length-breadth-height making it 3 dimensions, or right-left,
forward-backward and up-down making it 6 dimensions). Mathematically a point has zero dimensions. A straight
line has one dimension. An area has two dimensions and volume has three
dimensions. A one-dimensional loop is mathematically impossible, as a loop
implies curvature, which requires a minimum of two dimensions. Thus, the
“mathematics” of string theory, which requires 10, 11 or 26 compactified or
extra-large or time dimensions, violates all mathematical principles.
Let us now consider the “physics” of
string theory. It was developed with a view to harmonize General
Relativity with Quantum theory. It is said to be a high order theory where
other models, such as supergravity and quantum gravity appear as
approximations. Unlike supergravity, string theory is said to be a consistent
and welldefined theory of quantum gravity, and therefore calculating the value
of the cosmological constant from it should, at least in principle, be
possible. On the other hand, the number of vacuum states associated with it
seems to be quite large, and none of these features three large spatial
dimensions, broken supersymmetry, and a small cosmological constant. The features of string theory which
are at least potentially testable – such as the existence of supersymmetry and
cosmic strings – are not specific to string theory. In addition, the features
that are specific to string theory – the existence of strings – either do not
lead to precise predictions or lead to predictions that are impossible to test
with current levels of technology.
There are many unexplained questions
relating to the strings. For example, given the measurement problem of quantum
mechanics, what happens when a string is measured? Does the uncertainty
principle apply to the whole string? Or does it apply only to some section of
the string being measured? Does string theory modify the uncertainty principle?
If we measure its position, do we get only the average position of the string?
If the position of a string is measured with arbitrarily high accuracy, what
happens to the momentum of the string? Does the momentum become undefined as
opposed to simply unknown? What about the location of an endpoint? If the
measurement returns an endpoint, then which endpoint? Does the measurement
return the position of some point along the string? (The string is said to be a
two-dimensional object extended in space. Hence its position cannot be
described by a finite set of numbers and thus, cannot be described by a finite
set of measurements.) How do the Bell’s
inequalities apply to string theory? We must get answers to these questions
first before we probe more and spend (waste!) more money in such research.
These questions should not be put under the carpet as inconvenient or on the
ground that some day we will find the answers. That someday has been a very
long period indeed!
The point, line, plane, etc. have no
physical existence, as they do not have physical extensions. As we have already
described, a point vanishes in all directions. A line vanishes along y and z
axes. A plane vanishes along z axis. Since we can perceive only three
dimensional objects, an object that vanishes partially or completely cannot be
perceived. Thus, the equations describing these “mathematical structures” are
unphysical and cannot explain physics by themselves. A cube drawn on paper (or
marked on a three-dimensional object) is not the same as a cubic object. Only when they
represent some specific aspects of an object, do they have any meaning. Thus,
the description that the two-dimensional string is like a bicycle tyre
and the three-dimensional object is like a doughnut, etc., and that the Type IIA
coupling constant allows strings to expand into two- and three-dimensional
objects, is nonsense.
This is all the more true for “vibrating”
strings. Once it starts vibrating, it becomes at least two dimensional. A
transverse wave will automatically push the string into a second dimension. It
cannot vibrate lengthwise, because then the vibration will not be discernible.
Further, no pulse could travel lengthwise in a string that is not divisible. There
has to be some sort of longitudinal variation to propose compression and
rarefaction; but this variation is not possible without subdivision. To vibrate
in the right way for the string theory, they must be strung very, very tight. But
why are the strings vibrating? Why are some strings vibrating one way and others
vibrating in a different way? What is the mechanism? Different vibrations
should have different mechanical causes. What causes the tension? No answers!
One must blindly accept these “theories”. And we thought blind acceptance is
superstition!
Strings are not supposed
to be made up of subparticles; they are absolutely indivisible. Thus, they
should be indiscernible and undifferentiated. Ultimate strings that are
indivisible should act the same in the same circumstances. If they act
differently, then the circumstances must differ. But nothing has been told about
these different circumstances. The vast variation in behavior is just another
postulate. How the everyday macroscopic world emerges from its strangely
behaving microscopic constituents is yet to be explained by quantum physics.
One of the major problems here is the blind acceptance of the existence of 10
or 11 or 26 dimensions and search for ways to physically explain those
nonexisting dimensions. And that is science!
The
extra-dimension hypothesis started with a nineteenth-century novel that
described “flat land”, a two dimensional world. In 1919, Mr. Kaluza proposed a
fourth spatial dimension and linked it to relativity. It allowed the expression
of both the gravitational field and the electromagnetic field – the only
two of the major four that were known at the time. Using the vector fields as
they have been defined since the end of the 19th century, the four-vector field
could contain only one acceleration. If one tried to express two acceleration
fields simultaneously, one got too many (often implicit) time variables showing
up in denominators and the equations started imploding. The calculus, as it has
been used historically, could not flatten out all the accelerations fast enough
for the mathematics to make any sense. What Mr. Kaluza did was to push the time
variable out of the denominator and switch it into another x-variable in the
numerator. Minkowski’s new “mathematics” allowed him to do so. He termed the
extra x-variable as the fourth spatial dimension, without defining the term. It
came as a big relief to Mr. Einstein, who was struggling not only to establish
the “novelty” of his theory over the “mathematics” of Mr. Poincare, who
discovered the equation E = mc^{2} five years before him, but also to
include gravity in SR. Since then, the fantasy has grown bigger and bigger. But
like all fantasies, the extradimensions could not be proved in any experiment.
Some people have
suggested the extra seven dimensions of M theory to be time dimensions. The
basic concept behind these extra fields is rate of change concept of calculus.
Speed is the rate of change of displacement. Velocity is speed with a direction.
Acceleration is the rate of change of velocity. In all such cases, the
equations can be written as Δx/Δt or ΔΔx, Δx/Δt^{2} or ΔΔΔx, etc. In
all these cases, the time variable increases inversely with the space variable.
Some suggested extending it further like Δx/Δt^{3} or ΔΔΔΔx and so on,
i.e., rate of change of acceleration and rate of change of that change and so
on. But in that case it can be extended ad infinitum implying infinite number
of dimensions. Why stop only at 7? Further, we do not use any other terminology
for rate of change of acceleration except calling it variable acceleration.
Speed becomes velocity when direction is included in the description. Velocity
becomes acceleration when change in the direction is included in the
description. But then what next for the change into higher order?
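The chain of “rates of change” discussed above can be made concrete with successive finite differences over unit time steps; under constant acceleration every order beyond the second vanishes, which is why no separate name beyond “variable acceleration” ever arises in practice:

```python
def diffs(xs):
    """One finite difference per 'rate of change' (unit time steps)."""
    return [b - a for a, b in zip(xs, xs[1:])]

positions = [t**2 for t in range(6)]   # x = t**2: constant acceleration
velocity = diffs(positions)            # [1, 3, 5, 7, 9]
acceleration = diffs(velocity)         # [2, 2, 2, 2] - constant
third_order = diffs(acceleration)      # [0, 0, 0] - nothing new appears
```

Extending the chain ad infinitum, as the text notes, would only ever produce further difference orders, not new kinds of quantity.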
Some try to
explain this by giving the example of a speeding car with constant velocity,
which brings in a term t^{2}. Then they assume that the car along with
the road is tucked inside a giant alien space craft, which moves in the same
direction with a constant, but different velocity (this they interpret as acceleration),
which brings in another term t^{2}. Then they claim that the motion of
the car relative to the earth or to space is now the compound of two separate
accelerations, both of which are represented by t^{2}. So the total
acceleration would be constant, not variable, but it would be represented by t^{4}.
This is what they call a “variable acceleration” of higher order. But this is a
wrong description. If we consider the motion of the space craft relative to us,
then it is moving with a constant velocity. If we consider the car directly,
then also it is moving at a different, but constant velocity from us in unit
time represented by t or t^{2} and not t^{4}, which is
meaningless.
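In ordinary Galilean kinematics the car-inside-spacecraft example composes linearly, in line with the objection above. A sketch with illustrative numbers:

```python
V_CRAFT = 20.0   # spacecraft relative to the ground, m/s (illustrative)
V_CAR = 5.0      # car relative to the spacecraft, m/s (illustrative)

def position(t):
    """Compound motion: still a single constant velocity, linear in t."""
    return (V_CRAFT + V_CAR) * t
```

Doubling the elapsed time doubles the displacement, so the compound motion is linear in t; no t^{4} term appears anywhere.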
String theory
and M theory continued to pursue this method. They had two new fields to
express. Hence they had (at least) two new variables to be transported into the
numerators of their equations. Every time they inserted a new variable, they
had to insert a new field. Since they inserted the field in the numerator as
another x-variable, they assumed that it is another space field and termed it
as an extra dimension. But it can be transported to the denominator as an
inverse time variable also. Both these descriptions are wrong. Let us examine what
a field is. A medium or a field is a
substance or material which carries the wave. It is a region of space
characterized by a physical property having a determinable value at every point
in the region. This means that if we put something appropriate in a field, we
can then notice “something else” out of that field, which makes the body
interact with other objects put in that field in some specific ways, that can
be measured or calculated. This “something else” is a type of force. Depending upon the nature of that force,
the scientists categorize the field as gravity field, electric field, magnetic
field, electromagnetic field, etc. The laws of modern physics suggest that
fields represent more than the possibility of the forces being observed. They
can also transmit energy and momentum. Light wave is a phenomenon that is
completely defined by fields.
Now,
let us take a physical example. Let us stand in a pool with static water with
eyes closed. We do not feel the presence of water except for the temperature
difference. Now we stand in a fountain of flowing water. We feel a force from
one direction. This is the direction of the flow of water. This force is
experienced differently depending upon the velocity of the flow. Water is
continuously flowing out and is being replaced by other water. There is no
vacuum. But we cannot distinguish between the different waters that flow down.
We only feel the force. If the velocity of the flow is too small, we may not
experience any force. Only when the velocity crosses a threshold limit do we
experience the force. This principle is a universal principle. This is noticed
in blackbody radiation and was explained by the photoelectric effect. While
the threshold limit remains constant for each system, the force that is
experienced varies with a fixed formula. The threshold limit provides the many
universal constants of Nature. We measure the changes in force only as ax, where “a” is constant and “x”
the variable. If we classify all forces into one group x, then we will have only one universal constant of Nature. This
way, there will be only one background field containing many energy subfields
(we call these “jaala” literally
meaning net) that behave like local density gradients. In that case, only the
effect of the field gets locally modified. There is no need to add extra space
variable in numerator or inverse time variable in denominator.
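The threshold behaviour described with the fountain analogy parallels the photoelectric relation E = hf – φ. A sketch (the sodium work function below is a textbook figure, quoted here as an assumption):

```python
H_PLANCK = 6.62607015e-34   # Planck constant, J s

def ejected_energy(frequency, work_function):
    """Kinetic energy of an ejected electron, or None below threshold."""
    surplus = H_PLANCK * frequency - work_function
    return surplus if surplus > 0 else None

PHI_NA = 3.65e-19   # ~2.28 eV work function for sodium (assumed value)
```

Below the threshold frequency nothing is ejected regardless of intensity; above it, the surplus energy grows by the fixed linear formula, as the text describes.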
Let us look at
speed. It is no different from velocity. Both speed and velocity are the effects
of application of force. Speed is the displacement that arises when a force is
applied to a body and where the change in
the direction of the body or the
force acting on it, is ignored. When we move from speed to velocity, the
direction is imported into the description depending upon the direction from
which the force is applied. This makes velocity a vector quantity. In Mr. Newton’s
second law, f = ma, which is valid only for constant-mass systems, the term ‘f’ has not been qualified. Once an
externally applied force acts on the body, the body is displaced. Thereafter,
the force loses contact with the body and ceases to act on it. Assuming no
other force is acting on the body, the body should move only due to inertia,
which is constant. Thus, the body should move at constant velocity and the
equation should be f = mv. Mr. Newton has not taken this factor into
account.
The rate of
change or f = ma arises because of application of additional force, which changes
the direction of the velocity. The initial force may be applied instantaneously
like the firing of a bullet or continuously like a train engine pulling the
bogies. In both cases the bodies move with constant velocity due to inertia.
Friction changes the speed (not directly the velocity, because it acts against
the direction of motion not affecting direction), which, in the second case, is
compensated by application of additional force of the engine. When velocity
changes to acceleration, nothing new happens. It requires only application of
additional force to change the constant velocity due to inertia. This additional
force need not be of another kind.
Thus, this is a new cycle of force and inertia changing the speed of the body. The
nature of force and displacement is irrelevant for this description. Whether it
is a horsepulled car or steam engine, diesel engine, electric engine or rocket
propelled body, the result is the same.
Now let us
import time to the equations of this motion. Time is an independent variable.
Motion is related to space, which is also an independent variable. Both
coexist, but being independent variables, they operate independent of each
other. A body can be in the same position or move 10 meters or a light year in
a nanosecond or in a billion years. Here the space coordinates and time
coordinates do not vary according to any fixed rules. They are operational
descriptions and not existential descriptions. They can vary for the same body
under different circumstances, but it does not directly affect the existence,
physics or chemistry of the body or other bodies (it may affect due to wear and
tear, but that is an operational matter). Acceleration is defined as velocity
per time, or displacement per time per time, i.e., per time squared. This is written
mathematically as t^{2}. Squaring is possible only if there is
nonlinear accumulation (multiplication) of the same quantity. Nonlinearity
arises when the two quantities are represented by different coordinates, which
also implies that they move along different directions. In the case of both
velocity and acceleration, time moves in the same direction from past to
present to future. Thus, the description “time squared” is neither a physical
nor mathematical description. Hence acceleration is essentially no different
from velocity or speed with a direction. While velocity shows speed in a fixed
direction over a finite time segment (second, hour or year, etc), acceleration
shows changes in direction of velocity over an equal time segment, which
implies the existence of another force acting simultaneously that changes the
velocity over the same time segment. Hence no time squaring! Only the forces
get coupled.
Dimension is an
existential description. Change in dimension changes the existential
description of the body irrespective of time and space. It never remains the
same thereafter. Since everything is in a state of motion with reference to
everything else at different rates of displacement, these displacements could
not be put into any universal equation. Any motion of a body can be described
only with reference to another body. Poincare and others have shown that even
three body equations cannot be solved precisely. Our everyday experience shows
that the motion of a body with reference to other bodies can measure different
distances over the same time interval and same distance over different time
intervals. Hence any standard equation for motion including time variables for
all bodies or a class of bodies is totally absurd. Photons and other radiation
that travel at uniform velocity are massless or without a fixed background
structure – hence, strictly, are not “bodies” (we call these asthanwaa – literally meaning “boneless
or without any fixed background structure” and the massive bodies as asthimat – literally meaning “with bones
or background structures”).
The three or six
dimensions described earlier are not absolute terms, but are related to the
order of placement of the object in the coordinate system of the field in which
the object is placed. Since the dimension is related to the spread of an
object, i.e., the relationship between its “totally confined inner space” and
its “outer space”, since the outer space is infinite, and since the outer space
does not affect inner space without breaking the dimension, the three or six dimensions
remain invariant under mutual transformation of the axes. If we rotate the
object so that the x-axis changes to the y-axis or z-axis, there is no effect on
the structure (spread) of the object, i.e., the relative positions between
different points on the body and their relationship to the space external to it
remain invariant. Based on the positive and negative directions (spreading out
from or contracting towards the origin), these describe six unique functions of
position, i.e. (x,0,0), (−x,0,0), (0,y,0), (0,−y,0), (0,0,z), (0,0,−z), that
remain invariant under mutual transformation. Besides these, there are four
more unique positions, namely (x, y), (−x, y), (x, −y) and (−x, −y), where x = y
for any value of x and y, which also remain invariant under mutual
transformation. These are the ten dimensions and not the so-called “mathematical
structures”. Since time does not fit in this description, it is not a
dimension. These are described in detail in a book “Vaidic Theory of Numbers” written
by us and published on 30-06-2005. Unless the dimensional boundary is
broken, the particle cannot interact with other particles. Thus, dimension is very
important for all interactions.
While
the above description applies to rigid body structures, it cannot be applied to
fluids, whose dimensions depend upon their confining particle or base. Further,
the rigid body structures have a characteristic resistance to destabilization
of their dimension by others (we call it vishtambhakatwa).
Particles with this characteristic are called fermions (we call it dhruva also, which literally means fixed
structure). This resistance to disruption of its position, which is based on
its internal energy and the inertia of restoration, is known as the potential
energy of the particle. Unless this energy barrier is broken, the particle
cannot interact with other particles. While discussing what an electron is, we
have shown the deficiencies in the concepts of electronegativity and electron
affinity. We have discussed the example of NaCl to show that the belief that
ions tend to attain the electronic configuration of noble gases is erroneous.
Neither sodium nor chlorine shows the tendency to become neon or argon. Their
behaviour can be explained by the theory of transition states in micro level
and the escape velocity in macro level.
In the case
of fluids, the relationship between its “totally confined inner space” and its
“outer space” is regulated not only by the nature of their confinement, but
also by their response to density gradients and applied forces that change
these gradients. Since this relationship between the “outer space” and “inner
space” cannot be uniquely defined in the case of fluids including gases, and
since their state at a given moment is subject to change at the next moment
beyond recognition, the combined state of all such unknown dimensions is said
to be in a superposition of states. These are called bosons (we call it dhartra also). The massless particles
cannot be assigned such characteristics, as dimension is related to mass. Hence
such particles cannot be called bosons, but must belong to a different class (we
call them dharuna). Photons belong to
this third class.
The
relationship between the “inner space” and the “outer space” depends on the
relative density of both. Since the inner space constitutes a three layer
structure (i.e., the core or nucleus, the extra-nucleic part and the outer
orbitals in atoms, with similar arrangements in others), the relationship between
these stabilizes in seven different ways (2l + 1). Thus, the effects of these are felt in seven different ways by bodies
external to these, which fall off with distance. These are revealed as the
seven types of gravitation.
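The seven-fold count quoted above is just the multiplicity formula 2l + 1 evaluated at l = 3; a one-line arithmetic check (the range of l shown is purely for illustration, not an assignment made in the text):

```python
# Multiplicity rule quoted in the text: 2l + 1 ways for a given l.
def multiplicity(l):
    return 2 * l + 1

# l = 3 yields the seven ways referred to above.
print([multiplicity(l) for l in range(4)])  # [1, 3, 5, 7]
```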
Dimension is a
feature of mass, which is determined by both volume and density. The volume and
density are also features of charge, which, in a given space, is called force. Thus,
both mass and charge/force are related, but they explain different aspects of
the objects. In spherical bodies from stars to protons, density is related to
volume and volume is related to radius. Volume varies only with radius, which,
in turn, inversely varies with density. Thus, for a given volume with a given
density, increase or decrease in volume and density are functions of its radius
or diameter, i.e., proximity or distance between the center of mass and its
boundary. When due to some
reason the equilibrium of the volume or density is violated, the broken
symmetry gives rise to the four plus one fundamental forces of nature.
We consider radioactive decay a type
of fundamental interaction. These interactions are nothing but variable
interactions between the nucleus representing mass (vaya) and the boundary (vayuna)
determined by the diameter, mediated by the charge – the interacting force (vayonaadha). We know that the
relationship between the centre and the boundary is directly related to diameter.
We also know that scaling up or down the diameter keeping the mass constant is
inversely proportional to the density of the body. Bodies with different
density coexist at different layers, but are not coupled together. Thus, the
mediating force can be related to each of these proximitydistance interactions
between the centre and the boundary. These are the four fundamental
interactions.
The proximity-proximity variables give
rise to the so-called strong interaction that brings the centre of mass and the
boundary towards each other, confining them (we call such interactions antaryaama). However, there are
conceptual differences between the modern theory and our derivation. The
strong force was invented to counteract the electromagnetic repulsion between protons in the nucleus. It is said that its influence
is limited to a radius of 10^{-15} m. The question is, how do the protons come that close for the strong
force to be effective? If they can come that close without repelling each other
without any other force, then the view that equal charges repel needs
modification instead of introducing the strong force. If the strong
force drops off in order to keep it away from interacting with nearby electrons
as fast as is claimed, then it doesn’t explain nuclear creation at all. In that
case protons can never interact with electrons.
Further,
since the strong force has no electromagnetic
force to overcome with neutrons, one would expect neutrons to either be crushed
or thrown out of the nucleus by it. Modern theory suggests that it is prevented
by the strong force proper, which is a binding force between quarks, via gluons,
and the nuclear force, which is a “residue” of the strong force proper and acts
between nucleons. It is suggested that the nuclear force does not directly
involve the force carriers of QCD – the gluons. However, just as electrically
neutral atoms (each said to be composed of canceling charges) attract each
other via the second-order effects of electrical polarization, via the van der
Waals forces, by a similar analogy, “color-neutral” nucleons may attract each
other by a type of polarization which allows some basically gluon-mediated
effects to be carried from one color-neutral nucleon to another, via the
virtual mesons which transmit the forces, and which themselves are held
together by virtual gluons. The basic idea is that the nucleons are “color-neutral”,
just as atoms are “charge-neutral”. In both cases, polarization effects acting
between nearby neutral particles allow a “residual” charge effect to cause net
charge-mediated attraction between uncharged species, although it is
necessarily of a much weaker and less direct nature than the basic forces which
act internally within the particles. Van der Waals forces are not understood
mechanically. Hence this is like explaining a mystery by an enigma through
magic.
It
is said that: “There is a high chance that the electron density will not be
evenly distributed throughout a nonpolar molecule. When electrons are unevenly
distributed, a temporary multipole exists. This multipole will interact with
other nearby multipoles and induce similar temporary polarity in nearby
molecules”. But why should the electrons not be evenly distributed? What
prevents it from being evenly distributed? There is no evidence that
electrons are unevenly distributed. According to the Uncertainty Principle, we
cannot know the position of all the electrons simultaneously. Since the
electrons are probabilities, we cannot know their distribution either. If
electrons are probabilities, there is neither a high chance nor a low chance
that electrons are unevenly distributed. The claim that there is a “high
chance” is not supported by any evidence.
It
is said that: “The strong force acting between quarks, unlike other forces, does
not diminish in strength with increasing distance, after a limit (about the
size of a hadron) has been reached... In QCD, this phenomenon is called color
confinement, implying that only hadrons can be observed; this is because the
amount of work done against a force of 10 newtons is enough to create
particle-antiparticle pairs within a very short distance of an interaction.
Evidence for this effect is seen in many failed free quark searches”. Non-observance
of free quarks does not prove that the strong force does not
diminish in strength with increasing distance. This is a wrong assertion. We have
a different explanation for the observed phenomenon.
Mr.
Feynman came up with his (in)famous diagrams that explained nuclear forces
between protons and neutrons using pions to mediate, but like Yukawa
potentials, these diagrams are derived not from mechanical theory but from
experiment. Both the diagrams and the potentials are completely heuristic. Neither
“explanation” explains anything – they simply illustrate the experiment. It is just
a naming, not an unlocking of a mechanism. Mr. Yukawa came up with the meson
mediation theory of the strong force. He did not explain how trading or
otherwise using a pion as mediation could cause an attractive force like the
strong nuclear force. How can particle exchange cause attraction? Mr. Feynman
did not change the theory, he simply illustrated it. Nor did Mr. Feynman
provide a mechanism for the force. Both avoided the central question: “Why does
not the strong force or the nuclear force act differently on protons and
neutrons?” If the proton and neutron have no electromagnetic repulsion and a strong
nuclear force is binding them, then the neutron should be more difficult to
separate from the nucleus than the proton. If the strong force were only a
little stronger than the electromagnetic
force, it would require only the difference of the two to free the proton from
the nucleus, but it would require overcoming the entire strong force to free
the neutron. For this reason the standard model proposes a strong force 100
times stronger than the electromagnetic
force. This lowers the difference in binding energies between the
neutron and proton to cover up the problem. But this is reverse postulation!
Like Yukawa’s
field (discussed later), it does not have any mechanics. The view that “Carrier
particles of a force can themselves radiate further carrier particles”, is
different from QED, where the photons that carry the electromagnetic force
do not radiate further photons. There is no physical explanation for how
carrier particles radiate further carrier particles, or how any radiation of
any particles, primary or secondary, can cause the attractive force in the
nucleus. Mr. Weinberg was forced to admit this in Volume II, p. 329, of his book
“The Quantum Theory of Fields”:
the equation g_{s}^{2} = g^{2} = (5/3)g'^{2}
is “in gross disagreement with the observed values of the coupling constants”.
The variable g_{s}
is supposed to stand for the strong force, but here Mr. Weinberg has it of the
same size as the weak force. Mr. Weinberg says that there is an explanation for
this and that his solution only applies to masses at the scale of the big W
bosons. But there is no evidence that these big gauge bosons have anything to
do with the strong force. There is no experimental evidence that they have
anything to do with creating any of the coupling constants. Even in the
standard model, the connection of large gauge bosons to strong force theory is
tenuous or nonexistent. So not only was Mr. Weinberg unable to clarify the
mechanics of the strong force, but he was also forced to admit that the gauge
mathematics does not even work.
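The disagreement Mr. Weinberg concedes can be restated numerically. If g^{2} = (5/3)g'^{2}, the tree-level weak mixing angle follows as sin^{2}θ_W = g'^{2}/(g^{2} + g'^{2}) = 3/8 = 0.375, whereas the measured value is about 0.231 (the observed figure is an assumed standard tabulated value, not taken from this text):

```python
# If g^2 = (5/3) g'^2, then sin^2(theta_W) = g'^2 / (g^2 + g'^2) = 1 / (1 + 5/3) = 3/8.
predicted = 1.0 / (1.0 + 5.0 / 3.0)
observed = 0.231  # approximate measured weak mixing angle (assumed standard value)
print(round(predicted, 3), observed)  # 0.375 versus 0.231
```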
It
is said that: “Half the momentum in a proton is carried by something other than
quarks. This is indirect evidence for gluons. More direct evidence follows from
looking at the reaction e^{+}e^{-} → q q̄. At high
energies, most of the time these events appear as two jets, one formed from the
materialization of the quark and the other formed from the antiquark. However,
for a fraction of the time, three jets are seen. This is believed to be due to
the process q → q + gluon”. Even from the point of view of
the standard model, it is difficult to explain how half the momentum could fail
to be carried by the particles that comprise the particle itself. We need some
sort of mechanical explanation for that. The momentum is caused by mass. Why
would gluons make up 50% of the lost momentum? What is the evidence in support
of giving full 50% of a real parameter to ad hoc particles? How can
carrier particles carry half the real momentum? These are the mediating or
carrier particles in the theory, with zero evidence. If gluons are field
particles, they must be able to travel. When they are in transit, their
momentum cannot be given to the proton. The gluon either travels to transmit a
force, or it does not. If it travels, it cannot make up 50% of the momentum of
the proton. If it does not travel, then it cannot transmit the force. Thus, the
theory of the strong force is severely flawed.
We explain the strong force by a
mechanism called “chiti”, which
literally means consolidation. While discussing Coulomb’s law in later pages,
we will show that contrary to popular belief, charge interaction in all emission
fields takes place in four different ways. Two positively charged particles
interact by exploding. But it is not so for interaction between two negatively
charged particles. Otherwise there would be no electricity. The strong force
holds the positively charged particles together. This process generates spin.
We will discuss the mechanism while describing spin. Proximity-distance variables generate weak
interaction (vahiryaama) where only
the boundary shifts. This process also gives rise to angular momentum. Both
strong forces and weak forces consolidate (we call it samgraha) two particles. While the strong force consolidates it
fully (we call it dhaarana), the weak
force consolidates both partially.
Distance-proximity variables
generate electromagnetic interaction where the bound field interacts with the
centre of mass of other particles (upayaama).
The modern view that messenger photons mediate electromagnetic interaction is
erroneous, as the photon field cannot create electricity or magnetism
without the presence of an ion field. The photons must drive electrons or
positive ions in order to create the forces of electricity and magnetism.
Normally, the massless photons cannot create macro fields on their own. Further, since the photon is said
to be its own antiparticle, how does the same particle cause both attraction
and repulsion? Earlier we had pointed out the background structure and its
relationship with universal constants. When minimal energy moves through the
universal background structure, it generates light. This transfer of momentum
is known as the photon. Since the density of the universal background
structure is minimum, the velocity of light is the maximum.
Distance-distance variables generate
radioactive disintegration, which leads to a part of the mass of the nucleus
being ejected (yaatayaama) in beta decay
(saamparaaya gati) to be coupled with
a negatively charged particle. We will explain the mechanism separately.
These four are direct contact
interactions (dhaarana) which operate
from within the body. All four are complementary forces and are needed for
particle formation, as otherwise stable chemical reactions would be impossible.
For formation of atoms with higher and lower mass numbers, only the nucleus (and
not the full body) interacts with the other particles. Once the centre of mass
is determined, the boundary is automatically fixed, as there cannot be a centre
without a boundary. Gravitational interaction (udyaama), which stabilizes the orbits of two particles or bodies
around their common barycentre at the maximum possible distance (urugaaya pratishthaa), belongs to a
different class altogether, as it is a partial interaction between the two bodies
treating each as a whole and without interfering with their internal dynamics (aakarshana). This includes gravitational
interaction between subsystems within a system. The internal dynamics of the
subsystems are not affected by gravitation.
Action
is said to be an attribute of the dynamics of a physical system. Physical
laws specify how a physical quantity varies over infinitesimally small changes
in time, position, or other independent variables in its domain. It is also
said to be a mathematical function, which takes the trajectory (also called path or history) of the system as its argument and has a real number as
its result. Generally, action takes different values for different paths.
Classical mechanics postulates that the path actually followed by a physical
system is that for which the action is minimized, or is stationary. These
statements are evidently self-contradictory. A stationary path is position and
not action. The particle and its forces/fields may be useful “mathematical
concepts”, but they are approximations to reality and do not physically exist
by themselves. There is a fundamental flaw in such description because it
considers the effect of the four fundamental forces described above not together,
but separately.
For example,
while discussing Coulomb’s law, it will be shown that when Mr. Rutherford proposed his atomic model, he assumed that the
force inside the atom is an electrostatic force. Thus, his equations treat the
scattering as due to the Coulomb force, with the nucleus as a pointcharge. Both
his equations and his size estimates are still used though they have been
updated (but have never been seriously recalibrated, much less reworked). This
equation matches data up to a certain kinetic energy level, but fails after
that. Later physicists have assigned interaction with the strong force in
addition to the weak force to explain this mismatch. But even there, gravity
and radioactive disintegration have been ignored. We will discuss the fallacies
in this explanation while discussing electroweak theory.
Since all
actions take place after application of energy, which is quantized, what the
above descriptions physically mean is that action is the effect of application
of force that leads to displacement. Within the dimensional boundary, it acts
as the four fundamental forces of Nature that are responsible for formation of
particles (we call it vyuhana –
literally stitching). Outside the dimensional boundary, it acts as the gravitational
interaction that moves the bodies in fixed orbits (we call it prerana – literally dispatch). After initial
displacement, the force ceases to act on the particle and the particle moves on
inertia. The particle then is subjected to other forces, which change its
state again. This step-by-step interaction with various forces continues in a
chain reaction (we call it dhaara). The effects of the four forces described
in the previous para are individually different: total confinement (aakunchana), loose confinement (avakshepana), spreading from high
concentration to low concentration (prasaarana)
and disintegration (utkshepana).
Thus, individually these forces can continuously displace the particle only in
one direction. Hence they cannot change the state of any particle beyond this.
The change of state is possible only when all these forces act together on the
body. Since these are inherent properties of the body, they can only be
explained as transformation of the same force into these four forces. That way
we can unite all forces.
Gravity between two bodies stabilizes
their orbits based on the mass-energy distribution over an area at the maximum
possible distance (urugaaya pratisthaa).
It is mediated by the field that stabilizes the bodies in proportion to their dimensional
density over the area. Thus, it belongs to a different class where the bodies
interact indirectly through the field (aakarshana).
When it stabilizes proximally, it is called acceleration due to gravity. When
it stabilizes at a distance, it is known as gravitation (prerana or gamana).
Just as the acceleration due to gravity g varies from place to place, G also varies from system to
system, though it is not locally apparent. This shows that not only the four
fundamental forces of Nature, but also gravitation is essential for structure
formation, as without it, even the different parts of the body will not exist
in a stable configuration.
The above principle is universally
seen in every object or body. In the human body, the breathing in (praana) represents strong interaction,
the breathing out (also other excretory functions  apaana) represents radioactive disintegration, the functions of
heart and lungs (vyaana and udaana) represent weak interaction and
electromagnetic interactions respectively, and the force that does the
finetuning (samaana) represents
gravitation.
The concept can be further explained
as follows: Consider two forces of equal magnitude but opposite in direction
acting on a point (like the centre of mass and the diameter that regulate the
boundary of a body). Assuming that no other forces are present, the system would
be in equilibrium and it would appear as if no force is acting on it. Now
suppose one of the forces is modified due to some external interaction. The
system will become unstable and the forces of inertia, which were earlier not
perceptible, would appear as a pair of two oppositely directed forces. The
magnitude of the new forces would not be the same as the earlier forces,
because it would be constantly modified due to the changing mass-energy
distribution within the body. The net effect on the body due to the modified
force would regulate the complementary force in the opposite direction. This is
reflected in the apparently elliptical orbits of planets. It must be remembered
that a circle is a special case of an ellipse, where the distance between the
two foci is zero.
All planets go
round the Sun in circular orbits with radius r_{0}, whose center is the Sun itself.
Due to the motion of the Sun, the center of the circle shifts in a forward
direction, i.e., the direction of the motion of the Sun, by ∆r, making the new
position r_{0}+∆r in the direction of motion. Consequently, the point
in the opposite direction shifts to a new position r_{0}−∆r
because of the shifted center. Hence, if we plot the motion of the planets
around the Sun and try to close the orbit, it will appear as if it is an
ellipse, even though it is never a closed shape. The picture below depicts this
phenomenon.
An ellipse with
a small eccentricity is identical to a circular orbit, in which the center of
the circle has been slightly shifted. This can be seen more easily when we
examine in detail the transformations of shapes from a circle to an ellipse.
However, when a circle is slightly perturbed to become an ellipse, the change
of shape is usually described by the gradual transformation from a circle to
the familiar elongated characteristic shape of an ellipse. In the case of the
elliptical shape of an orbit around the sun, since the eccentricity is small,
this is equivalent to a circle with a shifted center, because in fact, when
adding a small eccentricity, the first mathematical term of the series
expansion of an ellipse appears as a shift of the central circular field of
forces. It is only the second term of the series expansion of an ellipse,
which flattens the orbit into the wellknown elongated shape. It may be noted
that in an elliptical orbit, the star is at one of the two foci. That specific
focus determines the direction of motion.
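The series-expansion statement above can be verified numerically. For an ellipse with one focus at the origin, the centre sits at a distance a·e from the focus, while every point of the curve stays within about a·e²/2 of a circle of radius a drawn around that centre: the first-order effect of eccentricity is a shift, and the flattening enters only at second order. A minimal sketch, using plain ellipse geometry and a Jupiter-like eccentricity:

```python
import math

a, e = 1.0, 0.048                 # semi-major axis (unit) and a small eccentricity
b = a * math.sqrt(1.0 - e * e)    # semi-minor axis

# Sample the ellipse (focus at origin, centre at (-a*e, 0)) and measure how far
# each point deviates from a circle of radius a about the shifted centre.
max_dev = 0.0
for k in range(3600):
    E = 2.0 * math.pi * k / 3600.0
    x = a * (math.cos(E) - e)     # coordinates relative to the focus
    y = b * math.sin(E)
    r_centre = math.hypot(x + a * e, y)
    max_dev = max(max_dev, abs(r_centre - a))

# The shift of the centre is of order e; the residual deviation is of order e^2.
print(max_dev, e * e)
```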
Now let us
examine the general concept of elliptical orbits. The orbital velocity of an
orbiter at any point in the orbit is the vector addition of two independent
motions: the velocity acquired from the centripetal acceleration at that point in the field, which
determines the curve, and the tangential velocity, which is constant and
directed in a straight line. The orbiter must retain its innate motion throughout
the orbit irrespective of the shape of the orbit. Otherwise, its innate motion
would dissipate. In that case, the orbit would not be stable. Therefore, the
orbiter always retains its innate motion over each and every differential. If
we take the differentials at perihelion and aphelion and compare them, we find that
the tangential velocities due to innate motion are equal, meaning that
the velocity tangent to the ellipse is the same in both places. But the
accelerations are vastly different. Yet
the ellipse shows the same curvature at both places. If we draw a line joining
the perihelion and aphelion and bisect it, the points where this line
intersects the orbit show equal velocities, but in opposite directions. Thus,
one innate motion shows itself in four different ways. These are macro
manifestations of the four fundamental forces of Nature, as explained below.
From Kepler’s second
law (The Law of Equal Areas), we know that an imaginary line drawn from the
center of the sun to the center of the planet will sweep out equal areas in
equal intervals of time. Thus, the apparent velocity of the planet at perihelion
(closest point, where the strength of gravity would be much more) is faster
than that at the aphelion (farthest point, where the strength of gravity would
be much less). Assuming the planets to have equal mass, these cannot be
balanced (since the distances are different). There is still a net force that makes
the near-orbit planet (or the planet at perihelion) slide away fast, but allows the far-orbit
planet (or the planet at aphelion) to move apparently slowly. These are the
proximity-proximity and proximity-distance variables. Since the proximity-proximity
interaction happens
continuously, keeping the planet at a constant tangential velocity, we call this motion nitya gati – meaning perpetual motion. Since
the proximity-distance interaction leads to coupling of one particle with other particles, like
the proton-neutron reaction at the micro level or the centripetal
acceleration of the planet at the macro level, we call this motion yagnya gati – meaning coupled motion.
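The perihelion/aphelion speed contrast described above follows directly from the equal-areas law: r·v is the same at the two apsides, so v_p/v_a = r_a/r_p. A quick check with the Jupiter distances quoted later in this section (741 and 817 million km):

```python
# Equal-areas (angular momentum) balance at the apsides: r_p * v_p = r_a * v_a.
r_p = 741e6   # perihelion distance in km (figure from the text)
r_a = 817e6   # aphelion distance in km (figure from the text)

speed_ratio = r_a / r_p   # v_p / v_a
print(round(speed_ratio, 3))  # ~1.103: faster at perihelion, slower at aphelion
```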
The
motion of the planets at the two points where the line through the midpoint of the ellipse
intersects its circumference gives the distance-proximity and distance-distance
variables. It is because at one point the planet moves towards a net lower
velocity, whereas at the other point, it moves towards a net higher velocity. We
call the former motion samprasaada gati
– meaning constructive motion, because it leads to interaction among particles
and brings the planet nearer the Sun. We call the beta particle suparna – meaning isolated radioactive
particle. Hence we call the latter motion saamparaaya
gati – meaning radioactive disintegration.
Now, let us consider the example of
the Sun–Jupiter orbit. The mass of Jupiter is approximately 1/1047 of that of the
Sun. The barycenter of the Sun–Jupiter system lies above the Sun’s
surface, at about 1.068 solar radii from the Sun’s center, which amounts to
about 742,800 km. Both
the Sun and Jupiter revolve around this point. At perihelion,
Jupiter is 741 million km or 4.95 astronomical units (AU) from the Sun. At
aphelion it is 817 million km or 5.46 AU. That gives Jupiter a semi-major axis
of 778 million km or 5.2 AU and a mild eccentricity of 0.048. This shows the near relationship between
relative mass and barycenter point that balances both bodies. This balancing
force that stabilizes the orbit is known as gravity.
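The barycentre figures quoted above follow from the lever rule: the barycentre lies at a distance d·m_J/(m_S + m_J) from the Sun's centre, where d is the Sun–Jupiter separation. A sketch using the numbers in the text (the solar radius of 695,700 km is an assumed standard value, not stated above):

```python
mass_ratio = 1.0 / 1047.0   # Jupiter/Sun mass ratio (from the text)
d = 778e6                   # mean Sun-Jupiter separation in km (from the text)
r_sun = 695700.0            # solar radius in km (assumed standard value)

# Lever rule: the barycentre divides the separation inversely to the masses.
r_bary = d * mass_ratio / (1.0 + mass_ratio)
print(round(r_bary), round(r_bary / r_sun, 3))  # ~742,366 km, ~1.067 solar radii
```

This reproduces, to within rounding of the input figures, the 742,800 km and 1.068 solar radii quoted above.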
If
the bodies have different masses, the forces exerted by them on the external
field would not be equal. Thus, they
would be propelled to different positions in the external field, where the net
density over the area would be equal for both. Obviously this would be in
proportion to their masses. Thus, the barycenter, which represents the
center of mass of the system, is
related to proportionate mass between the two bodies. The barycenter is one of the foci of the
elliptical orbit of each body. It changes continuously due to the
differential velocity of the two bodies. When these effects appear between the centre of mass and the boundary of
a body, they are termed the four fundamental forces of Nature: strong force
and radioactive disintegration form one couple, and weak force and
electromagnetic force form the other, less strong, couple. The net effect of the
internal dynamics of the body (inner space dynamics) is expressed as its charge
outside it.
Assuming that
gravity is an attractive force, let us take the example of the Sun attracting
Jupiter towards its present position S, and Jupiter attracting the Sun towards
its present position J. The two forces are in the same line and balance. If
both bodies are relatively stationary objects or moving with uniform velocity
with respect to each other, the forces, being balanced and oppositely directed,
would cancel each other. But since both are moving with different velocities,
there is a net force. The forces exerted by each on the other will take some
time to travel from one to the other. If the Sun attracts Jupiter toward its
previous position S’, i.e., when the force of attraction started out to cross
the gulf, and Jupiter attracts the Sun towards its previous position J’, then
the two forces give a couple. This couple will tend to increase the angular
momentum of the system, and, acting cumulatively, it will soon cause an
appreciable change of period. The cumulative effect of this makes the planetary
orbits wobble, as shown below.
MASS-ENERGY EQUATION
REVISITED:
Before
we re-examine the Lorentz force law in light of the above description, we must
re-examine the mass-energy equivalence equation. The equation e = mc^{2}
is well established and cannot be questioned. But its interpretation must be
questioned for the simple reason that it does not conform to mathematical
principles. But before that, let us note some facts Mr. Einstein either overlooked or glossed over.
It is generally accepted
that space is homogeneous. We posit that space only “looks” homogeneous over very large scales, because what we
perceive as space is the net effect of radiation reaching our eyes or the
measuring instrument. Since massenergy density at different points in space
varies, it cannot be homogeneous. Magnetic force acts only between magnetic
substances and not between all substances in the same space. Gravity interacts
only with mass. Whether inside a black hole or in open space, it is only a
probability amplitude distribution and it is part of the fields that exist in
the neighborhood of the particles. Thus, space cannot be homogeneous. This has
been proved by the latest observations of the Cosmic Microwave Background – the so-called afterglow of the big bang.
This afterglow is not perfectly smooth
– hot and cold spots speckle the sky. In recent years, scientists have
discovered that these spots are not quite as randomly distributed as they first
appeared – they align in a pattern that points out a special direction in space.
Cosmologists have dubbed it the “axis of evil”. More hints of a cosmic arrow
come from studies of supernovae, stellar cataclysms that briefly outshine
entire galaxies. Cosmologists have been using supernovae to map the
accelerating expansion of the universe. Detailed statistical studies reveal
that supernovae are moving even faster in a line pointing just slightly off the
“axis of evil”. Similarly, astronomers have measured galaxy clusters streaming
through space at a million miles an hour toward an area in the southern sky.
For the same reason, we cannot accept that space is isotropic. Taking the temperature of the cosmic background radiation (2.73 K) as the unit, absolute zero, which is a notch below the melting point of helium at about −272 °C, is exactly 100 units below the freezing point of water. Similarly, the interiors of stars and galaxies are at most 1000 times hotter than the melting point of carbon, i.e., about 3500 °C. The significance of these two elements is well known and can be discussed separately. The ratio 100:1000 is also significant. Since these bodies are scattered throughout space, and hence affect its temperature at different points, space cannot be isotropic either. We have hot stars and icy planets and other Kuiper Belt Objects (KBOs) in space. If we take the average, we get a totally distorted picture, which is not a description of reality.
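The 100:1 ratio invoked above can be checked directly; a minimal sketch, using standard reference values in kelvin:

```python
# Checking the ratio quoted above (standard reference values, in kelvin).
cmb = 2.73               # cosmic microwave background temperature, K
water_freeze = 273.15    # freezing point of water, K

# The freezing point of water, measured in units of the CMB temperature:
units = water_freeze / cmb
print(units)  # close to 100, as the text asserts
```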
Space is not symmetric under time translation either. Just as space is the set of intervals between all objects in terms of nearness to or farness from a designated point or observer, time is the interval between successive changes in the states of objects in terms of nearness to or farness from a designated epoch or event, or the time of measurement. Since all objects in space do not continuously change their position with respect to all other objects, space is differentiated from time, which is associated with the continuous change of state of all objects. If we measure the spread of an object, i.e., the relationship between its “inner space” and its “outer space”, from two opposite directions, there is no change in its position. Thus the concept of a negative direction of space is valid. Time is related to change of state, which materializes through the interaction of bodies with forces. Force is unidirectional: it can only push. There is no such thing as a pull; it is always a complementary push from the opposite direction. (Magnetism acts only between magnetic substances and not universally like other forces. Magnetic fields do not obey the inverse square law. It has a different explanation.) Consider an example: A + B → C + D.
Here a force makes A interact with B to produce C and D. The same force does not act on C and D, as they do not exist at that stage. If we change the direction of the force, B acts on A. Here only the direction of the force changes, not the interval between the states before and after its application (time). Moreover, C and D do not exist even at that stage. Hence the equation would be:
B + A → C + D and not B + A ← C + D.
Thus, it does not affect causality. There can be no negative direction for time, nor for cause and effect. Cause must precede effect.
Space is not symmetric under a “boost” either. That the equations of physics work the same in a moving coordinate system as in a stationary one has nothing to do with space. Space in no way interacts with or affects them.
Transverse waves are always characterized by particle motion perpendicular to the wave motion. This implies the existence of a medium through which the reference wave travels and with respect to which the transverse wave travels in a perpendicular direction. In the absence of the reference wave, which is a longitudinal wave, the transverse wave cannot be characterized as such. All transverse waves are background invariant by their very definition. Since light propagates as transverse waves, Mr. Maxwell used a transverse wave and aether fluid model for his equations. Mr. Feynman has shown that the Lorentz transformation and the invariance of the speed of light follow from Maxwell’s equations. Mr. Einstein’s causal analysis in SR is based on Mr. Lorentz’s motional theory, where a propagation medium is essential to solve the wave equation. Mr. Einstein’s aetherless relativity is supported neither by Maxwell’s equations nor by the Lorentz transformations, both of which are medium (aether) based. Thus, the non-observation of aether drag (as in the Michelson-Morley experiments) cannot serve to ultimately disprove the aether model. The equations describing spacetime, based on Mr. Einstein’s theories of relativity, are mathematically identical to the equations describing ordinary fluid and solid systems. Yet, paradoxically, physicists have denied the aether model while using the formalism derived from it. They do not realize that Mr. Maxwell used a transverse wave model, whereas aether drag concerns longitudinal waves. Thus, the notion that Mr. Einstein’s work is based on an “aetherless model” is a myth. All along he used the aether model, while claiming the very opposite.
If light consists of particles, as Mr. Einstein had suggested in his 1911 paper, the principle of the constancy of the observed speed of light seems absurd. A stone thrown from a speeding train can do far more damage than one thrown from a train at rest, since the speed of the particle is not independent of the motion of the object emitting it. And if we take light to consist of particles and assume that these particles obey Newton’s laws, then they would conform to Newtonian relativity and thus automatically account for the null result of the Michelson-Morley experiment without recourse to contracting lengths, local time, or Lorentz transformations. Yet Mr. Einstein resisted the temptation to account for the null result in terms of particles of light and simpler, familiar Newtonian ideas, and introduced as his second postulate something that was more or less obvious when thought of in terms of waves in an aether.
Mr. Maxwell’s view that the sum total of the electric field around a volume of space is proportional to the charges contained within has to be considered carefully. Charge always flows from higher concentration to lower concentration till the system acquires equilibrium. But he speaks of a field “around a volume of space” and “charges contained within”. This means a confined space, i.e., an object and its effects on its surrounding field. It is not free or unbound space.
Similarly, his view that the sum total of the magnetic field around a volume of space is always zero, indicating that there are no magnetic charges (monopoles), has to be considered carefully. With a bar magnet, the number of field lines “going in” and those “going out” cancel each other exactly, so that there is no deficit that would show up as a net magnetic charge. But then we must distinguish between the field lines “going in” and “going out”. Electric charge is always associated with heat, and magnetic charge with the absence or confinement of heat. Where the heat component dominates, it pushes out; where the magnetic component dominates, it confines or goes in. This is evident from the magnetospheric field lines and reconnections of the Earth-Sun and the Saturn-Sun systems. This is the reason why a change over time in the electric field, or a movement of electric charges (current), induces a proportional vorticity in the magnetic field, and a change over time in the magnetic field induces a proportional vorticity in the electric field, but in the opposite direction. In what is called free space, these conditions do not apply, as charge can only be experienced by a confined body. We do not need the language of vector calculus to state these obvious facts.
In the example of divergence, it is usually believed that if we imagine the electric field as lines of force, divergence tells us how the lines are “spreading out”. For the lines to spread out, there must be something to “fill the gaps”. These things would be particles with charge. But there are no such things in empty space, so it is said that the divergence of the electric field in empty space is identically zero. This is put mathematically as: div E = 0 and div B = 0.
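The textbook statement being paraphrased here can be checked numerically; a minimal sketch, computing the divergence of a point-charge field at a point away from the charge (constants dropped):

```python
# Central-difference check that div E = 0 away from a point charge,
# for the Coulomb-type field E(r) = r / |r|^3 (constants dropped).
def E(x, y, z):
    r3 = (x * x + y * y + z * z) ** 1.5
    return (x / r3, y / r3, z / r3)

def div_E(x, y, z, h=1e-5):
    # Sum of the three partial derivatives, each by central difference.
    dEx = (E(x + h, y, z)[0] - E(x - h, y, z)[0]) / (2 * h)
    dEy = (E(x, y + h, z)[1] - E(x, y - h, z)[1]) / (2 * h)
    dEz = (E(x, y, z + h)[2] - E(x, y, z - h)[2]) / (2 * h)
    return dEx + dEy + dEz

print(div_E(1.0, 2.0, 3.0))  # numerically ~0 away from the charge at the origin
```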
The above statement is wrong physics. Since space is not empty, it must contain something. There is nothing in the universe that does not contain charge. After all, even quarks and leptons have charge. Neutrons have a small residual negative charge (1/11 that of the electron, as per our calculation). Since charges cannot be stationary unless confined, i.e., unless they are contained in or by a body, they must always flow from higher concentration to lower concentration. Thus, empty space must be full of flowing charge, as cosmic rays and other radiating particles and energies. In the absence of sufficient obstruction, they flow in straight lines and not in geodesics.
This does not mean that divergence in space is a number or a scalar field, because we know that the mean density of free space is not the same everywhere, and density fluctuations affect the velocity of charge. As an example, let us dump huge quantities of common salt or gelatin powder on one bank of a river flowing with a constant velocity. This starts diffusing across the breadth of the river, imparting a viscosity gradient. Now if we put a small canoe on the river, the canoe will take a curved path, just as light passing by massive stars bends. We call this “vishtambhakatwa”. The bending will be proportional to the viscosity gradient. We do not need relativity to explain this physics. We require mathematics only to calculate “how much” the canoe or the light pulse will be deflected, not whether, why, when or where it will be deflected. Since these are proven facts, div E = 0 and div B = 0 are not constant functions and are a wrong description of physics.
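The claim that the bending is proportional to the gradient can be illustrated with a simple numerical sketch. Here a paraxial gradient-index ray rule, d²y/dx² ≈ (1/n) dn/dy, stands in for the text’s “viscosity gradient”; the numbers are illustrative assumptions, not measured values:

```python
# Euler integration of a paraxial ray in a medium whose index n(y) has a
# constant transverse gradient.  The gradient stands in for the viscosity
# gradient of the river analogy; all numbers are illustrative.
def deflection(grad, n0=1.0, length=1.0, steps=10000):
    dx = length / steps
    y, slope = 0.0, 0.0
    for _ in range(steps):
        n = n0 + grad * y
        slope += (grad / n) * dx   # bending accumulated per unit path
        y += slope * dx
    return y

d1 = deflection(grad=0.01)
d2 = deflection(grad=0.02)
print(d1, d2)  # the deflection roughly doubles when the gradient doubles
```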
Though Mr. Einstein used the word “speed” for light (“die Ausbreitungsgeschwindigkeit des Lichtes mit dem Orte variiert”: the speed of light varies with the locality), most translations of his work convert “speed” to “velocity”, so that scientists generally tend to think of it as a vector quantity. They miss the way Mr. Einstein refers to c, which is most definitely a speed. The word “velocity” in the translations is the common usage, as in “high velocity bullet”, and not the vector quantity that combines speed and direction. Mr. Einstein held that the speed varies with position, hence it causes curvilinear motion.
He backed it up in his 1920 Leyden Address, where he said: “According to this theory the
metrical qualities of the continuum of spacetime differ in the environment of
different points of spacetime, and are partly conditioned by the matter
existing outside of the territory under consideration. This spacetime
variability of the reciprocal relations of the standards of space and time, or,
perhaps, the recognition of the fact that ‘empty space’ in its physical relation
is neither homogeneous nor isotropic, compelling us to describe its state by
ten functions (the gravitation potentials gμν), has, I think, finally disposed
of the view that space is physically empty”. This is a complex way of stating the obvious.
Einsteinian spacetime curvature calculations were based on vacuum, i.e., on a medium without any gravitational properties (since it has no mass). If a material medium is considered instead (which space certainly is), then it will have a profound effect on the spacetime geometry, as opposed to that in vacuum. It will make the gravitational constant different for different localities. We hold this view. We do not fix any upper or lower limits on the corrections that would apply to the gravitational constant. We make it variable in seven and eleven groups. We also do not add a repulsive gravitational term to general relativity, as we hold that forces only push.
Since space is not empty, it must have different densities at different points. The density is a function of mass, and change of density is a function of energy. Thus, the equation e = mc^{2} does not show mass-energy equivalence, but the density gradient of space. The square of a velocity has no physical meaning except when used to measure an area of length and breadth equal to the distance measured by c. The above equation does not prove mass-energy convertibility; it only shows the energy required to spread a designated quantity of mass over a designated area, so that the mean density can be called a particular type of sub-field, or jaala, as we call it.
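Whatever interpretation is placed on e = mc^{2}, its numerical content is fixed; a minimal check for one kilogram of mass:

```python
# Numerical content of e = mc^2 for m = 1 kg, independent of interpretation.
c = 2.998e8   # speed of light, m/s
m = 1.0       # mass, kg
e = m * c * c # energy, J
print(e)      # roughly 9.0e16 J
```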
ELECTROWEAK THEORY REVISITED:
The interactions we discussed while defining dimension appear to be different from the strong, weak, and electromagnetic interactions. The most significant difference involves the weak interactions, which are thought to be mediated by the heavy W and Z bosons. Now we will discuss this aspect.
The W boson is said to be the mediator in beta decay, facilitating the flavor change or reversal of a quark from a down quark to an up quark: d → u + W^{−}. The mass of a quark is said to be about 4 MeV and that of a W boson about 80 GeV, almost the size of an iron atom. Thus, the mediating particle outweighs the mediated particle by a ratio of 20,000 to 1. Since Nature is extremely economical in all operations, why should it require such a heavy boson to flip a quark over? There is no satisfactory explanation for this.
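The 20,000:1 ratio follows directly from the figures quoted; a quick check, with both masses in MeV:

```python
# Ratio of the quoted W-boson mass to the quoted down-quark mass.
m_W = 80e3     # ~80 GeV, expressed in MeV
m_quark = 4.0  # ~4 MeV
print(m_W / m_quark)  # 20000.0 -- the 20,000:1 ratio cited above
```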
The W^{−} boson then decays into an electron and an antineutrino: W^{−} → e^{−} + ν̄. Since the neutrinos and antineutrinos are said to be massless and the electron weighs about 0.5 MeV, there is a great imbalance. Though the decay is not intended to be an equation, a huge amount of energy magically appearing from nowhere at the required time and then disappearing into nothing needs explanation. We have shown that uncertainty is not a law of Nature, but a result of natural laws relating to measurement that reveal a kind of granularity at certain levels of existence that is related to causality. Thus, the explanations of Dirac and others in this regard are questionable.
Messrs. Glashow, Weinberg, and Salam “predicted” the W and Z bosons using an SU(2) gauge theory. But the bosons in a gauge theory must be massless. Hence one must assume that the masses of the W and Z bosons were “predicted” by some other mechanism that gives the bosons their mass. It is said that the mass is acquired through the Higgs mechanism, a form of spontaneous symmetry breaking. But that is an oxymoron. Spontaneous symmetry breaking is symmetry that is broken spontaneously. Something that happens spontaneously requires no mechanism or mediating agent. Hence the Higgs mechanism has to be a spontaneous action and not a mechanism. It does not require a mediating agent, at least not the Higgs boson. Apparently, the SU(2) problem has been sought to be solved by first arbitrarily calling it a symmetry, then pointing to the spontaneous breaking of this symmetry without any mechanism, and finally calling that breaking the Higgs mechanism! Thus, the whole exercise produces only a name!
Parity violation means that beta decay works only on left-handed particles or right-handed antiparticles. Messrs. Glashow, Weinberg, and Salam provided a theory to explain this using a lot of complicated renormalized mathematics, which showed both a parity loss and a charge conjugation loss. At low energies, one of the Higgs fields acquires a vacuum expectation value and the gauge symmetry is spontaneously broken down to the symmetry of electromagnetism. This symmetry breaking would produce three massless Goldstone bosons, but they are said to be “eaten” by three of the photon-like fields through the Higgs mechanism, giving them mass. These three fields become the W^{−}, W^{+}, and Z bosons of the weak interaction, while the fourth gauge field, which remains massless, is the photon of electromagnetism.
All the evidence in support of the Higgs mechanism turns out to be evidence that huge energy packets near the predicted W and Z masses exist. In that case, why should we accept that, because big particles equal to the W and Z masses exist for very short times, the SU(2) gauge theory cannot be correct in predicting zero masses, and that the gauge symmetry must be broken, so that the Higgs mechanism must be proved correct without any mechanical reason for such breaking? There are other explanations for this phenomenon. If the gauge theory needs to be bypassed with a symmetry breaking, it was not a good theory to begin with. Normally, if equations yield false predictions, like these zero boson masses, the “mathematics” must be wrong, because mathematics is done at “here-now” and zero is the absence of something at “here-now”. One cannot patch it with a non-mechanical “field mechanism”. Thus, the Higgs mechanism is not a mechanism at all. It is a spontaneous symmetry breaking, and there is no evidence for any mechanism in something that is spontaneous.
Since charge is perceived through a mechanism, a broken symmetry that is gauged may mean that the vacuum is charged. But charge is not treated as mechanical in QED. Even before the Higgs field was postulated, charge was thought to be mediated by virtual photons. Virtual photons are non-mechanical, ghostly particles. They are supposed to mediate forces spontaneously, with no energy transfer. This is mathematically and physically invalid. Charge cannot be assigned to the vacuum, since that amounts to assigning characteristics to the void. One of the first postulates of physics is that extension, force, motion, or acceleration cannot be assigned to “nothing”. For charge to be mechanical, it would have to have extension or motion. All virtual particles and fields are imaginary assumptions. The Higgs field, like Dirac’s field, is “mathematical” imagery.
The proof of the mechanism is said to have been obtained in the experiment at the Gargamelle bubble chamber, which photographed the tracks of a few electrons suddenly starting to move, seemingly of their own accord. This is interpreted as a neutrino interacting with the electron by the exchange of an unseen Z boson. The neutrino is otherwise undetectable, so the only observable effect is the momentum imparted to the electron by the interaction. No neutrino or Z boson is detected. Why should this be interpreted to validate the imaginary postulate? The electron could have moved for many other reasons.
It is said that the W and Z bosons were detected in 1983 by Carlo Rubbia. This experiment only detected huge energy packets that left a track interpreted to be a particle. It did not show that the particle was a boson or that it was taking part in any weak mediation. Since large mesons can be predicted by other, simpler methods (e.g., stacked spins, as proposed by some), this particle detection is not proof of the weak interaction or of the Higgs mechanism. It is only an indication of a large particle or two.
In Section 19.2 of his book “The Quantum Theory of Fields”, Mr. Weinberg says: “We do not have to look far for examples of spontaneous symmetry breaking. Consider a chair. The equations governing the atoms of the chair are rotationally symmetric, but a solution of these equations, the actual chair, has a definite orientation in space”. Classically, it was thought that parity was conserved because spin is an energy state. To conserve energy, there must be an equal number of left-handed and right-handed spins. Every left-handed spin cancels a right-handed spin of the same size, so that the sum is zero. If they were created from nothing, as in the Big Bang, they must also sum to nothing. Thus an equal number of left-handed and right-handed spins is assumed at the quantum level.
It was also expected that interactions conserve parity, i.e., anything that can be done from left to right can also be done from right to left. Observations like beta decay showed that parity is not conserved in some quantum interactions, because some interactions showed a preference for one spin over the other. The electroweak theory supplied a mystical and non-mechanical reason for it. But it is known that parity is not always conserved. Mr. Weinberg seems to imply that because there is a chair facing west, and not one facing east, there is a parity imbalance: that one chair has literally lopsided the entire universe! This he explains as a spontaneously broken symmetry!
A spontaneously broken symmetry in field theory is always associated with a degeneracy of vacuum states. For the vacuum, the expectation value of a set of scalar fields must be at a minimum of the vacuum energy. It is not certain that in such cases the symmetry is broken, because there is the possibility that the true vacuum is a linear superposition of vacuum states in which the summed scalar fields have various expectation values, which would respect the assumed symmetry. So a degeneracy of vacuum states is the fall of these expectation values into a non-zero minimum. This minimum corresponds to a state of broken symmetry.
Since the true vacuum is non-perceptible, hence nothingness, with only one possible state (zero), logically it would have no expectation values above zero. However, Mr. Weinberg assumed that the vacuum can have a range of non-zero states, giving both it and his fields a non-zero energy. Based on this wrong assumption, Mr. Weinberg manipulated these possible ranges of energies, assigning a possible quantum effective action to the field. Then he started looking at various ways it might create or subvert parity. Since any expectation value above zero for the vacuum is wholly arbitrary and only imaginary, he could have chosen either parity or non-parity. In view of Yang and Lee’s finding, Mr. Weinberg chose non-parity. This implied that his non-zero vacuum degenerates to the minimum. Then he applied this to the chair! Spontaneous symmetry breaking actually occurs only for idealized systems that are infinitely large. Does Mr. Weinberg then claim that a chair is an idealized system that is infinitely large?
According to Mr. Weinberg, the appearance of broken symmetry for a chair arises because it has a macroscopic moment of inertia I, so that its ground state is part of a tower of rotationally excited states whose energies are separated by only tiny amounts, of the order ħ^{2}/I. This gives the state vector of the chair an exquisite sensitivity to external perturbations, so that even very weak external fields will shift the energy by much more than the energy difference of these rotational levels. As a result, any rotationally asymmetric external field will cause the ground state, or any other state of the chair with definite angular momentum quantum numbers, to rapidly develop components with other angular momentum quantum numbers. The states of the chair that are relatively stable with respect to small external perturbations are not those with definite angular momentum quantum numbers, but rather those with a definite orientation, in which the rotational symmetry of the underlying theory is broken.
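The scale Mr. Weinberg invokes is easy to make concrete; a sketch, with the chair’s moment of inertia assumed to be 1 kg·m²:

```python
# Order-of-magnitude estimate of the rotational level spacing hbar^2/I
# invoked above, for an assumed chair moment of inertia of 1 kg m^2.
hbar = 1.054571817e-34   # reduced Planck constant, J s
I = 1.0                  # assumed macroscopic moment of inertia, kg m^2
spacing = hbar**2 / I    # energy spacing, J
print(spacing)           # ~1e-68 J: minuscule next to any realistic perturbation
```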
Mr. Weinberg declares that he is talking about symmetry, but he is actually talking about decoherence. He is trying to explain why the chair is not a probability or an expectation value, and why its wave function has collapsed into a definite state. Quantum mathematics works by proposing a range of states. This range is determined by the uncertainty principle. Mr. Weinberg assigned a range of states to the vacuum and then extended that range based on the non-parity finding of Messrs. Yang and Lee. But the chair is not a range of states: it is a state, the ground state. To degenerate or collapse into this ground state, or decohere from the probability cloud into the definite chair we see and experience, the chair has to interact with its surroundings. The chair is most stable when the surroundings are stable (having “a definite orientation”), so the chair aligns itself to this definite orientation. Mr. Weinberg argues that in doing so, it breaks the underlying symmetry. Thus, Mr. Weinberg does not know what he is talking about!
Mr. Weinberg believes that the chair is not just probabilistic as a matter of definite position. Apparently, he believes it is probabilistic in spin orientation also. He even talks about the macroscopic moment of inertia. This is extremely weird, because the chair has no macroscopic angular motion. The chair may be facing east or west, but there is no indication that it is spinning, either clockwise or counterclockwise. Even if it were spinning, there is no physical reason to believe that a chair spinning clockwise should have a preponderance of quanta in it spinning clockwise. QED has never shown that it is impossible for a macro-object spinning clockwise to have all its constituent quanta spinning counterclockwise. Evidently, Mr. Weinberg makes this assumption without any supporting logic, evidence, or mechanism. Spin parity was never thought to apply to macro-objects. A chair facing or spinning in one direction is not a fundamental energy state of the universe, and the Big Bang does not care if there are five chairs spinning left and four spinning right. The Big Bang did not create chairs directly out of the void, so we do not have to conserve chairs!
Electroweak theory, like all quantum theories, is built on gauge fields. These gauge fields have built-in symmetries that have nothing to do with the various conservation laws. What physicists tried to do was to choose gauge fields that matched the symmetries they had found, or hoped to find, in their physical fields. QED began with the simplest group, U(1), but the strong force and the weak force had more symmetries and therefore required SU(2) and SU(3). Because these gauge fields were supposed to be mathematical fields (an abstraction) and not real physical fields, and because they contained symmetries of their own, physicists soon got tangled up in the gauge fields. Later experiments showed that the symmetries in the so-called mathematical fields did not match the symmetries in nature. However, the quantum theory could be saved if the gauge field could somehow be broken, either by adding ghost fields or by subtracting symmetries by “breaking” them. This way, the physicists ended up with 12 gauge bosons, only three of which are known to exist, and only one of which has been well linked to the theory. Of these, the eight gluons are completely theoretical and only fill slots in the gauge theory. The three weak bosons apparently exist, but no experiment has tied them to beta decay. The photon is the only boson known to exist as a mediating “particle”, and it was known long before gauge theory entered the picture.
Quantum theory has got even the only verified boson, the photon, wrong, since the boson of quantum theory is not a real photon: it is a virtual photon! QED could not conserve energy with a real photon, so the virtual photon mediates charge without any transfer of energy. The virtual photon creates a zero-energy field and a zero-energy mediation. The photon does not bump the electron; it just whispers a message in its ear. So, from a theoretical standpoint, the gauge groups are not the solution; they are part of the problem. We should be fitting the mathematics to the particles, not the particles to the mathematics. Quantum physicists claim repeatedly that their field is mainly experimental, but any cursory study of the history of the field shows that this claim is not true. Quantum physics has always been primarily “mathematical”. A large part of 20th century experiment was the search for particles to fill out the gauge groups, and the search continues, because they are searching blindfolded in a dark room for a black cat that does not exist. When the US Congress wanted to curtail funding for research in this vain exercise, the hypothetical Higgs boson (which is non-existent) was named the “God particle” to sway public opinion. Now they claim that they are “tantalizingly close” not to discovering the “God particle”, but to “the possibility of getting a glimpse of it”. How long will the scientists continue to fool the public?
Mr. Weinberg’s book proves the above statement beyond any doubt. 99% of the book is couched in leading “mathematics” that takes the reader through a mysterious maze. This “mathematics” has its own set of rules that defy logical consistency. It is not a tool to measure how much a system changes when some of its parameters change. It is like a vehicle possessed by a spirit: you climb in and it takes you where it wants to go! Quantum physicists never look at a problem without first loading it down with all the mathematics they know, to make it thoroughly incomprehensible. The first thing they do is write everything as integrals and/or partial derivatives, whether they need to be so written or not. Then they bury their particles under matrices and actions and Lagrangians and Hamiltonians and Hermitian operators, and so on, with as much machinery as they can apply. Only after thoroughly confusing everyone do they begin calculating. Mr. Weinberg admits that Goldstone bosons “were first encountered in specific models by Goldstone and Nambu”. It may be noted that the bosons were first encountered not in experiments, but in the mathematics of Mr. Goldstone and Mr. Nambu. As a “proof” of their existence, Mr. Weinberg offers an equation in which the action is invariant under a continuous symmetry, and in which a set of Hermitian scalar fields is subjected to infinitesimal transformations. This equation also involves a finite real matrix. To solve it, he also needs the spacetime volume and the effective potential.
In equation 21.3.36, he gives the mass of the W particle: m_W = ev/(2 sin θ), where e is the electron charge, v is the vacuum expectation value, and θ is the electroweak mixing angle. The angle was taken from elastic scattering experiments between muon neutrinos and electrons, which gave a value for θ of about 28°. Mr. Weinberg develops v right out of the Fermi coupling constant, so that it has a value here of 247 GeV:
v = 2^{−1/4}/√G_F ≈ 247 GeV
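The numbers quoted here can be reproduced; a minimal sketch, where G_F and the coupling strength are standard reference values and θ = 28° is the mixing angle as given in the text:

```python
import math

# Reproducing the figures quoted above.  G_F and alpha ~ 1/128 are standard
# reference values; theta = 28 degrees is the angle given in the text.
G_F = 1.1664e-5                   # Fermi constant, GeV^-2
v = 2**-0.25 / math.sqrt(G_F)     # vacuum expectation value, GeV
e = math.sqrt(4 * math.pi / 128)  # electromagnetic coupling near the weak scale
theta = math.radians(28)

m_W = e * v / (2 * math.sin(theta))
print(v, m_W)  # v comes out near 246-247 GeV; m_W lands in the 80 GeV region
```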
All these are of great interest for the following reasons:
· There is no muon neutrino in beta decay, so the scattering angle of electrons and muon neutrinos tells us nothing about the scattering angles of protons and electrons, or of electrons and electron antineutrinos. The electron antineutrino is about 80 times smaller than a muon neutrino, so it is hard to see how the scattering angles could be equivalent. It appears this angle was chosen afterwards to match the data. Mr. Weinberg even admits it indirectly. The angle was not known until 1994; the W was discovered in 1983, when the angle was unknown.
· Mr. Fermi gave the coupling value to the fermions, but Mr. Weinberg gives the derived value to the vacuum expectation. This means that the W particle comes right out of the vacuum, and the only reason it does not have the full value of 247 GeV is the scattering angle and its relation to the electron. We were initially shocked in 1983 to find 80 GeV coming from nowhere in the bubble chamber, but now we have 247 GeV coming from nowhere. Mr. Weinberg has magically borrowed 247 GeV from the void to explain one neutron decay! He gives it back 10^{−25} seconds later, so that the loan is repaid. But 247 GeV is not a small quantity in the void. It is very big.
Mr. Weinberg says the symmetry breaking is local, not global. It means he wanted to keep his magic as localized as possible. A global symmetry breaking might have unforeseen side-effects, warping the gauge theory in unwanted ways. But a local symmetry breaking affects only the vacuum at a single “point”. The symmetry is broken only within that hole that the W particle pops out of and goes back into. If we fill the hole back fast enough and divert the audience’s gaze with the right patter, we won’t have to admit that any rules were broken or that any symmetries really fell. We can solve the problem at hand, keep the mathematics we want to keep, and hide the spilled milk in a 10^{−25} s rabbit hole.
Mr. Byron Roe’s “Particle Physics at the New Millennium” deals with the same subject in an even weirder fashion. He clarifies: “Imagine a dinner at a round table where the wine glasses are centered between pairs of diners. This is a symmetric situation and one doesn’t know whether to use the right or the left glass. However, as soon as one person at the table makes a choice, the symmetry is broken and the glass for each person to use is determined. It is no longer right-left symmetric. Even though a Lagrangian has a particular symmetry, a ground state may have a lesser symmetry”.
There is nothing
in the above description that could be an analogue to a quantum mechanical
ground state. Mr. Roe implies that the choice determines the ground
state and the symmetry breaking. But there is no existential or mathematical
difference between reality before and after the choice. Before the choice, the
entire table and everything on it was already in a sort of ground state, since
it was not a probability, an expectation, or a wave function. For one thing,
prior choices had been made to bring it to this point. For another, the set
before the choice was just as determined as the set after the choice, and just
as real. Decoherence did not happen with the choice. It either happened long
before or it was happening all along. Finally, there was no symmetry whose
violation would have quantum effects. As with entropy, the universe
doesn’t keep track of things like this: there is no conservation of wine
glasses any more than there is a conservation of Mr. Weinberg’s chairs.
Position is not conserved, nor is direction. Parity is a conservation of spin,
not of position or direction. Mr. Roe might as well claim that declination, or
lean, or comfort, or wakefulness, or hand position is conserved. Should we
monitor chin angles at this table as well, and sum them up relative to the Big
Bang?
Mr. Roe gives some
very short mathematics for the Goldstone boson getting “eaten up by the gauge field”
and thereby becoming massive, as follows:
L = (D_{β}φ)*(D^{β}φ) − μ^{2}φ*φ − λ(φ*φ)^{2} − (¼)F_{βν}F^{βν}
where F_{βν} = ∂_{ν}A_{β} − ∂_{β}A_{ν}; D_{β} = ∂_{β} − igA_{β};
and A_{β} → A_{β} + (1/g)∂_{β}α(x).
Let φ_{1} ≡ φ_{1}′ + ⟨0|φ_{1}|0⟩ ≡ φ_{1}′ + v; v = √(μ^{2}/λ),
and substitute. The new terms involving A are
(½)g^{2}v^{2}A^{ν}A_{ν} − gvA_{ν}∂^{ν}φ_{2}
He says: “The
first term is a mass term for A_{ν}. The field has acquired mass!” But the
mathematics suddenly stops. He chooses a gauge so that φ_{2} = 0,
which deletes the last term above. But then he switches to a verbal
description: “One started with a massive scalar field (one state), a massless
Goldstone boson (one state) and a massless vector boson (two polarization
states). After the transform there is a massive vector meson A^{μ},
with three states of polarization and a massive scalar boson, which has one
state. Thus, the Goldstone boson has been eaten up by the gauge field, which
has become massive”. But where is the A^{μ} in that derivation? Mr. Roe
has simply stated that the mass of the field is given to the bosons,
with no mathematics or theory to back up his statement. He has simply jumped from
A_{ν} to A^{μ} with no mathematics or physics in between!
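For what it is worth, the A² term quoted above can at least be checked symbolically. The following is our own toy illustration, using a single component A as a one-dimensional stand-in for A_ν and plain symbols for the field gradients; it verifies only the algebra of the mass term, not the physics being disputed.

```python
import sympy as sp

# Expand |D phi|^2 with phi = (phi1 + i*phi2)/sqrt(2) and phi1 = v + phi1',
# and verify that the claimed mass term (1/2) g^2 v^2 A^2 appears.
g, v, A = sp.symbols('g v A', real=True)
p1, p2 = sp.symbols('p1 p2', real=True)      # shifted field phi1' and phi2
dp1, dp2 = sp.symbols('dp1 dp2', real=True)  # stand-ins for their gradients

phi = ((v + p1) + sp.I * p2) / sp.sqrt(2)
dphi = (dp1 + sp.I * dp2) / sp.sqrt(2)       # stands for the derivative of phi
Dphi = dphi - sp.I * g * A * phi             # covariant derivative D = d - igA

kinetic = sp.expand(Dphi * sp.conjugate(Dphi))

# vacuum piece: set the fluctuations and gradients to zero
mass_term = kinetic.subs({p1: 0, p2: 0, dp1: 0, dp2: 0})
print(sp.simplify(mass_term - g**2 * v**2 * A**2 / 2))   # 0: mass term checks out
```

The term linear in A similarly collects to −gvA∂φ₂, the piece Mr. Roe deletes by his gauge choice.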
The mathematics for
positive vacuum expectation value is in section 21.3, of Mr. Weinberg’s book 
the crucial point being equation 21.3.27. This is where he simply inserts his
positive vacuum expectation value, by asserting that μ^{2} < 0,
making μ imaginary, and finding the positive vacuum value at the
stationary point of the Lagrangian. (In his book, Mr. Roe never held that
μ^{2} < 0.) This makes the stationary point of the Lagrangian undefined and basically implies
that the expectation values of the vacuum are also imaginary. These being
undefined and unreal, thus unbound, Mr. Weinberg is free to take any steps in
his “mathematics”. He can do anything he wants to. He therefore juggles the
“equalities” a bit more until he can get his vacuum value to slide into his
boson mass. He does this very ham-handedly, since his huge Lagrangian quickly
simplifies to M_{W} = vg/2, where v is the vacuum expectation value.
It may be remembered that g in weak theory is 0.65 (nearly ⅔), so that the boson
mass comes to nearly a third of v.
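The arithmetic is easy to verify. A minimal sketch using the values quoted in the text (g = 0.65, v ≈ 247 GeV):

```python
# Quick numeric check of the relation M_W = v*g/2 quoted above.
g = 0.65       # weak coupling constant quoted in the text
v = 247.0      # GeV, the vacuum expectation value quoted in the text
M_W = v * g / 2
print(M_W)     # about 80 GeV, roughly a third of v
```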
Mr. Weinberg
does play some tricks here, though he hides his tricks a bit better than Mr. Roe.
Mr. Roe gives up on the mathematics and just assigns his field mass to his
bosons. Weinberg skips the field mass and gives his vacuum energy right to his
boson, with no intermediate steps except going imaginary. Mr. Weinberg tries to
imply that his gauged mathematics is giving him the positive expectation value,
but it isn’t. Rather, he has cleverly found a weak point in his mathematics
where he can choose whatever value he needs for his vacuum input, and then
transfers that energy right into his bosons.
What is the
force of the weak force? In section 7.2 of his book, Mr. Roe says that “The
energies involved in beta decay are a few MeV, much smaller than the 80 GeV of
the W intermediate boson.” But by this he only means that the electrons emitted
have kinetic energies in that range. This means that, as a matter of energy,
the W doesn’t really involve itself in the decay. Just from looking at the
energy involved, no one would have thought it required the mediation of such a
big particle. Then why did Mr. Weinberg think it necessary to borrow 247 GeV
from the vacuum to explain this interaction? Couldn’t he have borrowed a far
smaller amount? The answer to this is that by 1968, most of the smaller mesons
had already been discovered. It therefore would have been foolhardy to predict
a weak boson with a weight capable of being discovered in the accelerators of
the time. The lighter particles had already been discovered, and
the only hope was to predict a heavy particle just beyond the current limits.
This is why the W had to be so heavy. It was a brilliant bet, and it paid off.
WHAT IS AN ELECTRON:
Now, let us examine the Lorentz
force law in the light of the above discussion. Since the theory is based on
electrons, let us first examine what is an electron! This question
is still unanswered, even though everything else about the electron, what it
does, how it behaves, etc., is common knowledge.
From the time electrons were first discovered, charged particles like the
protons and electrons have been arbitrarily assigned plus or minus signs to
indicate potential, but no real mechanism or field has ever been seriously
proposed. According to the electroweak theory, the current carrier of charge
is the messenger photon. But this photon is a virtual particle. It does not
exist in the field. It has no mass, no dimension, and no energy. In
electroweak theory, there is no mathematics to show a real field. The virtual
field has no mass and no energy. It is not really a field, since a field must
be continuous between two discrete boundaries. A stationary boat in
the deep ocean on a calm and cloudy night does not feel any force by itself. It can
only feel forces with reference to another body (including the dynamics of
the ocean) or land or sky. With no field to explain the atomic bonding, early
particle physicists had to explain the bond with the electrons. Even now, the
nucleus is not fully understood. Thus the bonding continues to be assigned to
the electrons. But is the theory correct?
The formation of an ionic bond proceeds when the cation, whose ionization
energy is low, releases some of its electrons to achieve a stable electron
configuration. But the ionic bond is used to explain the bonding of atoms and not ions. For instance, in the case of NaCl, it is a Sodium atom
that loses an electron to become a Sodium cation. Since the Sodium atom is
already stable, why should it need to release any of its electrons to achieve a “stable configuration” that makes it unstable? What causes it to
drop an electron in the presence of Chlorine? There is no answer. The problem
becomes even bigger when we examine it from the perspective of Chlorine. Why
should Chlorine behave differently? Instead of dropping an electron to become an ion, Chlorine adds electrons. Since as an atom
Chlorine is stable, why should it want to borrow an electron from Sodium to
become unstable? In fact, Chlorine cannot “want” an extra electron, because
that would amount to a stable atom “wanting” to be unstable. Once Sodium
becomes a cation, it should
attract a free electron, not Chlorine. So there is no reason for Sodium to
start releasing electrons. There is no reason for a free electron to move from
a cation to a stable atom like chlorine. But there are lots of reasons for
Sodium not to release electrons. Free
electrons do not move from cations to stable atoms.
This contradiction is sought to be explained by “electron affinity”. The
electron affinity of an
atom or molecule is defined as the amount of energy released when an electron
is added to a neutral atom or molecule to form a negative ion. Here affinity
has been defined by release of energy, which is an effect and not the cause! It
is said that ionic bonding will occur only if the overall energy change for the
reaction is exothermic. This implies that the atoms tend to release energy. But
why should they behave like that? The present theory tells us only that energy
is released during the bonding. But that energy could be released in any number
of mechanical scenarios, not necessarily due to electron affinity alone. Physicists have
no answer for this.
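The claimed exothermic bookkeeping can at least be illustrated numerically. A back-of-envelope sketch for NaCl using commonly quoted textbook values (the figures below are illustrative, not taken from the theory under discussion):

```python
# Rough energy balance for the gas-phase NaCl ionic bond, all energies in eV.
# The Coulomb term uses e^2/(4*pi*eps0) ~ 14.4 eV*Angstrom.
IE_Na = 5.14        # ionization energy of the sodium atom
EA_Cl = 3.61        # electron affinity of the chlorine atom
r = 2.36            # approximate gas-phase Na-Cl bond length, Angstroms
coulomb = 14.4 / r  # electrostatic attraction between the two ions

delta_E = IE_Na - EA_Cl - coulomb
print(delta_E)      # negative: the overall change is exothermic, as stated
```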
It is said that all elements tend to become noble gases, so that they
gain or lose electrons to achieve this. But there is no evidence for it. If
this logic is accepted, then Chlorine should want another electron to be more
like Argon. Hence it really should want another proton, because another
electron won’t make Chlorine into Argon. It will only make Chlorine an ion,
which is unstable. Elements do not destabilize themselves to become ions. On
the other hand, ions take on electrons to become atoms. It is the ions that
want to be atoms, not the reverse. If there is any affinity, it is for having
the same number of electrons and protons. Suicide is a misplaced human tendency
– not an atomic tendency. Atoms have no affinity for becoming ions. The theory
of ionic bonding suggests that the anion (an ion that is attracted to the anode
during electrolysis), whose electron affinity is positive, accepts the
negatively signed electrons to attain a stable electronic configuration, even
though the anion itself is negative! And nobody has pointed out such a contradiction! Elements do not
gain or lose electrons; they confine and balance the charge field around them,
to gain even more nuclear stability.
Current theory tells us only that atoms
must differ in electronegativity to bond, without explaining the cause
of such action. Electronegativity
cannot be measured directly. Given the current theory, it also does not
follow any logical pattern on the Periodic Table. It generally runs from a low
to a peak across the table with many exceptions (Hydrogen, Zinc, Cadmium,
Terbium, Ytterbium, and the entire 6th period, etc). To calculate Pauling
electronegativity for an element, it is necessary to have the data on the
dissociation energies of at least two types of covalent bonds formed by that
element. That is a post hoc definition.
In other words, the data has been used to formulate the “mathematics”. The
mathematics has no predictive qualities. It has no theoretical or mechanical
foundation. Before we define electronegativity, let us define what an
electron is. We will first explain the basic concept before giving practical
examples to prove the concepts.
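The post hoc character of the Pauling procedure can be seen by writing it out. A sketch assuming the usual geometric-mean form of Pauling's relation, with approximate illustrative bond energies for hydrogen and fluorine (the numbers are rough textbook values, used here only for illustration):

```python
import math

# Pauling inferred the electronegativity difference of A and B from measured
# bond dissociation energies (here in eV):
#   |chi_A - chi_B| = sqrt( E(A-B) - sqrt(E(A-A) * E(B-B)) )
def pauling_difference(e_ab, e_aa, e_bb):
    excess = e_ab - math.sqrt(e_aa * e_bb)   # "extra" ionic stabilization
    return math.sqrt(max(excess, 0.0))

# approximate dissociation energies: H-F, H-H, F-F
print(pauling_difference(5.9, 4.5, 1.6))    # roughly 1.8, the usual H/F gap
```

Note that the formula consumes measured dissociation energies, which is exactly the point made above: the data precedes the “mathematics”.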
Since the effect of force on a body sometimes appears as action at a
distance and since all action at a distance can only be explained by the
introduction of a field, we will first consider fields to explain these. If
there is only one body in a field, it reaches an equilibrium position with
respect to that field. Hence, the body does not feel any force. Only when
another body enters the field does the interaction affect it, and the change
is felt by both bodies. Hence any interaction, to be felt, must contain at
least two bodies separated by a field. Thus, all interactions are threefold
structures (one referral or relatively central structure, the other peripheral,
both separated by the field – we call it tribrit).
All bodies that take part in interactions are also threefold structures, as
otherwise there would not be a net charge for interaction with other bodies or
the field. Only in this way we can explain the effect of one body on the other
in a field. It may be noted that particles with electric charge create electric
fields that flow from higher concentration to lower concentration. When the
charged bodies are in motion, they generate a magnetic field that closes in on
itself. This motion is akin to that of a boat drifting downstream with the
river current, creating a bow shock effect in a direction
perpendicular to the direction of motion of the boat, which closes in due to
interaction with the static water.
All particles or bodies are discrete
structures that are confined within their dimension which differentiates their
“inner space” from their “outer space”. The “background structure” or the
“ground” on which they are positioned is the field. The boundaries between
particles and fields are demarcated by compact density variations. But what
happens when there is uniform density between the particles and the field – where
the particle melts into the field? The state is singular: indistinguishable
from locality to locality, and so uncommon and unusual to our experience as to be
indescribable – thus, unknowable. We call this state of uniform density (sama rasa) singularity (pralaya
– literally meaning approaching ultimate dissolution). We do not accept that
singularity is a point or region in spacetime in which gravitational forces
cause matter to have an infinite density – where the gravitational tides
diverge – because gravitational tides have never been observed. We do not
accept that singularity is a condition when equations do not give a valid
value, and can sometimes be avoided by using a different coordinate system,
because we have shown that division by zero leaves the number unchanged and
renormalization is illegitimate mathematics. Yet, in that state there can be no
numbers, hence no equations. We do not accept that events beyond the
Singularity will be stranger than science fiction, because at singularity,
there cannot be any “events”.
Some
physicists have modeled a state of quantum gravity beyond singularity and call
it the “big bounce”. Though we do not accept their derivation and their “mathematics”,
we agree in general with the description of the big bounce. They have
interpreted it as evidence for colliding galaxies. We refer to that state as
the true “collapse” and its aftermath. Law of conservation demands that for
every displacement caused by a force, there must be generated an equal and
opposite displacement. Since application of force leads to inertia, for every
inertia of motion, there must be an equivalent inertia of restoration. Applying
this principle to the second law of thermodynamics, we reach a state, where the
structure formation caused by differential density dissolves into a state of
uniform density, rather than degenerating to a state of maximum entropy. We call that state singularity. Since at that
stage there is no differentiation between the state of one point and any other
point, there cannot be any perception, observer or observable. There cannot be
any action, number or time. Even the concept of space comes to an end as there
are no discernible objects that can be identified and their interval described.
Since this distribution leaves the largest remaining uncertainty (consistent
with the constraints for observation), this is the true state of maximum entropy.
It is not a state of “heat death” or “state of infinite chaos”, because it is a
state mediated by negative energy.
Viewed from this light, we define
objects into two categories: macro objects that are directly perceptible (bhaava pratyaya) and quantum or micro
objects that are indirectly perceptible through some mechanism (upaaya pratyaya). The second category is
further divided into two categories: those that have differential density that
makes them perceptible indirectly through their effects (devaah) and those that form a part of the primordial uniform
density (prakriti layaah) making them
indiscernible. These are like the positive and negative energy states
respectively but not exactly like those described by quantum physics. This
process is also akin to the creation and annihilation of virtual particles
though it involves real particles only. We describe the first two states of the
objects and their intermediate state as “dhruva,
dharuna and dhartra” respectively.
When the universe reaches a state of
singularity as described above, it is dominated by the inertia of restoration.
The singular state (sama rasa)
implies that there is equilibrium everywhere. This equilibrium can be thought
of in two ways: universal equilibrium and local equilibrium. The latter implies
that every point is in equilibrium. Both the inertia of motion and inertia of
restoration cannot absolutely cancel each other, because in that event the
present state could never have been reached, as no action would ever have started.
Thus, it is reasonable to believe that there is a mismatch (kimchit shesha) between the two, which
causes the inherent instability (sishrhkshaa)
at some point. Inertia of motion can be thought of as negative inertia of
restoration and vice versa. When the singularity approaches, this inherent
instability causes the negative inertia of restoration to break the
equilibrium. This generates inertia of motion in the uniformly dense medium
that breaks the equilibrium over a large area. This is the single and primary
force that gives rise to the secondary, tertiary, and all other forces.
This interaction leads to a chain
reaction of breaking the equilibrium at every point over a large segment resembling
spontaneous symmetry breaking and density fluctuations followed by the
bow shock effect. Thus, the inertia of motion diminishes and ultimately ceases
at some point in a spherical structure. We call the circumference of this
sphere “naimisha”  literally meaning
controller of the circumference. Since this action measures off a certain volume
from the infinite expanse of uniform density, the force that causes it is
called “maayaa”, which literally
means “that by which (everything is) scaled”. Before this force operated, the
state inside the volume was the same as the state outside the volume. But once
this force operates, the density distributions inside and outside become totally
different. While the outside continues in the state of singularity, the
inside is chaotic. While at one level inertia of motion pushes ahead towards
the boundary, it is countered by the inertia of restoration causing nonlinear
interaction leading to density fluctuation. We call the inside stuff that
cannot be physically described “rayi”
and the force associated with it “praana”
– which literally means source of all displacements. All other forces are
variants of this force. As can be seen, “praana”
has two components revealed as inertia of motion and inertia of restoration,
which is similar in magnitude to inertia of motion in the reverse direction
from the center of mass. We call this second force “apaana”. The displacements caused by these forces are
unidirectional. Hence in isolation, they are not able to form structures.
Structure formation begins when both operate on “rayi” at a single point. This creates an equilibrium point (we call
it vyaana) around which the
surrounding “rayi” accumulate. We call
this mechanism “bhuti” implying
accumulation in great numbers.
When “bhuti” operates on “rayi”,
it causes density variation at different points leading to structure formation
through layered structures that leads to confinement. Confinement increases
temperature. This creates pressure on the boundary leading to operation of
inertia of restoration that tries to confine the expansion. Thus, these are not
always stable structures. Stability can be achieved only through equilibrium.
But this is a different type of equilibrium. When inertia of restoration dominates
over a relatively small area, it gives a stable structure. This is one type of
confinement that leads to the generation of the strong, weak, electromagnetic
interactions and radioactivity. Together we call these “Yagnya”, which literally means coupling (samgati karane). Over large areas, the distribution of such stable
structures can also bring in equilibrium equal to the primordial uniform
density. This causes the bodies to remain attached to each other from a
distance through the field. We call this force “sootra”, which literally means string. This causes the
gravitational interaction. Hence it is related to mass and inversely to
distance. In gravitational interaction, one body does not hold the other, but
the two bodies revolve around their barycenter.
When “Yagnya” operates at negative potential, i.e., “apaana” dominates over “rayi”,
it causes what is known as the strong nuclear interaction, which is confined
within the positively charged nucleus. Outside the confinement there is a
deficiency of negative charge, which is revealed as the positive charge. We
call this force “jaayaa”, literally
meaning that which creates all particles. This force acts in 13 different ways
to create all elementary particles (we are not discussing it now). But when “Yagnya” operates at positive potential,
i.e., “praana” dominates over “rayi”, it causes what is known as the
weak nuclear interaction. Outside the confinement there is a deficiency of
positive charge, which is revealed as a negative charge. This negative charge component
searches for complementary charge to attain equilibrium. This was reflected in
the Gargamelle bubble chamber, which photographed the tracks of a few electrons
suddenly starting to move. This has been described as the W boson. We call this
mechanism “dhaaraa” – literally
meaning sequential flow, since it starts a sequence of actions with
corresponding reactions (the so-called W^{+}, W^{-} and Z bosons).
Till this time, there is no
structure: it is only density fluctuation. When the above reactions try to
shift the relatively denser medium, the inertia of restoration is generated and
tries to balance between the two opposite reactions. This appears as charge (lingam), because in its interaction with
others, it either tries to push them away (positive charge – pum linga) or confine them (negative
charge – stree linga). Since this
belongs to a different type of reaction, the force associated with it is called
“aapah”. When the three forces of “jaayaa”, “dhaaraa” and “aapah” act
together, it leads to electromagnetic interaction (ap). Thus, electromagnetic interaction is not a separate force, but
only accumulation of the other forces. Generally, an electric field is so
modeled that it is directed away from a positive electric charge and towards a
negative electric charge that generated the field. Another negative electric
charge inside the generated electric field would experience an electric force
in the opposite direction of the electric field, regardless of whether the
field is generated by a positive or negative charge. A positive electric charge
in the generated electric field will experience an electric force in the same
direction as the electric field. This shows that the inherent characteristic of
a positive charge is to push away from the center to the periphery. We call
this characteristic “prasava”. The
inherent characteristic of a negative charge is to confine positive charge. We
call this characteristic “samstyaana”.
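The sign convention just described can be stated in a few lines. A minimal sketch of F = qE, with the field value chosen arbitrarily:

```python
# Force on a test charge q in a field E is F = q*E, so a negative charge is
# pushed against the field direction and a positive charge along it,
# whatever generated the field.
def electric_force(q, E):
    return q * E

E = 2.0                            # field pointing in +x (arbitrary units)
print(electric_force(+1.0, E))     # positive: pushed along the field
print(electric_force(-1.0, E))     # negative: pushed against the field
```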
Since electric current behaves in a bipolar way, i.e., stretching out,
whereas magnetic flow always closes in, there must be two different sources of
their origin, coupled by some other force. This is the
physical explanation of electromagnetic forces. Depending upon the temperature
gradient, we classify the electrical component into four categories (sitaa, peeta, kapilaa, atilohitaa) and
the magnetic forces into four corresponding categories (bhraamaka, swedaka, draavaka, chumbaka).
While explaining uncertainty, we had shown that if we want to get any
information about a body, we must either send some perturbation towards it to
rebound or measure the incoming radiation emitted by it through the intervening
field, where it gets modified. We had also shown that for every force applied
(energy released), there is an equivalent force released in the opposite
direction (corrected version of Mr. Newton’s third law). Let us take a macro
example first. Planets move more or less in the same plane around the Sun like
boats float on the same plane in a river (which can be treated as a field). The
river water is not static. It flows in a specific rhythm like the space
weather. When a boat passes, there is a bow shock effect in the water in front of
the boat and the rhythm is temporarily changed till reconnection of the
resultant wave. The water is displaced in a direction perpendicular to the
motion of the boat. However, the displaced water is pushed back by the water
surrounding it due to inertia of restoration. Thus, it moves backwards
relative to the boat, charting a curve. Maximum displacement of the curve is at the middle of
the boat.
We can describe this as if the boat is pushing the water away, while the
water is trying to confine the boat. The interaction will depend on the mass
and volume (that determines relative density) and the speed of the boat on the
one hand and the density and velocity of the river flow on the other. These two
can be described as the potentials for interaction (we call it saamarthya) of the boat and the river
respectively. The potential that starts the interaction first by pushing the
other is called the positive potential and the other that responds to this is
called the negative potential. Together they are called charge (we call it lingam). When the potential leads by
pushing the field, it is the positive charge. The potential that confines the
positive charge is the negative charge. In an atom, this negative potential is
called an electron. The basic cause for such potential is instability of
equilibrium due to the internal effect of a confined body. Their position
depends upon the magnitude of the instability, which explains the electron affinity
also. The consequent reaction is electronegativity.
The Solar system is inside a big bubble, which forms a part of its
heliosphere. The planets are within this bubble. The planets are individually
tied to the Sun through gravitational interaction. They also interact with each
other. In the boat example, the river flows within two boundaries and the
riverbed affects its flow. The boat acts with a positive potential. The river
acts with a negative potential. In the Solar system, the Sun acts with a positive
potential. The heliosphere acts with a negative potential. In an atom, the
protons act with a positive potential. The electrons act with a negative
potential.
While discussing Coulomb’s law we have shown that interaction between two
positive charges leads to explosive results. Thus, part of the energy of the protons
explodes like solar flares and tries to move out in different directions, which
are moderated by the neutrons in the nucleus and electron orbits in the
boundary. The point where the exploding radiation stops at the boundary makes
an impact on the boundary and becomes perceptible. This is called the electron.
Since the exploding radiation returns from there towards the nucleus, it is
said to have a negative potential. The number of protons determines the number
of explosions – hence the number of boundary electrons. Each explosion in one
direction is matched by another equivalent disturbance in the opposite
direction. This determines the number of electrons in the orbital. The neutrons
are like planets in the solar system. This is confined by the negative
potential of the giant bubble in the Solar system, which is the equivalent of
electron orbits in atoms. Since the flares appear at random directions, the
position of the electron cannot be precisely determined. In the boat example,
the riverbed acts like the neutrons. The extranuclear field of the atom is
like the giant bubble. The water near the boat that is most disturbed acts
similarly. The totality of electron orbits is like the heliosphere. The river
boundaries act similarly.
The electrons have no fixed position until one looks at them and the wave
function collapses (energy released). However, if one plots the various
positions of the electron after a large number of measurements, eventually one
will get a ghostly orbit-like pattern. The pattern of the orbit appears as
depicted below. This proves the above view.
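The “plot many measurements” picture can be simulated. A minimal sketch (our own illustration) that samples simulated position measurements from the standard hydrogen 1s radial density by rejection sampling; the samples cluster near the Bohr radius, tracing out the orbital pattern described:

```python
import math
import random

# Hydrogen 1s radial probability density: P(r) = 4 r^2 exp(-2r), r in Bohr radii.
# Its maximum is 4*exp(-2) ~ 0.54 at r = 1, so 0.55 bounds it for rejection.
def sample_radius(rng):
    while True:
        r = rng.uniform(0.0, 8.0)
        if rng.uniform(0.0, 0.55) < 4.0 * r * r * math.exp(-2.0 * r):
            return r

rng = random.Random(0)
samples = [sample_radius(rng) for _ in range(20000)]
print(sum(samples) / len(samples))   # near the analytic mean radius of 1.5
```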
The atomic radius is a term used to
describe the size of the atom, but there is no standard definition for this
value. Atomic radius may refer to the ionic radius, covalent radius, metallic
radius, or van der Waals radius. In all cases, the size of the atom is
dependent on how far out the electrons extend. Thus, electrons can be described
as the outer boundary of the atom that confines the atom. It is like the
“heliopause” of the solar system, which confines the solar system and
differentiates it from the interstellar space. There are well-defined planetary
orbits (like the electron shells), which lack a physical description except against
the backdrop of the solar system. This similarity
is only partial, as each atomic orbital admits up to two otherwise identical
electrons with opposite spin, but planets have no such companion (though the libration
points 1 and 2 or 4 and 5 can be thought of for comparison). The reason for
this difference is the differing distribution of mass (volume and density)
in the two systems.
Charge-neutral gravitational force
that arises from the center of mass (we call it Hridayam) stabilizes the inner (Sunward or nucleus-ward) space between
the Sun and the planet, and between the nucleus and the electron shells. The charged electric
and magnetic fields dominate the field (from the center to the boundary) and confine
and stabilize the interplanetary field or the extranuclear field (we call it
“Sootraatmaa”, which literally means “self-sustained
entangled strings”). While in the case of Sunplanet system, most of the mass
is concentrated at the center as one body, in the case of nucleus, protons and
neutrons with comparable masses interact with each other destabilizing the
system continuously. This affects the electron arrangement. The mechanism (we
call it “Bhuti”), the cause and the macro
manifestation of these forces and spin will be discussed separately.
We have discussed the electroweak theory earlier. Here it would suffice to
say that electrons are nothing but outer boundaries of the extranuclear space
and like the planetary orbits, have no physical existence. We may locate the
planet, but not its orbit. If we mark one segment of the notional orbit and
keep a watch, the planet will appear there periodically, but not always. However,
there is a difference between the two examples as planets are like neutrons. It
is well known that the solar wind originates from the Sun and travels in all
directions at great velocities towards the interstellar space. As it travels,
it slows down after interaction with the interplanetary medium. The planets
are positioned in specific orbits balanced by the solar wind, the average
density gradient of various points within the Solar system, and the average
velocity of the planet, besides another force that will be discussed while
analyzing Coulomb’s law.
We cannot measure both the position and momentum of the electron
simultaneously. Each electron shell is tied to the nucleus individually like
planets around the Sun. This is shown by the Lamb shift and the overlapping
of different energy levels. The shells are entangled with the nucleus like the
planets are not only gravitationally entangled with the Sun, but also with each
other. We call this mechanism “chhanda”,
which literally means entanglement.
Quantum theory now has 12 gauge
bosons, only three of which are known to exist, and only one of which has been
well linked to the electroweak theory. The eight gluons are completely
theoretical, and only fill slots in the gauge theory. But we have a different
explanation for these. We call these eight “Vasu”, which literally means “that which constitutes everything”. Interaction
requires at least two different units; each of the eight could interact with the
other seven. Thus, we have seven types of “chhandas”.
Of these, only three (maa, pramaa,
pratimaa) are involved in fixed dimension (dhruva), fluid dimension (dhartra)
and dimensionless particles (dharuna).
The primary difference between these bodies relates to density (apaam pushpam), which affects and is
affected by volume. A fourth “chhanda”
(asreevaya) is related to the
confining fields (aapaam). We will
discuss these separately.
We can now review the results of the double slit experiment and the
diffraction experiment in the light of the above discussion. Let us take a
macro example first. Planets move more or less in the same plane around the Sun,
like boats floating on the same plane in a river (which can be treated as a field).
The river water is not static. It flows in a specific rhythm like the space
weather. After a boat passes, there is a bow shock effect in the water and the
rhythm is temporarily changed till reconnection. The planetary orbits behave in
a similar way. The solar wind also behaves with the magnetosphere of planets in
a similar way. If we take two narrow angles and keep a watch for planets moving
past those angles, we will find a particular pattern of planetary movement. If
we could measure the changes in the field of the Solar system at those points,
we will also note a fixed pattern. It is like boats crossing a bridge with two
channels underneath. We may watch the boats passing through a specific channel
and the wrinkled surface of water. As the boats approach the channels, a
compressed wave precedes each boat. This wave will travel through both channels.
However, if the boats are directed towards one particular channel, then the
wave will proceed mostly through that channel. The effect on the other channel
will be almost nil showing fixed bands on the surface of water. If the boats
are allowed to move unobserved, they will float through either of the channels and
each channel will have a 50% chance of the boat passing through it. Thus, the
corresponding waves will show an interference pattern.
Something similar happens in the case of electrons and photons. The so-called photon has zero rest mass.
Thus, it cannot displace any massive particles, but flows through the particles
imparting only its energy to them. The space between the emitter and the slits
is not empty. Thus, the movement of the massless photon generates similar
reaction like the boats through the channels. Since the light pulse spherically
spreads out in all directions, it behaves like a water sprinkler. This creates
the wave pattern as explained below:
Let us consider a water
sprinkler in the garden gushing out water. Though the water is primarily forced
out by one force, other secondary forces come into play immediately. One is the
inertia of motion of the particles pushed out. The second is the interaction
between particles that are in different states of motion due to such
interactions with other particles. What we see is the totality of such
interactions with components of the stream gushing out at different velocities
in the same general direction (not in the identical direction, but in a narrow
band). If the stream of gushing water falls on a stationary globe that absorbs
the energy of the stream, the globe will rotate. This is because
the force is not enough to displace the globe from its position completely, but
only partially displaces its surface, which rotates the globe on its fixed axis.
Something similar happens when
the energy flows, generating a bunch of radiations of different wavelengths. If
it cannot displace the particle completely, the particle rotates at its
position, so that the energy “slips out” by it moving tangentially.
Alternatively, the energy moves one particle that hits the next particle. Since
energy always moves objects tangentially, when the energy flows by the particle, the particle is temporarily
displaced. It regains its position due to inertia of restoration (elasticity
of the medium) when other particles push it back. Thus, only the momentum is
transferred to the next particle, giving the energy flow a wave shape.
The diffraction experiment can
be compared to the boats being divided to pass in equal numbers through both
channels. The result would be the same. It will show an interference pattern. Since
the electron that confines positive charge behaves like the photon, it should
be massless.
It may be noted that the motion of the wave is always within a narrow
band and is directed towards the central line, which is the equilibrium
position. This implies that there is a force propelling it towards the central
line. We call this force inertia of restoration (sthitisthaapaka samskaara), which is akin to elasticity. The
bow-shock effect is a result of this inertia. But after reaching the central line,
it overshoots due to inertia of motion. The reason is that systems are
probabilistically almost always close to equilibrium, but transient
fluctuations to non-equilibrium states can be expected due to inequitable
energy distribution in the system and its environment, independently and
collectively. Once in a non-equilibrium state, it is highly likely that both
before and after that state the system was closer to equilibrium. All such
fluctuations are confined within a boundary. The electron provides this
boundary. The exact position of the particle cannot be predicted as it is
perpetually in motion. But it is somewhere within that boundary only. This is
the probability distribution of the particle. It may be noted that the particle
is at one point within this band at any given time and not smeared out in all
points. However, because of its mobility, it has the possibility of covering
the entire space at some time or the other. Since the position of the particle
could not be determined in one reading, a large number of readings are taken.
This is bound to give a composite result. But this doesn’t imply that such readings
represent the position of the particle at any specific moment or at all times before
measurement.
The “boundary conditions” can be satisfied by many different waves
(called harmonics – we call it chhanda) if each of those waves
has a position of zero displacement at the right place. These positions where
the value of the wave is zero are called nodes. (Sometimes two types of waves,
traveling waves and standing waves, are distinguished by whether the nodes of
the wave move or not.) If electrons behave like waves, then the wavelength of the electron must
“fit” into any orbit that it makes around the nucleus in an atom. This is the
“boundary condition” for a one electron atom. Orbits that do not have the
electron’s wavelength “fit” are not possible, because wave interference will
rapidly destroy the wave amplitude and the electron would not exist anymore.
This “interference” effect leads to discrete (quantized) energy levels for the
atom. Since light interacts with the atom by causing a transition between these
levels, the color (spectrum) of the atom is observed to be a series of sharp
lines. This is precisely the pattern of energy levels that are observed to
exist in the Hydrogen atom. Transitions between these levels give the pattern
in the absorption or emission spectrum of the atom.
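The “fit” condition described above is the standard de Broglie relation n·λ = 2πr; combined with λ = h/(mv) it reproduces Bohr’s quantized hydrogen levels. As a minimal numerical sketch (standard textbook constants and formulas, not the author’s own derivation):

```python
# Sketch of the standard Bohr/de Broglie result: the boundary condition
# n * wavelength = 2 * pi * r yields discrete levels E_n = -13.6 eV / n^2,
# and transitions between levels give the sharp spectral lines mentioned above.

RYDBERG_EV = 13.6057  # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84    # h*c in eV*nm, to convert photon energy to wavelength

def energy_level(n):
    """Energy of the n-th hydrogen level in eV (n = 1, 2, 3, ...)."""
    return -RYDBERG_EV / n**2

def transition_wavelength_nm(n_hi, n_lo):
    """Wavelength of the photon emitted in the n_hi -> n_lo transition."""
    delta_e = energy_level(n_hi) - energy_level(n_lo)  # positive, in eV
    return HC_EV_NM / delta_e

print(energy_level(1))                 # -13.6057 eV (ground state)
print(transition_wavelength_nm(3, 2))  # ~656 nm, the Balmer H-alpha line
```

The 3 → 2 transition lands on the familiar red hydrogen line, illustrating how the interference (“fit”) condition leads directly to the observed line spectrum.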
LORENTZ FORCE LAW
REVISITED:
In
view of the above discussion, the Lorentz force law becomes simple. Since
division by zero leaves the quantity unchanged, the equation remains valid and
does not become infinite for point particles. The equation shows massenergy
requirement for a system to achieve the desired charge density. But what about
the radius “a” for the point electron
and the 2/3 factor in the equation?
The
simplest explanation for this is that no one has measured the mass or radius of
the electron, though its charge has been measured. This has been divided by c² to get the hypothetical
mass. As explained above, this mass is not the mass of the electron, but the
required mass to achieve charge density equal to that of an electron shell,
which is different from that of the nucleus and the extra-nuclear field, like
the heliosheath that is the dividing line between the heliosphere and the
interstellar space. Just like solar radiation rebounds from termination shock,
emissions from the proton rebound from the electron shell, that is akin to the
stagnation region of the solar system.
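For reference, the standard classical (Abraham-Lorentz) relation in which both the radius a and a 2/3 factor appear is the electromagnetic mass of a spherical shell of charge e, which is presumably the equation the passage refers to (this is the textbook form, not the author’s own):

```latex
m_{\mathrm{em}} \;=\; \frac{2}{3}\,\frac{e^{2}}{4\pi\varepsilon_{0}\,a\,c^{2}}
```

Here an electrostatic self-energy divided by c² yields a mass, which matches the passage’s remark about dividing the measured charge quantity by c² to obtain a hypothetical mass; whether a is the electron’s radius or, as argued here, that of the associated nucleus is the author’s claim.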
The Voyager 1 spacecraft is now in a stagnation region in the outermost layer
of the bubble around our solar system, beyond the termination shock. Data obtained
from Voyager over the last year reveal the region near the termination shock to
be a kind of cosmic purgatory. In it, the wind of charged particles streaming
out from our sun has calmed, our solar system’s magnetic field is piled up, and
higher-energy particles from inside our solar system appear to be leaking out
into interstellar space. Scientists
previously reported the outward speed of the solar wind had diminished to zero marking
a thick, previously unpredicted “transition zone” at the edge of our solar
system. During this past year, Voyager’s magnetometer also detected a
doubling in the intensity of the magnetic field in the stagnation region. Like
cars piling up at a clogged freeway off-ramp, the increased intensity of the
magnetic field shows that inward pressure from interstellar space is compacting
it. At the same time, Voyager has detected a 100-fold increase in the intensity
of high-energy electrons from elsewhere in the galaxy diffusing into our solar
system from outside, which is another indication of the approaching boundary.
This is exactly what is
happening at the atomic level. The electron is like the termination shock at the
heliosheath that encompasses the “giant bubble” around the Solar system,
which is the equivalent of the extra-nuclear space. The electron shells are
like the stagnation region that stretches between the giant bubble and the
interstellar space. Thus, the radius a
in the Lorentz force law is that of the associated nucleus and not that of the electron.
The back reaction is the confining magnetic pressure of the electron on the
extra-nuclear field. The factor 2/3 is related to the extra-nuclear field,
which contributes to the Hamiltonian H_I. The remaining 1/3 is related
to the nucleus, which contributes to the Hamiltonian H_A. We call
this concept “Tricha saama”, which
literally means “tripled radiation field”. We have theoretically derived the
value of π from this principle. The effect of the electron that is felt outside
 like the bow shock effect of the Solar system  is the radiation effect,
which contributes to the Hamiltonian H_R. To understand the physical
implication of this concept, let us consider the nature of perception.
ALBEDO:
Before
we discuss perception of bare charge and bare mass, let us discuss the
modern notion of albedo. Albedo is
commonly used to describe the overall average reflection coefficient of an
object. It is the fraction of solar energy (shortwave radiation) reflected from
the Earth or other objects back into space. It is a measure of the reflectivity
of the earth’s surface. It is a
non-dimensional (unitless) quantity that indicates how well a surface reflects
solar energy. Albedo (α) varies between 0 and 1. A value of 0 means the surface
is a “perfect absorber” that absorbs all incoming energy. A value of 1 means
the surface is a “perfect reflector” that reflects all incoming energy. Albedo
generally applies to visible light, although it may involve some of the
infrared region of the electromagnetic spectrum.
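As a minimal numerical sketch of the definition above (function names are illustrative, not from the text), the reflection coefficient is simply the ratio of reflected to incident flux, bounded between 0 and 1:

```python
def albedo(reflected_flux, incident_flux):
    """Fraction of incoming (shortwave) radiation reflected back: 0..1.
    0 = perfect absorber, 1 = perfect reflector."""
    if incident_flux <= 0:
        raise ValueError("incident flux must be positive")
    a = reflected_flux / incident_flux
    if not 0.0 <= a <= 1.0:
        raise ValueError("reflected flux cannot exceed incident flux")
    return a

# A surface reflecting 300 of 1000 W/m^2 has albedo 0.3,
# close to Earth's bond albedo quoted in the table below.
print(albedo(300.0, 1000.0))  # 0.3
```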
Neutron albedo is the probability
under specified conditions that a neutron entering into a region through a
surface will return through that surface. Day-to-day variations of
cosmic-ray-produced neutron fluxes near the earth’s ground surface are measured
by using three sets of paraffin-moderated BF3 counters, which are installed at
different locations: 3 m above ground, at ground level, and 20 cm underground.
Neutron flux decreases observed by these counters when snow cover exists show
that there are upward-moving neutrons, that is, ground albedo neutrons near the
ground surface. The amount of albedo neutrons is estimated to be about 40
percent of the total neutron flux in the energy range 1-10^6 eV.
Albedos are of
two types: “bond albedo” (measuring total proportion of electromagnetic energy
reflected) and “geometric albedo” (measuring brightness when illumination comes
from directly behind the observer). The geometric albedo is defined as the
amount of radiation relative to that from a flat Lambert surface which is an
ideal reflector at all wavelengths. It scatters light isotropically; in other
words, an equal intensity of light is scattered in all directions; it doesn’t
matter whether you measure it from directly above the surface or off to the
side. The photometer will give you the same reading. The bond albedo is the
total radiation reflected from an object compared to the total incident
radiation from the Sun. The study of albedos, their dependence on wavelength,
lighting angle (“phase angle”), and variation in time comprises a major part of
the astronomical field of photometry.
The albedo of an
object determines its visual brightness when viewed with reflected light. A typical geometric ocean albedo is approximately 0.06, while
bare sea ice varies from approximately 0.5 to 0.7. Snow has an even higher
albedo at 0.9. It is about 0.04 for charcoal. There cannot be any
geometric albedo for gaseous bodies. The albedos of planets are tabulated
below:
Planet      Geometric Albedo   Bond Albedo
Mercury     0.138              0.119
Venus       0.84               0.75
Earth       0.367              0.29
Moon        0.113              0.123
Mars        0.15               0.16
Jupiter     n/a (gaseous)      0.343 +/- 0.032
Saturn      n/a (gaseous)      0.342 +/- 0.030
Pluto       0.44-0.61          0.4
The above table
shows some surprises. Generally, change in the albedo is related to temperature
difference. In that case, it should not be almost equal for both Mercury, which
is a hot planet being nearer to the Sun, and the Moon, which is a cold
satellite much farther from the Sun. In the case of Moon, it is believed that
the low albedo is caused by the very porous first few millimeters of the lunar
regolith. Sunlight can penetrate the surface and illuminate subsurface grains,
the scattered light from which can make its way back out in any direction. At
full phase, all such grains cover their own shadows; the dark shadows being covered
by bright grains, the surface is brighter than normal. (The perfectly full moon
is never visible from Earth. At such times, the moon is eclipsed. From the
Apollo missions, we know that the exact subsolar point  in effect, the
fullest possible moon  is some 30% brighter than the fullest moon seen from
earth. It is thought that this is caused by glass beads formed by impact in the
lunar regolith, which tend to reflect light in the direction from which it
comes. This light is therefore reflected back toward the sun, bypassing earth).
The above discussion
shows that the present understanding of albedo may not be correct. Ice and
snow, which are very cold, show a much higher albedo than ocean water. But
Mercury and the Moon show almost the same albedo even though their temperatures
differ widely. Similarly, if porosity is a criterion, ice occupies more volume
than water and is hence more porous. Then why should ice show a higher albedo
than water? Why should the Moon’s albedo be equal to that of Mercury, whose
surface appears metallic, whereas the Moon’s surface soil is brittle? The
reason is, if we heat up lunar soil, it will look metallic like Mercury. In
other words, geologically, both Moon and Mercury belong to the same class as if
they share the same DNA. For this reason, we generally refer to Mercury as the
offspring of Moon. The concept of albedo does not take into account the bodies that emit
radiation.
We can see
objects using solar or lunar radiation. But till it interacts with a body, we
cannot see the incoming radiation. We see only the reflective radiation – the
radiation that is reflected after interacting with the field set up by our
eyes. Yet, we can see both the Sun and the Moon that emit these radiations.
Based on this characteristic, objects are divided into four categories:
 - Radiation that shows self-luminous bodies such as stars and other similar bodies (we call it swajyoti). The radiation itself has no colors and is not perceptible to the eye. Thus, outer space is only black or white.
 - Reflected colorless radiation, like that of the Moon, which shows not only the emission from reflecting bodies (not the bodies themselves), but also other bodies (para jyoti).
 - Reflecting bodies that show a sharp change in reflectivity as a function of wavelength (which would occur if a planet had vegetation similar to that on Earth) and show themselves in different colors (roopa jyoti). Light that has reflected from a planet like Earth is polarized, whereas light from a star is normally unpolarized.
 - Non-reflecting bodies that do not radiate (ajyoti). These are dark bodies.
Of these, the
last category has 99 varieties including black holes and neutron stars.
BLACK HOLES, NEUTRON STARS ETC:
Before we
discuss dark matter and dark energy, let us discuss some more aspects
of the nature of radiation. X-ray emissions are treated as a signature of
black holes. Similarly, gamma ray bursts are also keenly watched by
astronomers. Gamma rays and X-rays are clubbed together at the short-wavelength
end of the electromagnetic radiation spectrum. However, in spite of some
similarities, their origins show a significant difference. While X-rays
originate from the electron shell region, gamma rays originate from deep inside
the nucleus. We call such emissions “pravargya”.
There
is much misinformation, speculation and sensationalization relating to black
holes, like the statement: “looking ahead inside a black hole, you will see the
back of your head”. Central to the present concept of black holes are the
singularities that arise as a mathematical outcome of General Relativity. The
modern concept of singularity does not create a “hole”. It causes all mass to
collapse to a single “point”, which in effect closes any “holes” that may
exist. A hole has volume and by definition, the modern version of singularity
has no volume. Thus, it is the opposite concept of a hole. We have shown that
the basic postulates of GR including the equivalence principle are erroneous.
We have also shown that division by zero leaves a number unchanged. The
zerodimensional point cannot enter any equations defined by cardinal or
counting numbers, which have extensions – hence represent dimensions. Since all
“higher mathematics” is founded on differential equations, there is a need to
relook at the basic concepts of Black holes.
Mr.
Einstein had tried to express GR in terms of the motion of “mass points” in
four dimensional space. But “mass points” is an oxymoron. Mass always has
dimension (the terms like supermassive black hole prove this). A point, by
definition has no dimension. Points cannot exist in equations because equations
show changes in the output when any of the parameters in the input is changed.
But there cannot be any change in the point except its position with reference
to an origin, which depicts length only. What GR requires is a sort of
renormalization, because the concept has been denormalized first. One must
consider the “field strength”. But the lack of complete field strength is
caused by trying to do afterthefact forced fudging of equations to contain
entities such as points that they cannot logically contain. The other
misguiding factor is the concept of “messenger particles” that was introduced
to explain the “attractive force”.
The mathematics of
General Relativity should be based on a constant differential that is not zero
and seek the motion of some given mass or volume. This mass or volume may be as
small as we like, but it cannot be zero. This causes several fundamental and
farreaching changes to the mathematics of GR, but the first of these changes
is of course the elimination of singularity from all solutions. Therefore the
central “fact” of the black hole must be given up. Whatever may be at the
center of a black hole, it cannot be a “singularity”.
Mr. Chandrasekhar
used Mr. Einstein’s field equations to calculate densities and accelerations
inside a collapsing superstar. His mathematics suggested the singularity at the
center, as well as other characteristics that are still accepted as defining
the black hole. Mr. Einstein himself contradicted Mr. Chandrasekhar’s
conclusions. Apart from using mass points in GR, Mr. Einstein made several
other basic errors that even Mr. Chandrasekhar did not correct and that are
still being perpetuated. One such error is the use of the term γ, which, as has been
explained earlier, really does not change anything except perception of the
object by different observers unrelated to the time evolution of the object
proper. Hence it cannot be treated as actually affecting the timeevolution of
the object. Yet, in GR, it affects both “x” and “t” transformations. In some
experimental situations γ is nearly correct. But in a majority of situations, γ
fails, sometimes very badly. Also γ is the main term in the mass increase equation.
To calculate volumes or densities in a field, one must calculate both radius
(length) and mass; and the term comes into play in both.
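For reference, the term γ discussed above is the standard Lorentz factor. A minimal sketch (the textbook formula, shown only to fix what γ denotes, not as an endorsement of either side of the argument):

```python
import math

def lorentz_gamma(v, c=299_792_458.0):
    """Standard Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    if abs(v) >= c:
        raise ValueError("gamma is undefined for |v| >= c")
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# In SR, gamma multiplies the relativistic mass and divides the contracted
# length, which is why it enters both the "x" and "t" transforms and, as the
# passage notes, both the radius and mass terms in any density calculation.
print(lorentz_gamma(0.0))                            # 1.0 (no effect at rest)
print(round(lorentz_gamma(0.6 * 299_792_458.0), 3))  # ~1.25 at v = 0.6c
```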
Yet, Mr. Einstein
had wrongly assigned several length and time variables in SR, giving them to
the wrong coordinate systems or to no specific coordinate systems. He skipped
an entire coordinate system, achieving two degrees of relativity when he
thought he had only achieved one. Because his x and t transforms were
compromised, his velocity transform was also compromised. He carried this error
into the mass transforms, which infected them as well. This problem then
infected the tensor calculus and GR. This explains the various anomalies and
variations and the so-called violations within Relativity. Since Mr. Einstein’s
field equations are not correct, Mr. Schwarzschild’s solution of 1916 is not
correct. Mr. Israel’s
non-rotating solution is not correct. Mr. Kerr’s rotating solution is not
correct. And the solutions of Messrs. Penrose, Wheeler, Hawking, Carter, and
Robinson are not correct.
Let
us take just one example. The black hole equations are directly derived from GR
 a theory that stipulates that nothing can equal or exceed the speed of light.
Yet the escape velocity of the black hole must equal or exceed the
speed of light in order to overcome it. In that case, all matter falling into a
black hole would instantaneously achieve infinite mass. It is not clear how
bits of infinite mass can be collected into a finite volume, increase in
density and then disappear into a singularity. In other words, the assumptions and
the mathematics that led to the theory of the black hole do not work inside the
created field. The exotic concepts like wormholes, tachyons, virtual particle
pairs, quantum leaps and non-linear i-trajectories in 11-dimensional
boson-massed fields in parallel universes, etc., cannot avoid this central
contradiction. It is not the laws of physics that break down inside a black
hole. It is the mathematics and the postulates of Relativity that break down.
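For context, the standard textbook relation behind the “escape velocity equals c” condition that this passage disputes is the Schwarzschild radius. A minimal sketch (the conventional GR formula, presented only as the claim under critique):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0    # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius at which the escape velocity reaches c: r_s = 2*G*M / c^2."""
    return 2.0 * G * mass_kg / C**2

# For one solar mass this is roughly 3 km: conventional theory says a star
# compressed inside this radius becomes a black hole.
print(schwarzschild_radius(M_SUN))  # ~2.95e3 m
```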
It is wrongly assumed
that matter that enters a black hole escapes from our universe. Mass cannot
exist without dimension. Even energy must have differential fluid dimension;
otherwise its effect cannot be experienced differently from others. Since the
universe is massive, it must have dimensions – inner space as differentiated
from outer space. Thus, the universe must be closed. The concept of expanding
universe proves it. It must be expanding into something. Dimension cannot be
violated without external force. If there is external force, then it will be
chaotic and no structure can be formed, as closed structure formation is
possible only in a closed universe. From atoms to planets to stars to galaxies,
etc., closed structures go on. With limited time and technology, we cannot
reach the end of the universe. Yet, logically like the atoms, the planets, the
stars, the galaxies, etc, it must be a closed one – hence matter cannot escape
from our universe. Similarly, we cannot enter another universe through the
black hole or singularity. If anything, it prevents us from doing so, as
anything that falls into a black hole remains trapped there. Thus the concept
of white holes or pathways to other dimensions, universes, or fields is a myth.
There has been no proof in support of these exotic concepts.
When
Mr. Hawking, in his A Brief History of Time says: “There are some solutions of
the equations of GR in which it is possible for our astronaut to see a naked
singularity: he may be able to avoid hitting the singularity and instead fall
through a wormhole and come out in another region of the universe”, he is
talking plain nonsense. He admits it in the next sentence, where he says
meekly: “these solutions may be unstable”. He never explains how it is possible
for any astronaut to see a naked singularity. Without giving any justification,
he says that any future Unified Field Theory will use Mr. Feynman’s
sum-over-histories. But Mr. Feynman’s renormalization trick in sum-over-histories
is to sum the particle’s histories in imaginary time rather than in real time. Hence Mr.
Hawking makes an assertion elsewhere that imaginary numbers are important
because they include real numbers and more. By implication, imaginary time
includes real time and more! These magical mysteries are good
selling tactics for fictions, but bad theories.
Black holes behave like
a blackbody – zero albedo. Now, let us apply the photoelectric effect to the
black holes – particularly those that are known to exist at the center of galaxies.
There is no dearth of high energy photons all around, and most would have
frequencies above the threshold limit. Thus, there should be continuous
ejection of not only electrons, but also X-rays. Some such radiations have
already been noticed by various laboratories and are well documented. The
flowing electrons generate a strong magnetic field around them, which appears as
sunspots on the Sun. Similar effects would be noticed in galaxies
also. The high intensity magnetic fields in neutron stars are well documented. Thus
the modern notion of black holes needs modification.
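The photoelectric relation invoked above is Einstein’s standard E = hν - φ: ejection occurs only above the threshold frequency. A minimal sketch (the standard formula; the numeric frequency and work-function values are illustrative assumptions, not from the text):

```python
H_EV_S = 4.135667e-15  # Planck's constant in eV*s

def photoelectron_energy_ev(freq_hz, work_function_ev):
    """Max kinetic energy of an ejected electron: E = h*f - phi (in eV).
    Returns None below the threshold frequency (no ejection)."""
    e = H_EV_S * freq_hz - work_function_ev
    return e if e > 0 else None

# Illustrative numbers: a 1.5e15 Hz photon against an assumed 2.0 eV
# work function ejects an electron; a 1.0e14 Hz photon does not.
print(photoelectron_energy_ev(1.5e15, 2.0))  # ~4.2 eV
print(photoelectron_energy_ev(1.0e14, 2.0))  # None: below threshold
```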
We posit that black
holes are not caused by gravity, but due to certain properties of heavier
quarks – specifically the charm and the strange quarks. We call these effects “jyotigouaayuh” and the reflected
sequence “gouaayuhjyoti” for
protons and other similar bodies like the Sun and planet Jupiter. For neutrons
and other similar bodies like the Earth, we call these “vaakgoudyouh” and “goudyouhvaak”
respectively. We will deal with it separately. For the present it would suffice
that, the concept of waves cease to operate inside a black hole. It is a long
tortuous spiral that leads a particle entering a black hole towards its center
(we call it vrhtra). It is dominated
by cool magnetic fields and can be thought of as real antimatter. When it
interacts with hot electric energy like those of stars and galaxies (we call it
Indra vidyut), it gives out
electromagnetic radiation that is described as matter and antimatter
annihilating each other.
Black holes
are identified by the characteristic intense X-ray emission activity in their
neighborhood, implying the existence of regions of negative electric charge. The
notion of black holes linked to singularity is self-contradictory, as a hole
implies a volume containing “nothing” in a massive substance, whereas the
concept of volume is not applicable to singularity. Any rational analysis of
the black hole must show that the collapsing star that creates it simply
becomes denser. This is possible only due to the “boundary” of the stars moving
towards the center, which implies dominance of negative charge. Since negative
charge flows “inwards”, i.e., towards the center, it does not emit any
radiation beyond its dimension. Thus, there is no interaction between the
object and our eyes or other photographic equipment. The radiation that fills
the intermediate space is not perceptible by itself. Hence it appears as black.
Since space is only black and white, we cannot distinguish it from its
surroundings. Hence the name black hole.
Electron shells
are a region of negative charge, which always flows inwards, i.e., towards the
nucleus. According to our calculation, protons carry a positive charge, which
is 1/11 less than an electron. But this residual charge does not appear outside
the atom as the excess negative charge flows inwards. Similarly, the black
holes, which are surrounded by areas with negative charge, are not visible.
Then how are the X-rays emitted? Again we have to go back to the Voyager data
to answer this question. The socalled event horizon of the black hole is like
the stagnation region in
the outermost layer of the bubble around stars like the Sun. Here, the magnetic
field is piled up, and higherenergy particles from inside appear to be leaking
out into interstellar space. The
outward speed of the solar wind diminishes to zero marking a thick “transition
zone” at the edge of the heliosheath.
Something similar happens with a black hole. A
collapsing star implies increased density with corresponding reduction in
volume. The density cannot increase indefinitely, because all confined objects
have mass and mass requires volume – however compact. It cannot lead to
infinite density and zero volume. There is no need to link these to
hypothetical tachyons, virtual particle pairs, quantum leaps and non-linear
i-trajectories in 11-dimensional boson-massed fields in parallel universes. On
the contrary, the compression of mass releases internal energy. The higher energy
particles succeed in throwing out radiation from the region of the negative
charge in the opposite direction, which appears as X-ray emissions. These
negative charges, in turn, accumulate positively charged particles from the
cosmic rays (we call this mechanism Emusha
varaaha) to create accretion discs that form stars and galaxies. Thus, we
find black holes inside all galaxies and may be inside many massive stars.
On the other
hand, gamma ray bursts are generated during supernova explosions. In this case,
the positively charged core explodes. According to Coulomb’s law, opposite
charges attract and like charges repel each other. Hence the question arises:
how does the supernova, or for that matter any star or even the nucleus,
generate the force to hold the positively charged core together? We will
discuss Coulomb’s law before answering this question.
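Coulomb’s law as invoked above can be sketched numerically (the standard formula with its usual sign convention; the numeric charges and separation are illustrative assumptions):

```python
K_COULOMB = 8.9875e9  # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19  # elementary charge, C

def coulomb_force(q1, q2, r):
    """Force between two point charges: F = k * q1 * q2 / r^2.
    Positive result = repulsion (like charges), negative = attraction."""
    if r <= 0:
        raise ValueError("separation must be positive")
    return K_COULOMB * q1 * q2 / r**2

# Two protons one femtometre apart repel with roughly 230 N - the force
# the question above asks the nucleus or supernova core to overcome.
print(coulomb_force(E_CHARGE, E_CHARGE, 1e-15))  # ~230 N
```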
PERCEPTION OF BARE MASS & BARE CHARGE:
Objects are perceived in
broadly two ways by the sensory organs. The ocular, auditory and psychological
functions related to these organs apparently follow action at a distance
principle (homogenous field interaction). We cannot see something very close to
the eye. There must be some separation between the eye and the object because
it needs a field to propagate the waves. The tactile, taste and olfactory
functions are always contact functions (discrete interaction). This is proved
by the functions of “mirror neurons”. Since the brain acts like a CPU joining
all databases, the responses are felt in other related fields of the brain
also. When we see an event without actually participating in it, our mental
activity shows as if we are actually participating in it. Such behavior of the
neurons is well established in medical science and psychology.
In the case of visual
perception, the neurons get polarized like the neutral object and create a
mirror image impression in the field of our eye (like we prepare a casting),
which is transmitted to the specific areas of brain through the neurons, where
it creates the opposite impression in the sensory receptacles. This impression
is compared with the stored memory of the objects in our brain. If the
impression matches, we recognize the object as such or note it for future
reference. This is how we see objects and not because light from the object
reaches our retina. Only a small fraction of the incoming light from the object
reaches our eyes, which can’t give full vision. We don’t see objects in the
dark because there is no visible range of radiation to interact with our eyes. Thus,
what we see is not the object proper, but the radiation emitted by it, which
comes from the area surrounding its confinement – the orbitals. The auditory
mechanism functions in a broadly similar way, though the exact mechanism is
slightly different.
But when we feel an
object through touch, we ignore the radiation because neither our eyes can
touch nor our hands can see. Here the mass of our hand comes in contact with
the mass of the object, which is confined. The same principle applies for our
taste and smell functions. Till the object and not the field set up by it
touches our tongue or nose (through convection or diffusion as against
radiation for ocular perception), we cannot feel the taste or smell. Mass has
the property of accumulation and spread. Thus, it joins with the mass of our
skin, tongue or nose to give its perception. This way, what we see is different from what we touch. These two are
described differently by the two perceptions. Thus we can’t get accurate inputs
to model a digital computer. From the above description, it is clear
that we can weigh and measure the dimensions of mass through touch, but cannot actually
see it. This is bare mass. Similarly, we can see the effect of radiation, but
cannot touch it. In fact, we cannot see the radiation by itself. This is bare
charge. These characteristics distinguish bare charge from bare mass.
DARK MATTER, DARK ENERGY, ETC:
Astrophysical
observations are pointing out to huge amounts of “dark matter” and “dark
energy” that are needed to explain the observed large scale structure and
cosmic dynamics. The emerging picture is a spatially flat, homogeneous Universe
undergoing the presently observed accelerated phase. Despite the good quality
of astrophysical surveys, commonly addressed as Precision Cosmology, the
nature and the nurture of dark energy and dark matter, which should constitute
the bulk of cosmological matter-energy, are still unknown. Furthermore, up till
now, no experimental evidence has been found at the fundamental level to explain the
existence of such mysterious components. Let us examine the necessity for
assuming the existence of dark matter and dark energy.
The three
Friedmann models of the Universe are described by the following equation:
H^{2} = (8πG/3)ρ − kc^{2}/R^{2} + Λ/3,
where the first term on the right-hand side represents matter density, the second curvature, and the third dark energy, and:
H = Hubble’s constant. ρ = matter density of the universe.
c = velocity of light. k = curvature of the Universe.
G = gravitational constant. Λ = cosmological constant.
R = radius of the Universe.
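A quick numerical check of the scales in this equation: the critical density separating open and closed cases is ρ_c = 3H^{2}/(8πG). A minimal sketch, assuming a sample Hubble constant of 70 km/s/Mpc (an illustrative value, not one given in the text):

```python
import math

G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22           # meters per megaparsec
H0 = 70.0 * 1000 / MPC    # assumed 70 km/s/Mpc, converted to 1/s

# Critical density: the value of rho that makes k = 0 and Lambda = 0
rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(rho_crit)           # ~9e-27 kg/m^3, a few hydrogen atoms per cubic meter
```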
In this
equation, ‘R’ represents the scale factor of the Universe, and H is Hubble’s
constant, which describes how fast the Universe is expanding. Every factor in
this equation is a constant and has to be determined from observations – not
derived from fundamental principles. These observables can be broken down into
three parts: gravity (which is treated as the same as matter density in
relativity), curvature (which is related to but different from topology) and
pressure or negative energy given by the cosmological constant that holds back
the speeding galaxies. Earlier it was generally assumed that gravity was the
only important force in the Universe, and that the cosmological constant was
zero. Thus, by measuring the density of matter, the curvature of the Universe
(and its future history) was derived as a solution to the above equation. New
data has indicated that a negative pressure, called dark energy, exists and the
value of the cosmological constant is nonzero. Each of these parameters can
close the expansion of the Universe in terms of turnaround and collapse.
Instead of treating the various constants in real numbers, scientists prefer
the ratio of the parameter to the value that matches the critical value between
open and closed Universes. For example, if the density of matter exceeds the
critical value, the Universe is assumed to be closed. These ratios are called
Omega (subscript M for matter, Λ for the cosmological constant, k for
curvature). For reasons related to the physics of the Big Bang, the sum of the
various Omega is treated as equal to one. Thus: Ω_{M} + Ω_{Λ} + Ω_{k} = 1.
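The closure relation can be used to deduce any one Omega from the other two; a minimal sketch with assumed sample values (the 0.3/0.7 split is illustrative only, not taken from the text):

```python
omega_m = 0.3        # assumed matter density parameter (illustrative)
omega_lambda = 0.7   # assumed cosmological-constant parameter (illustrative)

# Closure condition: Omega_M + Omega_Lambda + Omega_k = 1
omega_k = 1.0 - omega_m - omega_lambda
print(omega_k)       # ~0, i.e. a spatially flat Universe
```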
The three primary
methods to measure curvature are luminosity, scale length and number.
Luminosity requires an observer to find some standard ‘candle’, such as the
brightest quasars, and follow them out to high redshifts. Scale length
requires that some standard size be used, such as the size of the largest
galaxies. Lastly, number counts are used where one counts the number of
galaxies in a box as a function of distance. To date, all these methods have
been inconclusive because the brightness, size and number of galaxies change
with time in ways that cosmologists have not yet figured out. So far, the
measurements are consistent with a flat Universe, which is popular for
aesthetic reasons. Thus, the curvature Omega is expected to be zero, allowing
the rest to be shared between matter and the cosmological constant.
To measure the
value of matter density is a much more difficult exercise. The luminous mass of
the Universe is tied up in stars. Stars are what we see when we look at a
galaxy, and it is fairly easy to estimate the amount of mass tied up in
self-luminous bodies like stars, planets, satellites and assorted rocks that reflect
the light of stars and gas that reveals itself by the light of stars. This
contains an estimate of what is called the baryonic mass of the Universe, i.e.
all the stuff made of baryons – protons and neutrons. When these numbers are
calculated, it is found that Ω for baryonic mass is only 0.02, which shows a
very open Universe that is contradicted by the motion of objects in the
Universe. This shows that most of the mass of the Universe is not seen, i.e.
dark matter, which makes the estimate of Ω_{M}
to be much too low. So this dark matter has to be properly accounted for in all
estimates: Ω_{M} = Ω_{baryons} + Ω_{dark matter}.
Gravity is
measured indirectly by measuring motion of the bodies and then applying Newton’s law of gravitation.
The orbital period of the Sun around the Galaxy gives a mean mass for the
amount of material inside the Sun’s orbit. But a detailed plot of the orbital
speed of the Galaxy as a function of radius reveals the distribution of mass
within the Galaxy. Some scientists describe the simplest type of rotation as
wheel rotation. Rotation following Kepler’s 3rd law is called planet-like or
differential rotation. In this type of rotation, the orbital speed falls off
as one goes to greater radii within the Galaxy. To determine the rotation curve
of the Galaxy, stars are not used, due to interstellar extinction. Instead,
21-cm maps of neutral hydrogen are used. When this is done, one finds that the
rotation curve of the Galaxy stays flat out to large distances, instead of
falling off. This has been interpreted to mean that the mass of the Galaxy
increases with increasing distance from the center.
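The contrast between planet-like (Keplerian) falloff and the observed flat rotation curve can be sketched numerically; the mass, speed and radii below are illustrative placeholders, not measured values:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_INNER = 2.0e41   # assumed mass interior to the Sun's orbit, kg (illustrative)

def v_keplerian(r):
    """Planet-like rotation: if all mass lies inside r, speed falls as r^-0.5."""
    return math.sqrt(G * M_INNER / r)

def mass_enclosed_flat(v, r):
    """A flat rotation curve v = const implies enclosed mass M(r) = v^2 r / G."""
    return v * v * r / G

r1, r2 = 2.5e20, 5.0e20   # two galactocentric radii, m
v_flat = 2.2e5            # assumed constant orbital speed, m/s

assert v_keplerian(r2) < v_keplerian(r1)   # Keplerian speed drops with radius
# a flat curve forces the enclosed mass to grow linearly with radius:
assert math.isclose(mass_enclosed_flat(v_flat, r2),
                    2 * mass_enclosed_flat(v_flat, r1))
```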
There is very
little visible matter beyond the Sun’s orbital distance from the center of the
Galaxy. Hence the rotation curve of the Galaxy indicates a great deal of mass. But
there is no light out there indicating massive stars. Hence it is postulated
that the halo of our Galaxy is filled with a mysterious dark matter of unknown
composition and type.
The equation Ω_{M} + Ω_{Λ} + Ω_{k} = 1 appears tantalizingly
similar to Mr. Fermi’s description of the three-part Hamiltonian for the
atom: H = H_{A} + H_{R} + H_{I}. Here, H corresponds to 1. Ω_{M}, which represents matter
density, is similar to H_{A}, the bare mass explained earlier. Ω_{Λ}, which represents the
cosmological constant, is similar to H_{R}, the radiating bare charge. Ω_{k}, which represents the curvature
of the universe, is similar to H_{I}, the interaction. This indicates,
as Mr. Mason A. Porter and Mr. Predrag Cvitanovic had found out, that the macro
and the micro worlds share the same sets of mathematics. Now we will explain
the other aberrations.
Cosmologists
tell us that the universe is homogeneous on the average, if it is considered on
a large scale. The number of galaxies and the density of matter turn out to be
uniform over sufficiently great volumes, wherever these volumes may be taken.
What this implies is that, the overall picture of the recessing cosmic system
is observed as if “simultaneously”. Since the density of matter decreases because
of the cosmological expansion, the average density of the universe can only be
assumed to be the same everywhere provided we consider each part of the
universe at the same stage of expansion. That is the meaning of
“simultaneously”. Otherwise, a part would look denser, i.e., “younger”, and
another part less dense, i.e., “older”, depending on the stage of expansion we
are looking at. This is because light propagates at a fixed velocity. Depending
upon our distance from the two areas of observation, we may actually be looking,
at the same time, at objects in different stages of evolution. The uniformity of
density can only be revealed if we can take a snapshot of the universe. But
the rays that are used for taking the snapshot have finite velocities. Thus, they
can get the signals from distant points only after a time lag. This time lag
between the Sun and the earth is more than 8 minutes. In the scale of the
Universe, it would be billions of years. Thus, the “snapshot” available to us
will reveal the Universe at different stages of evolution, which cannot be
compared for density calculations. By observing the farthest objects – the
Quasars  we can know what they were billions of years ago, but we cannot know
what they look like now.
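The time lag is simply distance divided by the speed of light; for the Sun–Earth distance quoted above:

```python
C = 2.998e8     # speed of light, m/s
AU = 1.496e11   # mean Sun-Earth distance, m

lag_minutes = (AU / C) / 60
print(round(lag_minutes, 1))   # ~8.3 minutes, the lag mentioned in the text
```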
Another property
of the universe is said to be its general expansion. In the 1930s, Mr. Edwin
Hubble obtained a series of observations that indicated that our Universe began
with a Creation event. Observations since the 1930s show that clusters and
superclusters of galaxies, at distances of 100–300 megaparsecs (Mpc),
are moving away from each other. Hubble discovered that all galaxies have a
positive redshift. By registering the light from distant galaxies, it has
been established that the spectral lines in their radiation are shifted to the
red part of the spectrum. The farther the galaxy, the greater the redshift!
Thus, the farther the galaxy, the greater its velocity of recession, creating an
illusion that we are right at the center of the Universe. In other words, all
galaxies appear to be receding from the Milky Way.
By the
Copernican principle (we are not at a special place in the Universe), the
cosmologists deduce that all galaxies are receding from each other, or we live
in a dynamic, expanding Universe. The expansion of the Universe is described by
a very simple equation called Hubble’s law: the velocity of recession v of a galaxy is equal to a constant H times its distance d (v
= Hd), where the constant H, called
Hubble’s constant, relates distance to velocity (commonly quoted in kilometers per second per megaparsec).
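Hubble’s law in code form; the law itself, v = Hd, is as stated above, while H0 = 70 km/s/Mpc is an assumed sample value:

```python
H0 = 70.0   # assumed Hubble constant, km/s per megaparsec (illustrative)

def recession_velocity(distance_mpc):
    """Hubble's law: v = H * d. Distance in Mpc, velocity in km/s."""
    return H0 * distance_mpc

print(recession_velocity(100))   # a galaxy 100 Mpc away recedes at 7000 km/s
```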
The problem of
dark matter and dark energy arose after the discovery of receding galaxies, which
was interpreted as a sign that the universe is expanding. We posit that all
galaxies appear to be receding from the Milky Way because they are moving with
different velocities while orbiting the galactic center. Just like some planets
in the solar system appearing to move away at a faster rate than others
due to their motion around the Sun at different distances with different velocities,
the galaxies appear to be receding from us. On cosmic scales, the observation
period since the 1930s is negligible and cannot give any true indication of the nature of
such recession. The recent findings support this view.
This cosmological principle – one of the
foundations of the modern understanding of the universe – has come into
question recently as astronomers find subtle but growing evidence of a special
direction in space. The first and most well-established data point comes from
the cosmic microwave background (CMB), the so-called afterglow of the big bang.
As expected, the afterglow is not perfectly smooth – hot and cold spots speckle
the sky. In recent years, however, scientists have discovered that these spots
are not quite as randomly distributed as they first appeared – they align in a
pattern that points out a special direction in space. Cosmologists have
theatrically dubbed it the “axis of evil”. More hints of a cosmic arrow come
from studies of supernovae, stellar cataclysms that briefly outshine entire
galaxies. Cosmologists have been using supernovae to map the accelerating
expansion of the universe. Detailed statistical studies reveal that supernovae
are moving even faster in a line pointing just slightly off the axis of evil.
Similarly, astronomers have measured galaxy clusters streaming through space at
a million miles an hour toward an area in the southern sky. This proves our
theory.
Thus, the mass
density calculation of the universe is wrong. As we have explained in various
forums, gravity is not a single force, but a composite force of seven. The
seventh component closes in the galaxies. The other components work in pairs
and can explain the Pioneer anomaly, the deflection of Voyager beyond Saturn’s
orbit and the Flyby anomalies. We will discuss this separately.
Extending the
principle of bare mass further, we can say that from quarks to “neutron stars”
and “black holes”, the particles or bodies that exhibit strong interaction;
i.e., where the particles are compressed too close to each other or less than
10^{−15} m apart, can be called bare mass bodies. It must be remembered that the strong
interaction is charge independent: for example, it is the same for neutrons as
for protons. It also varies in strength between quarks and protons/neutrons. Further,
the masses of the quarks show wide variations. Since mass is confined field,
stronger confinement must be accompanied with stronger back reaction due to
conservation laws. Thus, the outer negatively charged region must emit its
signature intense x-rays in black holes and strangeness in quarks. Since similar
proximity as in protons/neutrons is seen in black holes also, it is
reasonable to assume that strong force has a macro equivalent. We call these
bodies “Dhruva” – literally meaning
the pivot around which all mass revolves. This is because, be they quarks,
nucleons or black holes, they are at the center of all bodies. They are not
directly perceptible. Hence it is dark matter. It is also bare mass without
radiation.
When the
particles are not too close together, i.e., intermediate between that for the
strong interaction and the electromagnetic interaction, they behave differently
under weak interaction. The weak interaction has distinctly different
properties. This is the only known interaction where violation of parity
(spatial symmetry), and violation of the symmetry (between particles and
antiparticles) has been observed. The weak interaction does not produce bound
states (nor does it involve binding energy) – something that gravity does on an
astronomical scale, the electromagnetic force does at the atomic level, and the
strong nuclear force does inside nuclei. We call these bodies “Dhartra” – literally meaning that which
induces fluidity. It is the force that constantly changes the relation between
“inner space” and “outer space” of the particle without breaking its dimension.
Since it causes fluidity, it helps in interactions with other bodies. It is
also responsible for Radio luminescence.
There
are other particles that are not confined in any dimension. They are bundles of
energy that are intermediate between the dense particles and the permittivity
and permeability of free space – bare charge. Hence they are always unstable.
Dividing them by c^{2} does not indicate their mass, but it indicates
the energy density against the permittivity and permeability of the field,
i.e., the local space, as distinguished from “free space”. They can move out
from the center of mass of a particle (gati)
or move in from outside (aagati),
when they are called its antiparticle. As we have already explained, the bare
mass is not directly visible to naked eye. The radiation or bare charge per se
is also not visible to naked eye. When it interacts with any object, then only
that object becomes visible. When the bare charge moves in free space, it
illuminates space. This is termed as light. Since it is not a confined dense
particle, but moves through space like a wave moving through water, its effect
is not felt on the field. Hence it has zero mass. For the same reason, it is
its own antiparticle.
Some scientists
link electric charge to permittivity and magnetism to permeability.
Permittivity of a medium is a measure of the amount of charge at a given
voltage it can take, or how much
resistance is encountered when forming an electric field in the medium.
Hence materials with high permittivity are used as capacitors. Since addition
or release of energy leads the electron to jump to a higher or a lower orbit,
permittivity is also linked to rigidity of a substance. The relative static permittivity or dielectric
constant of a solvent is a relative measure of its polarity, which is often
used in chemistry. For example, water (very polar) has a dielectric constant of
80.10 at 20 °C, while n-hexane (very non-polar) has a dielectric constant of
1.89 at 20 °C. This information is of great value when designing separation processes.
Permeability of
a medium is a measure of the magnetic flux it exhibits when the amount of
charge is changed. Since magnetic field lines surround the object effectively
confining it, some scientists remotely relate it to density. This may be highly
misleading, as permeability is not a
constant. It can vary with the position in the medium, the frequency of the
field applied, humidity, temperature, and other parameters, such as the
strength of the magnetic field, etc. Permeability of vacuum is treated as 1.2566371×10^{−6}
(μ_{0}); the same as that of hydrogen, even though susceptibility χ_{m }(volumetric SI)
of vacuum is treated as 0, while that of hydrogen is treated as −2.2×10^{−9}. Permeability of air is taken as 1.00000037.
This implies vacuum is full of hydrogen only.
This is wrong
because only about 81% of the cosmos consists of hydrogen and 18% helium. The
temperature of the cosmic microwave background is about 2.73 K, while that of
the interiors of galaxies goes to millions of kelvin. Further, molecular hydrogen occurs in two isomeric
forms. One with its two proton spins aligned parallel to form a triplet state
(I = 1, α_{1}α_{2}, (α_{1}β_{2} + β_{1}α_{2})/2^{1/2},
or β_{1}β_{2} for which M_{I} = 1, 0, −1 respectively)
with a molecular spin quantum number of 1 (½+½). This is called orthohydrogen.
The other with its two protonspins aligned antiparallel form a singlet (I =
0, (α_{1}β_{2} – β_{1}α_{2})/2^{1/2} M_{I}
= 0) with a molecular spin quantum number of 0 (½−½). This is called
parahydrogen. At room temperature and thermal equilibrium, hydrogen consists
of 25% parahydrogen and 75% orthohydrogen, also known as the “normal form”.
The equilibrium ratio of orthohydrogen to
parahydrogen depends on temperature, but because the orthohydrogen form is an
excited state and has a higher energy than the parahydrogen form, it is
unstable. At very low temperatures, the equilibrium state is composed almost
exclusively of the parahydrogen form. The liquid and gas phase thermal
properties of pure parahydrogen differ significantly from those of the normal
form because of differences in rotational heat capacities. A molecular form
called protonated molecular hydrogen, or H^{+}_{3}, is found in
the interstellar medium, where it is generated by ionization of molecular
hydrogen from cosmic rays. It has also been observed in the upper atmosphere of
the planet Jupiter. This molecule is relatively stable in the environment of
outer space due to the low temperature and density. H^{+}_{3}
is one of the most abundant ions in the Universe. It plays a notable role in
the chemistry of the interstellar medium. Neutral triatomic hydrogen H_{3}
can only exist in an excited form and is unstable.
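The temperature dependence of the ortho/para equilibrium described above follows from Boltzmann-weighted rotational states: ortho pairs with odd J (spin weight 3), para with even J. A sketch using the textbook rotational temperature of H₂ (≈ 87.6 K); the implementation details are ours, not the text’s:

```python
import math

THETA_ROT = 87.6   # rotational temperature of H2 in kelvin (textbook value)

def ortho_para_ratio(temp_k, j_max=20):
    """Equilibrium ortho:para ratio. Ortho (spin triplet, weight 3) couples
    to odd rotational levels J; para (singlet) couples to even J."""
    boltz = lambda j: (2*j + 1) * math.exp(-THETA_ROT * j * (j + 1) / temp_k)
    odd = sum(boltz(j) for j in range(1, j_max, 2))
    even = sum(boltz(j) for j in range(0, j_max, 2))
    return 3 * odd / even

print(round(ortho_para_ratio(300), 2))   # ~2.99: the 75:25 "normal" mixture
print(ortho_para_ratio(20) < 0.01)       # almost pure parahydrogen when cold
```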
COULOMB’S LAW REVISITED:
In
the 18^{th} century, before the modern concepts of atomic and subatomic
particles were known, Mr. Charles Augustin de Coulomb set up an experiment
using the early version of what we call a Torsion Balance to observe how
charged pith balls reacted to each other. These pith balls represented point
charges. Point charges are charged bodies that are very small when
compared to the distance between them. Mr. Coulomb observed two behaviors about
electric force:
- The magnitude of the electric force between two point charges is directly proportional to the product of the charges.
- The magnitude of the electric force between two point charges is inversely proportional to the square of the distance between them.
The general
description of Coulomb’s law overlooks
some important facts. The pith balls are spherical in shape. Thus, he got
an inverse square law because the spheres emit a spherical field, and a
spherical field must obey the inverse square law, because the density of
spherical emission must fall off inversely with the square of the distance. The second
oversight is the emission field itself. It is a real field with its own
mechanics, real photons and real energy as against a virtual field with virtual
photons and virtual energy used in QED, QCD and QFT where quanta can emit
quanta without dissolving in violation of conservation laws. If the electromagnetic
field is considered to be a real field with real energy and mass equivalence,
all the mathematics of QED and QFT would fail. In the 1830s, Faraday assumed that the
“field” was nonphysical and nonmechanical and QED still assumes this. The
Electromagnetic field, like the gravitational field, obeys the inverse square
law because the objects in the field from protons to stars are spheres. Coulomb’s
pith balls were spheres. The field emitted by these is spherical. The field
emitted by protons is also spherical. This determines the nature of charges and
forces.
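The geometric argument above can be checked directly: emission spread over a sphere of radius r has density total/(4πr²), so doubling the distance quarters the density. A minimal sketch:

```python
import math

def flux_density(total_emission, r):
    """Spherical emission: density = total / surface area of a sphere of radius r."""
    return total_emission / (4 * math.pi * r**2)

d_near = flux_density(1.0, 1.0)
d_far = flux_density(1.0, 2.0)
assert abs(d_near / d_far - 4.0) < 1e-12   # inverse square: 2x distance, 1/4 density
print("spherical emission obeys the inverse square law")
```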
As
we have repeatedly pointed out, multiplication implies nonlinearity. It also
implies two dimensional fields. A
medium or a field is a substance or material which carries the wave. It is a
region of space characterized by a physical property having a determinable
value at every point in the region. This means that if we put something
appropriate in a field, we can then notice “something else” out of that field,
which makes the body interact with other objects put in that field in some specific
ways, that can be measured or calculated. This “something else” is a type of
force. Depending upon the nature of
that force, the scientists categorize the field as gravity field, electric
field, magnetic field, electromagnetic field, etc. The laws of modern physics
suggest that fields represent more than the possibility of the forces being
observed. They can also transmit energy and momentum. A light wave is a
phenomenon that is completely defined by fields. We posit that like a particle,
the field also has a boundary, but unlike a particle, it is not a rigid
boundary. Also, its intensity or density gradient falls off with distance. A
particle interacts with its environment as a stable system – as a whole. Its
equilibrium is within its dimensions. It is always rigidly confined till its
dimension breaks up due to some external or internal effect. A field, on the
other hand, interacts continuously with its environment to bring in uniform
density – to bring in equilibrium with the environment. These are the distinguishing
characteristics that are revealed in fermions (we call these satyam) and bosons (we call these rhtam) and explain superposition of
states.
From
the above description, it is apparent that there are two types of fields: One
is the universal material field in which the other individual energy subfields
like electric field, magnetic field,
electromagnetic field, etc appear as variables. We call these variable subfields
as “jaala” – literally meaning a net.
Anything falling in that net is affected by it. The universal material field
also is of two types: stationary fields where only impulses and not particles
or bodies are transmitted and mobile fields where objects are transmitted. The other
category of field explains conscious actions.
Coulomb’s law states that the electrical force
between two charged objects is directly proportional to the product of the
quantity of charge on the objects and is inversely proportional to the square
of the distance between the centers of the two objects. The interaction between
charged objects is a noncontact force which acts over some distance of
separation. In equation form, Coulomb’s law is stated as: F = kQ_{1}Q_{2}/d^{2},
where Q_{1} represents the quantity of charge on one object in
Coulombs, Q_{2} represents the quantity of charge on the other object
in Coulombs, and d represents the distance between the centers of the two
objects in meters. The symbol k is the proportionality constant
known as the Coulomb’s law constant. To find the electric force on one
atom, we need to know the density of the electromagnetic field said to be
mediated by photons relative to the size of the atom, i.e. how many photons are
impacting it each second and sum up all these collisions. However, there is a
difference in this description when we move from micro field to macro field.
The interactions at the micro level are linear – up and down quarks or protons
and electrons in equal measure. However, different types of molecular bonding
make these interactions nonlinear at macro level. So a charge measured at the
macro level is not the same as a charge measured at the quantum level.
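The equation itself is easy to evaluate; the charges and separation below are illustrative (two elementary charges one ångström apart), using the standard value of Coulomb’s constant:

```python
K = 8.9875e9   # Coulomb's law constant, N m^2 / C^2

def coulomb_force(q1, q2, d):
    """Magnitude of the electric force between two point charges, in newtons."""
    return K * abs(q1 * q2) / d**2

E_CHARGE = 1.602e-19                              # elementary charge, C
print(coulomb_force(E_CHARGE, E_CHARGE, 1e-10))   # ~2.3e-8 N at 1 angstrom
```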
It is interesting to note that according to the
Coulomb’s law equation, interaction between a charged particle and a neutral
object (where either Q_{1} or Q_{2}
= 0) is impossible as in that case the equation becomes meaningless. But
it goes against everyday experience. Any charged object – whether positively charged or negatively charged – has
an attractive interaction with a neutral object. Positively charged objects and
neutral objects attract each other; and negatively charged objects and neutral
objects attract each other. This also shows that there are no charge neutral
objects, and the so-called charge neutral objects are really objects in which
both the positive and the negative charges are in equilibrium. Every
charged particle is said to be surrounded by an electric field – the area in which the charge exerts a
force. This implies that in charge neutral objects, there is no such field –
hence no electric force should be experienced. It is also said that particles
with nonzero electric charge interact with each other by exchanging photons,
the carriers of the electromagnetic force. If there is no field and no
force, then there should be no interaction – hence no photons. This presents a
contradiction.
Charge in Coulomb’s law has been defined in terms
of coulombs. One coulomb is one ampere-second. Electrostatics describes
stationary charges. Flowing charges are electric currents. Electric current is
defined as a measure of the amount of electrical charge transferred per unit
time through a surface (the cross section of a wire, for example). It is also
defined as the flow of electrons. This means that it is a summed up force
exerted by a huge number of quantum particles. It is measured at the macro
level. The individual charge units belong to the micro domain and cannot be
measured.
Charge has not
been specifically defined except that it is a quantum number carried by a
particle which determines whether the particle can participate in an
interaction process. This is a vague definition. The degree of interaction is
determined by the field density. But density is a relative term. Hence in
certain cases, where the field density is more than the charge or current
density, the charge may not be experienced outside the body. Such bodies are
called charge neutral bodies. Introduction of a charged particle changes the
density of the field. The so-called charge neutral body reacts to such change
in field density, if it is beyond a threshold limit. This limit is expressed as
the proportionality constant in
Coulomb’s law equation. This implies that, a charged particle does not generate an electric field, but
only changes the intensity of the field,
which is experienced as charge. Thus, charge is the capacity of a particle to
change the field density, so that other particles in the field experience the
change. Since such changes lead to combining of two particles by redistribution
of their charge to affect a third particle, we define charge as the creative
competence (saamarthya sarva bhaavaanaam).
Electric current
is the time rate of change of charge (I = dQ/dt). Since charge is measured in
coulombs and time is measured in seconds, an ampere is the same as a coulomb
per second. This is an algebraic relation, not a definition. The ampere is that
constant current, which, if maintained in two straight parallel conductors of
infinite length and of negligible circular cross-section, placed one meter
apart in vacuum, would produce between these conductors a force equal to
2 × 10^{−7} newton per meter of length. This means that the
coulomb is defined as the amount of charge that passes through an almost flat
surface (a plane) when a current of one ampere flows for one second. If the breadth
of the so-called circular cross-section is not negligible, i.e., if it is not a
plane or a field, this definition will not be applicable. Thus, currents flow
in planes or fields only. Electric current is not a vector quantity, as it does
not flow in free space through diffusion or radiation in a particular direction
(like muon or tau respectively). Current is a scalar quantity as it flows only
through convection towards lower density – thus, within a fixed area – not in
any fixed direction. The ratio of current to area for a given surface is the
current density. Despite being the ratio of two scalar quantities, current
density is treated as a vector quantity, because its flow is dictated according
to fixed laws by the density and movement of the external field. Hence, it is
defined as the product of charge density and velocity for any location in
space.
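The 2 × 10^{−7} newton-per-meter figure in the ampere definition above follows from the standard force law between long parallel currents, F/L = μ₀I₁I₂/(2πd); a quick check:

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, N/A^2

def force_per_meter(i1, i2, d):
    """Force per unit length between two long parallel currents d meters apart."""
    return MU0 * i1 * i2 / (2 * math.pi * d)

# One ampere in each wire, one meter apart, as in the SI definition quoted above
print(force_per_meter(1.0, 1.0, 1.0))   # ~2e-7 newton per meter
```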
The factor d^{2} shows that it depends on
the distance between the two bodies, which can be scaled up or down. Further,
since it is a second order term, it represents a twodimensional field. Since
the field is always analogous, the only interpretation of the equation is that
it is an emission field. The implication of this is that it is a real field with
real photons with real energy and not a virtual field with virtual photons or messenger
photons described by QED and QCD,
because that would violate conservation laws: a quantum cannot emit a virtual
quantum without first dissolving itself. Also, complex terminology and
undefined terms like Hamiltonians,
tensors, gauge fields, complex operators, etc., cannot be applied to real fields. Hence,
either QED and QCD are wrong or Coulomb’s
law is wrong. Alternatively, either one or the other or both have to be
interpreted differently.
Where the external field remains
constant, the interaction between two charges is reflected as the nonlinear
summation (multiplication) of the effect of each particle on the field. Thus,
if one quantity is varied, to achieve the same effect, the other quantity must
be scaled up or down proportionately. This brings in the scaling constant,
which is termed k – the proportionality constant relative
to the macro density. Thus, Coulomb’s law gives the correct results. But
this equation will work only if the two charges are contained in spherical
bodies, so that the area and volume of both can be scaled up or down uniformly by
varying the diameter of each. Coulomb’s constant can be related to the
Bohr radius. Thus, in reality, it is not a constant, but a variable. This also shows that the charges are
emissions in a real field and not mere abstractions. However, this does not
prove that same charge repels and opposite charges attract.
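The proportional-scaling claim above can be illustrated directly from Coulomb’s law: doubling the separation cuts the force by a factor of four, so quadrupling one charge restores the original force. A minimal sketch (the charge and distance values are arbitrary illustrations):

```python
import math

def coulomb_force(q1, q2, d, k=8.9875517923e9):
    """Coulomb's law, F = k*q1*q2/d^2, with k in N*m^2/C^2 (CODATA value)."""
    return k * q1 * q2 / d**2

f1 = coulomb_force(1e-6, 1e-6, 0.1)
# Doubling the separation divides the force by 4; scaling one charge by 4 restores it.
f2 = coulomb_force(4e-6, 1e-6, 0.2)
print(math.isclose(f1, f2))  # True
```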
The interpretation of Coulomb’s law that same charge repels played a big
role in postulating the strong interaction. Protons exist in the nucleus
at very close quarters. Hence they should have a strong repulsion. Therefore it
was proposed that an opposite force overwhelmed the charge repulsion. This
confining force was called the strong force. There is no direct proof of its
existence. It is still a postulate. To make this strong force work, it had to
change very rapidly, i.e., it should turn on only at nuclear distances, but
turn off at the distance of the first orbiting electron. Further, it should be
a confining force that did not affect electrons. Because the field had to
change so rapidly (i.e., have such a high flux), it had to get extremely
strong at even smaller distances. Logically, if it got weaker so fast at
greater distances, it had to get stronger very fast at smaller distances. In
fact, according to the equations, it would approach infinity at the size of the
quark. This didn’t work in QCD, since the quarks needed their freedom. They
could not be infinitely bound, since this force would not agree with experimental
results in accelerators. Quarks that were infinitely bound could not break up
into mesons.
To calculate the flux, one must
calculate how the energy of the field approaches the upper limit. This upper
limit is called an asymptote. An asymptote is normally a line on a graph
that represents the limit of a curve. Calculating the approach to this limit
can be done in any number of ways. Mr. Lev Landau, following the principles of
QED, developed a famous equation to find what is now called a Landau pole – the
energy at which the force (the coupling constant) becomes infinite. Mr. Landau
found this pole or limit or asymptote by subtracting the bare electric charge e
from the renormalized or effective electric charge e_{R}:
1/e_{R}^{2} − 1/e^{2} = (N/6π^{2}) ln(Λ/m_{R})
The value for bare electric charge e has been obtained by one
method and the value for effective electric charge e_{R} has
been obtained by another method to match the experimental value. One value is subtracted
from the other to find a momentum over a mass (which is, of course, a
velocity). If we keep the renormalized variable e_{R} constant,
we can find where the bare charge becomes singular. Mr. Landau
interpreted this to mean that the coupling constant had gone to infinity at
that value, and called that energy the Landau pole. In any given experiment,
the electron has one and only one charge, so that either e or e_{R}
must be incorrect. No one has ever measured the “bare charge”. It has
never been experimentally verified. All experiments show only the effective
charge. Bare charge is a mathematical assumption. If two mathematical
descriptions give us two different values, both cannot be correct in the
same equation. Hence either the original mathematics or the renormalized
mathematics must be wrong. Thus Mr. Landau has subtracted an incorrect value
from a correct value, to achieve real physical information because first he had
denormalized the equation by inventing the infinity! We have already shown the
fallacies inherent in this calculation while discussing division by zero and
Lorentz force law. Thus, Göckeler et al. (arXiv:hep-th/9712244v1) found that:
“A detailed study of the relation between bare and renormalized quantities
reveals that the Landau pole lies in a region of parameter space which is made
inaccessible by spontaneous chiral symmetry breaking”.
It is interesting to note that the charge of
electron has been measured by the oil drop experiment, but the charge of
protons and neutrons have not been measured as it is difficult to isolate them.
Historically, proton has been assigned charge of +1 and neutron charge zero on
the assumption that the atom is charge neutral. But the fact that most elements
exist not as atoms, but molecules, shows that the atoms are not charge neutral.
We have theoretically derived the charges of quarks as −4/11 and +7/11 instead
of the generally accepted values of −1/3 and +2/3. This makes the charges of protons and
neutrons +10/11 and −1/11 respectively. This implies that both proton and
neutron have a small amount of negative charge (−1 + 10/11 = −1/11) and the atom as a
whole is negatively charged. This residual negative charge is not felt, as it
is directed towards the nucleus.
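The arithmetic behind these totals can be verified with exact fractions, assuming the standard quark content (proton = uud, neutron = udd) and the text’s proposed values of +7/11 for the up quark and −4/11 for the down quark:

```python
from fractions import Fraction

# Quark charges proposed in the text (standard values are +2/3 and -1/3).
up = Fraction(7, 11)
down = Fraction(-4, 11)

proton = 2 * up + down   # uud composition assumed
neutron = up + 2 * down  # udd composition assumed

print(proton, neutron)  # 10/11 -1/11
```

The proton–neutron difference remains exactly 1, matching the conventional charge assignments’ difference of (+1) − 0.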
According to our theory,
only same charges attract. Since the proton and electron combined have the same
charge as the neutron, they coexist as stable structures. Already we have
described the electron as like the termination
shock at heliosheath that encompasses the “giant bubble” encompassing the Solar
system, which is the macro equivalent of the extranuclear space. Thus, the
charge of electron actually is the strength of confinement of the extranuclear
space. Neutron behaves like the solar system within the galaxy – a star
confined by its heliospheric boundary. However, the electric charges (−1/11 for
proton + electron and −1/11 for neutron) generate a magnetic field within the atom. This doubling
in the intensity of the magnetic field in the stagnation region, i.e., boundary
region of the atom, behaves like cars piling up at a clogged freeway off-ramp.
The increased intensity of the magnetic field generates inward pressure from
interatomic space, compacting it. As a result, there is a 100-fold increase in
the intensity of highenergy electrons from elsewhere in the field diffusing
into the atom from outside. This leads to 13 different types of interactions
that will be discussed separately.
When bare
charges interact, they interact in four different ways, as follows:
- Total (equal) interaction between positive and negative charges does not change the basic nature of the particle, but only increases their mass number (pushtikara).
- Partial (unequal) interaction between positive and negative charges changes the basic nature of the particle by converting it into an unstable ion searching for a partner to create another particle (srishtikara).
- Interaction between two negative charges does not change anything (nirarthaka) except an increase in magnitude when flowing as a current.
- Interaction between two positive charges becomes explosive (vishphotaka), leading to a fusion reaction at the micro level or a supernova explosion at the macro level, with its consequent release of energy.
Since both
protons and neutrons carry a residual negative charge, they do not explode, but
coexist. But in a supernova, only positively charged particles are squeezed
into a small volume, forcing them to interact. As explained above, they can only
explode. But this explosion brings the individual particles into contact with the
surrounding negative charge. Thus, higher elements from iron onwards are
created in such explosion, which is otherwise impossible.
CONCLUSION:
The micro and
the macro replicate each other. Mass and energy are not convertible at macro
and quantum levels, but are inseparable complements. They are convertible only
at the fundamental level of creation (we call it jaayaa). Their inter se density determines whether the local
product is mass or energy. While mass can be combined in various proportions,
so that there can be various particles, energy belongs to only one category,
but appears differently because of its different interaction with mass. When
both are in equilibrium, it represents the singularity. When singularity
breaks, it creates entangled pairs of conjugates that spin. When such
conjugates envelop a state resembling singularity, it gives rise to other pairs
of forces. These are the five fundamental forces of Nature – gravity that
generates weak and electromagnetic interaction, which leads to strong
interaction and radioactive disintegration. Separately we will discuss in
detail the superposition of states, entanglement, seven-component gravity and
fractional (up to 1/6) spin. We will also discuss the correct charge of quarks
(the modern value has an error component of 3%) and derive it from fundamental
principles. From this we will theoretically derive the value of the fine
structure constant (7/960 at the so-called zero-energy level and 7/900 at 80
GeV level).
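The claimed value 7/960 can be compared directly against the measured low-energy fine-structure constant, α ≈ 1/137.036. A minimal sketch of that comparison (the measured value is the CODATA figure; the deviation computed is an observation about the two numbers, not a claim made in the text):

```python
from fractions import Fraction

alpha_measured = 1 / 137.035999  # CODATA low-energy fine-structure constant
claim_low = Fraction(7, 960)     # value claimed in the text at "zero energy"

rel_dev = abs(float(claim_low) - alpha_measured) / alpha_measured
print(f"7/960 deviates from the measured alpha by {rel_dev:.2%}")  # 0.08%
```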