BEGINNING OF THE UNIVERSE.
WHAT is the Universe? Is it all existing matter and space
considered as a whole (the cosmos), or is it a particular sphere of activity or
experience? It cannot be the former, because we cannot prove that there is
nothing beyond what we have seen to date; hence we cannot claim that we see all
existing matter and space. Thus, it has to be a particular sphere of activity
or experience. In other words, it is a function of space-time. Since Einstein,
space and time have been fused into one, both rightly and wrongly. Rightly, because
both space and time are infinities, and infinities coexist; hence space and time
coexist. They are inseparable like the two hemispheres of a cosmic brain, joined into
a single entity: space-time, but not spacetime, because, like the two
hemispheres of the brain, they have totally different properties and cannot be
one and the same entity. For example, space implies the interval between two
objects, which can be either fixed or variable in time. But time implies the
interval between two events, which is uniform throughout the universe: a second
is the same interval here as in any corner of the universe. Time
flows, but there is no proof that space also flows. All observations prove the
contrary – space has fixed coordinates, whereas time has no fixed coordinates –
it is the same flow everywhere. Thus, time is beyond the confines of space.
CONUNDRUMS RELATING TO SPECIAL RELATIVITY.
Special Relativity has confused most people. A man standing
on the platform will see another man on a receding train as shrinking, though
the man on the train will not experience this; he, in turn, will see the man on the
platform as shrinking, which is also not true. Thus, SR describes appearance –
not reality. Otherwise, the photon should not move at all, since length contraction
would reduce its spatial extent to zero, which is synonymous with non-existence at
the here-now.
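The reciprocity described above can be sketched numerically with the standard length-contraction formula L = L0/γ; the 0.8c speed and the metre rods are assumed illustrative values, not figures from the text:

```python
import math

def gamma(v):
    """Lorentz factor for speed v given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v ** 2)

def contracted_length(proper_length, v):
    """Length an observer measures for a rod moving past at speed v."""
    return proper_length / gamma(v)

v = 0.8   # assumed illustrative speed, as a fraction of c
L0 = 1.0  # proper length of either observer's metre rod

# Each observer measures the OTHER's rod as contracted, by the same factor:
platform_sees_train = contracted_length(L0, v)
train_sees_platform = contracted_length(L0, v)
print(round(platform_sees_train, 3), round(train_sees_platform, 3))  # 0.6 0.6
```

The symmetry of the two measurements is the point at issue: each observer attributes the contraction to the other, which is why the text reads it as appearance rather than reality.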
According to relativity, gravitational mass is always equivalent
to inertial mass. No one knows why there should be two or more mass terms.
In principle there is no reason why m_i = m_g: why should
the gravitational charge and the inertial mass be equal? The gravitational mass m_g is said
to produce and respond to gravitational fields; it supplies the mass
factor in the inverse square law of gravitation, F = G m1 m2 / r^2.
The inertial mass m_i supplies the mass factor in Newton’s 2nd
Law, F = ma. If the weight of a particle is proportional to g, say F = kg (because the weight depends on its
gravitational mass m_g), and its acceleration is a, then according to Newton’s law, ma = kg. Since, according to GR, g = a,
combining both we get m = k. Here m is the so-called “inertial mass” and k is the
“gravitational mass”. But the problem is the difference between the values of G
(constant – though it might be changing: doi:10.1103/PhysRevLett.111.101102)
and g (known to be variable). Alternatively, the inertial mass measures “inertia”,
while the gravitational mass is the coupling strength to the gravitational
field. The gravitational mass plays the same role as the electric charge for electromagnetic
interactions, the color charge for strong interactions and the particle flavor
for weak interactions. Inertial mass m_i is the mass in Newton’s law F = m_i a.
Gravitational mass m_g is the coupling strength in Newton’s law of gravitation:
F_g = (G m1 / r^2) × m_g. Thus, m_i a = F_g = (G m1 / r^2) × m_g.
The quantity G m1 / r^2 is the “gravitational field” (say g) and m_g
is the “gravitational charge”, so that one can write m_i × a = m_g × g, just as we
write m_i × a = q × E for the electric field. This has nothing to do with the Brout-Englert-Higgs
mechanism.
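The bookkeeping above can be made concrete with a minimal Newtonian sketch; the Earth values are standard published figures and the test masses are assumed for illustration. It shows that the free-fall rate drops out of the mass only because m_i = m_g:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def free_fall_acceleration(m_inertial, m_grav):
    """a = F_g / m_i, with F_g = (G M / r^2) * m_g."""
    field = G * M_EARTH / R_EARTH**2   # the 'gravitational field' g
    force = field * m_grav             # gravity couples to the charge m_g
    return force / m_inertial          # inertia resists via m_i

# With m_i == m_g, every body falls at the same rate:
print(round(free_fall_acceleration(1.0, 1.0), 2))        # 9.82 m/s^2
print(round(free_fall_acceleration(1000.0, 1000.0), 2))  # 9.82 m/s^2
# If the two masses could differ, the rate would depend on the body:
print(round(free_fall_acceleration(1.0, 2.0), 2))        # 19.64 m/s^2
```

This is exactly the m_i × a = m_g × g relation written out, with G m1/r^2 playing the role of the field.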
Mass is the locked-up energy of a body. Essentially, it is
the amount of resistance that a physical object offers to any change in its motion,
including resistance to acceleration or to directional
changes. This type of mass is called ‘inertial mass’. The EP states that the
effect of gravity does not depend on the nature or internal structure of a
body. Again, according to the same theory, an object would need infinite kinetic
energy to reach the speed of light, because light has the limiting
velocity; thus the object would have infinite mass as well. An object
increases its ‘mass’ as it speeds up: as its speed increases, the
amount of energy it has also increases, and since mass and energy are treated as
interchangeable in relativity, this energy is referred to as an ‘increase in mass’. Other
objects need an external agency to supply energy for acceleration. But
photons, which move at the limiting velocity, do not require such an external
agency; they inherently possess this energy. Hence, they should have infinite
mass. This is contrary to evidence.
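The divergence invoked above follows from the relativistic kinetic-energy formula E_k = (γ − 1)mc². A short sketch, with an assumed 1 kg test body and illustrative speeds:

```python
import math

C = 299792458.0  # speed of light, m/s

def kinetic_energy(m, v):
    """Relativistic kinetic energy (gamma - 1) * m * c^2, in joules."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2

m = 1.0  # assumed 1 kg test body
for frac in (0.5, 0.9, 0.99, 0.999999):
    print(frac, kinetic_energy(m, frac * C))
# The energy grows without bound as v -> c: no finite amount of energy can
# push a massive body to the speed of light. A photon, with zero rest mass,
# is simply not described by this formula.
```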
Gravity does not couple to the “gravitational mass” but
rather to the Ricci Tensor, which works only if space-time is flat. The Ricci
Tensor does not provide a full description in more than three dimensions. The Schwarzschild
equations for black holes, where space-time is extremely curved, use the
Riemann Tensor. Using the Riemann tensor instead of the Ricci tensor to calculate the
energy-momentum tensor in 3+1 dimensions would not lead to any meaningful
results, though in most cases the Riemann Tensor is needed before one can
determine the Ricci Tensor. Thus, there is really no relation between
“gravitational mass” and “inertial mass”, except in Newtonian physics. This is
why photons (with zero inertial mass) are affected by gravity. Only
manipulations of the Standard Model (SM) to include classical gravity (field
theory in curved spacetime) lead to effects like Hawking radiation and the
Unruh effect. This is where gravitation and the SM can hypothetically meet.
When answering a question, one should first determine the framework.
If we assume nothing then there can be no answer. However, if we take as given
that we are going to formulate theories in terms of Lagrangians then there is
essentially only one mass parameter that can appear, i.e., the coefficient of
the quadratic term. Thus, whatever mass there is, it is only one mass. The
Higgs field clearly modifies the on-shell condition in flat space and general
relativity simply says that anyone whose frame is locally flat should reproduce
the same result. Thus, the Higgs field appears to modify the gravitational
mass. It may also modify the inertial mass by the same amount as can be verified
by analyzing some scattering diagrams. However, knowing that we are working
within the context of a Lagrangian theory, the fact that inertial and
gravitational mass are equal is essentially a foregone conclusion.
Similarly, the idea of an astronaut travelling to outer space
at very high speed and coming back younger is misplaced. When on Earth, the
fluids in the human body are distributed unevenly because of gravity. Most
fluid pools in the lower extremities, leaving very little fluid in the top of
the body. But if we go to space, in the first few weeks most astronauts appear
to have a puffy head and skinny legs. The fluid in their bodies redistributes
evenly when gravity is not playing a significant role in their biological
systems. After some time in orbit, the body adapts to the new distribution of
fluids and the astronauts do not appear as puffy – it self-regulates. In the
near zero relative gravity of space, muscles are not needed to support the
body. Instead of maintaining the usual base of muscle mass needed for life on
Earth, astronauts’ bodies tend to get rid of unnecessary tissues. Astronauts
have to exercise for two hours a day on the space station to maintain a healthy
amount of muscle mass. The exercise also helps prevent bone-density loss. Each
month, astronauts could lose up to 1 percent of their bone density if they do
not get enough exercise.
According to a report published in the journal PLOS
ONE (DOI: 10.1371/journal.pone.0106207),
there is a large discrepancy between physiological and functional thresholds,
about which we should be cautious when preparing for exposure to low-gravity
fields. The physiological threshold for perceiving linear acceleration in the up
and down directions has been estimated at 15
percent of Earth’s gravity – nearly equal to the Moon’s gravity. The perception
of up-down is determined not only by gravity, but also visual information, when
available, and assumptions about the orientation of the body. Here on Earth,
plants and animals are exposed to the same amount of gravity as human beings.
Yet their bodies function as if they were in space, distributing body fluids in
an organized manner. If the flow were in the direction of growth, then humans
should be reptiles, with body mass distributed downward like a fluid. How can this be
explained?
All our biological functions are powered by heart and lungs
that pump blood and oxygen. Once the heart starts beating in the mother’s womb,
the process continues perpetually till death. How did the initial heartbeat,
which is a sign of consciousness, begin? We measure blood pressure to know the
rate and pace at which blood is pumped by the heart. This is a deterministic
mechanical process leading to chain reactions throughout the body. Can the
operations of organisms be described by physical laws, which are probabilistic?
A mechanical replacement of an organ depends on the adaptability of the host
organism. Can it be described by pure mechanics? Can we place a heart in a
robot to make it alive? Is that life? Without answering these questions, we
cannot claim that an astronaut who moves at a faster pace becomes younger than
those who move slowly.
The GPS shows time dilation not because of relativity, but
because of refraction of light while moving through different strata of the
atmosphere and outer space, where density fluctuations vary the speed of light.
IS SPACETIME FLUID?
In physics, a fluid is a substance that continually deforms
(flows) under an applied shear stress. Fluids are a phase of matter and include
liquids, gases and plasmas. Liquids form a free surface (that is, a surface
not created by the container), while gases do not.
But it turns out that if you write down the equations for small
wiggles in a medium – such as sound waves in a fluid – then the equations look
exactly like those of waves in a curved background.
Yes, that’s right. Sometimes, waves in fluids behave like
waves in a curved space-time; they behave like waves in a gravitational field.
Fluids, therefore, can be used to simulate gravity. And that’s some awesome
news because this correspondence between fluids and gravity allows physicists
to study situations that are otherwise experimentally inaccessible; for
example, what happens near a black hole horizon or during the rapid expansion
of the early universe.
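The fluid-gravity correspondence can be illustrated with a hedged toy model of its key feature, the point past which sound cannot escape upstream; the linear flow profile and all numbers below are illustrative assumptions, not taken from any experiment:

```python
# Toy model of an acoustic horizon in a 1-D fluid (assumed, illustrative).
c_s = 1.0   # speed of sound in the fluid (arbitrary units)
dx = 0.01   # step along the flow direction

def flow_speed(x):
    return 0.25 * x  # flow accelerates linearly downstream (assumed profile)

# March downstream until sound can no longer travel upstream: the point
# where flow_speed first exceeds c_s is the 'acoustic horizon'.
x = 0.0
for i in range(1, 2000):
    x = i * dx
    if flow_speed(x) > c_s:
        break
print("acoustic horizon near x =", round(x, 2))  # just past x = 4.0
```

Inside the horizon (larger x here) the flow is supersonic, so sound waves are dragged along with it, exactly as light is trapped inside a black hole's event horizon.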
This mathematical relation between fluids and gravity is
known as “analog gravity.” That’s “analog” as in “analogy” not as opposed to
digital. But it’s not just math. The first gravitational analogies have been
created in a laboratory.
Most amazing is the work by Jeff Steinhauer at Technion,
Haifa. Steinhauer used a condensate of supercooled atoms that “flows” in a
potential of laser beams which simulates a black hole horizon. In his
experiment, Steinhauer wanted to test whether black holes emit radiation as
Stephen Hawking predicted. The temperature of real, astrophysical, black holes
is too small to be measurable. But if Hawking’s calculation is right, then the
fluid-analogy of black holes should radiate too.
Black holes trap light behind the “event horizon.” A fluid
that simulates a black hole doesn’t trap light; instead it traps the fluid’s
soundwaves behind what is called the “acoustic horizon.” Since the fluid
analogies of black holes aren’t actually black, Bill Unruh suggested calling
them “dumb holes.”
But whether the horizon catches light or sound,
Hawking radiation should be produced regardless, and it should appear in the form
of fluctuations (in the fluid or quantum matter fields, respectively) that are
paired across the horizon.
Of course fluid-analogies are still different from real
gravity. Mathematically, the most important difference is that the curved
space-time which the fluid mimics has to be designed. It is not, unlike real
gravity, an automatic reaction to energy and matter; instead, it is part of the
experimental setup. However, this is a problem which, at least in principle,
can be overcome with a suitable feedback loop.
The conceptually more revealing difference is that the
fluid’s correspondence to a curved space-time breaks down once the experiment
starts to resolve the fluid’s atomic structure. Fluids, we know, are made of
smaller things. Curved space-time, for all we know at present, isn’t. But how
certain are we of this? What if the fluid analogy is more than an analogy?
Maybe space-time really behaves like a fluid; maybe it is a fluid. And if so,
the experiments with fluid-analogies may reveal how we can find evidence for a
substructure of space-time.
Some have pushed the gravity-fluid analogy even further. Gia
Dvali from LMU Munich, for example, has proposed that real black holes are
condensates of gravitons, the hypothetical quanta of the gravitational field.
This simple idea, he claims, explains several features of black holes which
have so far puzzled physicists, notably the question of how black holes manage to
keep the information that falls into them.
We used to think black holes were almost featureless round
spheres. But if they are instead, as Dvali says, condensates of many gravitons,
then black holes can take on many slightly different configurations in which
information can be stored. Even more interesting, Dvali proposes the analogy
could be used to design fluids which are as efficient at storing and
distributing information as black holes are. The link between condensed matter
and astrophysics, hence, works both ways.
Physicists have looked for evidence of space-time being a
medium for some time. For example, by studying light from distant sources,
such as gamma-ray bursts, they tried to find out whether space has viscosity or
whether it causes dispersion (a running apart of frequencies like in a
prism). A new line of research is to
search for impurities – “space-time defects” – like crystals have. So far the
results have been negative. But the experiments with fluid analogies might
point the way forward.
If space-time is made of smaller things, this could solve a
major problem: how to describe the quantum behavior of space-time. Unlike all
the other interactions we know of, gravity is described by a non-quantum theory. This means
it doesn’t fit together with the quantum theories that physicists use for
elementary particles. All attempts to quantize gravity so far have either
failed or remained unconfirmed speculations. That space itself isn’t
fundamental but made of other things is one way to approach the problem.
ROLE OF GRAVITY.
Gravity is responsible for stuff falling on the ground, as
well as for planets moving in the sky. Scientific theories have been proposed
to account for these phenomena: Newton’s theory of gravity first and Einstein’s
general relativity later. Newton’s gravity is a force that acts instantaneously
to pull bodies closer in virtue of their mass. In other words two massive
bodies, no matter how distant, feel each other’s presence instantly and tend to
get together.
Now consider the evidence. Newton’s theory has been very
successful: it predicted, for instance, that Halley’s comet would be seen
again in 1758. One may even think that our best scientific theories are
definitively proven by experiments – Newton’s theory successfully predicted the
return of Halley’s comet, therefore Newton’s theory is true. Right? Well, no.
It does not logically follow that Newton’s theory is true even if all
experiments come out as predicted. It is like someone concluding that it is
snowing right now by starting with the consideration that if it's snowing then
the streets will be covered with snow, and then observing that the streets are
now covered with snow. This is unwarranted: it takes snow a very long time to
melt, so the snow could have fallen earlier in the day. Similarly, the return of
Halley’s comet is a good indication of the past success of Newton’s theory, but it does
not provide any guarantee of its future success.
Indeed, it later turned out that Newton’s theory was false,
and it was superseded by Einstein’s theory of relativity. Einstein argued that
gravity is not a force but rather the effect of the modification of the fabric
of space-time due to the presence of material bodies. That is, in empty space a
body will go straight, but the presence of another body will bend its
trajectory as if it were affected by a pulling force. Even if it is an
imprecise analogy, a ball thrown on a bed where a cat is sleeping will not go
straight but will rather curve towards the cat. Anyway, we cannot prove beyond
any doubt a theory to be true, no matter how successful it is. It is better to
say that the theory is confirmed, or more cautiously corroborated, by positive
experiments: arguably, we have more reasons to believe a theory with lots of
confirmatory instances than one with fewer.
Can we at least prove a theory to be false? Newton’s theory
would be proven false if the predicted acceleration of falling bodies were
different from the measured one, say. Indeed, Newton’s theory was falsified by
experiment: it predicted that Mercury’s orbit around the Sun would not
shift forward, but it does. That shift was accounted for by Einstein’s
theory of general relativity. So, can falsification be definitive? Again, no:
sometimes old theories do not get replaced even if they have contrary evidence.
In this case, experimental refutation of Newton’s theory was not the reason why
relativity took the place of Newton’s theory in the physics books. Even if the
predictions were wrong, scientists were not ready to consider Newton’s theory
to be false and kept using it. After all, is it worth throwing away all the
successes of such a powerful, explanatory theory just for such a small
discrepancy? It could well be some experimental error. Nonetheless, eventually
Newton’s theory was replaced because of theoretical, rather than empirical,
reasons. Einstein proposed his theory of relativity because he found the
‘spooky action-at-a-distance’ of Newton’s theory of gravity extremely
unsatisfactory. Therefore, he looked for another explanation and he found it.
The bonus was that his theory could also correctly recover the shift in
Mercury’s orbit that Newton’s theory could not account for.
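As a check on the numbers behind this episode, GR's leading-order perihelion advance per orbit is Δφ = 6πGM/(c²a(1 − e²)); the orbital elements below are standard published values for Mercury, and the result reproduces the famous ~43 arcseconds per century:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
C = 2.998e8        # speed of light, m/s
a = 5.791e10       # Mercury's semi-major axis, m
e = 0.2056         # Mercury's orbital eccentricity
T = 87.969         # Mercury's orbital period, days

# GR perihelion advance per orbit, in radians:
dphi = 6 * math.pi * G * M_SUN / (C**2 * a * (1 - e**2))

orbits_per_century = 100 * 365.25 / T
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(round(arcsec, 1))  # ~43.0 arcseconds per century, matching observation
```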
Also, consider this other example. Gravity keeps the
universe together and one of the leading early theories of the origin of the
universe is the big bang theory: the universe started expanding after a huge
explosion at the beginning of time. One would expect the expansion sooner or
later to slow down, just like the
fragments of a more ‘regular’ explosion. However, recent astronomical data
suggest that the fragments are moving apart at increasing speed. This is a
falsification of the big bang theory, which predicted deceleration.
Nonetheless, sometimes, like in the case of Newton’s failure to predict
Mercury’s orbital shift, rejecting falsified theories seems just too harsh. If
I drop an egg on the floor and it does not break as expected, I will not claim
I have refuted the current theory of gravity. Rather, I will check for false
assumptions that would explain the mistaken result. Another example is that
Newton’s theory predicted a different orbit for Uranus than the one observed.
So the theory was, again, falsified. However, instead of rejecting Newton’s
theory, astronomers questioned the assumption that there were seven planets:
the existence of another planet, Neptune, would explain the observed orbit of
Uranus – and Neptune was indeed later observed.
Back to the case of the accelerating universe, many
astronomers decided to do the same thing: they did not refute the theory, even
with contrary evidence. In a sense, they proposed that gravity has its own dark
side: something, now known as dark energy, which overpowers gravity’s
attraction. More precisely, they questioned the assumption that space-time has
no energy in itself. One may think of this as a repulsive gravity or
anti-gravity, but do not read too much into it. Notice that hidden,
unquestioned assumptions are everywhere. For instance, when using a
microscope, we assume light propagates in a straight line, even if it does not.
There are some situations in which this is irrelevant, but others in which
it may not be. Hence, when facing empirical refutation, scientists always have the
option to put the theory into question or to challenge some hidden assumption
instead. In this case, astronomers could either deny the existence of dark
energy and radically modify general relativity, or assume dark energy exists
without modifying general relativity too much. If the former is the case, there
is a sense in which there is anti-gravity; if the latter, there is not. The
philosophical question therefore is: when is it reasonable for a scientist to
hold on to her theory, and when is she just stubbornly in love with it?
Even if this is not the case here, I am sure you understand
the gravity of the situation (pun intended!) when alternative theories are
empirically equivalent – that is, when no experiment can be
performed to tell them apart. This happens, for instance, between some
different formulations of non-relativistic quantum mechanics. If we cannot
choose which theory is correct based on the empirical results, what can help
us? It is unclear: some will say super-empirical, or purely theoretical
virtues, should be important. Simpler theories, for instance, should be
preferred. However, what is simplicity? Why should we believe that the universe
is simple?
The bottom line is therefore this: one is never able to
prove or rule out a scientific theory beyond any doubt with experiments alone.
That means that there will certainly be alternatives and it is unclear how
theoretical virtues may help in theory selection. Having said that, I believe
that scientific theories are powerful tools that can tell us about the nature
of reality. Even if we cannot definitively prove they are true or false, they
are either one or the other. There is something about the scientific method
that allows science, as opposed to the unscientific alternatives like crystal
ball gazing or tarot reading, to track truth, even if we do not know exactly
what it is. Not knowing it yet does not imply we will never find out more. And
not knowing what it is does not mean that it does not work: my mum’s
ignorance about the way in which a nuclear power plant works does not make the
plant stop working.
So does anti-gravity exist? Either it does or it does not.
We do not know yet and we will never be able to know for sure. However, science
can still give fallible knowledge of the world: we sometimes get things wrong,
but we are getting somewhere. Therefore, if you want to investigate the
mysteries of gravity, as well as any other, keep studying, become a scientist
and keep your philosophical eye open: the path is going to be uphill, but there
is no fun without a challenge.
But it’s not a force. That was Einstein’s great discovery.
How can we say that? Well, because you can, at least for a while, simply make it
vanish! How do you do that? Just let go! In other words, jump off a building,
and you’ll feel no gravity as you fall down (hitting the Earth does not count
as falling down). More gently, join a freely orbiting space station crew, and
you’ll find life difficult because there will be no felt gravity to hold you
down on your seat or to hold your coffee in a cup. In short, what appears to be
a gravitational force actually depends, locally at least, on how you are
moving. You can make it go away by allowing yourself to fall freely.
The reason this is true is because the gravitational mass of
a body is the same as its inertial mass. This is what Galileo discovered,
allegedly, by dropping objects of different weight from the Leaning Tower of
Pisa (that experiment has since been repeated much more accurately by modern
physicists, for example with a feather and a ball falling at the same
speed in a vacuum). That means that if you are in a lift and the rope breaks, you and
everything around you will fall at the same rate as the lift – so you will no
longer feel gravity holding you to the floor of the lift. This was Einstein’s “happiest discovery”.
Gravity is now understood as being an effect of space-time
curvature; in a static situation, of spatial curvature. A model is as follows:
if you consider two aircraft that start off 1000 miles apart at the same
instant from the Earth’s equator and they each fly at the same speed in an
unchanging Northerly direction, they will get closer and closer together and
will eventually collide at the North Pole. It is as if a force was pulling them
together even though there was no attractive force acting between them. It was
the curvature of the Earth that was the cause of this apparent force. Spacetime
curvature is like that: if, for example, you let a spacecraft fall freely
around the Earth at the right speed, with the engine turned off, it will arrive
back exactly where it started because of the curvature of space caused by the
Earth’s gravitational field. It never fired
an engine to change direction but just kept going.
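The converging-aircraft picture can be made quantitative: on a sphere, two northbound meridian paths separated at the equator close in as the cosine of the latitude. A minimal sketch, using the 1000-mile separation from the text:

```python
import math

initial_separation = 1000.0  # miles apart on the equator, as in the text

def separation_at_latitude(lat_deg):
    """East-west distance between the two northbound paths at a latitude."""
    return initial_separation * math.cos(math.radians(lat_deg))

for lat in (0, 30, 60, 90):
    print(lat, round(separation_at_latitude(lat), 1))
# 0 -> 1000.0, 30 -> 866.0, 60 -> 500.0, 90 -> 0.0: the paths meet at the pole.
```

Nothing pulls the aircraft together; the shrinking separation is entirely a property of the curved surface they move on, which is the analogy the paragraph above draws for spacetime curvature.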
To really get to grips with this, you need to study tensor
calculus. This is what Einstein had to learn when he was developing his theory
of gravitation between 1912 and 1916 – he learnt it from a friend, and found it
did exactly what was needed. The key idea is a curvature tensor: a mathematical
object with 20 components (assuming spacetime is 4-dimensional, as all the
evidence suggests). This resulted in the
Einstein Field Equations, whereby 10 of these components (comprising the Ricci
tensor) are determined at each point by the matter and energy present there,
and another 10 components (comprising the Weyl tensor) are instead determined
by the cumulative effects of all the matter and energy at other spacetime
locations. It is the non-local effects of the Weyl tensor that convey curvature
from one place to another, so letting us feel that gravitational tidal force
due to the Moon on the Earth, and allowing the ripples in space-time that are
gravitational waves to travel from distant colliding neutron stars and black
holes to the Earth.
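The component counts quoted above follow from standard formulas: the Riemann curvature tensor in n dimensions has n²(n² − 1)/12 independent components, and the symmetric Ricci tensor has n(n + 1)/2. A quick check of the 20 = 10 + 10 split for n = 4:

```python
def riemann_components(n):
    """Independent components of the Riemann tensor in n dimensions."""
    return n**2 * (n**2 - 1) // 12

def ricci_components(n):
    """Independent components of the symmetric Ricci tensor."""
    return n * (n + 1) // 2

n = 4
print(riemann_components(n))                        # 20
print(ricci_components(n))                          # 10
print(riemann_components(n) - ricci_components(n))  # 10, carried by the Weyl tensor
```

Note that in three dimensions the two counts coincide (6 and 6), so the Weyl tensor vanishes identically: gravity can only propagate through empty space, as gravitational waves do, in four or more dimensions.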
Einstein’s theory is a classical theory, and does not take
quantum effects into account. Now most physicists assume that, at base,
gravity, like all the other fields we know, is actually a quantum field. But despite
stupendous efforts by a great many very talented physicists, we still don’t
have a solid agreed-on theory of quantum gravity. So we don’t know the theory
of gravity that would apply at the very start of the universe, or at the very
end of the life of a Black Hole.
General Relativity has passed all its tests with flying
colours. Some scientists, however, claim that you do not need
the huge amounts of dark matter in the universe that are suggested by standard
studies – because those studies assume that General Relativity is correct. Maybe a
modified gravitational theory, for example one in which the gravitational
constant changes with space or time, might remove the need for dark matter. Many
such alternatives are being proposed and tested.
Gravity is difficult to test on Earth because it is a long-range
force. It is dominant in the Universe on large scales because all gravitational
mass is positive, unlike electricity, where there are equal numbers of positively
and negatively charged particles.
We understand Einstein’s theory pretty well, despite its
complexity. But that is not the end of the story. If you want to take part in
the search for the ultimate answer, you will have to learn the maths (tensor
calculus, maybe spinors) and the physics (variational principles and symmetry
groups, for example) and then get going. No one knows what direction may lead
to new and unexpected answers.
The universe is expanding, and Einstein’s theory of gravity
makes a definite prediction about how the expansion rate should change over
time: it should decrease, since the gravitational attraction between all the
matter in the universe continually opposes the expansion.
The first time this prediction was observationally tested,
around 1998, it was found to be spectacularly in error. The expansion of the
universe is accelerating, not decelerating, and the acceleration has been going
on for about six billion years.
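In the standard cosmological bookkeeping this surprise shows up in the sign of the deceleration parameter, q₀ = Ω_m/2 − Ω_Λ; the density values below are the commonly quoted flat-universe figures, assumed here for illustration:

```python
def deceleration_parameter(omega_matter, omega_lambda):
    """q0 > 0 means decelerating expansion; q0 < 0 means accelerating."""
    return omega_matter / 2.0 - omega_lambda

# Matter only, as expected before 1998: expansion decelerates.
print(round(deceleration_parameter(1.0, 0.0), 2))  # 0.5: decelerating
# With the commonly quoted dark-energy fraction: expansion accelerates.
print(round(deceleration_parameter(0.3, 0.7), 2))  # -0.55: accelerating
```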
How did cosmologists respond to this anomaly? If they
adhered to the ideas of philosopher Karl Popper, they would have said: “Our
theory of gravity has been conclusively disproved by the observations;
therefore we will throw our theory out and start afresh.” In fact, they did
something very different: they postulated the existence of a new,
universe-filling substance which they called “dark energy”, and endowed dark
energy with whatever properties were needed to reconcile the conflicting data
with Einstein’s theory.
Philosophers of science are very familiar with this sort of
thing (as was Popper himself). Dark energy is an example of what philosophers
call an “auxiliary hypothesis”: something that is added to a theory in order to
reconcile it with falsifying data. “Dark matter” is also an auxiliary
hypothesis, invoked in order to explain the puzzling behavior of galaxy
rotation curves.
Karl Popper first began thinking about these things around
1920, a time when intellectuals had many exciting new theories to think about:
Einstein’s theory of relativity, Freud’s theory of psychoanalysis, Marx’s
theory of historical materialism, etc. Popper noticed that Einstein’s theory
differed from the theories of Freud and Marx in one important way. Freud and
Marx (and their followers) appeared unwilling to acknowledge any
counter-examples to their predictions; every observed fact was interpreted as
confirmation of the theory. Einstein, by contrast, made definite predictions and was
prepared to abandon his theory if the predictions were found to be incorrect.
Popper argued, in fact, that this difference is the
essential difference between science and non-science. A scientist, Popper said,
is someone who states—before a theory is tested—what observational or
experimental results would falsify it. Popper’s “criterion of demarcation” is
still the best benchmark we have for distinguishing science from non-science.
At the same time, Popper recognized an obvious logical flaw
in his criterion. Theories, after all, are arbitrary; they are created out of
thin air. What is to keep a scientist, Popper asked, from responding to an
anomaly by saying: “Oh, wait, that is not the theory that I meant to test. What
I actually meant to propose was a theory that contains this additional
hypothesis”—a hypothesis that explains the anomalous new data. (This is
precisely what some cosmologists do when they say that dark energy has been in
Einstein’s theory all along.) Logically, this is perfectly kosher; but if
scientists are allowed to proceed in this way - Popper realized - there could
be no hope of ever separating science from non-science.
So Popper came up with a set of criteria for deciding when
changes or additions to a theory were acceptable. The two most important were:
(i) the modified theory must contain more content than the theory it replaces:
that is, it must make some new, testable predictions; and (ii) at least some of
the new predictions should be verified: the more unlikely a prediction in the
light of the original theory, the stronger the corroboration of the modified
theory when the prediction is shown to be correct. Popper did not simply propose
these criteria; he argued for them on logical and probabilistic grounds. Popper
was adamant that the total number of verified predictions was irrelevant in
terms of judging the success of a theory since theories can always be adjusted
to “explain” new data. All that matters, he said, are the novel
predictions—predictions that no one had thought to make before the new theory
came along.
How does the standard cosmological model—which contains
Einstein’s theory of gravity as part of its “hard core”—fare according to the
standards set by Popper? Here I can’t resist first quoting from Imre Lakatos, a
student of Popper who tested and refined Popper’s criteria by comparing them
with the historical record. Lakatos distinguished between what he called “progressive”
and “degenerating” research programs:
"A research programme is said to be progressing as long
as its theoretical growth anticipates its empirical growth, that is, as long as
it keeps predicting novel facts with some success (‘progressive problemshift’);
it is stagnating if its theoretical growth lags behind its empirical growth,
that is, as long as it gives only post-hoc explanations either of chance
discoveries or of facts anticipated by, and discovered in, a rival programme
(‘degenerating problemshift’)."
(Lakatos invented the
term ‘problemshift’ because, he said, “‘theoryshift’ sounds dreadful”.)
The standard cosmological model clearly fails to satisfy the
criteria set by Lakatos for a progressive research program. Dark matter, dark
energy, and inflation were all added to the theory in response to unanticipated
facts. None of these auxiliary hypotheses has yet been confirmed; for
instance, attempts to detect dark matter particles in the laboratory have
repeatedly failed. And the standard cosmological model is notoriously lacking
in successful predictions; it seems always to be playing catch-up. The ability
of the model to reproduce the spectrum of temperature fluctuations in the
cosmic microwave background is often put forward as a notable success, but as
astrophysicist Stacy McGaugh has pointed out, this success is achieved by
varying the dozen or so parameters that define the model, and some of those
parameters are forced to have values that are stubbornly inconsistent with the
values determined in other, more direct ways. This does not quite meet the
standards for a successful novel prediction.
All of this would be of fairly academic interest, if not for
one thing. It turns out that there exists an alternate theory (or “research
program”) of gravity, which has been around since the early 1980s, and which
has quietly been racking up successful, novel predictions. As of this writing,
about a dozen of its predictions—some quite startling when they were first
made—have been verified observationally. And I am not aware of a single
prediction from this research program that has been conclusively falsified.
I am referring here to the Milgromian research program. In
1983, Mordehai Milgrom suggested that galaxy rotation curves are flat—not
because of dark matter—but because the laws of gravity and motion differ from
those of Newton or Einstein in the regime of very low acceleration. Milgrom’s theory was designed to give flat
rotation curves, and so the fact that it does so is not, of course, a novel prediction.
But a long list of other predictions follow immediately from this single
postulate. Milgrom outlined many of these predictions in his first papers from
1983 and a number of others have been pointed out since. One example: Milgrom’s
postulate implies a unique, universal relation between the orbital speed in the
outer parts of a galaxy, and the total mass (real, not dark) of the galaxy. No
one had even thought to look for such a relation before Milgrom predicted it;
no doubt because—according to the standard model—it is the dark matter, not the
ordinary matter, that sets the rotation velocity. But Milgrom’s prediction has
been splendidly confirmed—a beautiful example of a corroborated, novel
prediction.
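The relation in question, often called the baryonic Tully-Fisher relation, follows from Milgrom's postulate in the low-acceleration limit as v_flat^4 = G·M·a0. A minimal numerical sketch, where the baryonic mass of the example galaxy is an illustrative assumption, not a measured value:

```python
# Baryonic Tully-Fisher relation implied by Milgrom's postulate:
# in the deep-MOND regime, v_flat^4 = G * M_baryonic * a0.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10           # Milgrom's acceleration constant, m s^-2
M_SUN = 1.989e30       # solar mass, kg

def v_flat(m_baryonic_kg):
    """Predicted asymptotic rotation speed (m/s) from total baryonic mass."""
    return (G * m_baryonic_kg * A0) ** 0.25

# Illustrative input: ~6e10 solar masses of stars and gas, roughly
# Milky-Way-like (this mass value is an assumption for demonstration).
v = v_flat(6e10 * M_SUN)
print(f"predicted flat rotation speed: {v/1e3:.0f} km/s")
```

Note the fourth-root dependence: quadrupling the baryonic mass raises the predicted speed by only a factor of sqrt(2), which is why the relation is so tight observationally.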
Milgrom’s theory is successful in another way that the
standard model is not. In the early days of quantum theory, Max Planck pointed
out that the convergence of various, independent determinations of Planck’s
constant on 6.6 × 10^-27 erg·s was compelling evidence for a theory of
quantized energy (exactly which theory of quantized energy was not yet clear).
It would be almost miraculous, Planck argued, for such convergence to exist
otherwise. In the same way, Milgrom has pointed out that the “acceleration
constant” a0 that appears in his theory, and that marks the transition from
Newtonian to non-Newtonian behavior, can be extracted from astrophysical data
in many independent ways, all converging on the value ~1.2 × 10^-10 m s^-2. As
I noted above, nothing like this degree of convergence exists for the
parameters that define the standard cosmological model.
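A curiosity often noted about this constant (by Milgrom among others) is that its measured value is numerically close to cH0/2π, where H0 is the Hubble constant. A quick check, taking H0 ≈ 70 km/s/Mpc as a representative value (an assumption for this sketch):

```python
# Numerical coincidence: a0 is close to c * H0 / (2*pi).
import math

C = 2.998e8            # speed of light, m/s
MPC = 3.086e22         # one megaparsec in meters
H0 = 70e3 / MPC        # Hubble constant, ~70 km/s/Mpc expressed in s^-1

a0_guess = C * H0 / (2 * math.pi)
print(f"c*H0/(2*pi) = {a0_guess:.2e} m/s^2")  # close to the measured ~1.2e-10
```

Whether this closeness is meaningful or accidental is an open question, but it is one reason some suspect a0 is tied to cosmology.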
What does all this mean? As a non-cosmologist, I have no
stake in the correctness of any particular theory of cosmology or gravity. But
I am impressed by the arguments of philosophers like Popper and Lakatos, and by
the demonstrated power of their criteria to distinguish between successful
theories and theories that end up on the rubbish heap. And so I am encouraged
by the fact that there is a small, but growing, group of scientists who have
chosen to develop Milgrom’s ideas. It is hard for me to believe that these
scientists aren’t on the track of something important—quite possibly a new, and
better, description of gravity.
DARK MATTER & BEYOND.
In deep underground laboratories, buried below rock and
shielded from cosmic radiation, physicists have built extremely sensitive
detectors aimed at solving one of the Universe’s greatest mysteries. They are
awaiting signals of a new kind of particle, promised to them by cosmologists
and astrophysicists: Dark Matter. The highly elusive particle is thought to
dominate the mass budget of our galaxy and of the Universe as a whole. There
should be about six times more Dark Matter than ordinary, “baryonic” matter
(which includes everything from interstellar gas clouds, stars, and planets, to
the screen you are reading this on, and you yourself). Dark Matter has not yet
been directly detected, despite numerous experiments, their painstaking efforts
to reduce background signals, and thus ever-increasing sensitivity. Many
researchers nevertheless remain confident that a detection is within reach. Yet
some worry: what if we are chasing a phantom? What if Dark Matter does not
exist?
There are several lines of argument for the existence of
Dark Matter. On the scale of galaxies, the need for Dark Matter is mostly
inferred from their dynamics. Disk galaxies rotate. Counting up the
distribution of mass visible in a galaxy – in the form of stars and gas – we
can use Newton’s law of gravity to calculate how fast the galaxy should rotate
at different distances from its center (the “rotation curve”). The rotation
should be faster in the center and slower with increasing distance. Yet
measurements reveal that galaxies rotate faster than expected and that the
rotation velocity does not drop at increasing radii. Taken at face value, this
would imply that galaxies are not gravitationally bound; the gravity of their
stars and gas is insufficient to keep them from flying apart. To be stable,
galaxies would have to contain large amounts of unseen mass. This mass has been
termed “Dark” Matter because it only interacts through gravity, but not with
electromagnetic radiation.
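The Newtonian expectation described above can be made concrete: well outside the visible mass, v(r) ≈ sqrt(GM/r), a "Keplerian" decline going as 1/sqrt(r). A sketch treating the galaxy's visible matter as a point mass (a rough assumption, valid only far beyond the luminous disk; the mass value is illustrative):

```python
# Newtonian prediction outside the visible mass: v(r) = sqrt(G*M/r),
# a declining curve -- the opposite of the flat curves actually observed.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # one kiloparsec in meters

def v_newton(m_kg, r_m):
    """Circular speed (m/s) around a point mass at radius r."""
    return (G * m_kg / r_m) ** 0.5

m = 6e10 * M_SUN       # illustrative visible mass (assumption)
for r_kpc in (10, 20, 40):
    print(f"r = {r_kpc:3d} kpc -> v = {v_newton(m, r_kpc * KPC)/1e3:.0f} km/s")
```

Quadrupling the radius should halve the speed; measured rotation curves instead stay flat out to the largest observable radii.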
This argument for Dark Matter rests on one crucial assumption: that Newton’s
law of gravity applies on galactic scales. This is a long
stretch. Newton’s law was uncovered on Earth, where the gravitational
acceleration is 10^11 times stronger than typical for galaxies, and
in the Solar system, where even the most distant planet Neptune experiences a
10,000 times stronger acceleration than stars in galaxies. It is therefore far
from confirmed whether Newton’s law can be extrapolated to the very low
acceleration regime that galaxies live in.
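The size of this extrapolation is easy to verify with back-of-the-envelope numbers; the galactic figure below assumes a circular speed of ~200 km/s at ~8 kpc, roughly the Sun's orbit:

```python
# How far Newton's law is extrapolated: compare characteristic accelerations.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # m
AU = 1.496e11          # m

g_earth = 9.81                              # Earth's surface gravity, m/s^2
a_galactic = (200e3) ** 2 / (8 * KPC)       # v^2/r for the Sun's galactic orbit
a_neptune = G * M_SUN / (30 * AU) ** 2      # Sun's pull on Neptune at ~30 AU

print(f"galactic acceleration : {a_galactic:.1e} m/s^2")
print(f"Earth / galactic      : {g_earth / a_galactic:.0e}")    # of order 1e11
print(f"Neptune / galactic    : {a_neptune / a_galactic:.0e}")  # of order 1e4
```

Both ratios in the text check out to order of magnitude, which is the relevant level of precision here.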
This was also noticed by Israeli physicist Mordehai Milgrom.
In 1983, he suggested a radically different approach to explain the high
rotation speeds of galaxies. Instead of introducing Dark Matter, Milgrom
proposed that the laws of gravity are different on the scale of galaxies – that
Newtonian Dynamics becomes “Modified Newtonian Dynamics” (MOND). In MOND, or
Milgromian Dynamics, the gravitational acceleration of a given mass is stronger
than in the Newtonian case, and does not scale as 1/r^2 with distance
r but rather as 1/r. This explains why galaxies rotate fast without tearing
themselves apart, and why the rotation curve does not drop at larger radii.
Since MOND must preserve the successes of Newtonian gravity, which is very well
tested in the solar system, there has to be an acceleration at which a
transition occurs. This acceleration scale is called a0.
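The transition can be sketched with one common choice of interpolation, the "simple" μ-function μ(x) = x/(1+x). This is one of several functions used in the MOND literature, not a unique prescription. Solving g·μ(g/a0) = g_N for the true acceleration g gives a closed form with both limits built in:

```python
# MOND transition with the "simple" interpolating function mu(x) = x/(1+x).
# Solving g * mu(g/a0) = g_newton is a quadratic in g with one positive root.
import math

A0 = 1.2e-10   # transition acceleration scale, m/s^2

def g_mond(g_newton):
    """Actual acceleration given the Newtonian one (simple mu-function)."""
    return 0.5 * (g_newton + math.sqrt(g_newton**2 + 4 * g_newton * A0))

# High-acceleration limit (g_newton >> a0): Newtonian behavior recovered.
print(g_mond(1e-6) / 1e-6)                       # close to 1
# Low-acceleration limit (g_newton << a0): g -> sqrt(g_newton * a0),
# stronger than Newton -- the regime where rotation curves flatten.
print(g_mond(1e-14) / math.sqrt(1e-14 * A0))     # close to 1
```

The deep-MOND limit g = sqrt(g_N·a0) is what turns the Newtonian 1/r^2 falloff into an effective 1/r falloff and hence a flat rotation curve.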
The parameter a0 is not fixed by the theory. It
has to be measured. Such measurements provide a first test. Does every galaxy
require the same a0, or does the parameter have to be fixed for each
system independently? The former is consistent with a fundamental theory,
whereas the latter would be much less convincing. It turns out that the former
is indeed the case: every galaxy results in the same a0. In fact, the parameter
can be measured in several independent ways, not only for different galaxies.
They all point to the same value.
One hallmark of a scientific hypothesis is that it makes
testable predictions. Dark Matter cosmology makes predictions for the
large-scale evolution of the universe and for statistical samples of galaxies.
However, it has almost no predictive power for an individual galaxy. While the
rotation curve of a galaxy can be fitted by adding a distribution of Dark
Matter to it, this does not work the other way around. Given just the
distribution of stars and gas in a galaxy, Dark Matter models do not predict
the detailed rotation curve. The visible galaxy could, in principle, be
embedded in a variety of different Dark Matter distributions, all resulting in
different rotation curves.
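This non-uniqueness is easy to illustrate: the same visible mass embedded in two different halos yields different outer rotation curves. A toy sketch using point-mass baryons plus a pseudo-isothermal halo profile; all parameter values are illustrative assumptions, not fits to any real galaxy:

```python
# Same baryons, two different dark halos -> different rotation curves.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # m

def m_halo(r, rho0, rc):
    """Enclosed mass of a pseudo-isothermal halo (toy profile):
    rho(r) = rho0 / (1 + (r/rc)^2)."""
    return 4 * math.pi * rho0 * rc**3 * (r / rc - math.atan(r / rc))

def v_circ(r, m_baryons, rho0, rc):
    """Circular speed from baryons (point mass) plus halo."""
    return math.sqrt(G * (m_baryons + m_halo(r, rho0, rc)) / r)

m_b = 6e10 * M_SUN     # illustrative baryonic mass (assumption)
r = 30 * KPC
# Two different halo parameter choices for the SAME visible galaxy:
v1 = v_circ(r, m_b, rho0=3e-21, rc=2 * KPC)
v2 = v_circ(r, m_b, rho0=1e-21, rc=6 * KPC)
print(f"v(30 kpc): halo A = {v1/1e3:.0f} km/s, halo B = {v2/1e3:.0f} km/s")
```

Knowing the baryons alone does not pin down the curve; extra halo parameters must be fitted, which is exactly the predictive weakness described above.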
MOND, in contrast, makes precise and accurate predictions
for individual galaxies. If the distribution of stars and gas in a galaxy is
known, Milgrom’s law allows us to calculate what its rotation curve should look
like, down to bumps and wiggles. These predictions are routinely confirmed observationally.
While a modified gravity law offers a conceptual explanation
for why such predictions work, the underlying, extremely tight correlation is
purely empirical and independent of MOND. It has been termed the Radial
Acceleration Relation (RAR). One cannot stress enough how fascinating it is
that the distribution of baryons (stars, gas) in a galaxy uniquely predicts the
galaxy’s dynamics. This observational fact must be understood in any model of
the Universe, especially in Dark Matter models in which such predictability is
not necessarily expected.
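The RAR is commonly summarized by an empirical fitting function published by McGaugh, Lelli and Schombert (2016): g_obs = g_bar / (1 − exp(−sqrt(g_bar/a0))). A sketch showing how the single formula interpolates between the two regimes:

```python
# Radial Acceleration Relation, empirical fitting function of
# McGaugh, Lelli & Schombert (2016): observed acceleration from baryonic one.
import math

A0 = 1.2e-10   # characteristic acceleration scale, m/s^2

def g_obs(g_bar):
    """Observed acceleration predicted from the baryonic acceleration."""
    return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / A0)))

# High accelerations: g_obs tracks g_bar (no apparent mass discrepancy).
print(g_obs(1e-8) / 1e-8)                       # close to 1
# Low accelerations: g_obs -> sqrt(g_bar * a0), the MOND-like regime
# where the apparent "missing mass" appears.
print(g_obs(1e-13) / math.sqrt(1e-13 * A0))     # close to 1
```

That a single one-parameter curve fits hundreds of galaxies is the empirical fact any theory, dark matter or modified gravity, must reproduce.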
Nevertheless, MOND does have problems if applied beyond the
regime of galaxies for which it was developed. An example is galaxy clusters,
large agglomerations of galaxies that even in MOND appear to require the
addition of Dark Matter to be bound structures (albeit only about a factor of
two relative to the ordinary matter). A related issue is colliding galaxy
clusters such as the Bullet Cluster, in which the mass distribution inferred
from gravitational lensing also appears more consistent with the presence of
Dark Matter than with a modified gravity interpretation. However, what is often
neglected in discussing this issue is that the clusters’ collision speed is
surprisingly high for Dark Matter cosmology, but more reasonable in MOND. The
Bullet Cluster thus neither uniquely supports nor uniquely falsifies either of
the two competing concepts.
In a sense, Dark Matter and MOND have distinct regimes of
applicability. The former is more successful on larger scales, while the latter
is most successful on smaller scales, being able to predict galaxy dynamics and
also offering an explanation for several observed scaling relations between
galaxy properties. Once we attempt to expand the models beyond their respective
regimes of primary applicability, problems appear. Dark Matter models suffer
from a number of “small-scale problems” on the scale of galaxies and their
satellite galaxy populations. MOND cannot be successfully applied to large
systems of galaxies or the universe as a whole.
This apparent complementarity could offer a way out of the
current conceptual stalemate. Some physicists are developing models that join
the two seemingly incompatible approaches. One example is Superfluid Dark
Matter, developed by Justin Khoury at UPenn. In this model, Dark Matter around
galaxies phase-transitions into a superfluid, which gives rise to a MOND-like
behavior for ordinary matter, but only in this region. Interestingly, this
results in some predictions that are distinct from those of both “pure” Dark
Matter and MOND, making this a testable alternative born out of two competing
concepts.
Laboratory searches for Dark Matter are important. They hold
the potential for a groundbreaking detection confirming the hypothesis.
However, detectability is not falsifiability. What if we do not succeed in
detecting Dark Matter? The reason could be that Dark Matter really does not
interact with ordinary matter except gravitationally, or because it does not
exist. One can imagine falling for a sunk-cost fallacy in such a case, by
sticking with the Dark Matter hypothesis because of the amount of resources
already invested in it. To prevent such a risk, we should already be
considering and developing alternative approaches. Maybe the best argument for
this is the predictive power of MOND. Since this is an empirical success
linking the observed distribution of stars and gas directly to their
velocities, it will have to be understood in any successful model of cosmology,
including those based on Dark Matter. Therefore research into models based on
the modified gravity concept is a worthwhile addition to the Dark Matter
approach. A diversity of ideas (as well as people) should be cherished and
supported. Ultimately, building on the successes of both the Dark Matter and
the modified gravity approach might offer the crucial insights necessary to
unravel the composition of the universe and the nature of gravity.
HOW DID OUR UNIVERSE BEGIN?
At the center of our galaxy, where the Sun was born, very light proto-particles
formed the very first orbital system, a “hydrogen atom” made of two almost
equal proto-particles. The force between them is very small, tending almost to
zero (one may call it a proto-electrical force). The “light ray” (proto-field)
then reaches one mass after another, which increases the inertia of this first
“hydrogen orbital system”; the nucleus (proto-proton) shrinks, because the
distance to the proto-electron increases. Since the proto-electron is on the
periphery, it shrinks more slowly than the nucleus (proton). From our position
of observation, this is the expansion of the universe: the proto-hydrogen
begins to differentiate into the electrostatic and gravitational forces, and
the phenomenon of energy itself appears, all because of the expansion of the
universe (see part II of USM, www.kanevuniverse.com).
What is the essence of energy in the universe, where does it come from, and
what is its value? From pages 96-98 of USM (www.kanevuniverse.com) it follows
that at the beginning of our space the total energy was, for us, equal to zero.
Then, with the thickening of the micro-cosmos and the expansion of the
macro-cosmos, and due to the asymmetry of these coefficients, which depends on
our position of observation (in this case our living position), we have the
illusion that energy increases for us, because we are closer to the nuclei of
atoms than to the stars and galaxies. But the macro-cosmos expands more rapidly
for the same reason, so the total energy in our space is again equal to zero.
Energy is thus seen to be “one reality within the illusion”, as is the world
itself (see USM, www.kanevuniverse.com). In this very elegant way (as Einstein
himself liked to say) one of the most mysterious physical phenomena in the
universe is determined!
About the forces surrounding us in our world: that is right, Brian, with one
very important correction. In fact the three forces, electrostatic,
gravitational and nuclear, are the same force; only the thickening of the space
around us during the expansion of the universe (actually only of our galaxy)
gradually produces their quantitative differences and deludes us into thinking
they are three different forces. This is explained in detail in part II of USM
(www.kanevuniverse.com); the qualitative identity and inertial character of
these three forces is explained in part I on the same site. The rest, namely
the polarization of space and the resulting essence of the weak and strong
interactions, is shown on pages 55-57 of USM (www.kanevuniverse.com), which
give the essence of the polarization of space and its equation, universal for
all stable fields (gravitational, electromagnetic and nuclear), and explain in
a very simple and convincing way the behavior of the strong and weak
interactions. In particular, this explains the belts of Jupiter and Saturn and
their approximate sizes. The most important conclusion is that these belts have
relativistic birth velocities, which leads to the conclusion that these two
planets are the most dangerous places in the solar system.
Another case of delusion that physicists have made up concerns the origin of
the universe. The premature conclusion is based on data from neutrino
registration deep underground (under the ice) in Antarctica, where over several
years some tens of events were registered: several high-energy neutrinos and
some tens of relatively low-energy neutrinos. Those are the facts! Many more
years of confirming such events are needed to be sure from which directions
these particles mainly come, and to estimate the energy spectrum from a larger
number of observed events. So the experiment is far from supporting such
conclusions. Yet some physicists instantly decided that this is evidence of the
“big bang”, “young stars” and so on; and all this while present-day physics
cannot say anything about the essence of these mysterious particles called
neutrinos! Let me explain. According to USM (www.kanevuniverse.com), in the
beginning our space (universe) started at the center of our galaxy with the
very first orbital system, containing only two proto-particles and representing
the very first “hydrogen atom”: two almost equal proto-particles with mass
1.8×10^13 times smaller than the mass of the proton. At the moment this very
first orbital system appeared, energy itself appeared as well; this energy was
almost equal to zero, and its first real value appeared together with the first
difference between the masses of the two proto-particles. That difference
appeared because of the beginning of the expansion of space, provoked by the
added new masses (proto-particles), which disturbed the inertial balance of
this very first orbital system (see part I of USM). That was our universe at
this very first moment. Then the macro-cosmos began to expand and the
micro-cosmos correspondingly to thicken, which means that the nuclear
proto-particle of the initial orbital system began to gain mass more quickly
than the orbiting proto-particle, because the former is at the center (closer
to the micro-cosmos). As can be seen, the two particles, during their movement
toward the periphery of our galaxy, come to our position of observation, where
the central particle already has the mass of the proton and the orbiting one
the mass of the electron; so at our position of observation the proton has a
resonance radius equal to the size of the proton (see USM) and the electron has
a resonance radius equal to the size of the atom. But what happened with the
expansion of the macro-cosmos (our galaxy), seen from our position of
observation? The resonance radius of the first proto-particle in the very first
orbital system (the “hydrogen atom”) is in fact the size of this first orbital
system, so during the expansion of the macro-cosmos this resonance radius
begins to expand, which means an ever greater lightening of the
proto-particle’s mass; and when this process reaches our point of observation,
the particle already has the mass of the neutrino (see pages 128-138 of USM,
www.kanevuniverse.com). Formulae 125 and 126 there give the mass spectra of the
electron neutrino and the muon neutrino, and the explanation for them. So the
high-energy neutrinos actually come from the smaller resonance radii in any
galaxy (the closer belts of centripetal acceleration), and the lower-energy
spectrum correspondingly comes from the peripheral belts of centripetal
acceleration, again in any galaxy. So obviously the neutrinos observed here
have nothing to do with the “big bang”; so much for that absurdity!