Monday, December 09, 2013

SOLUTIONS TO THE BLACK HOLE FIREWALL PROBLEM



THE BLACK HOLE FIREWALL PROBLEM.
INFORMATION PARADOX RESOLVED
USING RUSSELL’S PARADOX OF SET THEORY.

THE PARADOX:

The concept of a black-hole firewall, postulated by J. Polchinski and others in July 2012 (http://arxiv.org/abs/1207.3123), was extended this year to suggest that typical black holes with field theory duals have firewalls at the event horizon (10.1103/PhysRevLett.111.171301). This argument makes no reference to entanglement between the black hole and any distant system; hence it is not evaded by identifying degrees of freedom inside the black hole with those outside. Over the past year, more than 100 papers and three conferences/workshops have addressed the idea of firewalls and examined its different aspects. We present three different empirical solutions to the paradox by revisiting the foundational principles in each case. In this paper, we re-examine the foundations of the Equivalence Principle (EP) using Russell’s paradox of set theory.

First, the black-hole firewall concept needs to be explained for the uninitiated. Consider a scenario: frustrated Alice wants to commit suicide by jumping into a very large black hole, leaving Bob outside the event horizon, beyond which nothing, not even light, can escape. According to the EP, if the black hole is large enough, Alice will not notice anything unusual as she falls through the event horizon – she will see the same phenomena as an observer floating in empty space. In this scenario, dubbed “No Drama”, the gravitational forces will not become extreme until she approaches a point inside the black hole called the singularity. There, the gravitational pull will tug at her feet more strongly than at her head. As she inexorably plunges downwards, the difference in forces will quickly increase and Alice will be “spaghettified” – crushed and torn apart (remember the saying from the last century: looking ahead inside a black hole, you will see the back of your head in front of you!). The new hypothesis suggests that as Alice crosses the event horizon, breaking her correlation with Bob (her entangled partner) would release enormous energy, turning the event horizon into a massive firewall that incinerates her.

Empty space is full of particle-antiparticle pairs that continually pop into existence before rapidly recombining and vanishing, releasing their energy. If a pair forms just outside a black hole’s event horizon, sometimes one particle may fall inside the event horizon while the other escapes as Hawking radiation. The in-falling particle would balance the positive energy of the outgoing particle by carrying negative energy inwards. This is allowed by Quantum Mechanics (QM). That negative energy would get subtracted from the black hole’s mass, causing the hole to shrink and steadily lose mass. If no ordinary matter falls in, the hole would eventually evaporate. With it, all information about whatever fell into the black hole would seemingly disappear permanently.
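
For orientation only, the standard textbook relations (not derived in this post; M is the hole’s mass and k_B Boltzmann’s constant) that make this “steady mass loss” quantitative are the Hawking temperature and the evaporation time, which grows as the cube of the mass:

```latex
T_H = \frac{\hbar c^{3}}{8\pi G M k_B}, \qquad
t_{\mathrm{evap}} \sim \frac{5120\,\pi\, G^{2} M^{3}}{\hbar c^{4}},
```

so smaller holes are hotter and evaporate faster, while a large black hole of the kind Alice jumps into would take far longer than the age of the universe to disappear.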

But the equations of General Relativity (GR) say that black holes can only swallow mass and grow - not evaporate. Also, QM says that information cannot be destroyed. Now consider another possibility. Since the particle pairs have their states ‘entangled’, by measuring the state of the emitted radiation we could recover all information about the objects that fell into the black hole even after the hole evaporates (it must be encoded in the quantum states of the emitted particles). Which of the possibilities is likely? This is the information paradox.


THE PROBLEM:
If lots of radiating twin particles were somehow to break their correlation with their in-falling partners, massive energy should be released, much as breaking the bonds of many molecules releases energy. The released energy should create a firewall at the black hole’s event horizon. But this violates one aspect of the equivalence principle: that free fall should feel the same as floating in empty space. Thus either the firewall exists or information is lost in black holes permanently. The above scenario creates a paradox, bringing into focus the inherent conflict between Relativity and Quantum theories, because it means that at least one of the following three established notions of theoretical physics must be wrong.

  • First: the postulates of “No Drama”. According to the EP, there is no difference between free fall - even into the strong gravitational field inside a black hole - and inertial motion in empty space. Since Alice is in free fall when she crosses the event horizon, she should not feel extreme effects of gravity. Is the EP universally valid, or does it break down at the event horizon or somewhere else? Are the mathematics and concepts that lead to the singularity or the event horizon correct? What is gravity? Is it like the other interactions? Can gravity be quantized?
  • Second: the postulates of “unitarity”. Alice and Bob are like an entangled particle pair so that they are strongly correlated. The information carried by the radiation is emitted from the region near the event horizon, with low energy effective field theory valid beyond some microscopic distance from the event horizon. Can entanglement be by-passed at the event horizon? Can the notion of monogamous quantum entanglement be changed to two different kinds of entanglements?
  • Third: the postulates of “normality”. Physics works normally far away from a black hole even though it breaks down at some point within the black hole. Is Hawking radiation in a pure state, or is all information lost in black holes? Can quantum Xeroxing - the same information appearing both inside the hole and in the Hawking radiation - be resolved by complementarity? What about black-hole particle jets and blazars?

Together, these concepts make up what has been dubbed “the menu from hell”. Since all three cannot be simultaneously true, the paradox is: which of the above three concepts is/are wrong? One solution lies in Russell’s paradox of set theory and in revisiting the foundations of Relativity, instead of building on “accepted theories” in a tangential, reductionist manner - asking, for example, “Is time Newtonian or relativistic?” without first defining time.

EQUIVALENCE PRINCIPLE REVISITED:

The cornerstone of GR is the principle of equivalence of inertial and gravitational masses: mi = mg. The EP does not flow from any mathematics. No one has given any mathematical reason (like a consistency constraint) why all matter fields should couple universally to gravity. This is not the case for the other fundamental forces or the Higgs field (which is why different particles have different masses): the Higgs field is specific about which particles couple to it. Gravity is a universal field - an all-pervading medium. Every particle in the universe, whether massive or not, couples to it. Since F = ma and universal free fall for all mass types both hold, g ≈ a holds for every body. This can be explained only if gravity acts like a river current, propelling all objects uniformly based on the local density gradient. The apple fell because its coupling with the stem softened and became weak. The galactic and star systems are like a “free vortex” arising out of conflicting currents, in which the tangential velocity ‘v’ increases as the center line is approached, so that the angular momentum ‘rv’ is constant. The orbits are not elliptical, but circles with a shifting center. Hence gravity cannot be quantized and gravitons will never be found.
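
As a minimal statement of the point about universal free fall (writing the two mass labels discussed below as m_i and m_g):

```latex
m_i\, a = m_g\, g \;\;\Longrightarrow\;\; a = \frac{m_g}{m_i}\, g ,
```

so all bodies fall with the same acceleration exactly when the ratio m_g/m_i is the same for every body - which is what the EP asserts and what the rest of this section questions.
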
The EP has been generally accepted without much questioning. Actually, GR assumes general covariance - invariance under diffeomorphisms - which is taken to imply the equivalence principle, which in turn implies that gravitational and inertial masses are equal. It is not a first principle of physics, but merely an ad hoc metaphysical concept designed to induce the uninitiated to imagine that gravity has magical non-local powers of infinite reach. The appeal to believe in such a miraculous form of gravity is very strong. Virtually everyone accepts the EP as an article of faith even though it has never been positively verified directly by either experimental or observational physics. All indirect experiments show that the equivalence or otherwise of gravitational and inertial masses is only a matter of description, as is shown below.

No one knows why there should be two or more mass terms. In principle there is no reason why mi = mg: why should the gravitational charge and the inertial mass be equal? The underlying gauge symmetries that describe the fundamental interactions require the fundamental fields to be massless. The mass generated by the Higgs mechanism of spontaneous symmetry breaking appears in the equation of motion of the field’s particle, i.e., as mi (in the classical limit). If we put the particle in a gravitational field, then it will “feel a force” given by the “gravitational charge” times the gravitational field. This appears as two masses, “mg” and “mi”, though there is only one mass term associated with each field.

The gravitational mass mg is said to produce and respond to gravitational fields. It is said to supply the mass factor in the inverse square law of gravitation: F = Gm₁m₂/r². The inertial mass mi is said to supply the mass factor in Newton’s 2nd Law: F = ma. If gravitation is proportional to g, say F = kg (because the weight of a particle depends on its gravitational mass, i.e. mg), and acceleration is given by a, then according to Newton’s law, ma = kg. Since according to GR g = a, combining both we get m = k. Here m is the so-called “inertial mass” and k is the “gravitational mass”. But the problem is the difference between the values of G (constant – though it might be changing: doi:10.1103/PhysRevLett.111.101102) and g (known to be variable).
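
The argument of the preceding paragraph in compact form, using its own symbols (k the gravitational mass, m the inertial mass):

```latex
F = k\,g \;\;(\text{weight}), \qquad F = m\,a \;\;(\text{second law})
\;\;\Rightarrow\;\; m\,a = k\,g ; \qquad g = a \;\Rightarrow\; m = k .
```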

Alternatively, the inertial mass measures the “inertia”, while the gravitational mass is the coupling strength to the gravitational field. The gravitational mass plays the same role as the electric charge for electromagnetic interactions, the color charge for strong interactions and the particle flavor for weak interactions. Inertial mass mi is the mass in Newton’s law F = mi × a. Gravitational mass mg is the coupling strength in Newton’s law of gravitation: Fg = (gm₁/r²) × mg. Thus, mi × a = Fg = (gm₁/r²) × mg. The quantity gm₁/r² is the “gravitational field” (say G) and mg is the “gravitational charge”, so that one can write Fg = mg × G, just like we write mi × a = q × E for the electric field. This has nothing to do with the Brout-Englert-Higgs mechanism.
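
Written out in the same pattern as the electrostatic case (the symbols here are standard ones rather than the post’s: g_field denotes the field of a source mass m₁ at distance r, q and E the charge and electric field):

```latex
m_i\, a \;=\; q\, E \;\;(\text{electric}), \qquad
m_i\, a \;=\; m_g\, g_{\text{field}}, \quad
g_{\text{field}} = \frac{G\, m_1}{r^{2}} \;\;(\text{gravitational}) .
```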

Some think that the EP implies that a test particle travels along a geodesic in the background space-time. The EP assumes that in all locally Lorentz (inertial) frames, the laws of Special Relativity (SR) must hold. From this, it is concluded that only the geometric structure of spacetime can define the paths of free bodies. If x is a particle’s world-line, parameterized by proper time, T is its tangent vector, D denotes covariant differentiation along the world-line, and R is the Ricci tensor, then D(T) = 0 and D(T) = R(T) are both tensorial, hence generally covariant. But only one of them describes a geodesic in a general curved space-time.
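
For reference, the first of the two tensorial equations mentioned, D(T) = 0, is the usual geodesic equation; written in coordinates (a standard form quoted for orientation, not taken from the post):

```latex
\frac{D T^{\mu}}{d\tau}
= \frac{d^{2} x^{\mu}}{d\tau^{2}}
+ \Gamma^{\mu}{}_{\alpha\beta}\,
\frac{dx^{\alpha}}{d\tau}\,\frac{dx^{\beta}}{d\tau}
= 0 .
```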

Gravity does not couple to the “gravitational mass” but rather to the Ricci Tensor, which works only if space-time is flat. The Ricci Tensor does not provide a full description in more than three dimensions. The Schwarzschild equations for black holes, where space-time is extremely curved, use the Riemann Tensor. Using the Riemann tensor instead of the Ricci tensor to calculate the energy-momentum tensor in 3+1 dimensions would not lead to any meaningful results, though in most cases the Riemann Tensor is needed before one can determine the Ricci Tensor. Thus, there is really no relation between “gravitational mass” and “inertial mass”, except in Newtonian physics. This is why photons (with zero inertial mass) are affected by gravity. Only manipulations of the Standard Model (SM) to include classical gravity (field theory in curved spacetime) lead to effects like Hawking radiation and the Unruh effect. This is where gravitation and the SM can hypothetically meet.
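
For reference, the standard textbook relations between the tensors named here (stated for orientation, not as part of the post’s argument): the Ricci tensor is a contraction of the Riemann tensor, and it is the Einstein tensor built from it that is equated with the energy-momentum tensor:

```latex
R_{\mu\nu} = R^{\lambda}{}_{\mu\lambda\nu}, \qquad
G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2}\, R\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu} .
```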

Gravitation and GR are not included in the SM. Hence the SM really cannot say anything about gravitational mass. If any theory conclusively unifies gravitation with the SM, it may be able to explain the equivalence of the inertial mass and the gravitational mass. The Higgs boson and the Higgs field are predictions of the SM, which incorporates SR. The Higgs mechanism is intended to explain the “rest mass” of fundamental particles such as quarks and electrons, which constitute only about 4% of the total theorized mass of the universe. This rest mass of fundamental particles comprises only a tiny fraction (~1%) of the “rest mass” of atoms. Most of the invariant mass of protons and neutrons comes from the kinetic and binding energy of quarks confined by the strong interaction mediated by gluons; it is not directly the result of the Higgs mechanism. However, since SR is part of the SM and since E = mc², the SM may be said to imply that rest mass from the Higgs mechanism and binding energy from the color force both contribute equivalently to the inertial rest mass of all particles.

It is believed that the Higgs field obeys the ordinary theory of GR, which would mean that the equivalence of inertial and gravitational masses holds for it. The mass-energy of the universe that Dark Energy is said to represent has been reduced from 72.8% to 68.3%. At the same time Dark Matter has been increased from 22.7% to 26.8%. This means the percentage of ordinary matter has gone up only from 4.5% to 4.9%. Yet the constituent particles of these mysterious fields most likely do not couple to the Higgs field at all.

EQUIVALENT OR DIFFERENT?

If we think of gravitational and inertial masses outside the context of a generally covariant theory, then there is still no evidence that they are equal. They may differ by an arbitrary factor, which may be absorbed into G or into a variable G. The equivalence of the inertial and gravitational masses is said to have been proved by the Eötvös experiment and many later experiments. An analysis by some scientists of the Eötvös experiments on the ratio of gravitational to kinetic mass of a few substances yields the result that this ratio for the hydrogen atom and for the binding energies is equal to that for the neutron with a precision of one part in at least 5×10⁵ and 10⁴ respectively. No conclusion can be drawn about these ratios for the proton and the electron separately.

The Eöt-Wash experiment at the University of Washington tried to measure the difference between these two masses indirectly by considering “charge/mass” ratios. It obtained a result that can be summarized as: |(mg/mi) − 1| ≤ 10⁻¹³.

The Lunar Laser Ranging (LLR) experiment has tested the equivalence principle for 35 years, with the Moon, the Earth and the Sun as the test masses, to determine whether, in accordance with the EP, these two celestial bodies are falling toward the Sun at the same rate, despite their different masses, compositions, and gravitational self-energies. Analyses of precision laser ranges to the Moon continue to provide increasingly stringent limits on any violation of the equivalence principle. Current LLR solutions give Δ(mg/mi)EP = (−1.0 ± 1.4) × 10⁻¹³ for any possible inequality in Δ(mg/mi) - the ratios of the gravitational and inertial masses for the Earth and Moon. This result, in combination with laboratory experiments on the weak EP, yields a strong equivalence principle (SEP) test of:
Δ(mg/mi)SEP = (−2.0 ± 2.0) × 10⁻¹³.

Also, the corresponding SEP violation parameter η is (4.4 ± 4.5) × 10⁻⁴, where η = 4β − γ − 3 and both β and γ are post-Newtonian parameters. Using the Cassini γ, the η result yields β − 1 = (1.2 ± 1.1) × 10⁻⁴. The geodetic precession test, expressed as a relative deviation from general relativity, is Kgp = −0.0019 ± 0.0064. The time variation of the gravitational constant comes out as Ġ/G = (4 ± 9) × 10⁻¹³ yr⁻¹. Consequently there is no evidence for local (1 AU) scale expansion of the solar system (DOI: 10.1103/PhysRevLett.93.261101). Apart from the technical problems in these indirect methods and the assumed values of various parameters - including the latest precisely measured value of G - which keep the uncertainty alive, the measured result that the Moon is moving about 3.8 centimeters higher in its orbit each year shows that these indirect results cannot be fully relied upon.
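
The quoted numbers fit together as follows. Writing η = 4β − γ − 3 as η = 4(β − 1) − (γ − 1), and using the fact that the Cassini measurement constrains |γ − 1| to the 10⁻⁵ level (small compared with η), one gets

```latex
\beta - 1 \;=\; \frac{\eta + (\gamma - 1)}{4}
\;\approx\; \frac{(4.4 \pm 4.5)\times 10^{-4}}{4}
\;\approx\; 1.1\times 10^{-4},
```

consistent with the quoted value β − 1 = (1.2 ± 1.1) × 10⁻⁴.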

The indirect methods to prove equivalence or otherwise are questionable. It has been accepted as given that mi = mg. This equivalence is faulty because the description F = ma is itself faulty. Once a force is applied to move the body along any axis and the body moves, the force ceases to act and the body moves at a constant velocity v’ due to inertia (assuming no other forces are present). The relation between the original velocity v (zero if the body is at rest) and v’ is the rate of change. To accelerate the body further, we need another force to be applied to the body. Without such a new force, the body cannot be accelerated. What is this new force and where does it come from? If any other force acts, then it has to be introduced into the equation. Where is that? Further, the new force will change the velocity v’ to v’’ – a new action. The “rate of change of the rate of change” means relating v to v’, v’’, etc. But why should we compare v’’ with v instead of v’?

When answering a question, one should first determine the framework. If we assume nothing then there can be no answer. However, if we take as given that we are going to formulate theories in terms of Lagrangians then there is essentially only one mass parameter that can appear, i.e., the coefficient of the quadratic term. Thus, whatever mass is there, it is only one mass. The Higgs field clearly modifies the on-shell condition in flat space and general relativity simply says that anyone whose frame is locally flat should reproduce the same result. Thus, the Higgs field appears to modify the gravitational mass. It may also modify the inertial mass by the same amount as can be verified by analyzing some scattering diagrams. However, knowing that we are working within the context of a Lagrangian theory, the fact that inertial and gravitational mass are equal is essentially a foregone conclusion. Are they really different? Let us examine.

RUSSELL’S PARADOX:

Now we will examine the EP in the light of Russell’s paradox of set theory. Russell’s paradox raises an interesting question: if S is the set of all sets which do not have themselves as a member, is S a member of itself? The general principle is that there cannot be a set without individual elements (for example, a library – a collection of books – cannot exist without individual books). There cannot be a set of one element, or rather a set of one element is superfluous (a book is not a library). A collection of different objects unrelated to each other would be just individual members, as it does not satisfy the condition of a set. Thus a collection of objects is either a set with its elements, or individual objects that are not the elements of a set.
Let us examine the property p(x): x ∉ x, which means the defining property p(x) of any element x is such that x does not belong to x. Nothing appears unusual about such a property; many sets have it. A library [p(x)] is a collection of books, but a book is not a library [x ∉ x]. Now, suppose this property defines the set R = {x : x ∉ x}. It must be possible to determine whether R ∈ R or R ∉ R. However, if R ∈ R, then the defining property of R implies that R ∉ R, which contradicts the supposition that R ∈ R. Similarly, the supposition R ∉ R confers on R the right to be an element of R, again leading to a contradiction. The only possible conclusion is that the property “x ∉ x” cannot define a set. This idea is also reflected in the Axiom of Separation of Zermelo-Fraenkel set theory, which postulates that “objects can only be composed of other objects”, or “objects shall not contain themselves”. In order to avoid this paradox, it has to be ensured that a set is not a member of itself. It is convenient to choose a “largest” set in any given context, called the universal set, and confine the study to the elements of that universal set only. This set may vary in different contexts, but in a given set-up the universal set should be so specified that no occasion ever arises to digress from it. Otherwise, there is every danger of colliding with paradoxes such as Russell’s paradox. And in the case of the EP, we do just that.
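
Stated formally (standard Zermelo-Fraenkel notation, added here only for reference), the Axiom of Separation lets a property φ carve a subset out of an already given set A, never out of “everything”:

```latex
\forall A\;\exists B\;\forall x\;\bigl(x \in B \iff (x \in A \wedge \varphi(x))\bigr).
```

With φ(x): x ∉ x one therefore only obtains, for each A, the harmless set B = {x ∈ A : x ∉ x}, never the unrestricted R = {x : x ∉ x} of the paradox.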

THE THOUGHT EXPERIMENTS OF GR AND EP:

There are similar paradoxes in the theory of SR, GR and the EP. Let us examine the EP. All objects fall in similar ways under the influence of gravity. Hence locally, it is said, one cannot tell the difference between an accelerated frame and an un-accelerated frame. But the two must be related in order to be compared as equivalent or not. Let us take the example of a person in an elevator. The person sits in an elevator that is falling down a shaft. It is assumed that locally (i.e., during any sufficiently small amount of time or over a sufficiently small space) the person in the elevator can make no distinction between being in the falling elevator and being stationary in completely empty space, where there is no gravity. This is a wrong assumption. We have experienced the effect of gravity in closed elevators. Even otherwise, unless the door opens and we find a different floor in front of us, we cannot relate the motion of the elevator to the un-accelerated structure of the building – hence there is no equivalence. The moment we relate to the structure beyond the elevator, we can know the relative motion of the elevator, because unlike the effect of inertia or gravitation, both of which induce motion, the building is stationary.

Inside a spaceship in deep space, objects behave like suspended particles in a fluid (un-accelerated) or like the asteroids in the asteroid belt (accelerated). Usually, they are relatively stationary (at fixed velocity) within the medium unless some other force acts upon them. This is because of the relative distribution of mass and energy inside the spaceship and its dimensional volume, which determines the average density at each point in the medium. Further, the average density of the local medium of space is factored into this calculation. If the person is in a spaceship where he can see the outside objects, then he can know the relative motions by comparing objects at different distances. In a train, if we look only at nearby trees, we may think the trees are moving, but when we compare them with distant objects, we realize the truth. If we cannot see the outside objects, then we will consider only our position with reference to the spaceship – stationary or floating within a frame. There is no equivalence because there is no other frame for comparison. The same principle works for the other examples.

It is said that a ray of light, which moves in a straight line, will appear curved to the occupants of the spaceship. The light ray from outside can be related to the spaceship only if we consider the bigger frame of reference containing both the space emitting the light and the spaceship. If the passengers could observe the scene outside the spaceship, they would notice this difference and know that the spaceship is moving. In that case, the reasons for the apparent curvature of the light path would be known. If we consider outside space as a separate frame of reference unrelated to the spaceship, the ray emitted by it cannot be considered inside the spaceship. The consideration will be restricted to those rays emanating from within the spaceship. In that case, the ray will move straight inside the spaceship. In either case, Einstein’s description is faulty. Thus, the foundation of GR - the EP - is a wrong description of reality. Hence all mathematical derivatives built upon such a wrong description are also wrong. There is only one type of mass.

The shifting of Mercury’s perihelion that is used to validate GR can be explained by (v/c)² radians per revolution, where v is not the escape velocity, but the velocity component induced by the Sun’s motion in the galaxy, which drags the planets along. Mercury being the smallest planet and the closest to the Sun, the effect on it is most pronounced. Before Einstein, Gerber had solved the problem differently. Eddington’s experiment on gravitational lensing has been questioned repeatedly. The effect is due to contrasting refractive indices of the media, like the time dilation seen in GPS, where light bends and travels a longer path (and also slows down) after entering the denser atmosphere of Earth. Every material that light can travel through has a refractive index, denoted by the letter n. The velocity of light in a vacuum is about 3.0 × 10⁸ m/s. The refractive index equals the ratio of the velocity of light in vacuum (c) to that in the medium (v), that is n = c/v. Light slows down when traveling through a medium; thus the refractive index of any medium will be greater than one. By definition, the refractive index of vacuum is 1. For air at STP it is 1.000277. For air at 0 °C and 1 atm it is 1.000293. This, and not time dilation, slows down light.
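
As a worked instance of n = c/v with the figure quoted above for air:

```latex
v_{\text{air}} = \frac{c}{n} = \frac{3.0\times 10^{8}\ \text{m/s}}{1.000293}
\approx 2.9991\times 10^{8}\ \text{m/s},
```

i.e. light travels roughly 0.03% slower in air than in vacuum.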

SPECIAL RELATIVITY REVISITED:

Now let us examine the Lorentz transformation. The description of the measured state at a given instant is physics, and the use of the magnitude of change at two or more designated instants to predict the outcome at other times is mathematics. Measurement is a comparison between similars, of which the constant one is called the unit. The factor v²/c², or (v/c)², is a ratio or comparison of two dynamical quantities in which c is the constant - hence a unit of measurement of a dynamic variable. It can be used to measure only the comparative dynamical velocities – not changes in mass or dimension, which are possible only through accumulation or reduction of similars. The two-dimensional factor (v/c)² represents the modification of the incoming light signal (the third dimension, like the e.m. radiation) as seen by an observer, without changing any physical characteristics of the observed. This is why we have three dimensions of ocular perception.
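
For context, the factor under discussion enters the Lorentz transformation through the standard expressions (quoted here for reference, not as an endorsement):

```latex
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
L = \frac{L_{0}}{\gamma}, \qquad \Delta t = \gamma\, \Delta t_{0} .
```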

The concept of measurement has undergone a big change over the last century. It all began with the problem of measuring the length of a moving rod. The two possibilities of measurement suggested by Einstein in his 1905 paper (published as Zur Elektrodynamik bewegter Körper in Annalen der Physik 17:891, 1905) were as follows:

(a) “The observer moves together with the given measuring-rod and the rod to be measured, and measures the length of the rod directly by superposing the measuring-rod, in just the same way as if all three were at rest”, or
(b) “By means of stationary clocks set up in the stationary system and synchronizing with a clock in the moving frame, the observer ascertains at what points of the stationary system the two ends of the rod to be measured are located at a definite time. The distance between these two points, measured by the measuring-rod already employed, which in this case is at rest, is the length of the rod”
The method described at (b) is misleading. We can do this only by setting up a measuring device to record the emissions from both ends of the rod at the designated time (which is the same as taking a photograph of the moving rod) and then measuring the distance between the two points on the recording device in units of the velocity of light or any other unit. But the picture will not give a correct reading for two reasons:
·  If the length of the rod is small or velocity is small, then length contraction will not be perceptible according to the formula given by Einstein.
·  If the length of the rod is big or velocity is comparable to that of light, then light from different points of the rod will take different times to reach the recording device and the picture we get will be distorted due to Doppler shift of different points. Thus, there is only one way of measuring the length of the rod as in (a).

Here we are reminded of an anecdote relating to a famous scientist who once directed two of his students to precisely measure the wave-length of sodium light. The students returned with two different results – one resembling the normally accepted value and the other a different value. Upon enquiry, the latter replied that he had also come up with the same result as the accepted value, but since everything including the Earth and the scale on it is moving, for a precision measurement he had applied length contraction to the scale, treating the star Betelgeuse as a reference point. This changed the result. The scientist told him to treat the scale and the object to be measured as moving with the same velocity and recalculate the wave-length of light again without any reference to Betelgeuse. After some time, both students returned to say that the wave-length of sodium light is infinite. To the surprised scientist, they explained that since the scale is moving with light, its length would shrink to zero. Hence it would require an infinite number of scales to measure the wave-length of sodium light!

Some scientists try to overcome this difficulty by pointing out that length contraction occurs only in the direction of motion. They claim that if we hold the rod in a transverse direction to the direction of motion, then there will be no length contraction. But how can the length be measured by holding the rod in a transverse direction! If the light path is also transverse to the direction of motion, then the terms c+v and c-v vanish from the equation making the entire theory redundant. If the observer moves together with the given measuring-rod and the rod to be measured, and measures the length of the rod directly by superposing the measuring-rod while moving with it, he will not find any difference because the length contraction, if real, will be in the same proportion for both.

The fallacy in Einstein’s description is that if one treats the situation “as if all three were at rest”, one cannot measure dynamic variables such as velocity or momentum, as the object will be relatively at rest, which means zero relative velocity. Either Einstein missed this point or he was clever enough to camouflage it when he said: “Now to the origin of one of the two systems (k) let a constant velocity v be imparted in the direction of the increasing x of the other stationary system (K), and let this velocity be communicated to the axes of the co-ordinates, the relevant measuring-rod, and the clocks”. But is this the velocity of k as measured from k, or is it the velocity as measured from K? This is crucial because K and k each have their own clocks and measuring rods, which are not treated as equivalent by Einstein. Therefore, according to his theory, the velocity will be measured by each differently. In fact, they will measure the velocity of k differently. But Einstein does not assign the velocity specifically to either system. His spinning-disk and other examples in SR and GR fail for the same reason.

Before we discuss time orderings or whether time is Newtonian or Relativistic, let us define time precisely. In his 1905 paper, Einstein says: “It might appear possible to overcome all the difficulties attending the definition of ‘time’ by substituting ‘the position of the small hand of my watch’ for ‘time’. And in fact such a definition is satisfactory when we are concerned with defining a time exclusively for the place where the watch is located; but it is no longer satisfactory when we have to connect in time series of events occurring at different places, or - what comes to the same thing - to evaluate the times of events occurring at places remote from the watch”.

It is not a precise or scientific definition of time, but a description of the readings of a clock, which is subject to mechanical error in its functioning. Space, time and coordinates, like matter, have no physical existence. They arise out of orderings or sequence – our notions of priority and posterity. When the orderings are of objects, the interval between them is called space. When they are of transformations in objects (events), the intervals are called time. When we describe the specific nature of the orderings of space (straight line, geodesic, angular, etc.), it is called a coordinate system. Since measurement is a comparison between similars (Einstein uses the fixed distance light travels per second to measure distance), we use a similar, but easily intelligible and uniformly transforming, natural sequence such as the day or the year or their subdivisions as the unit of time. If a clock stops or functions erratically, time does not stop or become erratic. “Now” is a fleeting interface between two events. Hence, while at the universal level it is the minimum perceivable interval between two events, in specific cases it can have longer durations as present continuous or continued existence for that form. For example, all life cycles that are created undergo six stages of evolution: transformation from quantum state to macro state (from being to becoming), linear growth due to accumulation of similar particles, non-linear growth or transformation due to accumulation of dissimilar particles, and transmutation leading to the reverse processes of decomposition and disintegration. The total duration is a life cycle and is continued existence for those individuals or objects. Comparison between two different natural life cycles is the time dilation between them. Hence Einstein’s definition of time is scientifically wrong. His definition of synchronization is also wrong, as shown below.

 Einstein uses a privileged frame of reference to define synchronization between clocks and then denies the existence of any privileged frame of reference – a universal “now” - for time. We quote from his 1905 paper: 

We have so far defined only an ‘A time’ and a ‘B time’. We have not defined a common ‘time’ for A and B, for the latter cannot be defined at all unless we establish by definition that the ‘time’ required by light to travel from A to B equals the ‘time’ it requires to travel from B to A. Let a ray of light start at the ‘A time’ tA from A towards B, let it at the ‘B time’ tB be reflected at B in the direction of A, and arrive again at A at the ‘A time’ t’A. In accordance with definition the two clocks synchronize if: tB − tA = t’A − tB.
We assume that this definition of synchronism is free from contradictions, and possible for any number of points; and that the following relations are universally valid:—
  1. If the clock at B synchronizes with the clock at A, the clock at A synchronizes with the clock at B.
  2. If the clock at A synchronizes with the clock at B and also with the clock at C, the clocks at B and C also synchronize with each other.”
The concept of relativity is valid only between two objects. The introduction of a third object brings in the concept of a privileged frame of reference, and all the equations of relativity fall. Yet Einstein does precisely that while claiming the very opposite. In the above description, the clock at A is treated as a privileged frame of reference for proving the synchronization of the clocks at B and C. Yet he claims it is relative! Thus his conclusion - that there are many quite different but equally valid ways of assigning times to events, or that different observers moving at constant velocity relative to one another require different notions of time because their clocks run differently - is wrong. Paradoxically, standard formulations of quantum mechanics use the universal “now” frequently.
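
Written out, the synchronization condition quoted above simply sets the B reading to the midpoint of the A-clock round trip:

```latex
t_B - t_A = t'_A - t_B \;\;\Longleftrightarrow\;\; t_B = t_A + \tfrac{1}{2}\left(t'_A - t_A\right).
```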

SPEED OF LIGHT:

The constant speed of light, which is one of the foundations of SR, only measures equal distances in equal time units in a medium of uniform density. Using this, or a multiple or a fraction of this, as the unit, the fixed (uniformly accelerating) distance between A and B can be measured by way of length comparison in any uniform medium. But this will not be time measurement, as A and B are not time-variant events or states, but time-invariant positions. Of course we have the choice of taking the interval between the events when light leaves A and reaches B as the unit and comparing other intervals with it to get the time measured. But light travels at different velocities in different media, and the interval for it to cross the same distance in various media will not be the same. The GPS proof has already been discussed. The same is true for particle accelerator experiments, which are contained in high-flux magnetic tubes. The speedometer reading and the actual kilometer readings in cars do not match; it is always slower due to friction. This puts severe restrictions on Einstein’s proposition, which cannot be used universally. For example, if there is a very hot or very cold cloud of gas between points A and B, not equidistant from both, the results would be different, as is evident from absorption and emission spectra. Some of the wave-lengths are absorbed by the gas cloud. If the cloud is not at the center, this will happen at different intervals for motion in the two directions.

After his SR paper of 1905, Einstein frequently held that the speed of light is not constant. In his 1911 paper “ON THE INFLUENCE OF GRAVITATION ON THE PROPAGATION OF LIGHT”, he says: “For measuring time at a place which, relatively to the origin of the co-ordinates, has the gravitation potential Φ, we must employ a clock which – when removed to the origin of co-ordinates – goes (1 + Φ/c²) times more slowly than the clock used for measuring time at the origin of co-ordinates. If we call the velocity of light at the origin of co-ordinates c0, then the velocity of light c at a place with the gravitation potential Φ will be given by the relation: c = c0 (1 + Φ/c²)……………(3).

The principle of the constancy of the velocity of light holds good according to this theory in a different form from that which usually underlies the ordinary theory of relativity (italics ours).

4. Bending of Light-Rays in the Gravitational Field
FROM the proposition which has just been proved, that the velocity of light in the gravitational field is a function of the place, we may easily infer, by means of Huyghens's principle, that light-rays propagated across a gravitational field undergo deflexion”.

Interestingly, it was not the only occasion when Einstein maintained that the velocity of light is not constant. In 1912, he wrote: “On the other hand I am of the view that the principle of the constancy of the velocity of light can be maintained only insofar as one restricts oneself to spatio-temporal regions of constant gravitational potential.” He repeated this in 1913 when he said: “I arrived at the result that the velocity of light is not to be regarded as independent of the gravitational potential. Thus the principle of the constancy of the velocity of light is incompatible with the equivalence hypothesis.” In 1915, he wrote in Die Relativitätstheorie on page 259: “the writer of these lines is of the opinion that the theory of relativity is still in need of generalization, in the sense that the principle of the constancy of the velocity of light is to be abandoned.”
                                                                                                                            
He repeated it again in late 1915, on page 150 of “The Foundation of the General Theory of Relativity”, where he says “the principle of the constancy of the velocity of light in vacuo must be modified”. He really spells it out in section 22 of the 1916 book “Relativity: The Special and General Theory”, where he wrote: “In the second place our result shows that, according to the general theory of relativity, the law of the constancy of the velocity of light in vacuo, which constitutes one of the two fundamental assumptions in the special theory of relativity and to which we have already frequently referred, cannot claim any unlimited validity. A curvature of rays of light can only take place when the velocity of propagation of light varies with position. Now we might think that as a consequence of this, the special theory of relativity and with it the whole theory of relativity would be laid in the dust. But in reality this is not the case. We can only conclude that the special theory of relativity cannot claim an unlimited domain of validity; its results hold only so long as we are able to disregard the influences of gravitational fields on the phenomena (e.g. of light).” Thus, Einstein himself has contradicted one of the fundamental postulates that went into developing SR, without abandoning the findings based on such wrong postulates.

Einstein has used the equations x² + y² + z² − c²t² = 0 and ξ² + η² + ζ² − c²τ² = 0 to describe the two spheres that the observers see of the evolution of the same light pulse. The above equation of the sphere is mathematically wrong. Since x² + y² = 0 describes a circle, x² + y² − c² = 0 describes a sphere with the z-axis zero, and x² + y² − c²t² = 0 describes a circle that evolves in time. Multiplying and not adding another factor z² will transform a two-dimensional circle (representing area) into a three-dimensional sphere (volume). Both the equations mentioned by Einstein can at best describe two spheres with origin at (0,0,0) and the points (x, y, z) and (ξ, η, ζ) on the circumference of the respective spheres. Since the second person is moving away from the origin, the second equation is not relevant in his case (he is there). Assuming he sees the other sphere, he should know its origin (because he has already seen it, otherwise he will not know that it is the same light pulse; in that case, there is no way to relate both pulses) and its present location. In other words, he will measure the same radius as the other person, implying c²t² = c²τ², or t = τ.
Again, if x² + y² + z² − c²t² = x’² + y’² + z’² − c²τ², then t ≠ τ.
This creates a contradiction, which invalidates his mathematics.

Since space is not empty and the local density of space can vary, light emitted from a source moves at constant velocity due to inertia irrespective of the motion of the body, but such velocity is not a universal constant, as it depends on the local density of space. This is proved by the bending of light while passing near big stars. It is not due to relativistic effects, but due to refraction. We have seen how a glass rod immersed in water appears to bend because of the relative density of water and air. Similarly, since most of the mass near a star is concentrated in one area, the local density of space near that area is higher than that of far-off places. This variation causes different density gradients that bend the light rays near the star.

Relativity is an operational concept, but not an existential concept. The equations apply to data and not to particles. When we approach a mountain from a distance, its volume appears to increase. The visual perception of volume (scaling up of the angle of incoming radiation) changes at a particular rate. But there is no such impact on the mountain. It exists as it was. The same principle applies to the perception of objects with high velocities. The changing volume is perceived at different times depending upon our relative velocity. If we move fast, it appears earlier. If we move slowly, it appears later. Our differential perception is related to changing angles of radiation and not the changing states of the object. It does not apply to locality. Thus, the Galilean relativity is real and the Lorentz transformation is apparent to the observer only. Einstein’s assertion that the clash between Lorentz invariance and the Galilei invariance of Newtonian mechanics was inconsistent with the physical principle of relativity is misplaced and wrong.

CONCLUSION:

Thus, it is clear that simultaneity - the notion of “now” - is not relative, the universal clock is not a fiction, and time is not a proxy for the movement and change of objects in the universe – it is the rate of change in objects. It is not true that two events are truly simultaneous only if they are causally related – unless we assign that cause to the application of energy. However, since the application of energy at one position on one object cannot generate an action (event) at another position involving another object, the two events cannot be causally related.

Einstein had wrongly assigned several length and time variables in SR, giving them to the wrong coordinate systems or to no specific coordinate systems. He skipped an entire coordinate system, achieving two degrees of relativity when he thought he had only achieved one. Because his x and t transformations were compromised, his velocity transformations were also compromised. He carried this error into the mass transformations, which infected them as well. This problem then infected the tensor calculus and GR. This explains the various anomalies, variations and so-called violations within Relativity. Since Einstein’s field equations are not correct, Schwarzschild’s solution of 1916 is not correct. Israel’s non-rotating solution is not correct. Kerr’s rotating solution is not correct. And the solutions of Penrose, Wheeler, Hawking, Carter, and Robinson are not correct. The three Friedmann models of the Universe and the equation-of-state parameter are not correct. The so-called expansion of the Universe only at galactic scales and not at lesser scales is actually temporary and will be reversed in future, as the galactic clusters are rotating around a common center like the planets around the Sun. The concepts of dark matter and dark energy are not correct, because energy is perceived only through its interactions; hence it cannot be dark. The smoothness and persistence point to a background structure, which is what it is.

“Lorentz Invariance” is the symmetry of SR. General covariance, which comes from SR, is limited to space-time coordinate systems related to each other by uniform relative motions only - “Inertial frames”. It extends Lorentz invariance and treats it as a property of GR. The EP deals with the equivalence of gravitational and inertial mass. We have shown that both covariance and the EP are wrong descriptions of reality. Thus, we have solved one paradox. In the next paper, we will discuss the macro representation of entanglement and the mathematics that leads to the singularity and the event horizon. We will also explain gravity, and discuss misconceptions about dark matter and dark energy to show their true nature.



Thursday, November 28, 2013

KNOWLEDGE DRIVEN TECHNOLOGY & MANAGEMENT




THE PROBLEM:

Technology is the application of knowledge for practical purposes. Hence it should be guided by theory. But the technological advancements in various sectors have led to data-driven discoveries, in the belief that if enough data is gathered, one can achieve a “God’s eye view”. Data is not synonymous with knowledge. By combining lots of data, we generate something big and different, but unless we have knowledge about the mixing procedure needed to generate the desired effect, it may create a Frankenstein’s monster - a tale of unintended consequences. Already physics is struggling with misguided concepts like extra dimensions, which are yet to be discovered even after a century. The weirdness of the concepts of superposition and entanglement is increasingly being questioned with macro examples. The LHC experiment has finally ruled out super-symmetry. Demand for downgrading the status of Heisenberg’s uncertainty postulate is gaining momentum. Yet fantasies like dark energy or vacuum energy, where theory and observation differ by a factor of 10⁵⁷ to 10¹²⁰, get the Nobel Prize! Theoreticians are vanishing. Technologists are being called scientists. The increase of trial-and-error-based technology that lacks the benefit of foresight is leading to more nonlinearly non-green technology, necessitating Minamata Mercury Convention-type conferences (for reducing mercury poisoning) to prescribe do’s and don’ts for some industries. Technology has become the biggest polluter.

With increasing broadband access, wireless connectivity and content, dependence on gadgets like smart phones, tablets, etc., is growing. Apart from its impact on vegetation (browning), birds, and the ecosystem in general, the impact this human–machine bond will have on our lives is yet to be fully assessed. The current trend is to create a product out of an idea (not a necessity), for which technology is invented later. The necessary recommendation algorithms are compartmentalized in different branches of science. For example, to find the accelerating expansion of the universe and define the nature of dark energy, researchers used baryon acoustic oscillations as the yardstick - a pattern created from sound waves that rippled through the universe when it was young and hot and became imprinted in the distribution of galaxies as it cooled. In sync with the idea, Google+ and Apple’s Siri came up with learning algorithms that respond to one’s voice. Apple’s new iPhone fingerprint sensor is directed at the machine knowing our bodies. Such devices start by recognizing one’s thumb or voice; then others’ voices, the way they move, etc. If such devices put this information together with information about one’s location and engagement calendar, they will become an integral part of our lives. Social media is changing the kinship diagram through emotionless physical relationships. Network administrators and algorithms regulate ‘date’ vetting. Human beings are increasingly submitting themselves to machines and becoming mechanized.

As the available resources get depleted and the demand for more intelligent solutions and services using nano-technology increases, there is pressure for more re-generative and ‘intelligent’ – GREEN and SMART – technologies, emphasizing the need for knowledge collaboration in engineering. Green technology encompasses a continuously evolving group of methods and materials, from techniques for generating non-exhausting energy sources like solar, wind or tidal power to non-toxic clean products (judged by their production process or supply chain) that are environment-friendly and biodegradable. It involves energy efficiency, recycling, safety and health concerns, renewable resources, etc. Yet it has to fight the ever-increasing greed for easy money. For example, as world gold prices surge, small-scale ‘artisanal’ gold mining has become the world’s leading source of mercury pollution. Miners use mercury to separate flecks of gold from rocks, sediment and slurry and then dump or burn the excess. This exposes groundwater and air to mercury poisoning. But motivating the miners to adopt green alternatives is nearly impossible. Recycling without knowledge of its adverse side-effects is causing more pollution worldwide. But the greed for a higher Return on Investment is eulogized as prosperity and advancement.

“SMART” stands for “Self-Monitoring, Analysis and Reporting Technology”. A SMART device gets input from somewhere, applies some ‘intelligence’ or ‘brainpower’ to it, and the result is innovative. For example, regular glasses used in spectacles are shaped in such a way as to bend light for correct vision - to make the world appear sharper and clearer. Photochromic lenses contain molecules that react to certain kinds of light and change tint in sunshine. Though this seems intelligent, these are just physical reactions. By adding a camera and a computer to a pair of glasses, many innovations can be made. A video camera at the corner of the spectacles, feeding into a tiny pocket computer that lights up parts of an LED array in the lenses, can enable the wearer to see objects in greater detail. It could include optical character recognition for reading newspaper headlines. The glasses use cameras and some software to interpret the data and put zoomed-in images on a screen in front of the wearer’s eyes. This is only one example.

Artificial Intelligence (AI) is the current buzzword. AI is of two types, called narrow (ANI) and general (AGI) artificial intelligence. ANI is intelligence at one narrow task, like playing a chess game or searching the web, and is increasingly ubiquitous in our world. ANI may outsmart humans only in the area in which it is specialized - hence it is not a big transformative concern. But AGI, which is potentially intelligent across a broad range of domains, is a cause for concern. We mix the different sensory inputs with our intelligence and apply our freewill to determine the net response, but an AGI would probably think or mix differently, in unexpected ways. If we command a super-intelligent robot to make us happy, it might cram electrodes into the pleasure centers of our brains. If we command it to win at chess, it may calculate all possible moves endlessly. This absurd logic holds because AI lacks our instincts and our notions of absurdity and of justification in mixing inputs. It does what we program it to do, but without freewill. Once the embryo starts breathing, it breathes perpetually till death, but the child also has limited freewill and uses its instincts. After being switched on, computers obey commands, but have no freewill or instincts. Since these cannot be preprogrammed, AI can never be conscious.

Knowledge is not data, but the ‘awareness’ of the exposure/result of measurement associated with any object, energy or interaction, stored in memory as an invariant concept that can be retrieved even in the absence of fresh inputs or impulses. It describes through a language the defining characteristics of some previously known thing – physical properties and chemical interactions - by giving it a name that remains the same as a concept at all times – thus immune to spatiotemporal variations - till it is modified by fresh inputs. The variations of the object, energy or interaction under different specific circumstances, and the predetermined result thereof, form part of knowledge. In a mathematical format, it depicts the right-hand side of each equation or inequality, representing determinism. Once the parameters represented by the left-hand side are chosen and the special conditions represented by the equality sign are met, the right-hand side becomes deterministic. In ancient times, this was technically covered under the term Aanwikshiki, which literally means describable facts about the invariant nature of everything.

Engineering and Management, which deal with the efficient use of objects or persons, are related to the left-hand side of an equation – free-will – which presupposes knowledge of the deterministic behavior of objects or humans that can be chosen or effectively directed to create something, or to function in a desired manner, in a maximally economic and regenerative way. This was called Trayi – literally the three aspects of behavior of mass, energy and radiation in their three states of solid, fluid and plasma in all combinations – physical and chemical properties (protestation, loyalty and expectation for humans). The responsive mechanism was called Danda Neeti – principles of inducement through reward and punishment (essentially material addition or reduction). The regenerative mechanism was called Vaartaa – problem solving. These four basic tenets, equally valid for both technology and management, are also immutable - invariant in time, space and culture, leading to deterministic consequences. Lack of knowledge of the deterministic behavior needed to guide the choice of the freewill components has led engineering and management astray. The fast-changing technology and management principles point to their inherent deficiencies, which need immediate remedy. Knowledge guidance is the only way out.

There is a pressing need for knowledge to take the lead for greener technology, keeping in view sustainability, cradle-to-cradle design, source reduction, viability, innovation, etc. Hence it is necessary that pure science guide technology in the right direction in ALL sectors. To date, all efforts in this regard have been sector-specific - energy, chemicals, medical, real estate, hardware, etc. As a result, green and smart technology has been reduced to transferring problems in a discrete manner – solving a problem in one area (for example, by recycling something) while ignoring the effect of the new process or its by-products on other areas. It is high time to discuss a global strategy to meet the new challenges.

THE PARADIGM SHIFT:

Earlier, some individual scientists with their towering genius developed a postulate and took the lead in universities or research institutions to develop suitable experimental setups to test it. These days, individual scientists have to network and collaborate across state and national boundaries to take advantage of state and international funding. They generate incredibly massive amounts of data without any postulate. Communication technology has made the efforts of individual researchers coalesce into a seamless whole, blurring the identities of who contributed what. The 2013 Nobel Prize in physics was the result of many ideas that were floated around in the early 1960s by at least six scientists. During that time lots of new particles were being discovered, and it was a fair bet that some particle would be discovered in the vacant 124-126 GeV range; hence it was proposed as a gamble. The model tested at the LHC was not that of Higgs and Englert, who got the prize, but one for which Weinberg and Salam had already won a Nobel! The general mechanism was first postulated by Philip Anderson a couple of years before Higgs and Englert. Already there are protests against the decision.

As individual efforts became obscured and team efforts took over, more and more data are accumulated, making their storage and analysis a big problem. When subatomic particles are smashed together at the LHC, they create showers of both known and unknown particles whose signatures are recorded by four detectors. The LHC captures 5 trillion bits of data (more information than all of the world’s libraries combined) every second. After the application of filtering algorithms, more than 99 percent of those data are discarded, but the four detectors still produce 25 petabytes (25×10^15 bytes) of data per year that must be stored and analyzed. These are processed on a vast computing grid of 160 data centers around the world, a distributed network capable of transferring as much as 10 gigabytes per second at peak performance. Are these data really necessary? Can we be sure that useful data are not being discarded while filtering, particularly when we do not know what we are searching for, or are searching selectively? Is there no other way to formulate theory? Is the outcome cost-effective?
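
As a rough consistency check of the figures quoted above (a minimal back-of-the-envelope sketch in Python; the machine's actual duty cycle and calendar are ignored), the 25 PB retained per year is indeed a tiny fraction of the raw capture rate:

    RAW_BITS_PER_SECOND = 5e12          # "5 trillion bits ... every second"
    SECONDS_PER_YEAR = 3.156e7
    RETAINED_BYTES_PER_YEAR = 25e15     # "25 petabytes ... per year"

    raw_bytes_per_year = RAW_BITS_PER_SECOND / 8 * SECONDS_PER_YEAR
    fraction_kept = RETAINED_BYTES_PER_YEAR / raw_bytes_per_year
    print(f"raw data per year : {raw_bytes_per_year:.2e} bytes")
    print(f"fraction retained : {fraction_kept:.2%}")   # about 0.1%, i.e. >99% discarded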

The unstructured streams of digital potpourri are no longer stored in a single computer – they are distributed across multiple computers in large data centers, or even in the “cloud”. This demands rigorous scientific methodologies and different data-processing requirements: not only flexible databases, massive computing power and sophisticated algorithms, but also a holistic (not reductionist) approach to extract any meaningful information. One possible solution to this dilemma is to embrace a new paradigm: in addition to distributed storage, why not analyze the data in a distributed manner as well? Each unit (or node) in a network of computers performs a small piece of the computation, and the partial solutions are then integrated to find the full result. For example, at the LHC, one complete copy of the raw data (after filtering) is stored at CERN in Switzerland. A second copy is divided into batches that are distributed to data centers around the world. Each center analyzes a chunk of data and transmits the results to regional computers before moving on to the next batch. But this lacks the holistic approach. The reports of the six blind men about the body parts of the elephant are individually correct; yet unless someone has seen an elephant, no sense can be made of them.
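
A minimal sketch of this “split, analyze locally, integrate” pattern, written in plain Python (the node pool is simulated here with a process pool, and the helper name partial_histogram and the sample numbers are hypothetical):

    from multiprocessing import Pool
    from collections import Counter

    def partial_histogram(batch):
        # Each "node" summarizes only its own batch of (simulated) event energies.
        return Counter(round(e) for e in batch)

    if __name__ == "__main__":
        data = [0.5, 1.2, 1.4, 2.9, 3.1, 0.7, 1.1, 2.8]      # stand-in for raw events
        batches = [data[i::4] for i in range(4)]             # split across 4 nodes
        with Pool(4) as pool:
            partials = pool.map(partial_histogram, batches)  # analyze in parallel
        total = sum(partials, Counter())                     # integrate the partial results
        print(total)

The integration step is where the “holistic” view has to be supplied by the researcher; the nodes themselves only ever see their own chunk, exactly like the six blind men.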

THE BIG-DATA CHALLENGE:

The demand for ever-faster processors, while important, is no longer the primary focus. Raw processing speed by itself no longer decides the outcome. The challenge is not how to solve problems with a single, ultra-fast processor, but how to solve them with a large number of slower processors. Yet many problems in big data cannot be adequately addressed by adding more parallel processing. These problems are sequential, where each step depends on the outcome of the preceding step. Sometimes the work can be split up among a bunch of processors, but that is not always easy. The time taken to complete one task is not always inversely proportional to the number of persons working on it. Often the software is not written to take full advantage of the extra processors. The failure of just-in-time and super-efficiency in management has led to a world-wide economic crisis. We may be approaching a similar crisis in the scientific and technological field. The Y2K problem was a precursor to what could happen.
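
The limit on such speed-ups is usually expressed through Amdahl’s law, a standard result not named in the text above: if a fraction s of the work is inherently sequential, the speed-up on p processors can never exceed 1/(s + (1-s)/p). A minimal sketch:

    def amdahl_speedup(sequential_fraction, processors):
        # Amdahl's law: the sequential part cannot be parallelized away.
        return 1.0 / (sequential_fraction + (1.0 - sequential_fraction) / processors)

    # Even with 1000 processors, a job that is 10% sequential speeds up less than 10x.
    for p in (2, 10, 100, 1000):
        print(p, round(amdahl_speedup(0.10, p), 2))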

Addressing the storage-capacity challenges of big data involves building more memory and managing the fast movement of data. Identifying correlated dimensions is exponentially harder than looking for a needle in a haystack. When one does not know which correlations one is looking for, one must compare each of the ‘n’ pieces of data with every other piece, which takes on the order of n-squared operations. The amount of data is roughly doubling every year, a pace often compared to Moore’s Law. If our algorithm scales as n-squared, each doubling of the data means four times as much computing, and after two doublings we need sixteen times as much. But next year our computers will only be about twice as fast, and in two years only about four times as fast. Thus we are exponentially falling behind in our ability to store and analyze the collected data. There are non-technical problems as well. The analytical tools of the future require not only the right mix of physics, chemistry, biology, mathematics, statistics, computer science, etc., but also team leaders who take a holistic approach, free of reductionism. In the big-data scenario, mathematicians and statisticians should normally become the intellectual leaders. But mathematics is focused on abstract work and does not encourage people to develop leadership skills; it tends to rank people linearly into an individual pecking order, introducing bias. Engineers are used to working in teams focused on solving problems, but they cannot visualize new theories.
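
A minimal sketch of the widening gap, under the stated assumptions that the data volume doubles every year, the required work grows as n squared, and hardware speed also doubles every year:

    n0 = 1.0
    for year in range(1, 6):
        data = n0 * 2 ** year     # data doubles every year
        work = data ** 2          # an n-squared pairwise-comparison algorithm
        hardware = 2 ** year      # machines roughly double in speed per year
        print(year, "work grows", int(work), "x; hardware grows", hardware, "x")

After five years the work has grown about a thousand-fold while the hardware has grown only thirty-two-fold, which is the “exponentially falling behind” described above.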

Although smaller studies via distributed processing provide depth and detail at a local level, they are also limited to a specific set of queries and reflect the particular methodology of the investigator, which makes the results harder to reproduce or to reconcile with broader models. The big impacts on the ecosystem, including the effects of global warming, cannot be studied with short-term, smaller studies. In the big-data age of distributed computing, the most important decision to be taken is how to conduct distributed science across a network of researchers: not merely “interdisciplinary research”, but “trans-disciplinary research”, free from the reductionist approach. Machines are not going to organize data-science research; researchers have to turn petabytes of data into scientific knowledge. But who is leading data science right now? There is a leadership crisis. There is a conceptual crisis.

Today’s big data is noisy, unstructured and dynamic rather than static. It may also be corrupted or incomplete. Much important data is not shared till its theoretical, economic or intellectual-property aspects are fully exploited. Sometimes data is fudged. Ideally, data should consist of vectors: strings of numbers and coordinates. But now researchers need new mathematical tools, such as text recognition, or data compression by selecting key words and their synonyms, in order to glean useful information from the data-sets and curate them intelligently. For this we either need a more sophisticated way to translate the data into vectors, or a more generalized way of analyzing it. Several promising mathematical tools are being developed to handle this new world of big, multimodal data.
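
A minimal sketch of one elementary way (not necessarily the tools alluded to above) of translating free text into vectors via key-word counts; the vocabulary and the sample sentence are made up:

    from collections import Counter

    VOCABULARY = ["black", "hole", "firewall", "data", "entropy"]   # chosen key words

    def to_vector(text):
        # Count how often each key word occurs; the counts form the vector.
        words = Counter(text.lower().split())
        return [words[w] for w in VOCABULARY]

    print(to_vector("Black hole data and more data"))   # -> [1, 1, 0, 2, 0]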

THE NEW APPROACH:

One suggested solution is based on consensus algorithms, a form of mathematical optimization. Algorithms trained on past data are useful for creating an effective SPAM filter on a single computer, with all the data in one place. But when the problem becomes too large for a single computer, a consensus optimization approach works better. In this process, the data-set is chopped into bits and distributed across several “agents”, each of which analyzes its bit and produces a model based on the data it has processed - something similar in concept to Amazon’s Mechanical Turk crowd-sourcing methodology. The program learns from the feedback, aggregating the individual responses into its working model to make better predictions in the future. The process is iterative, creating a feedback loop. Although each agent’s model can be different, all the models must agree in the end - hence “consensus algorithms”. The initial consensus is shared with all agents, which update their models and reach a second consensus, and so on. The process repeats until all the agents agree.
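
A minimal sketch of this iterative agreement loop (a toy version: each agent’s “model” is just the mean of its own chunk, and real consensus optimization such as ADMM adds local objectives and penalty terms that are omitted here):

    # Each "agent" fits a one-parameter model from its own chunk,
    # then all agents repeatedly blend their estimate with the shared consensus.
    chunks = [[1.0, 2.0, 3.0], [10.0, 11.0], [4.0, 6.0, 5.0]]
    models = [sum(c) / len(c) for c in chunks]               # each agent's local model

    for _ in range(50):                                      # iterate until agreement
        consensus = sum(models) / len(models)                # share the current consensus
        models = [0.5 * m + 0.5 * consensus for m in models] # agents update toward it

    # Note: this toy converges to the average of the local models, not the exact
    # global mean when chunks differ in size.
    print(round(consensus, 3), [round(m, 3) for m in models])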

Another prospect is quantum computing, which is fundamentally different from parallel processing. A classical computer stores information as bits that can be either 0s or 1s. A quantum computer could exploit a strange property called superposition of states. If we flip a regular coin, it will land on heads or tails; there is zero probability that it will be both. But a quantum coin is said to exist in an indeterminate state of both heads and tails until we look to see the outcome, whereupon it collapses and assumes a fixed value. This is a wrong description of reality. The result of a measurement always relates to a time t, and is frozen for use at later times t1, t2, etc., when the object has evolved further and the result of the measurement no longer depicts its true state. Thus, we can only know the value that existed at the moment of observation or measurement. Scientists impose their ignorance of the true state of the system at any moment on the object or the system, and describe the combined unknown states together as a superposition of all possible states. It is physically unachievable.
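
For reference, the textbook bookkeeping that the above paragraph criticizes can be sketched in a few lines (a minimal simulation of one “quantum coin” using numpy; it reproduces the standard description, not an endorsement of it):

    import numpy as np

    state = np.array([1, 1]) / np.sqrt(2)     # equal superposition of |heads> and |tails>
    probabilities = np.abs(state) ** 2        # Born rule: 50% / 50%
    outcome = np.random.choice(["heads", "tails"], p=probabilities)
    # After "looking", the description collapses to the observed value:
    state = np.array([1, 0]) if outcome == "heads" else np.array([0, 1])
    print(outcome, state)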

Quantum computers, if built, will be best suited to simulating quantum-mechanical systems or to factoring large numbers to break codes in classical cryptography. Quantum computing might also assist big data by searching very large, unsorted data-sets in a fraction of the time needed by parallel processors. However, to really make it work, we would need a quantum memory that can be accessed while in a quantum superposition, yet the very act of accessing the memory would collapse or destroy the superposition. Some claim to have developed a conceptual prototype of quantum RAM (Q-RAM), along with an accompanying program called Q-App (pronounced “quapp”) targeted at machine learning. The system is supposed to find patterns within data without actually looking at any individual records, thereby preserving the quantum superposition (a questionable idea). One is supposed to access the common features of billions of items in the database at the same time, without accessing them individually. With the cost of sequencing human genomes (a single genome is equivalent to about 6 billion bits) dropping, and commercial genotyping services rising, there is a great push to create such a database. But knowing about malaria without knowing who has it is useless for treatment purposes.
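
The advertised speed-up for unsorted search is the quadratic one of Grover’s algorithm: roughly proportional to the square root of the number of records instead of the number itself (a standard result; the constant factors in this sketch are only schematic):

    import math

    for n in (10**6, 10**9, 10**12):
        classical_queries = n / 2                       # expected lookups for a linear scan
        grover_queries = (math.pi / 4) * math.sqrt(n)   # idealized Grover query count
        print(f"n={n:.0e}: classical ~{classical_queries:.1e}, quantum ~{grover_queries:.1e}")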

Another approach is integrating across very different data-sets. No matter how much we speed up the computers, or yoke computers together, the real issues are at the data level. For example, a raw data-set could include thousands of different tables scattered around the Web, each listing similar data but using different terminology and column headers, known as “schema”. The problem could in principle be eased by a common header describing each table’s contents, but we must still understand the relationship between the schemas before the data in all those tables can be integrated. That, in turn, requires breakthroughs in techniques for analyzing the semantics of natural language. What if our algorithm needed to understand only enough of the surrounding text to determine whether, for example, a table includes specific data, so that it could then integrate the table with other, similar tables into one common data-set? It is one of the toughest problems in AI. But Panini already did it with the Pratyaahaara style of the 14 Maheshwari Sootras.
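
A minimal sketch of one naive way to reconcile differing column headers through a hand-made synonym map (the table contents and the mapping are hypothetical, and real schema matching would infer such mappings rather than hard-code them):

    SYNONYMS = {"cost": "price", "price": "price", "qty": "quantity", "quantity": "quantity"}

    def normalize(table):
        # Rename each column to a canonical name so tables can be merged.
        return [{SYNONYMS.get(k.lower(), k.lower()): v for k, v in row.items()} for row in table]

    table_a = [{"Cost": 5, "Qty": 2}]
    table_b = [{"price": 7, "quantity": 1}]
    merged = normalize(table_a) + normalize(table_b)
    print(merged)    # both tables now share the headers "price" and "quantity"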

One widely used approach is topological data analysis (TDA), an outgrowth of machine learning - a way of getting structured data out of unstructured data so that machine-learning algorithms can act on it directly. It rests on a mathematical version of Occam’s razor: while there may be millions of possible reconstructions of a fuzzy, ill-defined image, the sparsest (simplest) version is probably the best fit. Compressed sensing was born out of this insight. With compressed sensing, one can determine which bits are significant without first having to collect and store them all. This allows us to acquire medical images faster, build better radar systems, or even take pictures with single-pixel cameras. The underlying idea goes back to Euler, who puzzled over a conundrum: is it possible to walk across the seven bridges connecting the four land regions of Königsberg, crossing each bridge just once, and yet end up at one’s original starting point? The relevant issue was only the number of bridges and how they were connected. Euler reduced the four land regions to nodes connected by lines representing the bridges. To cross every bridge exactly once and return to the start, each land region would need an even number of bridges; since that was not the case, such a journey was impossible. A similar story is told in B-schools: if 32 teams play a knock-out tournament, how many games are played in total? In every game exactly one team is eliminated, and only one team remains undefeated at the end, so the total number of games is 31. This is the essence of compressed sensing: keep only the information that matters. Using compressed-sensing algorithms, it is possible to sample only 100 out of 1000 pixels in an image and still reconstruct it at full resolution - provided the key elements of sparsity (which usually denotes an image’s complexity, or lack thereof) and grouping (or holistic measurements) are present.
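
A minimal sketch of Euler’s degree argument for the bridge puzzle (the edge list below encodes the classical seven-bridge layout; in general the graph must also be connected for the criterion to suffice):

    from collections import Counter

    # Nodes A..D are the land regions; each pair in the list is one bridge.
    bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]

    degree = Counter()
    for u, v in bridges:
        degree[u] += 1
        degree[v] += 1

    # A closed walk crossing every bridge exactly once exists only if every degree is even.
    verdict = "possible" if all(d % 2 == 0 for d in degree.values()) else "impossible"
    print(dict(degree), "->", verdict)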

Taking up these ideas, mathematicians represent big data-sets as networks of nodes and edges, creating an intuitive map of the data based solely on the similarity of the data points. Distance is the input that gets translated into a topological shape or network: the more similar two data points are, the closer they will be on the resulting map; the more different they are, the further apart they will be. This is the essence of TDA. Many methods in machine learning are most effective when working with data matrices, like an Excel spreadsheet - but what if the data-set does not fit that framework?
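
A minimal sketch of such a map: points lying within a chosen distance threshold of one another are joined by an edge (the points and the threshold are made up for illustration):

    import math

    points = {"a": (0.0, 0.0), "b": (0.1, 0.2), "c": (5.0, 5.0), "d": (5.1, 4.9)}
    THRESHOLD = 1.0

    edges = [(p, q) for p in points for q in points
             if p < q and math.dist(points[p], points[q]) < THRESHOLD]
    print(edges)   # -> [('a', 'b'), ('c', 'd')]: two clusters emerge from distance alone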

TDA is all about the connections. In a social network, relationships between people can be mapped, with clusters of names as nodes and connections as edges illustrating how they are linked. There will be clusters relating to family, friends, colleagues, etc. But the structure is not always discernible: from friendship to love is not a linear relationship. It is possible to extend the TDA approach to other kinds of data-sets, such as genomic sequences. One can lay the sequences out next to each other and count the number of places where they differ; that number becomes a measure of how similar or dissimilar they are, and one can encode it as a distance function. This is supposed to reveal the underlying shape of the data. A shape is a collection of points and distances between those points in a fixed order. But such a map will not accurately represent all the defining features. If we represent a circle by a hexagon with six nodes and six edges, it may still be recognizable as a circular shape, but we have sacrificed roundness. A child grows with age, but the rate of growth is not uniform in every part of the body, and some features develop only after a certain stage. If a lower-dimensional representation has topological features in it, that is no sure indication that those features exist in the original data as well. A flat visual representation of the Earth’s surface hides its curvature. Topological methods are also a lot like casting the two-dimensional shadow of a three-dimensional object on a wall: they let us visualize a large, high-dimensional data-set by projecting it down into a lower dimension. The danger is that, as with the illusions created by shadow puppets, one might see patterns and images that are not really there. There is a joke that topologists cannot tell the difference between their rear end and a coffee cup, because the two are topologically equivalent.
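
A minimal sketch of the “count the places where sequences differ” distance mentioned above (this is the Hamming distance; the sequences are made up):

    def hamming(seq1, seq2):
        # Number of positions at which two equal-length sequences differ.
        return sum(a != b for a, b in zip(seq1, seq2))

    sequences = {"s1": "GATTACA", "s2": "GACTACA", "s3": "TATTACA"}
    for x in sequences:
        for y in sequences:
            if x < y:
                print(x, y, hamming(sequences[x], sequences[y]))

These pairwise distances are exactly the kind of input that the threshold-graph construction sketched earlier would turn into a map of the data.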

Some researchers emphasize the need to develop a broad spectrum of flexible tools that can deal with many different kinds of data. For example, many users are shifting from traditional, highly structured relational databases, broadly known as SQL, which represent data in a conventional tabular format, to a more flexible family of formats dubbed NoSQL, which can be as structured or unstructured as the application requires. Another method favored by many is the maximal information coefficient (MIC), a measure of two-variable dependence designed specifically for rapid exploration of many-dimensional data-sets. It was claimed that MIC possesses a desirable mathematical property called equitability that mutual information lacks. Critics have disputed this, arguing that MIC does not actually address equitability but rather statistical power, and that it is less powerful than a more recently developed statistic called distance correlation (dCor) and another statistic, HHG - both of which have their own problems and are not fully satisfactory either.
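
A minimal sketch of the contrast between the two storage styles: the same record as a fixed tabular row and as a schema-free document (the field names are hypothetical):

    # Relational (SQL-style): every row must fit the same fixed columns.
    columns = ("event_id", "energy_gev", "detector")
    row = (42, 125.3, "ATLAS")

    # NoSQL-style document: each record can carry whatever structure it needs.
    document = {
        "event_id": 42,
        "energy_gev": 125.3,
        "detector": "ATLAS",
        "notes": ["candidate diphoton event"],   # extra, unplanned fields are fine
    }
    print(dict(zip(columns, row)), document, sep="\n")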

            In all this, we are missing the wood for the trees. We do not need massive data; we need theories out of the data. The Higgs boson is said to validate the Standard Model, which does not include gravity and is hence incomplete. The graviton, predicted by quantum theories of gravity and described differently in string theory, is yet to be discovered. That the same experiment disfavors supersymmetry (SUSY), which many hoped would connect the Standard Model to gravity, questions the model and points to science beyond the SM.

A report published in July 2013 in the Proceedings of the National Academy of Sciences USA shows that, to make healthy sperm, mice must have genes that enable the sense of taste. Sperm have been shown to host bitter-taste receptors and smell receptors, which most likely sense chemicals released by the egg; but the idea that such proteins might function in sperm development is new. Elsewhere, researchers have found taste and smell receptors in the body that help to sense toxins, pick up messages from gut bacteria or foil pathogens. This opens up a whole world of alternative uses for these genes. When we assign functions to genes, it is a very narrow view of biology: probably every molecule we assign a specific function to is doing other things in other contexts. If anyone bothered to read the ancient system of medicine, Ayurveda, or to properly interpret the Mundaka Upanishadic dictum “annaat praano”, they would be surprised to rediscover the science indicated by the latest data. We are not discussing it here due to space constraints. There are many such examples. Instead of looking outward to data, let us look inward, study the objects, and develop the theories (many of which have become obsolete) afresh based on currently available data. The mindless data-chase must stop.

THE WAY AHEAD:

            Theory without technology is lame; technology without theory is blind. Both need each other, but theory must guide technology and not the other way round. Nature provides everything for our sustenance. We should try to understand Nature and harmonize our actions with natural laws. While going for green technology, we must focus on the product that we use rather than on the packaging that we discard. As per a recent study, people in London waste 60% of the food they buy while others go hungry. Necessity, and not a mere idea, should lead to the creation of a product. Minimizing waste is also green. Only products that are not really essential for our living need advertisement. The notion that every business is show business must change. Product-liability laws should be strengthened, specifically in the FMCG sector. But what is the way out when economic and military considerations drive research? We propose the following approach:

·                    The cult of incomprehensibility and reductionism that rules science must end, and trans-disciplinary research values must be inculcated. Theory must get primacy over technology. There should be more seminars to discuss theory with feedback from technology. Most of the data collected at enormous cost are neither necessary nor cost-effective. This methodology must change.
·                    The superstitious belief in ‘established theories’ must end, and truth should replace fantasy. We have given alternative explanations of ten dimensions, time dilation, wave-particle duality, superposition, entanglement, dark energy, dark matter, inflation, etc., before international scientific forums, with macro-examples and without cumbersome mathematics, while pointing out the deficiencies in many ‘established theories’. Those views have not yet been contradicted.
·                    To overcome economic and military pressure, International Conventions like the Minamata Mercury Convention should be held regularly for other problem areas under the aegis of UNESCO or similar international bodies.
·                    There is no need to go high-tech in all fields. We should think out of the box. Traditional knowledge is a very good source of information (most herbal-product companies use it successfully); if we analyze it scientifically, without any bias, we can get a lot of useful inputs. The chain of “Amma Canteens” in Tamil Nadu, India, is an excellent example of green technology: it supplies fresh food at cheap rates with minimum infrastructure, storage, transportation, pollution and wastage, and maximum employment. The focus is on the locally available product and not the package. There can be many more such examples and innovations without big data.
·                    The general educational syllabus must seek to address the day-to-day problems of the common man. Higher education should briefly integrate other related branches while focusing on specialization.
·                    Technologist is an honorable term. But stop calling technologists scientists.

N.B.: Here we have used ancient concepts with modern data.