Sunday, March 04, 2012



Notices of the American Mathematical Society, Volume 52, Number 9, published a paper in which Mason A. Porter and Predrag Cvitanovic showed that the theory of dynamical systems used to design trajectories of space flights and the theory of transition states in chemical reactions share the same set of mathematics. We posit that this is a universal phenomenon, and that every quantum system and phenomenon, including superposition, entanglement and spin, has macro equivalents. This will be proved, inter alia, by deriving bare mass and bare charge (subjects of Quantum Electrodynamics and Field Theory) without renormalization and without using a counter-term, and linking them to dark matter and dark energy (subjects of cosmology). In the process we will give a simple conceptual mechanism for deriving all forces starting from a single source. We also posit that physics has been deliberately made incomprehensible with a preponderance of “mathematical modeling” to match experimental and observed data through the back door. Most of the “mathematics” in physics does not conform to mathematical principles.

In a paper, “Is Reality Digital or Analogue”, published by the FQXi Community on Dec. 29, 2010, we have shown that uncertainty is not a law of Nature. It is the result of natural laws relating to measurement that reveal a kind of granularity at certain levels of existence that is related to causality. The left-hand side of all valid equations or inequalities represents free will, as we are free to choose (or vary within certain constraints) the individual parameters. The right-hand side represents determinism, as the outcome is based on the input in predictable ways. The equality (or inequality) sign prescribes the special conditions to be observed to achieve the desired result. These special conditions, which cannot always be predetermined with certainty or chosen by us arbitrarily, introduce the element of uncertainty in measurements.

While particles and bodies are constantly changing their alignment within their confinement, these changes are not always externally apparent. Various circulatory systems work within our body that affect its internal dynamics, polarizing it differently at different times; this becomes apparent only in our interaction with other bodies. Similarly, the interactions of subatomic particles are not always apparent. The elementary particles have intrinsic spin and angular momentum, which continually change their state internally. The time evolution of all systems takes place in a continuous chain of discrete steps. Each particle/body acts as one indivisible dimensional system. This is a universal phenomenon that creates uncertainty, because the internal dynamics of the fields that create the perturbations are not always known to us. We may quote an example. Imagine an observer and a system to be observed. Between the two, let us assume two interaction boundaries. Where the dimensions of one medium end and those of another begin, the interface of the two media is called the boundary. Thus there will be one boundary at the interface between the observer and the field, and another at the interface of the field and the system to be observed. In a simple diagram, the situation is like:

O →|    field    |← S

O represents the observer and S the system to be observed. The vertical lines represent the interaction boundaries. The arrows represent the effect of O and S on the medium that leads to the information exchange called observation.

All information requires an initial perturbation involving the release of energy, as perception is possible only through interaction (exchange of force). Such release of energy is preceded by free will – a choice of the observer to know about some aspect of the system through a known mechanism. The mechanism is deterministic – it functions in predictable ways (hence known). To measure the state of the system, the observer must cause at least one quantum of information (energy, momentum, spin, etc.) to pass from him through the boundary to the system and bounce back for comparison. Alternatively, he can measure the perturbation created by the other body across the information boundary.

The quantum of information (seeking), or initial perturbation, relayed through an impulse (effect of energy, etc.), after traveling through (and being modified by) the partition or the field, is absorbed by the system to be observed or measured (or it might be reflected back, or both), and the system is thereby perturbed. The second perturbation (release or effect of energy) passes back through the boundary to the observer (among others), and is translated after measurement at a specific instant into the quantum of information. The observation is the observer’s subjective response on receiving this information. The result of measurement will depend on the totality of the forces acting on the system, and not only on the perturbation created by the observer. The “other influences” affecting the outcome of the information exchange give rise to an inescapable uncertainty in observations.

The system being observed is subject to various potential (internal) and kinetic (external) forces which act in specified ways independent of observation. For example, chemical reactions take place only after a certain temperature threshold is reached. A body changes its state of motion only after an external force acts on it. Observation does not affect these. We generally measure the outcome – not the process. The process is always deterministic. Otherwise there cannot be any theory. We “learn” the process and form theories by different means – observation, experiment, hypothesis, teaching, etc.

The observer observes the state at the instant of the second perturbation – neither the state before nor after it. If ∑ represents the state of the system before and ∑ ± δ∑ the state at the instant of perturbation, then the difference between the two states (other effects being constant) is minimal if δ∑ << ∑. If I is the impulse selected by the observer to send across the interaction boundary, then δ∑ must be a function of I: i.e., δ∑ = f(I). Thus the observation is affected also by the choices made by the observer. Observation records only a temporal state and freezes it as the result of observation. The system’s true state at any other instant is not evident. Quantum theory takes these uncertainties into account. However, the mathematical format of the uncertainty principle is wrong.

The inequality δx.δp ≥ h constrains position and momentum only along the same axis; it permits simultaneous determination of position along the x-axis and momentum along the y-axis, i.e., δx.δpy = 0. Hence the statement that position and momentum cannot be measured simultaneously is not universally valid. Further, position has fixed coordinates, and the axes are fixed arbitrarily so that the dimensions remain invariant under mutual transformation. Position along the x-axis and momentum along the y-axis can only be related with reference to a fixed origin (0, 0). If one has a non-zero value, the other has zero value (if the particle has a fixed position, say x = 5 and y = 7, then it has zero momentum; otherwise it would not be a fixed position). Multiplying the two, the result will always be zero. Thus no mathematics is possible between position (fixed coordinates) and momentum (mobile coordinates), as they are mutually exclusive in time, which is related to momentum. They do not commute.
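The statement that position along one axis and momentum along another can be known together agrees with the standard commutation relations, and can be checked numerically: on a discretized two-axis grid, the operator X (acting on the x-factor) commutes exactly with P_y (acting on the y-factor), while it fails to commute with P_x. A minimal sketch; the grid size, spacing and finite-difference momentum are illustrative assumptions:

```python
import numpy as np

n = 8        # grid points per axis (illustrative)
dx = 0.1     # grid spacing (illustrative)

# 1-D position operator: diagonal matrix of grid coordinates
x1 = np.diag(np.arange(n) * dx).astype(complex)

# 1-D momentum operator (central finite difference, units with h-bar = 1)
p1 = np.zeros((n, n), dtype=complex)
for i in range(n - 1):
    p1[i, i + 1] = -1j / (2 * dx)
    p1[i + 1, i] = +1j / (2 * dx)

I = np.eye(n)
X  = np.kron(x1, I)   # x-position, acting on the x-axis factor
Py = np.kron(I, p1)   # momentum along y, acting on the y-axis factor
Px = np.kron(p1, I)   # momentum along x, acting on the x-axis factor

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(X, Py), 0))   # True: X and P_y commute exactly
print(np.allclose(comm(X, Px), 0))   # False: X and P_x do not
```

The first commutator vanishes identically because the two operators act on different tensor factors; only same-axis pairs fail to commute.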

Thus, uncertainty is not a law of Nature. We cannot create a molecule from any combination of atoms, as it has to follow certain “special conditions”. The conditions may differ between the initial perturbation sending the signal out and the second perturbation leading to the reception of the signal back for comparison, because the inputs may differ, like c+v and c−v. These “special conditions” and external influences that regulate and influence all actions, and not the process of measurement, create uncertainty.

Number is a property of all substances by which we differentiate between similars. If there are no similars, it is one. If there are similars, it is many. Depending upon the sequence of perception of numbers, many can be 2, 3, 4…n. Mathematics is the accumulation and reduction in numbers of the same class of objects (similar numbers), which describes the changes in the physical phenomena when the numbers of any of the parameters are changed. Mathematics is related to the result of measurement. Measurement is a conscious process of comparison between two similar quantities, one of which is called the scaling constant (unit). Hence Nature is mathematical in some perceptible ways. This has been demonstrated by the German physiologist Ernst Heinrich Weber, who measured human response to various physical stimuli. Carrying out experiments in which subjects lifted increasing weights, he devised the formula:
ds = k (dW / W),
where ds is the threshold increase in response (the smallest increase still discernible), dW the corresponding increase in weight, W the weight already present and k the proportionality constant. This has been developed as the Weber-Fechner law. This shows that the conscious response follows a somewhat logarithmic law. This has been successfully applied to a wide range of physiological responses.
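Integrating ds = k (dW / W) from a reference weight W0 up to W gives the logarithmic response S = k ln(W / W0), which is the content of the Weber-Fechner law. A minimal sketch, with k and W0 as illustrative constants:

```python
import math

def response(W, W0=1.0, k=1.0):
    """Integrated Weber-Fechner response: S = k * ln(W / W0).

    Follows from summing ds = k * dW / W from the reference
    weight W0 up to W.  k and W0 are illustrative constants.
    """
    return k * math.log(W / W0)

# Equal *ratios* of stimulus give equal increments of response:
print(response(10) - response(1))    # one decade of weight
print(response(100) - response(10))  # next decade: same increment (ln 10)
```

Each tenfold increase in the stimulus adds the same fixed amount to the response, which is why the law is called logarithmic.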

Measurement is a conscious process. The comparison (measurement) is done at “here-now”. The reading, which indicates the state at a designated instant (out of an infinite set of temporally evolving states), is frozen for use at other times and is known as the “result of measurement”. The states at all other times, which cannot be measured and hence remain unknown, are clubbed together and collectively referred to as the “superposition of states” (we call it adhyaasa). This concept has not only been misunderstood, but also unnecessarily glamorized in Schrödinger’s cat and other examples to bring in incomprehensibility. This has led to coupling one aspect of an object in a state of superposition with other aspects not related to the measurement, to create a state of coupled-superposition (aadhyaasika taadaatmya), which is mathematically, physically and conceptually void. We will discuss it later with examples.

Mathematics is related to the accumulation and reduction of these numbers. Since measurements are comparisons between similar quantities, mathematics is possible only between similars (linear) or partly similars (non-linear), but never between dissimilars. We cannot add or multiply 3 protons and 3 neutrons. They can be added only by taking their common property of mass to give the mass number. This accumulation and reduction of numbers is expressed as the result of measurement after comparison with a scaling constant (standard unit) having similar characteristics (such as length compared with unit length, area with unit area, volume with unit volume, density with unit density, interval with unit interval, etc.). The results of measurements are always pure numbers, i.e., scalar quantities, because the dimensions of the scaling constant are the same for both the measuring device and the object being measured. Thus, mathematics explains only “how much” one quantity accumulates or reduces in an interaction involving similar or partly similar quantities, and not “what”, “why”, “when”, “where”, or “with whom” about the objects involved in such interactions. These are the subject matters of physics.

Mathematics is an expression of Nature, not its sole language. Though the observer has a central role in quantum theories, his true nature and mechanism have eluded scientists. There cannot be an equation to describe the observer, the beauty of the rising sun, the grandeur of the towering mountain, the spread of the night sky, the enchanting fragrance of the wild flower or the endearing smile on the lips of the beloved. It is not the same as any chemical reaction or curvature of lips. Mathematics is often manipulated to spread the cult of incomprehensibility. The electroweak theory is extremely speculative and uses questionable mathematics as a cover for opacity to predict a yet unverified Higgs mechanism. But millions of meaningless papers have been read out in seminars based on such unverified myth for half a century and more. They use the data from the excellent work done by experimental scientists to develop theories based on reverse calculation to match the result. It is nothing but the politics of physics – claiming credit for bringing water to the river when it rains. Experiment without the backing of theory is blind. It can lead to disaster. The rain also brings floods. Experiments guided by economic and military considerations have wrought havoc with our lives.

We don’t see the earlier equations in their original format because all verified inverse-square laws are valid only in spherically symmetric emission fields, which rule out virtual photons, messenger photons, etc. Density is a relative term, and relative density is related to volume, which is related to diameter. Scaling the diameter up or down brings corresponding changes in relative density. This gives rise to inverse-square laws in a real emission field. The quanta cannot spontaneously emit other quanta without violating conservation laws. Modern physicists are afraid of reality. To cover up for their inadequacies, the equations have been rewritten using different unphysical notations to make them incomprehensible even for those making a career out of them. Reductionism, superstitious belief in the validity of “accepted theories”, and total reliance on them, compound the problem. Thus, while the “intellectual supremacy (?)” of a small group is reinforced before “outsiders”, it goes unchallenged even by their own community.
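The inverse-square behaviour of a spherically symmetric emission field follows directly from conservation: a fixed emission spread over a sphere of area 4πr² has a density falling as 1/r², so doubling the radius quarters the density. A minimal sketch; the source strength is an illustrative assumption:

```python
import math

def flux_density(total_emission, r):
    """Density of a conserved, spherically symmetric emission
    spread over a sphere of radius r (area = 4 * pi * r**2)."""
    return total_emission / (4 * math.pi * r ** 2)

E = 100.0   # illustrative source strength (conserved quantity)

# Doubling the radius quarters the density: the inverse-square law.
print(flux_density(E, 1.0) / flux_density(E, 2.0))   # ~4.0
```

Nothing here depends on the nature of the emission; only on the emission being conserved and spread uniformly over the sphere.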

The validity of a physical statement is judged by its correspondence to reality. The validity of a mathematical statement is judged by its logical consistency. Manipulation of mathematics to explain physics has violated the principle of logical consistency in most cases. One example is renormalization or elimination of infinities using a “counter term”, which is logically not consistent, as mathematically all operations involving infinity are void. Some describe it as divergence linking it to the concept of limit. We will show that the problem with infinities can be solved in mathematically consistent ways without using a “counter term” by re-examining the concept of limit.

Similarly, Feynman’s sum-over-histories is the sum of the particle’s histories in imaginary time rather than real time. Feynman had to do the sum in imaginary time because he was following Minkowski, who assigned time to the imaginary axis – the four-vector formalism of Minkowski spacetime. Feynman was not using imaginary time; he was using real time which had been assigned to the imaginary axis. Minkowski assigned time to that axis to make the field symmetrical. It was a convenience for him, not a physical necessity or reality. But once it was done, it continued to de-normalize everything. This gets the correct answer not because the theory is correct, but because it had been proposed through back-calculation from experimental results. The gaps and the greater technical difficulties of trying to sum these in real time are avoided through technical jargon. These greater technical difficulties are also considered a form of renormalization, but they require infinite renormalization, which is mathematically not permissible.

Mathematics is also related to the measurement of the time evolution of the state of something. These time evolutions depict rates of change. When such change is related to motion, like velocity, acceleration, etc., it implies total displacement from the position occupied by the body to the adjacent position. This process is repeated due to inertia till it is modified by the introduction of other forces. Thus, these are discrete steps that can be related to three-dimensional structures only. Mathematics measures only the numbers of these steps, the distances involved (including amplitude, wavelength, etc.) and the quanta of energy applied. Mathematics is also related to the measurement of areas or curves on a graph – the so-called mathematical structures, which are two-dimensional. Thus, the basic assumptions of all topologies, including symplectic topology, linear and vector algebra and the tensor calculus, all representations of vector spaces – whether abstract or physical, real or complex, composed of whatever combination of scalars, vectors, quaternions, or tensors – and the current definitions of the point, line, and derivative are necessarily at least one dimension less than physical space.

The graph may represent space, but it is not space itself. The drawings of a circle, a square, a vector or any other physical representation are similar abstractions. The circle represents a cross-section of a sphere. It may represent an orbit, but it is not the orbit itself. The square represents a surface of a cube. Without the cube or a similar structure (including the paper), it has no physical existence. The vector is a fixed representation of velocity; it is not the dynamical velocity itself, and so on. The so-called simplification or scaling up or down of the drawing does not make it abstract. The basic abstraction is due to the fact that the mathematics that is applied to solve physical problems actually applies to the two-dimensional diagram, and not to three-dimensional space. The numbers are assigned to points on the piece of paper or in the Cartesian graph, and not to points in space. The point in space can exist by itself as the equilibrium position of various forces. But a point on a paper exists only with reference to the arbitrarily assigned origin. If additional force is applied, the locus of the point in space resolves into two equal but oppositely directed forces. But the locus of a point on a graph is always unidirectional and depicts distance – linear or non-linear – but not force. Thus, a physical structure is different from its mathematical representation.

The scientists disregard even reality. Example: in “Reviews of Modern Physics”, Volume 77, July 2005, p. 839, Gell-Mann says: “In order to obtain such relations that we conjecture to be true, we use the method of abstraction from a Lagrangian field-theory model. In other words, we construct a mathematical theory of the strongly interacting particles, which may or may not have anything to do with reality, find suitable algebraic relations that hold in the model, postulate their validity, and then throw away the model. We may compare this process to a method sometimes employed in French cuisine: a piece of pheasant meat is cooked between two slices of veal, which are then discarded”. Is it physics? Thankfully, he has not differentiated between the different categories of veal: Prime, Choice, Good, Standard, Utility and Cull. Veal is used because of its lack of natural fat, its delicate flavor and fine texture. These qualities creep into the pheasant meat even after the veal is discarded. But what Gell-Mann proposes is: use A to prove B. Then throw away A! B cannot stand without A. It is the ground for B.

It is no surprise that the equations of QCD remain unsolved at energy scales relevant for describing atomic nuclei! The various terms of QCD, like “color”, “flavor”, the strangeness number (S), the baryon number (B), etc., cannot be mechanically assigned. Even in the current theory, spin cannot be mechanically assigned for quarks except by assigning a number. The quantum spin is said to be not real, since quarks are point-like and cannot spin. If quarks cannot spin, how do chirality and symmetry apply to them at this level? How can a point express chirality, and how can a point be either symmetrical or non-symmetrical? If W bosons that fleetingly mediate particle interactions have been claimed to leave their footprints, quarks should be more stable! But quarks have never been seen in bubble chambers, ionization chambers, or any other experiments. We will explain the mechanism of spin (1/6 for quarks) to show that it has macro equivalents and that spin zero means absence of spin – which implies only energy transfer.

Objects in three-dimensional space evolve in time. Mathematical structures in two dimensions do not evolve in time – they only get scaled up or down. Hawking and others were either confused or trying to fool others when they suggested the “time cone” and “event horizon” by manipulating a two-dimensional structure, suggesting a time evolution, and then converting it to a three-dimensional structure. You cannot plot or regulate time. You can only measure time or, at best, accommodate your actions in time. A light pulse in a two-dimensional field evolves in time as an expanding circle and not as a conic section. In three dimensions, it will be an expanding sphere and not a cone. The reverse direction will not create a reverse cone, but a smaller sphere. Thus, their concept of the time cone is not even a valid mathematical representation of physical reality.

The description of the state at a given instant is physics and the quantum of measured change at “here-now” is mathematics. But the concept of measurement has undergone a big change over the last century leading to changes in “mathematics of physics”. It all began with the problem of measuring the length of a moving rod. Two possibilities of measurement suggested by Einstein in his 1905 paper were:
(a) “The observer moves together with the given measuring-rod and the rod to be measured, and measures the length of the rod directly by superposing the measuring-rod, in just the same way as if all three were at rest”, or
(b) “By means of stationary clocks set up in the stationary system and synchronizing with a clock in the moving frame, the observer ascertains at what points of the stationary system the two ends of the rod to be measured are located at a definite time. The distance between these two points, measured by the measuring-rod already employed, which in this case is at rest, is the length of the rod”.

The method described at (b) is misleading. We can do this only by setting up a measuring device to record the emissions from both ends of the rod at the designated time (which is the same as taking a photograph of the moving rod) and then measuring the distance between the two points on the recording device in units of the velocity of light or any other unit. But the picture will not give a correct reading, for two reasons:
·  If the length of the rod is small or velocity is small, then length contraction will not be perceptible according to the formula given by Einstein.
·  If the length of the rod is big or velocity is comparable to that of light, then light from different points of the rod will take different times to reach the recording device and the picture we get will be distorted due to Doppler shift. Thus, there is only one way of measuring the length of the rod as in (a).
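The light-travel-time distortion in method (b) can be illustrated with a toy calculation: for light from both ends of the rod to arrive at the camera at one instant, it must leave the nearer and farther ends at different times, during which the rod moves. The sketch below is purely classical (it ignores relativistic contraction), and the geometry – a rod receding along the line of sight from a point camera – is an illustrative assumption:

```python
def photographed_length(L, d0, v, c=1.0):
    """Apparent length of a rod of true length L, nearest end at
    distance d0, receding at speed v along the line of sight, as
    recorded by a snapshot arriving at the camera at time t = 0.

    Light arriving together left each point at t_e = -x/c, when the
    point was at x = (position at t = 0) + v * t_e; solving for the
    emission positions of the two ends gives the photographed length.
    """
    near = d0 / (1 + v / c)        # apparent position of the near end
    far = (d0 + L) / (1 + v / c)   # apparent position of the far end
    return far - near

print(photographed_length(L=1.0, d0=10.0, v=0.0))   # 1.0  : slow rod, no distortion
print(photographed_length(L=1.0, d0=10.0, v=0.5))   # ~0.667: receding rod photographs shorter
print(photographed_length(L=1.0, d0=10.0, v=-0.5))  # 2.0  : approaching rod photographs longer
```

At small v the distortion is imperceptible, and at v comparable to c it dominates – the two cases listed in the bullets above – before any relativistic effect is even considered.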

Here we are reminded of an anecdote related to a famous scientist. Once he directed two of his students to precisely measure the wavelength of sodium light. Both students returned with different results – one resembling the accepted value and the other different. Upon enquiry, the second student replied that he had also come up with the same result as the accepted value, but since everything, including the Earth and the scale on it, is moving, he had applied length contraction to the scale, treating the star Betelgeuse as a reference point. This changed the result. The scientist told him to follow the operation as at (a) above and recalculate the wavelength without any reference to Betelgeuse. After some time, both students returned to report that the wavelength of sodium light is infinite. To the surprised scientist, they explained that since the scale is moving with the light, its length would shrink to zero. Hence it would require an infinite number of scales to measure the wavelength of light!

Some scientists we have come across try to overcome this difficulty by pointing out that length contraction occurs only in the direction of travel. They claim that if we hold the rod in a transverse direction to the direction of travel, then there will be no length contraction. But we fail to understand how the length can be measured by holding the rod in a transverse direction to the direction of travel. If the light path is also transverse to the direction of motion, then the terms c+v and c-v vanish from the equation making the entire theory redundant. If the observer moves together with the given measuring-rod and the rod to be measured, and measures the length of the rod directly by superposing the measuring-rod while moving with it, he will not find any difference because the length contraction, if real, will be in the same proportion for both.

Either Einstein missed this point or he was clever enough to camouflage it when, in his 1905 paper, he said: “Now to the origin of one of the two systems (k) let a constant velocity v be imparted in the direction of the increasing x of the other stationary system (K), and let this velocity be communicated to the axes of the co-ordinates, the relevant measuring-rod, and the clocks”. But is this the velocity of k as measured from k, or is it the velocity as measured from K? This question is extremely crucial. K and k each have their own clocks and measuring rods, which are not treated as equivalent by Einstein. Therefore, according to his theory, each will measure the velocity of k differently. But Einstein does not assign the velocity specifically to either system. Everyone missed it and all were misled. His spinning-disk example in GR also fails for the same reason.

Einstein uses a privileged frame of reference to define synchronization and then denies the existence of any privileged frame of reference. We quote from his 1905 paper on the definition of synchronization: “Let a ray of light start at the “A time” tA from A towards B, let it at the “B time” tB be reflected at B in the direction of A, and arrive again at A at the “A time” t’A. In accordance with definition the two clocks synchronize if:
tB -  tA = t’A - tB.

We assume that this definition of synchronism is free from contradictions, and possible for any number of points; and that the following relations are universally valid:—
  1. If the clock at B synchronizes with the clock at A, the clock at A synchronizes with the clock at B.
  2. If the clock at A synchronizes with the clock at B and also with the clock at C, the clocks at B and C also synchronize with each other.”
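Einstein's criterion can be stated in a few lines of code: the clocks are synchronized when the outbound interval equals the return interval. A minimal sketch, with light speed, separation and clock offset as illustrative assumptions:

```python
def synchronized(tA, tB, tA_return, tol=1e-12):
    """Einstein's criterion: clocks A and B are synchronized iff
    the outbound interval equals the return interval,
    tB - tA == tA' - tB."""
    return abs((tB - tA) - (tA_return - tB)) < tol

c, AB = 1.0, 3.0          # illustrative light speed and separation
T = AB / c                # one-way light time between A and B

tA = 0.0                  # emission at A
tB = tA + T               # B's clock reads the arrival correctly
tA_return = tB + T        # reflected ray back at A

print(synchronized(tA, tB, tA_return))           # True

# A clock at B running offset by 0.5 breaks the criterion:
print(synchronized(tA, tB + 0.5, tA_return))     # False
```

The criterion silently assumes the one-way light time is the same in both directions, which is the point the surrounding discussion contests.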

The concept of relativity is valid only between two objects. Introduction of a third object brings in the concept of privileged frame of reference and all equations of relativity fall. Yet, Einstein precisely does the same while claiming the very opposite. In the above description, the clock at A is treated as a privileged frame of reference for proving synchronization of the clocks at B and C. Yet, he claims it is relative!

The cornerstone of GR is the principle of equivalence. It has been generally accepted without much questioning. But if we analyze the concept scientifically, we find a situation akin to the Russell’s paradox in Set theory, which raises an interesting question: If S is the set of all sets which do not have themselves as a member, is S a member of itself? The general principle (discussed in our book Vaidic Theory of Numbers) is that: there cannot be many without one, meaning there cannot be a set without individual elements (example: a library – collection of books – cannot exist without individual books). In one there cannot be many, implying, there cannot be a set of one element or a set of one element is superfluous (example: a book is not a library) - they would be individual members unrelated to each other as is a necessary condition of a set. Thus, in the ultimate analysis, a collection of objects is either a set with its elements, or individual objects that are not the elements of a set.

Let us examine set theory and consider the property p(x): x ∉ x, which means the defining property p(x) of any element x is such that x does not belong to x. Nothing appears unusual about such a property. Many sets have this property. A library [p(x)] is a collection of books. But a book is not a library [x ∉ x]. Now, suppose this property defines the set R = {x : x ∉ x}. It must be possible to determine if R ∈ R or R ∉ R. However, if R ∈ R, then the defining property of R implies that R ∉ R, which contradicts the supposition that R ∈ R. Similarly, the supposition R ∉ R confers on R the right to be an element of R, again leading to a contradiction. The only possible conclusion is that the property “x ∉ x” cannot define a set. This restriction is built into Zermelo-Fraenkel set theory, whose Axiom of Separation limits comprehension so that, in effect, “objects can only be composed of other objects” or “objects shall not contain themselves”. This concept has been explained in detail with examples in the chapter on motion in the ancient treatise “Padaartha Dharma Samgraha” – Compendium on Properties of Matter written by Prashastapaada.

In order to avoid this paradox, it has to be ensured that a set is not a member of itself. It is convenient to choose a “largest” set in any given context, called the universal set, and confine the study to the elements of that universal set only. This set may vary in different contexts, but in a given set-up, the universal set should be so specified that no occasion ever arises to digress from it. Otherwise, there is every danger of colliding with paradoxes such as the Russell paradox, as it is put in everyday language: a man of Seville is shaved by the Barber of Seville if and only if the man does not shave himself.
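The restriction that a set must not be a member of itself has a loose programming analogue: Python's mutable sets, for instance, cannot contain themselves (or any set), since only hashable, immutable objects can be elements. A minimal sketch of this analogy (not a claim about axiomatic set theory itself):

```python
# A mutable set cannot be a member of itself (or of any set):
s = set()
try:
    s.add(s)               # attempt "s is a member of s"
except TypeError as e:
    print("rejected:", e)  # rejected: unhashable type: 'set'

# Collections can only be built from *other* objects: an immutable
# frozenset can be an element, but of a different set, never of itself.
library = frozenset({"book1", "book2"})   # a set of books
print(library in {library})               # True
```

The library example from the text carries over directly: a collection may contain books, and may itself be an element of a larger collection, but never of itself.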

There is a similar problem in the theory of General Relativity and the principle of equivalence. Inside a spacecraft in deep space, objects behave like suspended particles in a fluid or like the asteroids in the asteroid belt. Usually, they are stationary in the medium that moves, unless some other force acts upon them. This is because of the relative distribution of mass inside the spacecraft and its dimensional volume, which determines the average density at each point inside the spacecraft. Further, the average density of the local medium of space is factored into this calculation. If the passengers could observe the scene outside the spacecraft, they would notice this difference and know that the spacecraft is moving. The light ray from outside can be related to the spacecraft only if we consider the bigger frame of reference containing both the space emitting light and the spacecraft. If we relate them to outside space so that the source of light is evident, the reasons for the apparent curvature will be known. If we consider outside space as a separate frame of reference unrelated to the spacecraft, the ray emitted by it cannot be considered inside the spacecraft (praagaabhaava). The emission of the ray, if any, will be restricted to within the spacecraft. In that case, the ray will move straight. In either case, Einstein’s description is faulty. Thus, both SR and GR, including the principle of equivalence, are wrong descriptions of reality. Hence all mathematical derivatives built upon these wrong descriptions are wrong. We will explain all so-called experimental verifications of SR and GR by alternative mechanisms or other verifiable explanations.

            Relativity is an operational concept, but not an existential concept. The equations apply to data and not to particles. When we approach a mountain from a distance, its volume appears to increase. What this means is the visual perception of volume (measurement of the angle of incoming radiation) is changing at a particular rate. But locally, there is no such impact on the mountain. It exists as it was. The same applies to the perception of objects with high velocities. Similar perception of the changing volume is encountered at different times depending upon our relative velocity. If we move fast, it appears earlier. If we move slowly, it appears later. Our differential perception is related to changing angles of radiation and not the changing states of the object. It does not apply to locality. Einstein has also admitted this. But the Standard model treats these as absolute changes that not only change the perceptions, but change the particle also!

The above description points to some very important concepts. If the only way to measure is to move with the object of measurement, it implies that all measurements can be done only at “here-now”. Since “here-now” is ever changing, how do we describe the result? We cut out an easily perceived and fairly repetitive segment of it and freeze it for future reference as the scaling constant (unit). We compare all future states (also past states, where they had been measured) with this constant and call the result of such comparison the “result of measurement”. The operation involving such measurement is called mathematics. Since results of measurement can only be scalar quantities, i.e., numbers, mathematics is the science of numbers. Since numbers are always discrete units, and the objects they represent are bound by different degrees of freedom, mathematics must follow these principles. But in most of the “mathematics” used by physicists, these principles are totally ignored.

Let us take the example of complex numbers. Imaginary numbers are abstract descriptions and illusions that can never be embodied in “phenomena”, because they do not conform to the verifiable laws of phenomena in nature. Conversely, only the real can be embodied in verifiable phenomena. A negative sign assigned to a number points to the “deficiency of a physical characteristic” at “here-now”. Because of conservation laws, the negative sign must imply a corresponding positive sign “elsewhere”. While the deficiency is at “here-now”, the corresponding positive part is not at “here-now”. They seek each other out, which can happen only in “other times”.

Let us take the example of an atom. The proton is deficient in negative charge, i.e., it has a charge of – (–1). This double negative appears as the positive charge (actually, the charge of the proton is slightly deficient from +1). We posit that the negative potential is the real and only charge. Positive potential is perceived due to relative deficiency of negative potential (we call it nyoona). We will discuss this statement while explaining what an electron is. The proton tries to fulfill its relative deficiency by uniting with an electron to become a neutron (or a hydrogen atom, which is also unstable because of the deficiency). The proton-neutron interaction is dependent upon neutrinos-antineutrinos. Thus, there is a deficiency of neutrinos-antineutrinos. The neutron and proton-electron pairs search for it. This process goes on. At every stage, there is an addition, which leads to a corresponding “release” leading to fresh deficiency in a linear mechanism. The deficiency generates the charge that is the cause for all other forces and actions.

The operation of deficiency leads to linear addition with corresponding subtraction. This is universally true for everything, and we can prove it. Hence a deficiency cannot be reduced in a non-linear manner. This is because the positive and negative potentials do not exist together at “here-now”, where the mathematics is done. They must be separated in space. For this reason, negative numbers (–1) cannot be reduced non-linearly (√–1). And why stop at the square root? Why not fourth, eighth, etc., roots ad infinitum? The complex numbers are neither physical nor mathematical. This is proved by the fact that complex numbers cannot be used in computer programming, which mimics conscious processes of measurement. Since mathematics is done by conscious beings, there cannot be mathematics involving un-physical complex numbers.

            To say that complex numbers are “complete” because they “include real numbers and more” is like saying dreams are “complete” because they “include what we perceive in the wakeful state and more”. Inertia is a universal law of Nature that arises after all actions. Thought is the inertia of mind, which is our continued response to initial external stimuli. During the wakeful state, “conscious actions” involve perception through the sense organs, which is nothing but measurement of the field set up by the objects against the corresponding field set up by our respective sense organs at “here-now”. Thus, any inertia they generate is bound not only by the physical characteristics of the objects of perception, but also by the intervening field. During dreams, the ocular interaction with external fields ceases, but its memory causes inertia of mind due to specific tactile perception during sleep. Thus, we dream only of whatever we have seen in our wakeful state. Since memory (saakshee) is a frozen state like a scaling constant and is free from the restrictions imposed by the external field, dreams are also free from these restrictions. We have seen horses that run and birds that fly. In a dream, we can generate images of flying horses. This is not possible in the wakeful state. This is not the way of Nature. This is not physics. This is not mathematics either.

Dirac proposed a procedure for transferring the characteristic quantum phenomenon of discreteness of physical quantities from the quantum mechanical treatment of particles to a corresponding treatment of fields. Employing the quantum mechanical theory of the harmonic oscillator, Dirac gave a theoretical description of how photons appear in the quantization of the electromagnetic radiation field. Later, Dirac’s procedure became a model for the quantization of other fields as well. There are some potential ingredients of the particle concept which are explicitly opposed to the corresponding (and therefore opposite) features of the field concept. A core characteristic of a field is supposed to be that it is a system with an infinite number of degrees of freedom, whereas the very opposite holds true for particles. A particle can be referred to by the specification of the coordinates x(t) that pertain to its center of mass (presupposing impenetrability). However, the operator-valuedness of quantum fields generally means that to each space-time point x a field value φ(x) is assigned, which is an operator. Operators are mathematical entities which are defined by how they act on something. They do not represent definite values of quantities; they specify what can be measured. This is a fundamental difference between classical fields and quantum fields, because an operator-valued quantum field φ(x) does not by itself correspond to definite values of a physical quantity like the strength of an electromagnetic field. Quantum fields are determinables, as they are described by mappings from space-time points to operators.

Another feature of the particle concept is explicitly in opposition to the field concept. In a pure particle ontology, the interaction between remote particles can only be understood as action at a distance. In contrast, in a field ontology, or a combined ontology of particles and fields, local action is implemented by mediating fields. Further, classical particles are massive and impenetrable, again in contrast to classical fields. The concept of particles has been evolving throughout the history of science in accordance with the latest scientific theories. Therefore, a particle interpretation of QFT is a very difficult proposition.

Wigner’s famous analysis of the Poincaré group is often assumed to provide a definition of elementary particles. Although Wigner found a classification of particles, his analysis does not contribute very much to the question of what a particle is and whether a given theory can be interpreted in terms of particles. What Wigner has given is rather a conditional answer: if relativistic quantum mechanics can be interpreted in terms of particles, then the possible types of particles correspond to irreducible unitary representations of the Poincaré group. However, the question whether, and if so in what sense, relativistic quantum mechanics can be interpreted as a particle theory at all has not been addressed in Wigner’s analysis. For this reason the discussion of the particle interpretation of QFT is not closed with Wigner’s analysis. For example, the pivotal question of the localizability of particle states is still open.

Each measurable parameter in a physical system is said to be associated with a quantum mechanical operator. Part of the development of quantum mechanics is the establishment of the operators associated with the parameters needed to describe the system. The operator associated with the system energy is called the Hamiltonian. The word operator can in principle be applied to any function. However, in practice it is most often applied to functions that operate on mathematical entities of higher complexity than real numbers, such as vectors, random variables, or other “mathematical expressions”. The differential and integral operators, for example, have domains and co-domains whose elements are “mathematical expressions of indefinite complexity”. In contrast, functions with vector-valued domains but scalar ranges are called “functionals” and “forms”. In general, if either the domain or co-domain (or both) of a function contains elements significantly more complex than real numbers, that function is referred to as an operator. Conversely, if neither the domain nor the co-domain of a function contains elements more complex than real numbers, that function is referred to simply as a function. Trigonometric functions such as cosine are examples of the latter case. Thus, operators are not mathematical, but illegitimate manipulations in the name of mathematics.
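The distinction drawn above between functions and operators can be made concrete with a small sketch (hypothetical code, not taken from any physics library): a differentiation operator takes a function as its input and returns another function, whereas cosine merely maps numbers to numbers.

```python
import math

def derivative(f, h=1e-6):
    """A differential operator: its domain and co-domain are functions,
    not numbers. Returns a central-difference approximation of f'."""
    def df(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return df

# cosine is a plain function: real numbers in, real numbers out
value = math.cos(1.0)

# 'derivative' is an operator: it acts on the function itself
dcos = derivative(math.cos)
print(dcos(1.0), -math.sin(1.0))  # the two agree closely: d/dx cos x = -sin x
```

Whether one regards such mappings as legitimate mathematics is the question the text disputes; the sketch only shows what the standard definition asserts.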

The Hamiltonian is said to contain the operations associated with both kinetic and potential energies. Kinetic energy is related to the motion of the particle – hence it uses binomial terms associated with energy and fields. It is involved in interaction with the external field, while the body retains its identity, with its internal energy, separate from the external field. Potential energy is said to be related to the position of the particle. But it remains confined to the particle even while the body is in motion. The example of the pendulum, where potential energy and kinetic energy are shown as interchangeable, is a wrong description, as there is no change in the potential energy between the pendulum when it is in motion and when it is at rest.

The motion of the pendulum is due to inertia. It starts with the application of force to disturb the equilibrium position. Then both inertia of motion and inertia of restoration take over. Inertia of motion is generated when the body is fully displaced. Inertia of restoration takes over when the body is partially displaced, as in the pendulum, which remains attached to the clock. This is one of the parameters that cause wave and sound generation through transfer of momentum. As the pendulum swings to one side due to inertia of motion, the inertia of restoration tries to pull it back to its equilibrium position. This determines the speed and direction of motion of the pendulum. Hence the frequency and amplitude depend on the length of the cord (this determines the area of the cross section) and the weight of the pendulum (this determines the momentum). After reaching the equilibrium position, the pendulum continues to move due to inertia of motion or restoration. This process is repeated. If the motion is sought to be explained by exchange of PE and KE, then we must account for the initial force that started the motion. Though it ceases to exist, its inertia continues. But the current theories ignore it. The only verifiable explanation is: kinetic energy, which is determined by factors extraneous to the body, does not interfere with the potential energy.
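The dependence of the swing rate on the length of the cord can be checked against the standard small-angle textbook formula T = 2π√(L/g); the sketch below (with an assumed g = 9.81 m/s²) shows that quadrupling the length doubles the period. Note that in this standard formula the period depends only on the length, not on the bob's weight.

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Longer cord -> longer period (lower frequency); quadrupling L doubles T.
for L in (0.25, 1.0, 4.0):
    print(L, round(pendulum_period(L), 4))
```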

In a Hamiltonian, the potential energy is shown as a function of position such as x or the potential V(x). The spectrum of the Hamiltonian is said to be the set of all possible outcomes when one measures the total energy of a system. A body possessing kinetic energy has momentum. Since position and momentum do not commute, the functions of position and momentum cannot commute. Thus, the Hamiltonian cannot represent the total energy of the system. Since potential energy remains unchanged even in motion, what the Hamiltonian actually depicts is the kinetic energy only. It is part of the basic structure of quantum mechanics that functions of position are unchanged in the Schrödinger equation, while momenta take the form of spatial derivatives. The Hamiltonian operator contains both time and space derivatives. The Hamiltonian operator for a class of velocity-dependent potentials shows that the Hamiltonian and the energy of the system are not simply related: while the former is a constant of motion and does not depend on time explicitly, the latter quantity is time-dependent, and the Heisenberg equation of motion is not satisfied.

The spectrum of the Hamiltonian is said to be decomposed, via its spectral measures, into a) pure point, b) absolutely continuous, and c) singular parts. The pure point spectrum can be associated with eigenvectors, which in turn are the bound states of the system – hence discrete. The absolutely continuous spectrum corresponds to the so-called free states. The singular spectrum comprises physically impossible outcomes. For example, the finite potential well admits bound states with discrete negative energies and free states with continuous positive energies. When we include un-physical parameters, only such outcomes are expected. Since all three decompositions come out of the same Hamiltonian, they must come through different mechanisms. Hence a Hamiltonian cannot be used without referring to the specific mechanism that causes the decompositions.
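The discreteness of the bound-state part of the finite-well spectrum can be exhibited numerically. In the standard dimensionless form, the even-parity bound states solve z·tan z = √(z₀² − z²) on (0, z₀); the sketch below, with an illustrative well strength z₀ = 8, finds the finite, discrete set of roots, while the positive energies remain a continuum.

```python
import math

def even_bound_states(z0, n=200000):
    """Even-parity bound states of a finite square well: roots of
    f(z) = z*tan(z) - sqrt(z0^2 - z^2) on (0, z0), found by scanning for
    sign changes from negative to positive (tan's discontinuities jump
    from positive to negative and are therefore skipped automatically)."""
    def f(z):
        return z * math.tan(z) - math.sqrt(max(z0 * z0 - z * z, 0.0))
    roots = []
    prev_z = 1e-9
    prev_f = f(prev_z)
    for i in range(1, n):
        z = i * z0 / n
        fz = f(z)
        if prev_f < 0 <= fz:
            roots.append(0.5 * (prev_z + z))
        prev_z, prev_f = z, fz
    return roots

roots = even_bound_states(8.0)
print(len(roots), [round(r, 3) for r in roots])  # a finite, discrete set
```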

A function is a relationship between two sets of numbers or other mathematical objects where each member of the first set is paired with only one member of the second set. It is an equation for which any x that can be plugged into the equation will yield exactly one y out of the equation – a one-to-one correspondence – hence discreteness. Functions can be used to understand how one quantity varies in relation to (is a function of) changes in a second quantity. Since no change is possible without energy, which is said to be quantized, such changes should also be quantized, which implies discreteness involving numbers.
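The pairing definition above can be checked mechanically. This small sketch treats a relation as a discrete set of (x, y) pairs and verifies that each x is paired with exactly one y:

```python
def is_function(pairs):
    """A relation is a function when each x is paired with exactly one y."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False  # the same input mapped to two different outputs
        seen[x] = y
    return True

print(is_function([(1, 2), (2, 4), (3, 6)]))  # True: y = 2x on a discrete set
print(is_function([(1, 2), (1, 3)]))          # False: x = 1 has two partners
```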

Despite its much publicized predictive successes, quantum mechanics has been plagued by conceptual difficulties since its inception. No one is really clear about what quantum mechanics is. What does quantum mechanics describe? Since it is widely agreed that any quantum mechanical system is completely described by its wave function, it might seem that quantum mechanics is fundamentally about the behavior of wave functions. Quite naturally, all physicists starting with Erwin Schrödinger, the father of the wave function, wanted this to be true. However, Schrödinger ultimately found it impossible to believe. His difficulty was not so much with the novelty of the wave function: “That it is an abstract, unintuitive mathematical construct is a scruple that almost always surfaces against new aids to thought and that carries no great message”. Rather, it was that the “blurring” suggested by the spread out character of the wave function “affects macroscopically tangible and visible things, for which the term ‘blurring’ seems simply wrong” (Schrödinger 1935).

For example, in the same paper Schrödinger noted that it may happen in radioactive decay that “the emerging particle is described ... as a spherical wave ... that impinges continuously on a surrounding luminescent screen over its full expanse. The screen however does not show a more or less constant uniform surface glow, but rather lights up at one instant at one spot ....”. He observed that one can easily arrange, for example by including a cat in the system, “quite ridiculous cases” with the ψ-function of the entire system having in it the living and the dead cat mixed or smeared out in equal parts. Thus it is because of the “measurement problem” of macroscopic superposition that Schrödinger found it difficult to regard the wave function as “representing reality”. But then what does represent reality? With evident disapproval, Schrödinger describes how the reigning doctrine rescues itself by having recourse to epistemology. We are told that no distinction is to be made between the state of a natural object and what we know about it, or perhaps better, what we can know about it. Actually – it is said – there is intrinsically only awareness, observation, measurement.

The measurement problem in quantum physics is really not a problem, but the result of wrong assumptions. As has been described earlier, measurement is done only at “here-now”. It depicts the state only at “here-now” – neither before nor after it. Since all other states are unknown, they are clubbed together and described as superposition of states. This does not create a bizarre state of “un-dead” cat at all other times. The state at “here-now” is a culmination of the earlier states that are time evolution of the object. This is true “wave function collapse”, where the unknown collapses to become transitorily known (since the object continues to evolve in time). The collapse does not bring the object to a fixed state at ever-after. It describes the state only at “here-now”.

How much one quantity changes in response to changes in some other quantity is called its derivative. Derivatives are of two types. Geometrical derivatives presuppose that the function is continuous. At points of discontinuity, a function does not have a derivative. Physical derivatives are always discrete. Since numbers are always discrete quantities, a continuous function cannot represent numbers universally. While fields and charges are continuous, particles and mass are discrete. The differentiating characteristic between these two is dimension. Dimension is the characteristic of objects by which we differentiate the “inner space” of an object from its “outer space”. In the case of mass, it is discrete and stable. In the case of fluids, it is continuous and unstable. Thus, the term derivative has to be used carefully. We will discuss its limitations by using some physical phenomena. We will deal with dimensions and singularity cursorily, and with spin and entanglement separately. Here we focus on bare mass and bare charge, which will also explain black holes, dark matter and dark energy. We will also explain “what is an electron” and review Coulomb’s law.

Unlike quantum physicists, we will not use complex terminology and undefined terms, and will not first write everything as integrals and/or partial derivatives unless the context so demands. We will not use Hamiltonians, covariant four-vectors and contravariant tensors of the second rank, Hermitian operators, Hilbert spaces, spinors, Lagrangians, various forms of matrices, action, gauge fields, complex operators, Calabi-Yau shapes, 3-branes, orbi-folding and so on to make it incomprehensible. We will not use “advanced mathematics”, such as the Abelian, non-Abelian, and Affine models etc., based on mere imagery at the axiomatic level. We will describe physics as it is perceived. We will use mathematics only to determine “how much” a system changes when some input parameters are changed, and then explain the changed output as it is perceived.


The Lorentz force law deals with what happens when charges are in motion. This is a standard law with wide applications, including the design of TV picture tubes. Thus, its authenticity is beyond doubt. When parallel currents are run next to one another, they attract when the currents run in the same direction and repel when the currents run in opposite directions. The attractive or repulsive force is proportional to the currents and points in a direction perpendicular to the velocity. Observations and measurements demonstrate that there is an additional field that acts only on moving charges. This force is called the Lorentz force. This happens even when the wires are completely charge neutral. If we put a stationary test charge near the wires, it feels no force.

Consider a long wire that carries a current I and generates a corresponding magnetic field. Suppose that a charge moves parallel to this wire with velocity v. The magnetic field of the wire leads to an attractive force between the charge and the wire. With reference to the wire frame, there is no contradiction. But with reference to the charge frame, the charge is stationary; hence there cannot be any magnetic force. Further, a charged particle can gain (or lose) energy from an electric field, but not from a magnetic field. This is because the magnetic force is always perpendicular to the particle’s direction of motion; hence it does no work on the particle. (For this reason, in particle accelerators, magnetic fields are often used to guide particle motion, e.g., in a circle, but the actual acceleration is performed by electric fields.) The only solution to the above contradiction is to assume some attractive force in the charge frame. The only attractive force in the charge frame must be an attractive electric field. In other words, a force is generated by the charge on itself while moving, i.e., a back reaction, so that the total force on the charge is the back reaction plus the applied force.
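The claim that the magnetic force does no work can be verified directly from the force law F = qv × B: the cross product is always perpendicular to v, so F·v vanishes. A minimal sketch with illustrative values (the numbers below are arbitrary, chosen only to demonstrate the geometry):

```python
def cross(a, b):
    """Vector cross product a x b."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

q = 1.602e-19            # charge in coulombs (electron-scale, illustrative)
v = (2.0e5, 1.0e5, 0.0)  # velocity in m/s (arbitrary)
B = (0.0, 0.0, 1.5)      # magnetic field in teslas (arbitrary)

F = tuple(q * c for c in cross(v, B))  # magnetic part of the Lorentz force
print(dot(F, v))  # 0.0: F is perpendicular to v, so it does no work
```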

Classical physics gives simple rules for calculating this force. An electron at rest is surrounded by an electrostatic field, whose value at a distance r is given by:
ε(r) = e/r². …………………………………………………………………(1)

If we consider a cell of unit volume at a distance r, the energy content of the cell is: (1/8π)ε²(r). ……………………………………………………………………(2)

The total electrostatic energy E is therefore obtained by integrating this energy over the whole of space. This raises the question about the range of integration. Since electromagnetic forces are involved, the upper limit is taken as infinity. The lower limit could depend upon the size of the electron. When Lorentz developed his theory of the electron, he assumed the electron to be a sphere of radius a. With this assumption, he arrived at:
E = e²/2a. ……………………………………………………………………  (3)

The trouble started when attempts were made to calculate this energy from first principles. When a, the radius of the electron, approaches zero for a point charge, the total energy diverges to infinity:
E → ∞. ……………………………………………………………………… (4)
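This divergence can be reproduced numerically. The sketch below integrates the energy density (1/8π)(e/r²)² over all space outside radius a (Gaussian units, e = 1) and compares it with the closed form e²/2a: shrinking a by a factor of a thousand multiplies the energy by a thousand, so the point-charge limit a → 0 diverges.

```python
import math

def self_energy(a, e=1.0, r_max=1e9, n=20000):
    """Field energy outside radius a: the integrand (1/8pi)(e/r^2)^2 * 4pi r^2
    reduces to e^2/(2 r^2); trapezoid rule on a logarithmic grid."""
    t0, t1 = math.log(a), math.log(r_max)
    total, prev_r = 0.0, a
    prev_f = e * e / (2.0 * prev_r * prev_r)
    for i in range(1, n + 1):
        r = math.exp(t0 + (t1 - t0) * i / n)
        f = e * e / (2.0 * r * r)
        total += 0.5 * (prev_f + f) * (r - prev_r)
        prev_r, prev_f = r, f
    return total

# Numeric integral vs. the closed form e^2/(2a): the energy diverges as a -> 0.
for a in (1.0, 1e-3, 1e-6):
    print(a, self_energy(a), 1.0 / (2 * a))
```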

As Feynman puts it; “What’s wrong with an infinite energy? If the energy can’t get out, but must stay there forever, is there any real difficulty with an infinite energy? Of course, a quantity that comes out as infinite may be annoying, but what matters is only whether there are any observable physical effects. To answer this question, we must turn to something else besides the energy. Suppose we ask how the energy changes when we move the charge. Then, if the changes are infinite, we will be in trouble”.

            Electrodynamics suggests that mass is the effect of charged particles moving, though there can be other possible sources of origin of mass. We can take mass broadly of two types: mechanical or bare mass that we denote as m0 and mass of electromagnetic origin that we denote as mem. Total mass is a combination of both. In the case of electron, we have a mass experimentally observed, which must be equal to:
mexp = m0 + mem, …………………………………………………………….(5)
i.e., experimental mass = bare mass + electromagnetic mass.

This raises the question: what is mass? We will explain this and the mechanism of generation of mass without the Higgs mechanism separately. For the present, it would suffice to note that mass is “field confined”. Energy is “mass unleashed”. Now, let us consider a paradox! The nucleus of an atom, where most of its mass is concentrated, consists of neutrons and protons. Since the neutron is thought of as a particle without any charge, its mass should be purely mechanical or bare mass. The mass of the charged proton should consist of m0 + mem. Hence the mass of the proton should have been higher than that of the neutron, whereas the opposite is actually the case. We will explain this apparent contradiction later.

When the electron is moved with a uniform velocity v, the electric field generated by the electron’s motion acquires a momentum, i.e., mass x velocity. It would appear that the electromagnetic field acts as if the electron had a mass purely of electromagnetic origin. Calculations show that this mass mem is given by the equation:
mem = (2/3)(e²/ac²), ……………………………………………………………(6)
or a = (2/3)(e²/mem c²), …………………………………………………………(7)
where a defines the radius of the electron.

Again we land in a problem, because if we treat a = 0, then equation (6) tells us that mem = ∞. ……………………………………………………………………….(8)

Further, if we treat the bare mass of the electron m0 = 0 for a point particle, then the mass is purely electromagnetic in origin. In that case:
mem = mexp = observed mass = 9.10938188 × 10⁻³¹ kilograms, ………….…...  (9)
which contradicts equation (8).

Putting the value of eq. (9) in eq. (7), we get: a = (2/3)(e²/mexp c²), ..….… (10)
as the radius of the electron. But we know that the classical electron radius:
r0 = e²/(mexp c²) ≈ 2.82 × 10⁻¹⁵ m. ..……………..…………………………… (11)
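The classical electron radius can be recovered numerically. In SI units it reads r₀ = e²/(4πε₀ mexp c²); a quick check with CODATA-style constants (quoted here to limited precision) reproduces the familiar value:

```python
import math

e = 1.602176634e-19      # elementary charge, C
m_e = 9.10938188e-31     # electron mass, kg (the value quoted in the text)
c = 2.99792458e8         # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Classical electron radius: r0 = e^2 / (4 pi eps0 m_e c^2)
r0 = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(r0)  # about 2.82e-15 m
```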

            The factor 2/3 in a depends on how the electric charge is actually distributed in the sphere of radius a. We will discuss it later. The r0 is the nominal radius. According to the modern quantum mechanical understanding of the hydrogen atom, the average distance between electron and proton is ≈ 1.5a0, somewhat different from the value in the Bohr model (≈ a0), but certainly of the same order of magnitude. The value 1.5a0 is approximate, not exact, because it neglects reduced mass, fine-structure effects (such as relativistic corrections), and other such small effects.

If the electron is a charged sphere, since it contains the same charge throughout, normally it should explode. However, if it is a point charge where a = 0, it will not explode – since zero has existence but no dimensions. Thus, if we treat the radius of the electron as non-zero, we land at instability. If we treat the radius of the electron as zero, we land at “division of a number by zero”, which is treated as infinity. Hence equation (6) shows mem as infinity, which contradicts equation (9), which has been physically verified. Further, due to the mass-energy equation E = m0c², mass is associated with an energy. This energy is known as self-energy. If mass diverges, self-energy also diverges. For infinite mass, the self-energy also becomes infinite. This problem has not been solved till date.

According to standard quantum mechanics, if E is the energy of a free particle, its wave-function changes in time as:
Ψ(t) = e^(−iEt/ħ) Ψ(0). …………………………………………………………… (12)

            Thus, effectively, time evolution adds a phase factor e^(−iEt/ħ). Thus, the “dressing up” only changes the value of E to (E + ΔE). Hence, it can be said that as the mass of the particle changes from m0, the value appropriate to a bare particle, to (m0 + Δm), the value appropriate to the dressed-up or physically observable “isolated” or “free” particle, the energy changes from E to (E + ΔE). Now, the value of (m0 + Δm), which is the observed mass, is known to be 9.10938188 × 10⁻³¹ kilograms. But Δm, which is the same as mem, is ∞. Hence again we are stuck with an infinity.
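That the phase factor in eq. (12) changes nothing observable for a free particle can be seen directly: |e^(−iEt/ħ)| = 1, so the probability density |Ψ|² is untouched. A sketch with illustrative numbers (the energy value is arbitrary, about 2 eV):

```python
import cmath

hbar = 1.0545718e-34  # reduced Planck constant, J*s
E = 3.2e-19           # an illustrative energy, J (roughly 2 eV)
psi0 = 0.6 + 0.8j     # initial amplitude with |psi0|^2 = 1

for t in (0.0, 1e-15, 5e-15):
    psi_t = cmath.exp(-1j * E * t / hbar) * psi0  # time evolution, eq. (12)
    print(t, abs(psi_t) ** 2)  # stays 1: a pure phase changes no probability
```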

Tomonaga, Schwinger and Feynman independently tried to solve the problem. They argued that what we experimentally observe is not the bare electron: the bare electron cannot be observed, because it is always interacting with its own field. It is this interaction Δm that “dresses up” the electron through radiative corrections. Since this was supposed to be ∞, they tried to “nullify” or kill the infinity by using a counter term. They began with the hydrogen atom. They assumed the mass of the electron as m0 + Δm and switched on both Coulombic and radiative interactions. However, the Hamiltonian for the interaction was written not as Hi, but as Hi − Δm. Thereafter, they cancelled +Δm with −Δm. However, this operation is mathematically not legitimate, as in mathematics all operations involving infinity are void. The whole problem has arisen primarily because of the mathematics involving division by zero, which has been assumed to be infinite. Hence let us examine this closely. First, the traditional view.


Division of two numbers a and b is the reduction of the dividend a by the divisor b, or taking the ratio a/b to get the result (quotient). Cutting or separating an object into two or more parts is also called division. It is the inverse operation of multiplication. If a × b = c, then a can be recovered as a = c/b as long as b ≠ 0. Division by zero is the operation of taking the quotient of any number c and 0, i.e., c/0. The uniqueness of division breaks down when dividing by 0, since the product 0 × b = 0 is the same for any value of b. Hence a cannot be recovered by inverting the process of multiplication. Zero is the only number with this property and, as a result, division by zero is undefined for real numbers and can produce a fatal condition called a “division by zero error” in computer programs (Derbyshire, 2004, p. 36). Even in fields other than the real numbers, division by zero is never allowed (Derbyshire 2004, p. 266).
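The cited behaviour is easy to demonstrate: in Python, attempting c/0 raises exactly the “division by zero error” mentioned above, because no unique factor can be recovered when the divisor is 0.

```python
def recover_factor(c, b):
    """If a * b = c, recover a as c / b; this inversion requires b != 0."""
    return c / b

print(recover_factor(12, 4))  # 3.0

try:
    recover_factor(12, 0)
except ZeroDivisionError as err:
    # 0 * b = 0 for every b, so no unique a exists; Python raises an error.
    print("division by zero error:", err)
```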

            Now let us evaluate (1+1/n)ⁿ for any number n. As n increases, 1/n reduces. For very large values of n, 1/n becomes almost negligible. Thus, for all practical purposes, (1+1/n) = 1. Since any power of 1 is also 1, the result appears unchanged for any large value of n. A similar position is argued when n is very small: in that case the exponent is negligible, we can treat it as zero, and any number raised to the power of zero is unity. There is a fatal flaw in this argument, because n may approach ∞ or 0, but it never “becomes” ∞ or 0.

            On the other hand, whatever the value of 1/n, it will always be more than zero, even for large values of n. Hence, (1+1/n) will always be greater than 1. When a number greater than 1 is raised to increasing powers, the result becomes larger and larger. Since (1+1/n) will always be greater than 1, for very large values of n the result of (1+1/n)ⁿ should, by this reasoning, grow ever bigger. But what happens when n is very small and comparable to zero? This leads to the problem of “division by zero”. The contradicting results shown above were sought to be resolved by the concept of limit, which is at the heart of calculus. The generally accepted concept of limit led to the result: as n approaches 0, 1/n approaches ∞. Since that created all the problems, let us examine this aspect closely.
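The actual behaviour of (1+1/n)ⁿ for growing n can be tabulated directly; the expression neither collapses to 1 nor blows up, but approaches the limit e ≈ 2.71828:

```python
import math

for n in (1, 10, 1000, 1000000):
    print(n, (1 + 1 / n) ** n)  # climbs toward e, never reaching it

print("e =", math.e)  # 2.718281828459045
```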


In Europe, the concept of limit goes back to Archimedes. His method was to inscribe a number of regular polygons inside a circle. In a regular polygon, all sides are equal in length and all angles are equal. If the polygon is inscribed in the circle, its area will be less than that of the circle. However, as the number of sides in the polygon increases, its area approaches the area of the circle. Similarly, by circumscribing the polygon over the circle, its circumference and area approach those of the circle as the number of its sides goes up. Hence, the value of π can easily be found by dividing the circumference by the diameter. If we take polygons of increasingly many sides and repeat the process, the true value of π can be “squeezed” between a lower and an upper boundary. His value for π was within the limits 3 10/71 < π < 3 1/7.
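Archimedes' squeeze can be reproduced numerically: start from a hexagon inscribed in a unit circle (side exactly 1) and repeatedly double the number of sides using the classical side-doubling recurrence; the inscribed and circumscribed semi-perimeters then bracket π ever more tightly. A sketch:

```python
import math

def archimedes_bounds(doublings):
    """Start from an inscribed regular hexagon (side 1 on a unit circle)
    and double the side count: s' = sqrt(2 - sqrt(4 - s^2)).
    Returns (sides, lower, upper) with lower < pi < upper."""
    n, s = 6, 1.0
    for _ in range(doublings):
        s = math.sqrt(2.0 - math.sqrt(4.0 - s * s))
        n *= 2
    lower = n * s / 2.0                      # inscribed semi-perimeter
    t = s / math.sqrt(1.0 - (s / 2.0) ** 2)  # circumscribed side from inscribed
    upper = n * t / 2.0
    return n, lower, upper

# Four doublings give the 96-gon Archimedes used; the exact perimeters bracket
# pi a little more tightly than his rational bounds 3 10/71 < pi < 3 1/7.
for k in range(5):
    print(archimedes_bounds(k))
```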

Long before Archimedes, the idea was known in India and was used in the Shulba Sootras, the world’s first mathematical works. For example, one of the formulae prevalent in ancient India for determining the length of each side of a polygon with 3, 4, …, 9 sides inscribed inside a circle was as follows: multiply the diameter of the circle by 103923, 84853, 70534, 60000, 52055, 45922, and 41031 for polygons having 3 to 9 sides respectively. Divide the products by 120000. The result is the length of each side of the polygon. This formula can be extended further to any number of sides of the polygon.
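The quoted table can be checked against the exact chord length: the side of a regular n-gon inscribed in a circle of diameter d is d·sin(π/n), so the multipliers should equal 120000·sin(π/n). The check below shows most entries agree exactly with the modern value, with the 7- and 9-sided entries off by only a few hundredths of a percent:

```python
import math

# Multipliers quoted in the text for polygons of 3..9 sides
quoted = {3: 103923, 4: 84853, 5: 70534, 6: 60000,
          7: 52055, 8: 45922, 9: 41031}

for n, q in quoted.items():
    exact = 120000 * math.sin(math.pi / n)  # side = diameter * sin(pi/n)
    print(n, q, round(exact))  # quoted vs. modern value, side by side
```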

Brahmagupta (591 AD) solved indeterminate equations of the second order in his book “Brahmasphoota Siddhaanta”; these came to be known in Europe as Pell’s equations about 1000 years later. His lemmas to the above solution were rediscovered by Euler (1764 AD) and Lagrange (1768 AD). He enunciated a formula for the rational cyclic quadrilateral. Chhandas is a Vedic metric system, which was methodically discussed first by Pingala Naaga of antiquity. His work was developed by subsequent generations, particularly by Halaayudha during the 10th century AD. Using chhandas, Halaayudha postulated a triangular array, called Chityuttara, for determining the combinations of n syllables of long and short sounds for metrical chanting. He developed it mathematically into a pyramidal expansion of numbers. The ancient treatise on medicine, the Kashyapa Samhita, uses it for classifying chemical compositions and diseases, and used it for treatment. Much later, it appeared in Europe as Pascal’s triangle. Based on this, (1+1/n)^n has been evaluated as the limit:
e = 2.71828182845904523536028747135266249775724709369995….
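The triangular array and its connection to e can be sketched as follows; the route from the binomial coefficients to the factorial series is the standard expansion of (1 + 1/n)^n, each term C(n,k)/n^k tending to 1/k! as n grows:

```python
from math import factorial

# Halaayudha's triangular array (later Pascal's triangle): each entry
# is the sum of the two entries above it.
row = [1]
for _ in range(5):
    print(row)
    row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]

# Expanding (1 + 1/n)^n with these binomial coefficients and letting
# n grow turns each term C(n,k)/n^k into 1/k!, giving the series for e.
e_approx = sum(1 / factorial(k) for k in range(20))
print(e_approx)
```

Twenty terms of the series already reproduce e to full double precision.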

Bhaaskaraachaarya II (1114 AD), in his algebraic treatise “Veeja Ganitam”, used the “chakravaala” (cyclic) method for solving indeterminate equations of the second order, which has been hailed by the German mathematician Hankel as “the finest thing achieved in the theory of numbers before Lagrange”. He used basic calculus based on the “Aasannamoola” (limit), “chityuttara” (matrix) and “circling the square” methods several hundred years before Newton and Leibniz. “Aasannamoola” literally means “approaching a limit” and has been used in India since antiquity. The Surya Siddhanta, Mahaa Siddhanta and other ancient treatises on astronomy used this principle. The latter work, as appears from internal evidence, was written around 3100 BC. However, there is a fundamental difference between these methods and the method later adopted in Europe. The concepts of limit and calculus have been tested for their accuracy and must be valid. But while the Indian mathematicians held that they have limited application in physics, the Europeans held that they are universally applicable. We will discuss this elaborately.

            Both Newton and Leibniz evolved calculus from charts prepared from the power series, based on the binomial expansion. The binomial expansion is an infinite series expansion of a complex differential that approached zero. But this involved the problems of the tangent to the curve and the area of the quadrature. They found the solution to the calculus while studying the “chityuttara” principle or the so-called Pascal’s differential triangle. To solve the problem of the tangent, this triangle must be made smaller and smaller. We must move from x to Δx. But can it be mathematically represented? No point on any possible graph can stand for a point in space or an instant in time. A point on a graph stands for two distances from the origin on the two axes. To graph a straight line in space, only one axis is needed. For a point in space, zero axes are needed. Either you perceive it directly without reference to any origin or it is non-existent.

            While number is a universal property of all substances, there is a difference between its application to objects and to quantities. Number is related to the object proper, which exists as a class or an element of a set in a permanent manner, i.e., not only at “here-now” but also at other times. Quantity is related to objects only during measurement and is liable to change from time to time. For example, protons and electrons as separate classes can be assigned numbers 1 and 2 or any other permanent class number. But their quantity, i.e., the number of protons or electrons seen during measurement of a sample, can change. The difference between these two categories is a temporal one. While the description “class” is time invariant, the description “quantity” is time variant, because it can only be measured at “here-now” and may subsequently change. The class does not change. This is important for defining zero, as zero is related to quantity, i.e., the absence of a class of substances that exist elsewhere but not at “here-now”. It is not a very small quantity, because even an infinitely small quantity is present at here-now. Thus, the expression lim n→∞ 1/n = 0 does not mean that 1/n will ever be equal to zero.

Infinity is like one: without similars, but while the dimensions of “one” are fully perceptible, those for infinity are not perceptible. Thus, space and time, which are perceived as without similars, but whose dimensions cannot be measured fully, are infinite. Infinity is not a very big number. We use arbitrary segments of it that are fully perceptible and label it differently for our purpose. Ever-changing processes can’t be measured other than in time – time evolution. Since we observe the state and not the process of change during measurement (which is instantaneous), objects under ideal conditions are as they evolve independent of being perceived. What we measure reflects only a temporal state of their evolution. Since these are similar for all perceptions of objects and events, we can do mathematics with it. The same concept is applicable to space also. A single object in void cannot be perceived, as it requires at least a different backdrop and an observer to perceive it. Space provides the backdrop to describe the changing interval between objects. In outer space, we do not see colors. It is either darkness or the luminous bodies – black and white. The rest about space are like time.

            There are functions like a_n = (2n + 1)/(3n + 4), which hover around values close to 2/3 for all large values of n. Even though objects are always discrete, it is not necessary that this discreteness be perceived by direct measurement. If we measure a sample and infer the total quantity from such direct measurement, the result can be perceived equally precisely, and it is a valid method of measurement – though within the constraints of the mechanism for precision measurement. However, since physical particles are always discrete, the indeterminacy is terminated at a desired accuracy level that is perceptible. This is the concept behind “Aasannamoola” or the digital limit. Thus, the value of π is accepted as 3.141... Similarly, the ratio between the circumference and diameter of astral bodies, which are spheroids, is taken as √10 or 3.16... We have discussed these in our book “Vaidic Theory of Numbers”. This also conforms to the modern definition of function, according to which every x plugged into the equation will yield exactly one y out of the equation – a discrete quantity. This also conforms to the physical Hamiltonian, which is basically a function, hence discrete.
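The “Aasannamoola” idea of stopping at a chosen perceptible accuracy, rather than invoking an actual infinity, can be sketched for a_n = (2n + 1)/(3n + 4); the tolerance value below is an arbitrary choice for illustration:

```python
# The sequence a_n = (2n + 1)/(3n + 4) hovers ever closer to 2/3.
# In the "Aasannamoola" (digital-limit) spirit, iteration stops once a
# chosen, perceptible accuracy is reached, with no appeal to infinity.
target, tolerance, n = 2 / 3, 1e-6, 1
while abs((2 * n + 1) / (3 * n + 4) - target) >= tolerance:
    n += 1
print(n, (2 * n + 1) / (3 * n + 4))
```

The loop terminates at a finite, definite n, which is the operational content of the digital limit described above.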

            Now let us take a different example: a_n = (2n² + 1)/(3n + 4). Here n² represents a two-dimensional object, i.e., an area or a graph. Areas or graphs are nothing but sets of continuous points in two dimensions. Thus, it is a field that varies smoothly without breaks or jumps and cannot propagate in true vacuum. Unlike a particle, it is not discrete but continuous. For n = 1, 2, 3, …, the value of a_n diverges as 3/7, 9/10, 19/13, … For every value of n, the value for n+1 grows by more than the earlier rate of divergence. This is because the term n² in the numerator grows at a faster rate than the denominator. This does not happen in physical accumulation or reduction. In division, the quotient always increases or decreases at a fixed rate in proportion to the changes in either the dividend or the divisor or both.

            For example, 40/5 = 8 and 40/4 = 10. The ratio of change of the quotient from 8 to 10 is the same as the inverse of the ratio of change of the divisor from 5 to 4. But in the case of our example a_n = (2n² + 1)/(3n + 4), the ratio of change from n = 2 to n = 3 is from 9/10 to 19/13, which is different from 2/3 or 3/2. Thus, the statement:
lim n→∞ a_n = lim n→∞ (2n² + 1)/(3n + 4) = ∞
is neither mathematically correct (as the value for n+1 is always greater than that for n and never a fixed number) nor can it be applied to discrete particles (since it is indeterminate). According to relativity, wherever a speed comparable to that of light is involved, as for a free electron or photon, the Lorentz factor invariably comes in to limit the output. There is always a length, mass or time correction. But there is no such correcting or limiting factor in the above example. Thus, the present concept of limit violates the principle of relativistic invariance for high velocities and cannot be used in physics.
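The contrast between fixed-ratio division and the ever-changing growth of a_n = (2n² + 1)/(3n + 4) can be made explicit with exact fractions; a small sketch added for illustration:

```python
from fractions import Fraction

# For plain division the quotient changes in exact inverse proportion
# to the divisor: 40/5 = 8 and 40/4 = 10, and 10/8 equals 5/4.
assert Fraction(40, 4) / Fraction(40, 5) == Fraction(5, 4)

# For a_n = (2n^2 + 1)/(3n + 4) no such fixed ratio exists: successive
# terms grow by a ratio that itself keeps changing.
a = lambda n: Fraction(2 * n**2 + 1, 3 * n + 4)
print([a(n) for n in (1, 2, 3)])          # 3/7, 9/10, 19/13
print([a(n + 1) / a(n) for n in (1, 2)])  # two different ratios
```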

            All measurements are done at “here-now”. The state at “here-now” is frozen for future reference as the result of measurement. All other unknown states are combined together as the superposition of states. Since zero represents an object that is non-existent at “here-now”, it cannot be used in mathematics except by way of multiplication (explained below). Similarly, infinity goes beyond “here-now”; hence it cannot be used like other numbers. Both violate the superposition principle, as measurement is sought to be done with something non-existent at “here-now”. For this reason, Indian mathematicians treated division by zero in geometry differently from that in physics.

Bhaaskaraachaarya (1114 AD) followed the geometrical method and termed the result of division by zero as “khahara”, which is broadly the same as renormalization except for the fact that he has considered non-linear multiplication and division only, whereas renormalization considers linear addition and subtraction of the counter term. However, even he had described that if a number is first divided and then multiplied by zero, the number remains unchanged. Mahaavira (about 850 AD), who followed the physical method in his book “Ganita Saara Samgraha”, holds that a number multiplied by zero is zero and remains unchanged when it is divided by, combined with or diminished by zero. The justification for the same is as follows:

            Numbers accumulate or reduce in two different ways. Linear accumulations and reductions are addition and subtraction. Non-linear accumulation and reduction are multiplication and division. Since mathematics is possible only between similars, in the case of non-linear accumulation and reduction, first only the similar part is accumulated or reduced. Then the mathematics is redone between the two parts. For example, two areas or volumes can only be linearly accumulated or reduced, but cannot be multiplied or divided. But areas or volumes can be multiplied or divided by a scalar quantity, i.e., a number. Suppose the length of a field is 5 meters and breadth 3 meters. Both these quantities are partially similar as they describe the same field. Yet, they are dissimilar as they describe different spreads of the same field. Hence we can multiply these. The area is 15 sqmts. If we multiply the field by 2, it means that either we are increasing the length or the breadth by a factor of two. The result 15 x 2 = 30 sqmts can be arrived at by first multiplying either 5 or 3 with 2 and then multiplying the result with the other quantity: (10 x 3 or 5 x 6). Of course, we can scale up or down both length and breadth. In that case, the linear accumulation has to be done twice before we multiply them.
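The field example above can be written out directly; the point being illustrated is that the scalar factor attaches to one linear spread first, after which the non-linear product is re-formed:

```python
# Scaling the 5 m x 3 m field by 2: the factor multiplies one linear
# spread first, then the non-linear product (the area) is re-formed.
length, breadth = 5, 3
area = length * breadth              # 15 sq. m
doubled = (length * 2) * breadth     # scale the length...
assert doubled == length * (breadth * 2) == area * 2   # ...or the breadth
print(doubled)  # 30 sq. m
```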

Since zero does not exist at “here-now” when the numbers representing the objects are perceived, it does not affect addition or subtraction. During multiplication by zero, one component of the quantity is increased to zero, i.e., moves away from “here-now” to a superposition of states. Thus, the result becomes zero for the total component, as we cannot have a Schrödinger’s undead cat before measurement in real life. In the case of division by zero, the “non-existent” part is sought to be reduced from the quantity, which amounts to collapse reversal leaving it unchanged. Thus, physically, division by zero leaves the number unchanged.

When Fermi wrote the three-part Hamiltonian H = HA + HR + HI, where HA was the Hamiltonian for the atom, HR the Hamiltonian for radiation and HI the Hamiltonian for interaction, he was somewhat right. Only he should have written that H was the Hamiltonian for the atom and HA the Hamiltonian for the nucleus. We call these three (HA, HR, HI) “Vaya”, “Vayuna” and “Vayonaadha” respectively. Of these, the first has fixed dimension, the second has variable dimension and the third does not have any dimension; it represents the energy that “binds” the other two. Different forces cannot be linearly additive, but can co-exist. Since the three parts of the Hamiltonian do not belong to the same class, they can only coexist; they cannot accumulate or reduce through interchange. When Dirac wrote HI as HI − Δm, so that Δm, which was thought to be infinite, could be cancelled by −Δm, he was clearly wrong. Interaction is the effect of energy on mass, and it is not always the same as mass. This can be proved by examining the masses of quarks.

Since in the quark model the proton has three quarks, the masses of the “up” and “down” quarks were thought to be about ⅓ the mass of a proton. But this view has since been discarded. The quoted masses of quarks are now model dependent, and the mass of the bottom quark is quoted for two different models. In other combinations they contribute different masses. In the pion, an “up” and an “anti-down” quark yield a particle of only 139.6 MeV of mass energy, while in the rho vector meson, the same combination of quarks has a mass energy of 770 MeV. The difference between a pion and a rho is the spin alignment of the quarks. The pion is a pseudo-scalar meson with zero angular momentum. The values for these masses have been obtained by dividing the observed energy by c². Thus, it is evident that different spin alignments in the “inner space” of the particle generate different pressures on the “outer space” of the particle, which are expressed as different masses. This shows the role of dimension and also proves that mass is confined field and charge is mass unleashed. This also explains why the neutron is heavier than the proton. According to our calculation, it has a negative charge of −1/11, which means it is deficient by +10/11. It searches out the complementary charge to attain equilibrium. Since positive charge spreads out from the centre of mass, the neutron generates a higher pressure on its outer space than the proton. This is revealed as the higher mass. Thus, the very concept of Δm is erroneous. Since mass has dimensions, and interactions are possible only after the dimensions are broken through, let us examine dimension.


It can be generally said that the electrons determine atomic size, i.e., its dimensions. Most of quantum physics dealing with extra-large or compact dimensions has not defined dimension precisely. In fact, in most cases, as in the description of a phase-space portrait, the term dimension has been used for vector quantities interchangeably with direction. Similarly, M-theory, which requires 11 undefined dimensions, defines strings as one-dimensional loops. Dimension is the differential perception of the “inner space” of an object from its “outer space”. When the relation between the two remains fixed for all “outer space”, i.e., irrespective of orientation, the object is called a particle with characteristic discreteness. In other cases, it behaves like a field with characteristic continuity.

For perception of the spread of an object, the electromagnetic radiation emitted by the object must interact with that of our eyes. Since electric and magnetic fields move perpendicular to each other, and both are perpendicular to the direction of motion, we can perceive the spread of any object only in these three directions. Measuring the spread uniquely is essentially measuring the invariant space occupied by any two points on it. This measurement can be done only with reference to some external frame of reference. For the above reason, we arbitrarily choose a point that we call the origin and use axes that are perpendicular to each other (analogous to e.m. waves), and term these the x-y-z coordinates (length-breadth-height, making 3 dimensions, or right-left, forward-backward and up-down, making 6 dimensions). Mathematically, a point has zero dimensions, a line has one dimension, an area has two dimensions and a volume has three dimensions. Thus, a one-dimensional loop is mathematically impossible, as a loop implies curvature, which requires a minimum of two dimensions. Thus, the “mathematics” of string theory violates all mathematical principles.

Let us now consider the “physics” of string theory. It was developed with a view to harmonizing General Relativity with quantum theory. It is said to be a higher-order theory in which other models, such as super-gravity and quantum gravity, appear as approximations. Unlike super-gravity, string theory is said to be a consistent and well-defined theory of quantum gravity, and therefore calculating the value of the cosmological constant from it should, at least in principle, be possible. On the other hand, the number of vacuum states associated with it seems to be quite large, and none of them features three large spatial dimensions, broken super-symmetry and a small cosmological constant. The features of string theory which are at least potentially testable – such as the existence of super-symmetry and cosmic strings – are not specific to string theory. In addition, the features that are specific to string theory – the existence of strings themselves – either do not lead to precise predictions or lead to predictions that are impossible to test with current levels of technology.

There are many unexplained questions relating to strings. For example, given the measurement problem of quantum mechanics, what happens when a string is measured? Does the uncertainty principle apply to the whole string? Or does it apply only to some section of the string being measured? Does string theory modify the uncertainty principle? If we measure its position, do we get only the average position of the string? If the position of a string is measured with arbitrarily high accuracy, what happens to the momentum of the string? Does the momentum become undefined as opposed to simply unknown? What about the location of an end-point? If the measurement returns an end-point, then which end-point? Does the measurement return the position of some point along the string? (The string is said to be a one-dimensional object extended in space. Hence its position cannot be described by a finite set of numbers, and thus cannot be determined by a finite set of measurements.) How do Bell’s inequalities apply to string theory? We must get answers to these questions before we probe further and spend (waste!) more money on such research. These questions should not be swept under the carpet as inconvenient, or on the ground that some day we will find the answers. That someday has been a very long period indeed!

The point, line, plane, etc. have no physical existence, as they do not have physical extensions. As we have already described, a point vanishes in all directions. A line vanishes along y and z axes. A plane vanishes along z axis. Since we can perceive only three dimensional objects, an object that vanishes partially or completely cannot be perceived. Thus, the equations describing these “mathematical structures” are unphysical and cannot explain physics by themselves. Only when they represent some specific aspects of an object, do they have any meaning. Thus, the description that the two-dimensional string is like a bicycle tyre and the three-dimensional object is like a doughnut, etc, and that the Type IIA coupling constant allows strings to expand into two and three-dimensional objects, is nonsense.

 This is all the more true for “vibrating” strings. Once it starts vibrating, a string becomes at least two dimensional: a transverse wave will automatically push the string into a second dimension. It cannot vibrate length-wise, because then it would no longer be indiscernible. Further, no pulse could travel lengthwise in a string that is not divisible: there has to be some sort of longitudinal variation to propagate compression and rarefaction, and this variation is not possible without subdivision. To vibrate in the right way for string theory, the strings must be strung very, very tight. But why are the strings vibrating? Why are some strings vibrating one way and others vibrating in a different way? What is the mechanism? Different vibrations should have different mechanical causes. What causes the tension? No answers! One must blindly accept these “theories”. And we thought blind acceptance was superstition!

Strings are not made up of sub-particles; they are absolutely indivisible. Thus, they should be indiscernible and undifferentiated. Ultimate strings that are indivisible should act the same in the same circumstances. If they act differently, then the circumstances must differ. But nothing has been told about these different circumstances. The vast variation in behavior is just another postulate. It has to be blindly accepted! And that is science!

            The extra-dimension hypothesis started with a nineteenth-century novel that described “Flatland”, a two-dimensional world. In 1919, Kaluza proposed a fourth spatial dimension and linked it to relativity. It allowed the expression of both the gravitational field and the electromagnetic field – the only two of the four major forces that were known at the time. Using the vector fields as they have been defined since the end of the 19th century, the four-vector field could contain only one acceleration. If one tried to express two acceleration fields simultaneously, one got too many (often implicit) time variables showing up in denominators, and the equations started imploding. The calculus, as it has been used historically, could not flatten out all the accelerations fast enough for the mathematics to make any sense. What Kaluza did was to push the time variable out of the denominator and switch it into another x variable in the numerator, which Minkowski’s new “mathematics” allowed him to do. He termed the extra x-variable the fourth spatial dimension, without defining the term. It came as a big relief to Einstein, who was struggling not only to establish the “novelty” of his theory over the “mathematics” of Poincaré, who discovered the equation E = mc² five years before him, but also to include gravity in SR. Since then, the fantasy has grown bigger and bigger. But like all fantasies, the extra dimensions could not be proved in any experiment.

Some people have suggested that the extra seven dimensions of M-theory are time dimensions. The basic concept behind these extra fields is the rate-of-change concept of calculus. Speed is the rate of change of displacement. Velocity is speed together with direction. Acceleration is the rate of change of velocity. In such cases the equations can be written as Δx/Δt, Δ²x/Δt², etc. In all these cases, the time variable increases inversely with the space variable. Some suggested extending it further, like Δ³x/Δt³ and so on, i.e., the rate of change of acceleration, and the rate of change of that change, and so on. But in that case it can be extended ad infinitum, implying an infinite number of dimensions. String theory and M-theory continued to pursue this method. They had two new fields to express; hence they had (at least) two new variables to be transported into the numerators of their equations. Every time they inserted a new variable, they had to insert a new field. Since they inserted the field in the numerator as another x-variable, they assumed that it is another space field and termed it an extra dimension. But it can equally be transported to the denominator as an inverse time variable.
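The chain of rates of change described above is just repeated finite differencing of a sampled trajectory; a small numerical sketch (the sample trajectory x = ½ · 9.8 · t² is purely illustrative):

```python
# Successive finite differences of a sampled position x(t): each extra
# "rate of change" divides by a further factor of dt, which is how the
# chain displacement -> velocity -> acceleration can be extended
# indefinitely.
dt = 0.1
x = [0.5 * 9.8 * (k * dt) ** 2 for k in range(6)]   # x = (1/2) g t^2

def diff(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

velocity = [d / dt for d in diff(x)]
acceleration = [d / dt for d in diff(velocity)]
print(acceleration)  # constant, ~9.8, once two differences are taken
```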

Let us look at speed. It is no different from velocity. Both speed and velocity are the effects of application of force. Speed is the displacement that arises when a force is applied to a body and where the change in the direction of the body or the force acting on it, is ignored. When we move from speed to velocity, the direction is imported into the description depending upon the direction from which the force is applied. This makes velocity a vector quantity. In Newton’s second law, f = ma, which is valid only for constant-mass systems, the term ‘f’ has not been qualified. Once an externally applied force acts on the body, the body is displaced. Thereafter, either the force loses contact with the body and ceases to act on it or moves with the body (it is possible only if it is moving in the same direction as the body, but at a higher velocity). Newton has not taken this factor into account. If the force ceases to act on the body, then assuming no other force is acting on the body, the body should move only due to inertia, which is constant. Thus, the body should move at constant velocity and the equation should be f = mv.

The rate of change arises because of application of additional force. The force may be applied instantaneously like the firing of a bullet or continuously like a train engine pulling the bogies. In both cases the bodies move with constant velocity due to inertia. Friction changes the velocity, which, in the second case, is compensated by application of additional force of the engine. When velocity changes to so-called acceleration, nothing new happens. It requires only the application of additional force to change the constant velocity due to inertia. This additional force need not be of another kind. Thus, this is a new cycle of force and inertia changing the speed of the body and the nature of force and displacement is irrelevant for this description. Whether it is a horse-pulled car or steam engine, diesel engine, electric engine or rocket propelled body, the result is the same.

Now let us import time into the equations of this motion. Time is an independent variable. Motion is related to space, which is also an independent variable. Both co-exist, but being independent variables, they operate independently of each other. A body can remain in the same position, or move 10 meters or a light year, in a nano-second or in a billion years. Here the space coordinates and time coordinates do not vary according to any fixed rules. They are operational descriptions and not existential descriptions. They can vary for the same body under different circumstances, but this does not directly affect the existence, physics or chemistry of the body or other bodies (it may affect them through wear and tear, but that is an existential matter). Acceleration is defined as velocity per second per second, or velocity per time squared, written mathematically as v/t². Squaring is possible only if there is non-linear accumulation (multiplication) of the same quantity. Non-linearity arises when the two quantities are represented by different axes, which also implies that they move along different directions. In the case of both velocity and acceleration, time moves in the same direction, from past to present to future. Thus, the description “time squared” is neither a physical nor a mathematical description. Hence acceleration is essentially no different from velocity or speed.

Dimension is an existential description. Change in dimension changes the existential description of the body irrespective of time and space. Since everything is in a state of motion with reference to everything else at different rates of displacement, these displacements cannot be put into any universal equation. Any motion of a body can be described only with reference to another body. Poincaré and others have shown that even the three-body equations cannot be solved. Our everyday experience shows that a body moving with reference to other bodies can cover different distances over the same time interval and the same distance over different time intervals. Hence any standard equation of motion including time variables for all bodies or a class of bodies is totally absurd. Photons and other radiation, which travel at uniform velocity, are massless – hence they are not “bodies”.

The three or six dimensions described earlier are not absolute terms, but are related to the order of placement of the object in the coordinate system of the field in which the object is placed. The dimension is related to the spread of an object, i.e., the relationship between its “totally confined inner space” and its “outer space”. Since the outer space is infinite, and since the outer space does not affect the inner space without breaking the dimension, the three or six dimensions remain invariant under mutual transformation of the axes. If we rotate the object so that the x-axis changes to the y-axis or z-axis, there is no effect on the structure (spread) of the object, i.e., the relative positions between different points on the body and their relationship to the space external to it remain invariant. Based on the positive and negative directions (spreading out from or contracting towards the origin), these describe six unique functions of position, i.e., (x,0,0), (−x,0,0), (0,y,0), (0,−y,0), (0,0,z), (0,0,−z), that remain invariant under mutual transformation. Besides these, there are four more unique positions, namely (x, y), (−x, y), (−x, −y) and (x, −y), where x = y for any value of x and y, which also remain invariant under mutual transformation. These are the ten dimensions, and not the so-called mathematical structures. Since time does not fit in this description, it is not a dimension. These are described in detail in our book “Vaidic Theory of Numbers”, published on 30-06-2005. Unless the dimensional boundary is broken, the particle cannot interact with other particles. Thus, dimension is very important for all interactions.

            While the above description applies to rigid body structures, it cannot be applied to fluids, whose dimensions depend on their confining particle or body. Further, the rigid body structures have a characteristic resistance to destabilization of their dimension by others (we call it vishtambhakatwa). Particles with this characteristic are called fermions (we call it dhruva, which literally means fixed structure). This resistance to maintain its position, which is based on its internal energy and the inertia of restoration, is known as the potential energy of the particle. Unless this energy barrier is broken, the particle cannot interact with other particles. While discussing what an electron is, we have shown the deficiencies in the concepts of electronegativity and electron affinity. We have discussed the example of NaCl to show that the belief that ions tend to attain the electronic configuration of noble gases is erroneous. Neither sodium nor chlorine shows the tendency to become neon or argon. Their behaviour can be explained by the theory of transition states in micro level and the escape velocity in macro level.

In the case of fluids, the relationship between its “totally confined inner space” and its “outer space” is regulated not only by the nature of their confinement, but also by their response to density gradients and applied forces that change these gradients. Since this relationship between the “outer space” and “inner space” cannot be uniquely defined in the case of fluids, and since the state at a given moment is subject to change at the next moment beyond recognition, the combined state of all such unknown dimensions are said to be in a superposition of states. These are called bosons (we call it dhartra). The massless particles cannot be assigned such characteristics, as dimension is related to mass. Hence such particles cannot be called bosons, but must belong to a different class (we call them dharuna). Photons belong to this class.

            The relationship between the “inner space” and the “outer space” depends on the relative density of both. Since the inner space constitutes a three-layer structure, i.e., the core or nucleus, the extra-nucleic part, and the outer orbitals, the relationship between these stabilizes in seven different ways (2l + 1). Thus, the effects of these are felt in seven different ways by bodies external to them but in their vicinity. These are revealed as the seven types of gravitation.
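The seven-fold count invoked here can be checked against the 2l + 1 multiplicity formula the text cites; a minimal sketch (the choice l = 3 is our assumption, made only so that the formula reproduces the seven ways mentioned above):

```python
# Multiplicity formula 2l + 1 as cited in the text.
# l = 3 is an assumed value, chosen to reproduce the seven-fold count.
def multiplicity(l: int) -> int:
    return 2 * l + 1

print(multiplicity(3))  # 7
```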

Dimension is a feature of mass, which is determined by both volume and density. The volume and density are also features of charges, which, in a given space is called force. Thus, both mass and charge/force are related, because they explain different aspects of the objects. In spherical bodies from stars to protons, density is related to volume and volume is related to radius. Volume varies only with radius, which, in turn, inversely varies with density. Thus, for a given volume with a given density, increase or decrease in volume and density are functions of radius, i.e., proximity or distance between the center of mass and its boundary. When due to some reason the equilibrium volume or density is violated, the broken symmetry gives rise to the four plus one fundamental forces of nature.
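The stated inverse relation between density and radius at fixed mass can be illustrated numerically; a minimal sketch for a uniform sphere:

```python
import math

def mean_density(mass: float, radius: float) -> float:
    """Mean density of a uniform sphere: mass / (4/3 * pi * r^3)."""
    return mass / ((4.0 / 3.0) * math.pi * radius ** 3)

# For a fixed mass, doubling the radius reduces the mean density
# by a factor of 2^3 = 8: density varies inversely with radius cubed.
print(round(mean_density(1.0, 1.0) / mean_density(1.0, 2.0), 6))  # 8.0
```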

We consider radioactive decay a type of fundamental interaction. These interactions are nothing but variable interactions between the nucleus representing mass (vaya) and the boundary (vayuna) determined by the diameter, mediated by the charge – the interacting force (vayonaadha). We know that the relationship between the centre and the boundary is directly related to the diameter. We also know that scaling the diameter up or down while keeping the mass constant varies inversely with the density of the body. Bodies with different densities co-exist at different layers, but are not coupled together. Thus, the mediating force can be related to each of these interactions between the centre and the boundary. These are the proximity-proximity variables for the strong interaction, which bring the centre of mass and the boundary towards each other (we call such interactions antaryaama); proximity-distance variables for the weak interaction, where only the boundary shifts (vahiryaama); distance-proximity variables for the electromagnetic interaction, where the boundary interacts with the centre of mass of other particles (upayaama); and distance-distance variables for radioactive disintegration, where a part of the mass is ejected (yaatayaama). These four are direct contact interactions (dhaarana) which operate from within the body. Thus, for the formation of atoms with higher and lower mass numbers, only the nucleus (and not the full body) interacts with the other particles. Once the centre of mass is determined, the boundary is automatically fixed. Gravitational interaction (udyaama), which stabilizes the orbits of two bodies around their common barycentre at the maximum possible distance (urugaaya pratishthaa), belongs to a different class altogether, as it is an interaction between the two bodies as a whole.

            Action is said to be an attribute of the dynamics of a physical system. Physical laws specify how a physical quantity varies over infinitesimally small changes in time, position, or other independent variables in its domain. Action is also said to be a mathematical function which takes the trajectory (also called the path or history) of the system as its argument and has a real number as its result. Generally, action takes different values for different paths. Classical mechanics postulates that the path actually followed by a physical system is that for which the action is minimized or stationary. These statements are evidently self-contradictory: a stationary path is a position, not an action. The particle and its forces/fields may be useful “mathematical concepts”, but they are approximations to reality and do not physically exist. There is a fundamental flaw in such a description because it considers the effects of the four fundamental forces described above not together, but separately. For example, while discussing Coulomb’s law, it will be shown that when Rutherford proposed his atomic model, he assumed that the force inside the atom is an electrostatic force. Thus, his equations treat the scattering as due to the Coulomb force, with the nucleus as a point charge. Both his equations and his size estimates are still used, though they have been updated (but never seriously recalibrated, much less reworked). This equation matches data up to a certain kinetic energy, but fails after that. Later physicists assigned interaction with the strong force, in addition to the weak force, to explain this mismatch. We have discussed the fallacies in this explanation while discussing the electroweak theory. But even there, gravity and radioactive disintegration have been ignored.

Since all actions take place after the application of energy, which is quantized, what the above descriptions physically mean is that action is the effect of the application of force that leads to displacement. Within the dimensional boundary, it acts as the four fundamental forces of Nature that are responsible for the formation of particles (vyuhana). Outside the dimensional boundary, it acts as the gravitational interaction that moves the bodies in fixed orbits (prerana). After displacement, the force ceases to act on the particle and the particle moves on inertia. The particle is then subjected to other forces, which change its state again. This step-by-step interaction with various forces continues in a chain reaction (we call it dhaara). The effects of the four forces described in the previous paragraph are individually different: total confinement (aakunchana), loose confinement (avakshepana), spreading from high concentration to low concentration (prasaarana) and disintegration (utkshepana). Thus, individually these forces can continuously displace the particle only in one direction. They cannot change the state of any particle beyond this. A change of state is possible only when all these forces act together on the body. Since these are inherent properties of the body, they can only be explained as transformations of the same force into these four forces. Only in that way can we unite all the forces.

Gravity between two bodies stabilizes their orbits based on the mass-energy distribution over an area at the maximum possible distance (urugaaya pratishthaa). It is mediated by the field that stabilizes the bodies in proportion to their dimensional density over the area. Thus, it belongs to a different class, where the bodies interact indirectly through the field (aakarshana). When it stabilizes proximally, it is called acceleration due to gravity. When it stabilizes at a distance, it is known as gravitation. Just as the constant for acceleration due to gravity, g, varies from place to place, G also varies from system to system, though this is not locally apparent. This shows that not only the four fundamental forces of Nature but also gravitation is essential for structure formation, as without it even the different parts of a body would not exist in a stable configuration.

The concept can be further explained as follows: Consider two forces of equal magnitude but opposite in direction acting on a point (like the centre of mass and the diameter that regulate the boundary of a body). Assuming that no other forces are present, the system would be in equilibrium and it would appear as if no force is acting on it. Now suppose one of the forces is modified due to some external interaction. The system will become unstable, and the forces of inertia, which were earlier not perceptible, will appear as a pair of oppositely directed forces. The magnitude of the new forces will not be the same as that of the earlier forces, because it will be constantly modified by the changing mass-energy distribution within the body. The net effect on the body due to the modified force regulates the complementary force in the opposite direction. When these effects appear between the centre of mass and the boundary of a body, they are termed the four fundamental forces of Nature: the strong force and radioactive disintegration form one couple, and the weak force and the electromagnetic force form the other, less strong, couple. The net effect of the internal dynamics of the body (inner space dynamics) is expressed as its charge outside it.

However, if the bodies have different masses, the forces exerted by them on the external field will not be equal. Thus, they will be propelled to different positions in the external field, where the net density over the area is equal for both. Obviously, this will be in proportion to their masses. Thus, the barycenter, which represents the center of mass of the system, is related to the proportionate mass of the two bodies. The barycenter is one of the foci of the elliptical orbit of each body. It changes continuously due to the differential velocity of the two bodies. For example, the mass of Jupiter is approximately 1/1047 of that of the Sun. The barycenter of the Sun-Jupiter system lies above the Sun’s surface, at about 1.068 solar radii from the Sun’s center, which amounts to about 742,800 km. Both the Sun and Jupiter revolve around this point. At perihelion (the closest point), Jupiter is 741 million km, or 4.95 astronomical units (AU), from the Sun. At aphelion (the farthest point) it is 817 million km, or 5.46 AU. That gives Jupiter a semi-major axis of 778 million km, or 5.2 AU, and a mild eccentricity of 0.048. This shows the relationship between relative mass and the barycenter point that balances both bodies. This balancing force that stabilizes the orbit is known as gravity.
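The Sun-Jupiter figures quoted above can be cross-checked with the standard two-body barycenter relation d = a · m_J / (M_Sun + m_J); a minimal sketch (the solar radius value is our assumption, not from the text):

```python
a_km = 778e6               # Sun-Jupiter semi-major axis quoted above (km)
mass_ratio = 1 / 1047      # Jupiter's mass relative to the Sun, as quoted
solar_radius_km = 695_700  # assumed nominal solar radius, not from the text

# Distance of the barycenter from the Sun's center.
d_km = a_km * mass_ratio / (1 + mass_ratio)

print(round(d_km))                       # ~742,000 km
print(round(d_km / solar_radius_km, 2))  # ~1.07 solar radii
```

The result agrees with the quoted ~742,800 km and ~1.068 solar radii to within rounding of the input figures.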
Assuming that gravity is an attractive force, let us take the example of the Sun attracting Jupiter towards its present position S, and Jupiter attracting the Sun towards its present position J. The two forces are in the same line and balance. If both bodies were stationary or moving with uniform velocity with respect to each other, the forces, being balanced and oppositely directed, would cancel each other. But since both are moving with different velocities, there is a net force. The forces exerted by each on the other will take some time to travel from one to the other. If the Sun attracts Jupiter toward its previous position S’, i.e., the position it occupied when the force of attraction started out to cross the gulf, and Jupiter attracts the Sun towards its previous position J’, then the two forces form a couple. This couple will tend to increase the angular momentum of the system and, acting cumulatively, will soon cause an appreciable change of period. The cumulative effect of this makes the planetary orbits wobble, as shown below.
All planets go round the Sun in circular orbits with radius r0, whose center is the Sun itself. Due to the motion of the Sun, the center of the circle shifts in the forward direction, i.e., the direction of the motion of the Sun, by ∆r, making the new position r0 + ∆r in the direction of motion. Consequently, the point in the opposite direction shifts to a new position r0 − ∆r because of the shifted center. Hence, if we plot the motion of the planets around the Sun and try to close the orbit, it will appear to be an ellipse, even though it is never a closed shape. The picture below depicts this phenomenon.
An ellipse with a small eccentricity is identical to a circular orbit in which the center of the circle has been slightly shifted. This can be seen more easily when we examine in detail the transformation of shapes from a circle to an ellipse. When a circle is slightly perturbed to become an ellipse, the change of shape is usually described by the gradual transformation from a circle to the familiar elongated shape of an ellipse. In the case of the elliptical shape of an orbit around the Sun, since the eccentricity is small, this is equivalent to a circle with a shifted center, because, when a small eccentricity is added, the first term of the series expansion of an ellipse appears as a shift of the central circular field of forces. It is only the second term of the series expansion that flattens the orbit into the well-known elongated shape.
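The series-expansion claim can be tested numerically: to first order in the eccentricity, an ellipse viewed from its focus coincides with a circle whose center is shifted by a·e, and the residual is of second order (the flattening). A minimal sketch, using Jupiter's quoted eccentricity:

```python
import math

a = 1.0    # semi-major axis (arbitrary units)
e = 0.048  # Jupiter's orbital eccentricity, as quoted above
b = a * math.sqrt(1 - e ** 2)  # semi-minor axis

# Compare the ellipse (relative to its focus) with a circle of radius a
# centered on the ellipse's geometric center, i.e. shifted by a*e.
# The x-coordinates of the two curves are identical, so the gap is in y.
max_gap = 0.0
for i in range(1000):
    t = 2 * math.pi * i / 1000
    ellipse_y = b * math.sin(t)
    circle_y = a * math.sin(t)
    max_gap = max(max_gap, abs(ellipse_y - circle_y))

# The residual is roughly a*e^2/2 (second order in e), versus the
# first-order center shift a*e = 0.048.
print(max_gap)  # ~0.00115
```

The shift term (0.048) dominates the flattening term (~0.001) by more than an order of magnitude, which is the sense in which such an orbit is "a circle with a shifted center".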


            Before we re-examine the Lorentz force law in light of the above description, we must re-examine the mass-energy equivalence equation. The equation e = mc² is well established and cannot be questioned. But its interpretation must be questioned, for the simple reason that it does not conform to mathematical principles. But before that, let us note some facts Einstein either overlooked or glossed over.

It is generally accepted that space is homogeneous. We posit that space only “looks” homogeneous over very large scales, because what we perceive as space is the net effect of radiation reaching our eyes or the measuring instrument. Since mass-energy density varies from point to point in space, it cannot be homogeneous. Magnetic force acts only between magnetic substances, not between all substances in the same space. Gravity interacts only with mass. Whether inside a black hole or in open space, it is only a probability amplitude distribution, and it is part of the fields that exist in the neighborhood of the particles. Thus, space cannot be homogeneous.

For the same reason, we cannot accept that space is isotropic. Considering the temperature of the cosmic background radiation (2.73 K) as the unit, absolute zero, which is a notch below the melting point of helium at about −272 °C, is exactly 100 such units below the freezing point of water. Similarly, the interiors of stars and galaxies are at a maximum 1000 times hotter than the melting point of carbon, i.e., about 3,500 °C. The significance of these two elements is well known and can be discussed separately. The ratio of 100:1000 is also significant. Since these are all scattered in space – and hence affect its temperature at different points – space cannot be isotropic either. We have hot stars and icy planets and other Kuiper Belt Objects (KBOs) in space. If we take the average, we get a totally distorted picture, which is not a description of reality.

Space is not symmetric under time translation either. Just as space is the successive interval between all objects, in terms of nearness to or farness from a designated point or the observer, time is the interval between successive changes in the states of objects, in terms of nearness to or farness from a designated epoch or the time of measurement. Since all objects in space do not continuously change their position with respect to all other objects, space is differentiated from time, which is associated with continuous change of state. If we measure the spread of an object, i.e., the relationship between its “inner space” and its “outer space”, from two opposite directions, there is no change in its position. Thus the concept of a negative direction of space is valid. Time is related to change of state, which materializes because of the interaction of bodies with forces. Force is unidirectional: it can only push. There is no such thing as pull; it is always a complementary push from the opposite direction. (Magnetism acts only between magnetic substances and not universally like other forces. Magnetic fields do not obey the inverse square law. They have a different explanation.) Consider an example:
A + B → C + D.
Here a force makes A interact with B to produce C and D. The same force does not act on C and D, as they do not exist at that stage. If we change the direction of the force, B acts on A. Here only the direction of the force, and not the interval between the states before and after the application of force (time), will change, and the equation will be:
B + A → C + D and not B + A  ← C + D.
Hence it does not affect causality. There can be no negative direction for time or cause and effect. Cause must precede effect.

Space is not symmetric under a “boost” either. That the equations of physics work the same in a moving coordinate system as in a stationary one has nothing to do with space. Space in no way interacts with or affects them.

            Transverse waves are always characterized by particle motion perpendicular to the wave motion. This implies the existence of a medium through which the reference wave travels and with respect to which the transverse wave travels in a perpendicular direction. In the absence of the reference wave, which is a longitudinal wave, the transverse wave cannot be characterized as such. All transverse waves are background invariant by their very definition. Since light propagates in transverse waves, Maxwell used a transverse wave and an aether fluid model for his equations. Feynman has shown that the Lorentz transformation and the invariance of the speed of light follow from Maxwell’s equations. Einstein’s causal analysis in SR is based on Lorentz’s motional theory, where a propagation medium is essential to solve the wave equation. Einstein’s aether-less relativity is supported by neither Maxwell’s equations nor the Lorentz transforms, both of which are medium (aether) based. Thus, the non-observance of aether drag (as in the Michelson-Morley experiments) cannot serve to ultimately disprove the aether model. The equations describing spacetime, based on Einstein’s theories of relativity, are mathematically identical to the equations describing ordinary fluid and solid systems. Yet, paradoxically, physicists have denied the aether model while using the formalism derived from it. They do not realize that Maxwell used a transverse wave model, whereas aether drag concerns longitudinal waves. Thus, the notion that Einstein’s work is based on an “aether-less model” is a myth. All along he used the aether model, while claiming the very opposite.

If light consists of particles, as Einstein had suggested in his 1911 paper, the principle of the constancy of the observed speed of light seems absurd. A stone thrown from a speeding train can do far more damage than one thrown from a train at rest, since the speed of the particle is not independent of the motion of the object emitting it. And if we take light to consist of particles and assume that these particles obey Newton’s laws, then they would conform to Newtonian relativity and thus automatically account for the null result of the Michelson-Morley experiment without recourse to contracting lengths, local time, or Lorentz transformations. Yet Einstein resisted the temptation to account for the null result in terms of particles of light and simpler, familiar Newtonian ideas, and introduced as his second postulate something that was more or less obvious when thought of in terms of waves in an aether.

Maxwell’s view that the sum total of the electric field around a volume of space is proportional to the charges contained within has to be considered carefully. Charge always flows from higher concentration to lower concentration till the system acquires equilibrium. But note that he speaks of “around a volume of space” and “charges contained within.” This means a confined space, i.e., an object and its effects on its surrounding field. It is not free or unbound space.

Similarly, his view that the sum total of the magnetic field around a volume of space is always zero, indicating that there are no magnetic charges (monopoles), has to be considered carefully. With a bar magnet, the field lines “going in” and those “going out” cancel each other exactly, so that there is no deficit that would show up as a net magnetic charge. But then we must distinguish between the field lines “going in” and those “going out”. Electric charge is always associated with heat, and magnetic charge with the absence or confinement of heat. Where the heat component dominates, it pushes out; where the magnetic component dominates, it confines or goes in. This is evident from the magnetospheric field lines and reconnections of the Earth-Sun and the Saturn-Sun systems. This is the reason why a change over time in the electric field, or a movement of electric charges (current), induces a proportional vorticity in the magnetic field, and a change over time in the magnetic field induces a proportional vorticity in the electric field, but in the opposite direction. In what is called free space, these conditions do not apply, as charge can only be experienced by a confined body. We do not need the language of vector calculus to state these obvious facts.

In the example of divergence, it is usually believed that if we imagine the electric field as lines of force, divergence basically tells us how the lines are “spreading out”. For the lines to spread out, there must be something to “fill the gaps”. These things would be particles with charge. But there are no such things in empty space, so it is said that the divergence of the electric field in empty space is identically zero. This is put mathematically as: div E = 0 and div B = 0.

The above statement is correct mathematics, but wrong physics. Since space is not empty, it must contain something. There is nothing in the universe that does not contain charge. After all, even quarks and leptons have charge. Neutrons have a small residual negative charge (1/11 of the electron’s charge, as per our calculation). Since charges cannot be stationary unless confined, i.e., unless they are contained in or by a body, they must always flow from higher concentration to lower concentration. Thus, empty space must be full of flowing charge, as cosmic rays and other radiating particles and energies. In the absence of sufficient obstruction, they flow in straight lines and not in geodesics.

This does not mean that divergence in space is a number or a scalar field, because we know that the mean density of free space is not the same everywhere, and density fluctuations affect the velocity of charge. As an example, let us dump huge quantities of common salt or gelatin powder on one bank of a river flowing with a constant velocity. This starts diffusing across the breadth of the river, imparting a viscosity gradient. Now, if we put a small canoe on the river, the canoe will take a curved path, just as light passing by massive stars bends. We call this “vishtambhakatwa”. The bending will be proportional to the viscosity gradient. We do not need relativity to explain this physics. We require mathematics only to calculate “how much” the canoe or the light pulse will be deflected, not whether it will be deflected, or why, when and where. Since these are proven facts, div E = 0 and div B = 0 are not constant functions, and they are wrong descriptions of physics.

Though Einstein used the word “speed” for light (“die Ausbreitungsgeschwindigkeit des Lichtes mit dem Orte variiert” – “the speed of light varies with the locality”), most translations of his work convert “speed” to “velocity”, so that scientists generally tend to think of it as a vector quantity. They tend to miss the way Einstein refers to c, which is most definitely a speed. The word “velocity” in the translations is the common usage, as in “high-velocity bullet”, and not the vector quantity that combines speed and direction. Einstein held that the speed varies with position, and hence causes curvilinear motion. He backed this up in his 1920 Leyden address, where he said: “According to this theory the metrical qualities of the continuum of space-time differ in the environment of different points of space-time, and are partly conditioned by the matter existing outside of the territory under consideration. This space-time variability of the reciprocal relations of the standards of space and time, or, perhaps, the recognition of the fact that ‘empty space’ in its physical relation is neither homogeneous nor isotropic, compelling us to describe its state by ten functions (the gravitation potentials gμν), has, I think, finally disposed of the view that space is physically empty”. This is a complex way of stating the obvious.

Einsteinian space-time curvature calculations were based on vacuum, i.e. on a medium without any gravitational properties (since it has no mass). Now if a material medium is considered (which space certainly is), then it will have a profound effect on the space-time geometry as opposed to that in vacuum. It will make the gravitational constant differential for different localities. We hold this view. We do not fix any upper or lower limits to the corrections that would be applicable to the gravitational constant. We make it variable in seven and eleven groups. We also do not add a repulsive gravitational term to general relativity, as we hold that forces only push.

            Since space is not empty, it must have different densities at different points. The density is a function of mass, and change of density is a function of energy. Thus, the equation e = mc² does not show mass-energy equivalence, but the density gradient of space. The square of velocity has no physical meaning except when used to measure an area. The above equation does not prove mass-energy convertibility; it only shows the energy required to spread a designated quantity of mass over a designated area, so that the mean density can be called a particular type of field or space.


The interactions we discussed while defining dimension appear to be different from the strong/weak/electromagnetic interactions. The most significant difference involves the weak interaction, which is thought to be mediated by the high-energy W and Z bosons. We will now discuss this aspect.

The W boson is said to be the mediator in beta decay, facilitating the flavor change or reversal of a quark from a down quark to an up quark: d → u + W-. The mass of a quark is said to be about 4 MeV and that of a W boson about 80 GeV – comparable to the mass of an iron atom. Thus, the mediating particle outweighs the mediated particle by a ratio of 20,000 to 1. Since Nature is extremely economical in all operations, why should it require such a boson to flip a quark over? There is no satisfactory explanation for this.
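The 20,000-to-1 figure follows directly from the quoted masses; a minimal sketch:

```python
# Masses as quoted above: W boson ~80 GeV, light quark ~4 MeV.
w_boson_mev = 80_000  # 80 GeV expressed in MeV
quark_mev = 4

print(w_boson_mev // quark_mev)  # 20000
```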

The W- boson then decays into an electron and an antineutrino: W- → e- + ν̄. Since neutrinos and antineutrinos are said to be mass-less and the electron weighs about 0.5 MeV, there is a great imbalance. Though the decay is not intended to be an equation, a huge amount of energy coming from nowhere and then disappearing into nothing needs explanation. We have shown that uncertainty is not a law of Nature, but the result of natural laws relating to measurement that reveal a kind of granularity at certain levels of existence related to causality. The explanations of Dirac and others in this regard are questionable.

Glashow, Weinberg, and Salam “predicted” the W and Z bosons using an SU(2) gauge theory. But the bosons in a gauge theory must be mass-less. Hence one must assume that the masses of the W and Z bosons were “predicted” by some other mechanism that gives the bosons their mass. It is said that the mass is acquired through the Higgs mechanism – a form of spontaneous symmetry breaking. But this is an oxymoron. Spontaneous symmetry breaking is symmetry that is broken spontaneously. Something that happens spontaneously requires no mechanism or mediating agent. Hence the Higgs mechanism has to be a spontaneous action and not a mechanism. It requires no mediating agent – at least not the Higgs boson. Apparently, the SU(2) problem has been sought to be solved by first arbitrarily calling it a symmetry, then pointing to the spontaneous breaking of this symmetry without any mechanism, and finally calling that breaking the Higgs mechanism! Thus, the whole exercise produces only a name!

A parity violation means that beta decay works only on left-handed particles or right-handed anti-particles. Glashow, Weinberg, and Salam provided a theory to explain this using a lot of complicated renormalized mathematics, which showed both a parity loss and a charge conjugation loss. However, at low energies, one of the Higgs fields acquires a vacuum expectation value and the gauge symmetry is spontaneously broken down to the symmetry of electromagnetism. This symmetry breaking would produce three mass-less Goldstone bosons, but they are said to be “eaten” by three of the photon-like fields through the Higgs mechanism, giving them mass. These three fields become the W-, W+, and Z bosons of the weak interaction, while the fourth gauge field, which remains mass-less, is the photon of electromagnetism.

All the evidence in support of the Higgs mechanism turns out to be evidence that huge energy packets near the predicted W and Z masses exist. In that case, why should we accept that, because big particles of the W and Z masses exist for very short times, the SU(2) gauge theory cannot be correct in predicting zero masses, and that the gauge symmetry must be broken, so that the Higgs mechanism must be proved correct without any mechanical reason for such breaking? There are other explanations for this phenomenon. In fact, if the gauge theory needs to be bypassed with a symmetry breaking, it means that it was not a good theory to begin with. Normally, if equations yield false predictions – like these zero boson masses – the “mathematics” must be wrong, because mathematics is done at “here-now” and zero is the absence of something at “here-now”. One cannot use some correction to it in the form of a non-mechanical “field mechanism”. Thus, the Higgs mechanism is not a mechanism at all. It is a spontaneous symmetry breaking, and there is no evidence for any mechanism in something that is spontaneous.

Since charge is perceived through a mechanism, a broken symmetry that is gauged may mean that the vacuum is charged. But charge is not treated as mechanical in QED. Even before the Higgs field was postulated, charge was thought to be mediated by virtual photons. Virtual photons are non-mechanical, ghostly particles. They are supposed to mediate forces spontaneously, with no energy transfer. This is neither mathematically nor physically valid. Charge cannot be assigned to the vacuum, since that amounts to assigning characteristics to the void. One of the first postulates of physics is that extension, force, motion, or acceleration cannot be assigned to “nothing”. For charge to be mechanical, it would have to have extension or motion. All virtual particles and fields are imaginary assumptions. Higgs’ field, like Dirac’s field, is mathematical imagery.

The proof for the mechanism is said to have been obtained in the experiment at the Gargamelle bubble chamber, which photographed the tracks of a few electrons suddenly starting to move - seemingly of their own accord. This is interpreted as a neutrino interacting with the electron by the exchange of an unseen Z boson. The neutrino is otherwise undetectable. Hence the only observable effect is the momentum imparted to the electron by the interaction. No neutrino or Z is detected. Why should it be interpreted to validate the imaginary postulate? The electron could have moved due to many other reasons.

It is said that the W and Z bosons were detected in 1983 by Carlo Rubbia. That experiment only detected huge energy packets that left a track which was interpreted as a particle. It did not show that the particle was a boson or that it was taking part in any weak mediation. Since large mesons can be predicted by other, simpler methods (stacked spins), this particle detection is not proof of the weak interaction or of the Higgs mechanism. It is only an indication of a large particle or two.

In section 19.2 of his book “The Quantum Theory of Fields”, Weinberg says: “We do not have to look far for examples of spontaneous symmetry breaking. Consider a chair. The equations governing the atoms of the chair are rotationally symmetric, but a solution of these equations, the actual chair, has a definite orientation in space”. Classically, it was thought that parity was conserved because spin is an energy state. To conserve energy, there must be an equal number of left-handed and right-handed spins. Every left-handed spin cancels a right-handed spin of the same size, so that the sum is zero. If they were created from nothing - as in the Big Bang - they must also sum to nothing. Thus, an equal number of left-handed and right-handed spins is assumed at the quantum level.

It was also expected that interactions conserve parity, i.e., anything that can be done from left to right can also be done from right to left. Observations like beta decay showed that parity is not conserved in some quantum interactions, because some interactions showed a preference for one spin over the other. The electroweak theory supplied a mystical and non-mechanical reason for it. But it is known that parity is not always conserved. Weinberg seems to imply that because there is a chair facing west, and not one facing east, there is a parity imbalance: that one chair has literally lopsided the entire universe! This he explains as a spontaneously broken symmetry!

A spontaneously broken symmetry in field theory is always associated with a degeneracy of vacuum states. For the vacuum the expectation value of (a set of scalar fields) must be at a minimum of the vacuum energy. It is not certain that in such cases the symmetry is broken, because there is the possibility that the true vacuum is a linear superposition of vacuum states in which the summed scalar fields have various expectation values, which would respect the assumed symmetry. So, a degeneracy of vacuum states is the fall of these expectation values into a non-zero minimum. This minimum corresponds to a state of broken symmetry.
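The degeneracy being described can be illustrated with the standard quartic potential: when the mass term is negative, the stationary point at the origin becomes a maximum and the minimum moves out to a non-zero field value. A minimal symbolic check (our own illustration, not Weinberg's derivation; symbol names are ours):

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)          # radial field value
mu2, lam = sp.symbols('mu2 lam', positive=True) # |mu^2| and the quartic coupling

# Quartic potential with a negative mass term, written as -mu2 (mu2 > 0 here)
V = -mu2 * rho**2 / 2 + lam * rho**4 / 4

# Stationary points away from the origin: the non-zero minimum of the potential
crit = sp.solve(sp.diff(V, rho), rho)
print(crit)  # the minimum sits at rho = sqrt(mu2/lam)
```

This is the sense in which the expectation value "falls into a non-zero minimum": the value of the potential at that point is lower than at zero field.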

Since the true vacuum is non-perceptible – hence nothingness, with only one possible state, zero – it would have no expectation values above zero (which is the logical assumption). However, Weinberg assumed that the vacuum can have a range of non-zero states, giving both it and his fields a non-zero energy. Based on this wrong assumption, Weinberg manipulated these possible ranges of energies, assigning a possible quantum effective action to the field. Then he started looking at various ways it might create parity or subvert parity. Since any expectation value above zero for the vacuum is wholly arbitrary and only imaginary, he could have chosen either parity or non-parity. In view of Yang and Lee’s finding, Weinberg chose non-parity. This implied that his non-zero vacuum degenerates to the minimum. Then he applied this to the chair! Spontaneous symmetry breaking actually occurs only for idealized systems that are infinitely large. So it is beyond our comprehension how a chair is an idealized system that is infinitely large!

According to Weinberg, the appearance of broken symmetry for a chair arises because it has a macroscopic moment of inertia I, so that its ground state is part of a tower of rotationally excited states whose energies are separated by only tiny amounts, of order h²/I. This gives the state vector of the chair an exquisite sensitivity to external perturbations, so that even very weak external fields will shift the energy by much more than the energy difference of these rotational levels. As a result, any rotationally asymmetrical external field will cause the ground state or any other state of the chair with definite angular momentum numbers to rapidly develop components with other angular momentum quantum numbers. The states of the chair that are relatively stable with respect to small external perturbations are not those with definite angular momentum quantum numbers, but rather those with a definite orientation, in which the rotational symmetry of the underlying theory is broken.
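The scale Weinberg invokes here is easy to check. The level spacing ħ²/I for any macroscopic object is fantastically small; the 5 kg·m² moment of inertia below is our own rough guess for a chair, not a figure from the book:

```python
# Order-of-magnitude estimate of the rotational level spacing hbar^2/I for a chair.
# The moment of inertia is an assumed round number, not a measured value.
hbar = 1.054571817e-34   # reduced Planck constant, J·s
I_chair = 5.0            # kg·m², rough guess for a chair
spacing = hbar**2 / I_chair
print(spacing)           # ~2e-69 J, immeasurably small next to any thermal energy
```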

Weinberg declares that he is talking about symmetry, but actually he is talking about decoherence. He is trying to explain why the chair is not a probability or an expectation value and why its wave function has collapsed into a definite state. Quantum mathematics works by proposing a range of states. This range is determined by the uncertainty principle. Weinberg assigned a range of states to the vacuum and then extended that range based on the non-parity knowledge of Yang and Lee. But the chair is not a range of states: it is a state – the ground state. To degenerate or collapse into this ground state, or decohere from the probability cloud into the definite chair we see and experience, the chair has to interact with its surroundings. The chair is most stable when the surroundings are stable (having “a definite orientation”); so the chair aligns itself to this definite orientation. Weinberg argues that in doing so, it breaks the underlying symmetry. Thus, Weinberg does not know what he is talking about!

Weinberg believes that the chair is not just probabilistic as a matter of definite position. Apparently, he believes it is probabilistic in spin orientation also. He even talks about the macroscopic moment of inertia. This is extremely weird, because the chair has no macroscopic angular motion. The chair may be facing east or west, but there is no indication that it is spinning, either clockwise or counterclockwise. Even if it were spinning, there is no physical reason to believe that a chair spinning clockwise should have a preponderance of quanta in it spinning clockwise. QED has never shown that it is impossible to propose a macro-object spinning clockwise, with all constituent quanta spinning counterclockwise. However, evidently Weinberg is making this assumption without any supporting logic, evidence or mechanism. Spin parity was never thought to apply to macro-objects. A chair facing or spinning in one direction is not a fundamental energy state of the universe, and the Big Bang doesn’t care if you have five chairs spinning left and four spinning right. The Big Bang didn’t create chairs directly out of the void, so we don’t have to conserve chairs!

Electroweak theory, like all quantum theories, is built on gauge fields. These gauge fields have built-in symmetries that have nothing to do with the various conservation laws. What physicists tried to do was to choose gauge fields that matched the symmetries they had found or hoped to find in their physical fields. QED began with the simplest field, U(1), but the strong force and weak force had more symmetries and therefore required SU(2) and SU(3). Because these gauge fields were supposed to be mathematical fields (which is mathematically illegitimate) and not physical fields, and because they contained symmetries of their own, physicists soon got tangled up in the gauge fields. Later experiments would show that the symmetries in the so-called mathematical fields didn’t match the symmetries in nature. However, the gauge field would have to be broken somehow - either by adding ghost fields or by subtracting symmetries by “breaking” them. This way, they ended up with 12 gauge bosons, only three of which are known to exist, and only one of which has been well-linked to the theory. The eight gluons are completely theoretical, and only fill slots in the gauge theory. The three weak bosons apparently exist, but no experiment has tied them to beta decay. The photon is the only boson known to exist as a mediating “particle”, and it was known long before gauge theory entered the picture.

Quantum theory has got even this one boson – the photon – wrong, since the boson of quantum theory is not a real photon: it is a virtual photon! QED couldn’t conserve energy with a real photon, so the virtual photon mediates charge without any transfer of energy. The virtual photon creates a zero-energy field and a zero-energy mediation. The photon does not bump the electron, it just whispers a message in its ear. So, from a theoretical standpoint, the gauge groups are not the solution, they are part of the problem. We should be fitting the mathematics to the particles, not the particles to the mathematics. Quantum physicists claim over and over that their field is mainly experimental, but any cursory study of the history of the field shows this is not true. Quantum physics has always been primarily “mathematical”. A large part of 20th century experiment was the search for particles to fill out the gauge groups, and the search continues, because they are searching blindfolded in a dark room for the proverbial black cat that does not exist.

Weinberg’s book proves the above statement beyond any doubt. 99% of the book is couched in leading mathematics that takes the reader through a mysterious maze. This “mathematics” has its own set of rules that defy logical consistency. It is not a tool to measure how much a system changes when some of its parameters change. It is a vehicle. You climb in and it takes you where it wants to go! Quantum physicists never look at a problem without first loading it down with all the mathematics they know. The first thing they do is write everything as integrals and/or partial derivatives, whether they need to be so written or not. Then they bury their particles under matrices and action and Lagrangians and Hamiltonians and Hermitian operators and so on - as much machinery as they can apply to make it thoroughly incomprehensible. Only then do they begin calculating. Weinberg admits that Goldstone bosons “were first encountered in specific models by Goldstone and Nambu.” Note that the bosons were first encountered not in experiments. They were encountered in the mathematics of Goldstone and Nambu. As a “proof” of their existence, Weinberg offers us a first equation in which action is invariant under a continuous symmetry, and in which a set of Hermitian scalar fields are subjected to infinitesimal transformations. This equation also includes it, a finite real matrix. To solve it, he also needs the spacetime volume and the effective potential.

In equation 21.3.36, we get the mass of the W particle: MW = ev/(2 sin θ), where e is the electron charge, v is the vacuum expectation value, and θ is the electroweak mixing angle. Weinberg develops v right out of the Fermi coupling constant, so that it has a value here of about 247 GeV:
v ≈ 1/√(√2 GF)
The angle was taken from elastic scattering experiments between muon neutrinos and electrons, which gave a value for θ of about 28°.
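These numbers can be checked directly. The short script below uses the standard values for the Fermi constant and the fine-structure constant (our inputs, not figures from the book) and reproduces both the ≈247 GeV vacuum value and the ≈80 GeV W mass:

```python
import math

G_F = 1.1664e-5                      # Fermi constant in GeV^-2 (assumed standard value)
v = (math.sqrt(2) * G_F) ** -0.5     # vacuum expectation value, about 246 GeV
e = math.sqrt(4 * math.pi / 137.036) # electromagnetic coupling from alpha
theta = math.radians(28)             # electroweak mixing angle quoted in the text

M_W = e * v / (2 * math.sin(theta))  # the quoted mass relation
print(round(v, 1), round(M_W, 1))    # roughly 246.2 and 79.4 GeV
```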

All these are of great interest for the following reasons:
·        There is no muon neutrino in beta decay, so the scattering angle of electrons and muon neutrinos doesn’t tell us anything about the scattering angles of protons and electrons, or of electrons and electron antineutrinos. The electron antineutrino is about 80 times smaller than a muon neutrino, so it is hard to see how the scattering angles could be equivalent. It appears this angle was chosen after the fact, to match the data; Weinberg even admits it indirectly. The angle wasn’t known until 1994, while the W was discovered in 1983, when the angle was still unknown.
·        Fermi gave the coupling value to the fermions, but Weinberg gives the derived value to the vacuum expectation. This means that the W particle comes right out of the vacuum, and the only reason it doesn’t have the full value of 247 GeV is the scattering angle and its relation to the electron. We were initially shocked in 1983 to find 80 GeV coming from nowhere in the bubble chamber, but now we have 247 GeV coming from nowhere. Weinberg has magically borrowed 247 GeV from the void to explain one neutron decay! He gives it back 10⁻²⁵ seconds later, so that the loan is paid back. But 247 GeV is not a small quantity in the void. It is very big.

Weinberg says the symmetry breaking is local, not global, meaning he wanted to keep his magic as localized as possible. A global symmetry breaking might have unforeseen side-effects, warping the gauge theory in unwanted ways. But a local symmetry breaking affects only the vacuum at a single “point”. The symmetry is broken only within that hole that the W particle pops out of and goes back into. If we fill the hole back fast enough and divert the audience’s gaze with the right patter, we won’t have to admit that any rules were broken or that any symmetries really fell. We can solve the problem at hand, keep the mathematics we want to keep, and hide the spilled milk in a 10⁻²⁵ s rabbit hole.
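For what it is worth, the 10⁻²⁵ s figure is consistent with the lifetime implied by the W's measured decay width via τ ≈ ħ/Γ (the width value below is the standard measured one, our input):

```python
hbar = 6.582e-25   # reduced Planck constant in GeV·s (particle-physics units)
Gamma_W = 2.085    # GeV, measured W decay width (assumed standard value)
tau = hbar / Gamma_W
print(tau)         # about 3e-25 s
```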

Byron Roe’s Particle Physics at the New Millennium deals with the same subject in a much weirder fashion. He clarifies: “Imagine a dinner at a round table where the wine glasses are centered between pairs of diners. This is a symmetric situation and one doesn’t know whether to use the right or the left glass. However, as soon as one person at the table makes a choice, the symmetry is broken and the glass for each person to use is determined. It is no longer right-left symmetric. Even though a Lagrangian has a particular symmetry, a ground state may have a lesser symmetry”.

There is nothing in the above description that could be an analogue to a quantum mechanical ground state. Roe implies that the choice determines the ground state and the symmetry breaking. But there is no existential or mathematical difference between reality before and after the choice. Before the choice, the entire table and everything on it was already in a sort of ground state, since it was not a probability, an expectation, or a wave function. For one thing, prior choices had been made to bring it to this point. For another, the set before the choice was just as determined as the set after the choice, and just as real. De-coherence did not happen with the choice. It either happened long before or it was happening all along. For another, there was no symmetry, violation of which would have quantum effects. As with entropy, the universe doesn’t keep track of things like this: there is no conservation of wine glasses any more than there is a conservation of Weinberg’s chairs. Position is not conserved, nor is direction. Parity is a conservation of spin, not of position or direction. Roe might as well claim that declination, or lean, or comfort, or wakefulness, or hand position is conserved. Should we monitor chin angles at this table as well, and sum them up relative to the Big Bang?

Roe gives some very short mathematics for the Goldstone boson getting “eaten up by the gauge field” and thereby becoming massive, as follows:
L = (Dβφ)*(Dβφ) − μ²φ*φ − λ(φ*φ)² − (¼)FβνFβν
where Fβν = ∂νAβ − ∂βAν; Dβ = ∂β − igAβ; and Aβ → Aβ + (1/g)∂βα(x)
Let φ₁ ≡ φ₁′ + ⟨0|φ₁|0⟩ ≡ φ₁′ + v; v = √(μ²/λ), and substitute:
New terms involving A are
(½)g²v²AνAν − gvAν∂νφ₂

He says: “The first term is a mass term for Aν. The field has acquired mass!” But the mathematics suddenly stops. He chooses a gauge so that φ₂ = 0, which deletes the last term above. But then he switches to a verbal description: “One started with a massive scalar field (one state), a massless Goldstone boson (one state) and a massless vector boson (two polarization states). After the transform there is a massive vector meson Aμ, with three states of polarization and a massive scalar boson, which has one state. Thus, the Goldstone boson has been eaten up by the gauge field, which has become massive”. But where is the Aμ in that derivation? Roe has simply stated that the mass of the field is given to the bosons, with no mathematics or theory to back up his statement. He has simply jumped from Aν to Aμ with no mathematics or physics in between!
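The substitution itself can at least be checked symbolically. Treating the derivatives of the fields as plain symbols, the sketch below (our reconstruction of the textbook substitution, with symbol names of our own choosing) expands |Dβφ|² for the shifted field and recovers exactly the two A-dependent terms quoted above, (½)g²v²AνAν and −gvAν∂νφ₂:

```python
import sympy as sp

g, v, A = sp.symbols('g v A', real=True)    # coupling, vacuum value, gauge field
p1, p2 = sp.symbols('p1 p2', real=True)     # fluctuations phi1', phi2
d1, d2 = sp.symbols('d1 d2', real=True)     # stand-ins for their derivatives

phi = (v + p1 + sp.I * p2) / sp.sqrt(2)     # shifted field (phi1' + v + i*phi2)/sqrt(2)
dphi = (d1 + sp.I * d2) / sp.sqrt(2)        # its derivative (v is constant)
Dphi = dphi - sp.I * g * A * phi            # covariant derivative

kinetic = sp.expand(Dphi * sp.conjugate(Dphi))   # |D phi|^2

# Keep only the new A-dependent background terms: switch the fluctuations off
bg = sp.simplify(kinetic.subs({p1: 0, p2: 0, d1: 0}))
print(bg)   # g^2 v^2 A^2 / 2  -  g v A d2  +  d2^2 / 2
```

The mass-like g²v²A²/2 term does appear, but nothing in the algebra converts the field's mass term into a particle; that is the leap the verbal description papers over.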

The mathematics for the positive vacuum expectation value is in section 21.3 of Weinberg’s book - the crucial point being equation 21.3.27. This is where he simply inserts his positive vacuum expectation value, by asserting that μ² < 0, making μ imaginary, and finding the positive vacuum value at the stationary point of the Lagrangian. (In his book, Roe never held that μ² < 0.) This makes the stationary point of the Lagrangian undefined and basically implies that the expectation values of the vacuum are also imaginary. These being undefined and unreal, thus unbound, Weinberg is free to take any steps in his “mathematics”. He can do anything he wants to. He therefore juggles the “equalities” a bit more until he can get his vacuum value to slide into his boson mass. He does this very ham-handedly, since his huge Lagrangian quickly simplifies to W = vg/2, where v is the vacuum expectation value. It may be remembered that g in weak theory is 0.65, so that the boson mass is nearly a third of v.
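Plugging the quoted numbers into W = vg/2 (both inputs are the text's own figures) lands the mass at the observed value:

```python
v = 247.0             # GeV, vacuum expectation value quoted in the text
g = 0.65              # weak coupling quoted in the text
M_W = v * g / 2       # W = vg/2
print(M_W, M_W / v)   # about 80 GeV, i.e. roughly a third of v
```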

Weinberg does play us some tricks here, though he hides his tricks a bit better than Roe. Roe gives up on the mathematics and just assigns his field mass to his bosons. Weinberg skips the field mass and gives his vacuum energy right to his boson, with no intermediate steps except going imaginary. Weinberg tries to imply that his gauged mathematics is giving him the positive expectation value, but it isn’t. Rather, he has cleverly found a weak point in his mathematics where he can choose whatever value he needs for his vacuum input, and then transfers that energy right into his bosons.

What is the force of the weak force? In section 7.2 of his book, Roe says that “The energies involved in beta decay are a few MeV, much smaller than the 80 GeV of the W intermediate boson.” But by this he only means that the electrons emitted have kinetic energies in that range. This means that, as a matter of energy, the W doesn’t really involve itself in the decay. Just from looking at the energy involved, no one would have thought it required the mediation of such a big particle. Then why did Weinberg think it necessary to borrow 247 GeV from the vacuum to explain this interaction? Couldn’t he have borrowed a far smaller amount? The answer to this is that by 1968, most of the smaller mesons had already been discovered. It therefore would have been foolhardy to predict a weak boson with a weight capable of being discovered in the accelerators of the time. The particles that existed had already been discovered, and the only hope was to predict a heavy particle just beyond the current limits. This is why the W had to be so heavy. It was a brilliant bet, and it paid off.


            Now, let us examine the Lorentz force law in the light of the above discussion. Since the theory is based on electrons, let us first examine what an electron is! This question is still unanswered, even though everything else about the electron - what it does, how it behaves, etc. - is common knowledge.

From the time electrons were first discovered, charged particles like protons and electrons have been arbitrarily assigned plus or minus signs to indicate potential, but no real mechanism or field has ever been seriously proposed. According to the electro-weak theory, the carrier of charge is the messenger photon. But this photon is a virtual particle. It does not exist in the field. It has no mass, no dimension, and no energy. In electro-weak theory, there is no mathematics to show a real field. The virtual field has no mass and no energy. It is not really a field, as a continuous field can exist only between two boundaries that are discrete. A boat in the deep ocean on a calm and cloudy night does not feel any force; it can only feel forces with reference to another body or the sky. With no field to explain atomic bonding, early particle physicists had to explain the bond with the electrons. Even now, the nucleus is not fully understood. Thus the bonding has been assigned to the electrons. But how far is this a correct theory?

The formation of an ionic bond proceeds when the cation, whose ionization energy is low, releases some of its electrons to achieve a stable electron configuration. But the ionic bond is used to explain the bonding of atoms, not ions. For instance, in the case of NaCl, it is a Sodium atom that loses an electron to become a Sodium cation. Since the Sodium atom is already stable, why should it need to release any of its electrons to achieve a “stable configuration” that makes it unstable? What causes it to drop an electron in the presence of Chlorine? There is no answer. The problem becomes even bigger when we examine it from the perspective of Chlorine. Why should Chlorine behave differently? Instead of dropping an electron to become an ion, Chlorine adds electrons. Since as an atom Chlorine is stable, why should it want to borrow an electron from Sodium to become unstable? In fact, Chlorine cannot “want” an extra electron, because that would amount to a stable atom “wanting” to be unstable. Once Sodium becomes a cation, it should attract a free electron, not Chlorine. So there is no reason for Sodium to start releasing electrons, and no reason for a free electron to move from a cation to a stable atom like Chlorine. But there are lots of reasons for Sodium not to release electrons. Free electrons do not move from cations to stable atoms.

This contradiction is sought to be explained by “electron affinity”. The electron affinity of an atom or molecule is defined as the amount of energy released when an electron is added to a neutral atom or molecule to form a negative ion. Here affinity has been defined by the release of energy, which is an effect and not the cause! It is said that ionic bonding will occur only if the overall energy change for the reaction is exothermic. This implies that the atoms tend to release energy. But why should they behave like that? All the present theory tells us is that there is a release of energy during the bonding. But that energy could be released in any number of mechanical scenarios, and not due to electron affinity alone. Modern physicists have no answer for this.

It is said that all elements tend to become noble gases, so that they gain or lose electrons to achieve this. But there is no evidence for it. If this logic is accepted, then Chlorine should want another electron to be more like Argon. But then it really should want another proton, because another electron won’t make Chlorine into Argon. It will only make Chlorine an ion, which is unstable. Elements do not destabilize themselves to become ions. On the other hand, ions take on electrons to become atoms. It is the ions that want to be atoms, not the reverse. If there is any affinity, it is for having the same number of electrons and protons. Suicide is a human tendency – not an atomic tendency. Atoms have no affinity for becoming ions. The theory of ionic bonding suggests that the anion (an ion that is attracted to the anode during electrolysis), whose electron affinity is positive, accepts the electrons with a negative sign to attain a stable electronic configuration! And nobody pointed out such a fraud! Elements do not gain or lose electrons; they confine and balance the charge field around them, to gain even more nuclear stability.

            Current theory only tells us that atoms should have different electronegativities to bond, without explaining the cause of such action. Electronegativity cannot be measured directly. Given the current theory, it also does not follow any logical pattern on the Periodic Table. It generally runs from a low to a peak across the table, with many exceptions (Hydrogen, Zinc, Cadmium, Terbium, Ytterbium, the entire 6th period, etc.). To calculate the Pauling electronegativity of an element, it is necessary to have data on the dissociation energies of at least two types of covalent bonds formed by that element. That is a post hoc definition. In other words, the data has been used to formulate the mathematics. The mathematics has no predictive qualities. It has no theoretical or mechanical foundation. Before we define electronegativity, let us define what an electron is. We will first explain the basic concept before giving a practical example to prove it.
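Pauling's post hoc recipe can be made concrete. His definition takes the electronegativity difference of two elements from the “excess” energy of their mixed bond: χA − χB = √[Ed(AB) − (Ed(AA) + Ed(BB))/2], with energies in eV. A minimal sketch for H and Cl, using approximate textbook bond energies (our inputs, and a hypothetical helper name):

```python
import math

def pauling_diff(E_AB, E_AA, E_BB):
    """Electronegativity difference from bond dissociation energies (in eV)."""
    excess = E_AB - (E_AA + E_BB) / 2   # extra stability of the mixed bond
    return math.sqrt(excess)            # Pauling's square-root scaling

# Approximate bond energies in eV (H-H 436, Cl-Cl 242, H-Cl 431 kJ/mol)
diff = pauling_diff(E_AB=4.47, E_AA=4.52, E_BB=2.51)
print(round(diff, 2))   # close to the tabulated difference of about 0.96
```

Note that the dissociation energies go in as data; nothing in the formula predicts them, which is the point being made above.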

Since the effect of force on a body sometimes appears as action at a distance, and since all action at a distance can only be explained by the introduction of a field, we will first consider fields. If there is only one body in a field, it reaches an equilibrium position with respect to that field; hence the body does not feel any force. Only when another body enters the field does the interaction affect the field, which is felt by both bodies. Hence any interaction, to be felt, must contain two bodies separated by a field. Thus, all bodies that take part in interactions are three-fold structures (we call it tribrit). Only in this way can we explain the effect of one body on the other in a field. It may be noted that particles with electric charge create electric fields that flow from higher concentration to lower concentration. When the charged bodies are in motion, they generate a magnetic field that closes in on itself. This motion is akin to that of a rudderless boat carried from high altitude to low altitude by the river current, creating a bow-shock effect that closes in after the boat has passed.

            All particles or bodies are discrete structures that are confined within their dimension, which differentiates their “inner space” from their “outer space”. The “background structure” on which they are positioned is the field. The boundaries between particles and fields are demarcated by density variations. But what happens when there is uniform density between the particles and the field – where the particle melts into the field? The state is singular, indistinguishable in localities, uncommon and unusual to our experience, making it un-describable – thus, un-knowable. We call this state of uniform density (sama rasa) singularity (pralaya – literally meaning approaching dissolution). We do not accept that singularity is a point or region in spacetime in which gravitational forces cause matter to have an infinite density – where the gravitational tides diverge – because gravitational tides have never been observed. We do not accept that singularity is a condition when equations do not give a valid value, and can sometimes be avoided by using a different coordinate system, because we have shown that division by zero leaves the number unchanged and renormalization is illegitimate mathematics. We do not accept that events beyond the Singularity will be stranger than science fiction, because at singularity, there cannot be any “events”.

            Some physicists have modeled a state of quantum gravity beyond singularity and call it the “big bounce”. Though we do not accept their derivation and their “mathematics”, we agree in general with the description of the big bounce. They have interpreted it as evidence for colliding galaxies. We refer to that state as the true “collapse” and its aftermath. The law of conservation demands that for every displacement caused by a force, there must be generated an equal and opposite displacement. Since application of force leads to inertia, for every inertia of motion, there must be an equivalent inertia of restoration. Applying this principle to the second law of thermodynamics, we reach a state where the structure formation caused by differential density dissolves into a state of uniform density. We call that state singularity. Since at that stage there is no differentiation between the state of one point and any other point, there cannot be any perception, observer or observable. There cannot be any action, number or time. Even the concept of space comes to an end, as there are no discernible objects that can be identified and their interval described. Since this distribution leaves the largest remaining uncertainty (consistent with the constraints for observation), this is the true state of maximum entropy. It is not a state of “heat death” or “state of infinite chaos”, because it is a state mediated by negative energy.

            Viewed from this light, we define objects into two categories: macro objects that are directly perceptible (bhaava pratyaya) and quantum or micro objects that are indirectly perceptible through some mechanism (upaaya pratyaya). The second category is further divided into two categories: those that have differential density that makes them perceptible indirectly through their effects (devaah) and those that form a part of the primordial uniform density (prakriti layaah) making them indiscernible. These are the positive and negative energy states respectively but not exactly like those described by quantum physics. This process is also akin to the creation and annihilation of virtual particles though it involves real particles only. We describe the first two states of the objects and their intermediate state as “dhruva, dharuna and dhartra” respectively.

            When the universe reaches a state of singularity as described above, it is dominated by the inertia of restoration. The singular state (sama rasa) implies that there is equilibrium everywhere. This equilibrium can be thought of in two ways: universal equilibrium and local equilibrium. The latter implies that every point is in equilibrium. Both the inertia of motion and the inertia of restoration cannot absolutely cancel each other, because in that event the present state could never have been reached, as no action would ever have started. Thus, it is reasonable to believe that there is a mismatch between the two, which causes the inherent instability at some point. Inertia of motion can be thought of as negative inertia of restoration and vice versa. When the singularity approaches, this inherent instability causes the negative inertia of restoration to break the equilibrium. This generates inertia of motion in the uniformly dense medium that breaks the equilibrium over a large area. This is the single and primary force that gives rise to the other secondary and tertiary forces.

            This interaction leads to a chain reaction of breaking the equilibrium at every point over a large segment, resembling spontaneous symmetry breaking and density fluctuations followed by the bow-shock effect. Thus, the inertia of motion diminishes and ultimately ceases at some point in a spherical structure. We call the circumference of this sphere “naimisha” - literally meaning controller of the circumference. Since this action measures off a certain volume from the infinite expanse of uniform density, the force that causes it is called “maayaa”, which literally means “that by which (everything is) scaled”. Before this force operated, the state inside the volume was the same as the state outside the volume. But once this force operates, the densities of the two become totally different. While the outside continues to be in the state of singularity, the inside is chaotic. While at one level inertia of motion pushes ahead towards the boundary, it is countered by the inertia of restoration, causing non-linear interaction leading to density fluctuation. We call the inside stuff that cannot be physically described “rayi”, and the force associated with it “praana” – which literally means source of all displacements. All other forces are variants of this force. As can be seen, “praana” has two components revealed as inertia of motion and inertia of restoration, which is the same as inertia of motion in the reverse direction. We call this second force “apaana”. The displacements caused by these forces are unidirectional. Hence in isolation, they are not able to form structures. Structure formation begins when both operate on “rayi” at a single point. This creates an equilibrium point (we call it vyaana) around which the surrounding “rayi” accumulates. We call this mechanism “bhuti”, implying accumulation in great numbers.

            When “bhuti” operates on “rayi”, it causes density variation at different points, leading to structure formation through layered structures that lead to confinement. Confinement increases temperature. This creates pressure on the boundary, leading to the operation of the inertia of restoration, which tries to contain the expansion. Thus, these are not always stable structures. Stability can be achieved only through equilibrium. But this is a different type of equilibrium. When inertia of restoration dominates over a relatively small area, it gives a stable structure. This is one type of confinement that leads to the generation of the strong, weak and electro-magnetic interactions and radioactivity. Together we call these “Yagnya”, which literally means coupling (samgati karane). Over large areas, the distribution of such stable structures can also bring in equilibrium equal to the primordial uniform density. This causes the bodies to remain attached to each other from a distance through the field. We call this force “sootra”, which literally means string. This causes the gravitational interaction. Hence it is related to mass and inversely to distance. In gravitational interaction, one body does not hold the other, but the two bodies revolve around their barycenter.

            When “Yagnya” operates at negative potential, i.e., when “apaana” dominates over “rayi”, it causes what is known as the strong nuclear interaction, which is confined within the positively charged nucleus. Outside the confinement there is a deficiency of negative charge, which is revealed as positive charge. We call this force “jaayaa”, literally meaning that which creates all particles. This force acts in 13 different ways to create all elementary particles (we are not discussing that now). But when “Yagnya” operates at positive potential, i.e., when “praana” dominates over “rayi”, it causes what is known as the weak nuclear interaction. Outside the confinement there is a deficiency of positive charge, which is revealed as negative charge. This negative charge searches for positive charge to attain equilibrium. This was reflected in the Gargamelle bubble chamber, which photographed the tracks of a few electrons suddenly starting to move. This has been described as the W boson. We call this mechanism “dhaaraa”, literally meaning sequential flow, since it starts a sequence of actions with corresponding reactions (the so-called W+ and W− bosons).

            Up to this point there is no structure: there is only density fluctuation. When the above reactions try to shift the relatively denser medium, the inertia of restoration is generated and tries to balance the two opposite reactions. This appears as charge (lingam), because in its interaction with others it either tries to push them away (positive charge – pum linga) or to confine them (negative charge – stree linga). Since this belongs to a different type of reaction, the force associated with it is called “aapah”. When the three forces of “jaayaa”, “dhaaraa” and “aapah” act together, the result is the electromagnetic interaction (ap). Thus, electromagnetic interaction is not a separate force, but only an accumulation of the other forces. Since electric current stretches out in a bipolar way, whereas magnetic flux always closes in on itself, the two must have different sources of origin and must have been coupled with some other force. This is the physical explanation of electromagnetic forces. Depending upon the temperature gradient, we classify the electrical component into four categories (sitaa, peeta, kapilaa, ati-lohitaa) and the magnetic forces into four corresponding categories (bhraamaka, swedaka, draavaka, chumbaka).

While explaining uncertainty, we had shown that if we want to get any information about a body, we must send some perturbation towards it to rebound through the intervening field, where it gets modified. We had also shown that for every force released, there is an equivalent force released in the opposite direction. Let us take a macro example first. Planets move more or less in the same plane around the Sun, just as boats float on the same plane in a river (which can be treated as a field).

The river water is not static. It flows in a specific rhythm, like the space weather. When a boat passes, there is a bow-shock effect in the water in front of the boat, and the rhythm is temporarily changed till reconnection of the resultant wave. The water is displaced in a direction perpendicular to the motion of the boat. However, the displaced water is pushed back by the water surrounding it due to inertia of restoration. Thus, it moves backwards past the boat, charting a curve. The maximum displacement of the curve is at the middle of the boat.
We can describe this as the boat pushing the water away while the water tries to confine the boat. The interaction will depend on the mass and volume (which determine relative density) and the speed of the boat on the one hand, and the density and velocity of the river flow on the other. These two can be described as the potentials for interaction (we call it saamarthya) of the boat and the river respectively. The potential that starts the interaction first by pushing the other is called the positive potential, and the one that responds to it is called the negative potential. Together they are called charge (we call it lingam). When the potential leads by pushing the field, it is the positive charge. The potential that confines the positive charge is the negative charge. In an atom, this negative potential is called an electron. The basic cause of such potential is instability of equilibrium due to the internal effect of a confined body. It can arise from various causes (collectively we call these ashanaayaa vritti) and generates spin. This depends upon the magnitude of the instability, which explains electron affinity. The consequent reaction is electronegativity.

The Solar system is inside a big bubble, which forms a part of its heliosphere. The planets are within this bubble. The planets are individually tied to the Sun through gravitational interaction. They also interact with each other. In the boat example, the river flows within two boundaries and the riverbed affects its flow. The boat acts with a positive potential. The river acts with a negative potential. In the Solar system, the Sun acts with a positive potential. The heliosphere acts with a negative potential. In an atom, the proton acts with a positive potential. The electron acts with a negative potential.

While discussing Coulomb’s law we have shown that interaction between two positive charges leads to explosive results. Thus, the protons explode like solar flares and try to move out in different directions, moderated by the neutrons in the nucleus and the electrons at the boundary. The number of protons determines the number of explosions – hence the number of boundary electrons. Each explosion in one direction is matched by an equivalent disturbance in the opposite direction. This determines the number of electrons in the orbital. The neutrons are like planets in the solar system. This is confined by the negative potential of the giant bubble in the Solar system, which is the equivalent of the electron in atoms. Since the flares appear in random directions, the position of the electron cannot be precisely determined. In the boat example, the riverbed acts like the neutrons. The core of the nucleus is like the giant bubble; the water near the boat that is most disturbed acts similarly. The electrons are like the heliosphere; the river boundaries act similarly. The net effect of all such interactions in a higher atom appears as described below.

            The atomic radius is a term used to describe the size of the atom, but there is no standard definition for this value. Atomic radius may refer to the ionic radius, covalent radius, metallic radius, or van der Waals radius. In all cases, the size of the atom depends on how far out the electrons extend. Thus, the electrons can be described as the outer boundary of the atom that confines it. It is like the “heliopause” of the solar system, which confines the solar system and differentiates it from interstellar space. There are well-defined planetary orbits, which lack a physical description except against the backdrop of the solar system; these are like the electron shells. The similarity is only partial, as each atomic orbital admits up to two otherwise identical electrons with opposite spin, while planets have no such companion (though the libration points 1 and 2, or 4 and 5, can be thought of for comparison). The reason for this difference is the nature of the mass difference (volume and density) dominating in the two systems.

            The charge-neutral gravitational force that arises from the center of mass (we call it Hridayam) stabilizes the inner (Sun-ward or nucleus-ward) space between the Sun and the planet, and between the nucleus and the electron shells. The charged electric and magnetic fields dominate the field (from the center to the boundary) and confine and stabilize the inter-planetary field or the extra-nuclear field (we call it “Sootraatmaa”, which literally means “self-sustained entangled strings”). While in the case of the Sun and planets most of the mass is concentrated at the center as one body, in the case of the nucleus, protons and neutrons with comparable masses interact with each other, destabilizing the system continuously. This affects the electron arrangement. The mechanism (we call it “Bhuti”), the cause and the macro manifestation of these forces and spin will be discussed separately.

We have discussed the electroweak theory earlier. Here it would suffice to say that electrons are nothing but the outer boundaries of the extra-nuclear space and, like the planetary orbits, have no physical existence. We may locate the planet, but not its orbit. If we mark one segment of the notional orbit and keep watch, the planet will appear there periodically, but not always. Similarly, we cannot measure both the position and the momentum of the electron simultaneously. Each electron shell is tied to the nucleus individually, like planets around the Sun. This is proved by the Lamb shift and the overlapping of different energy levels. The shells are entangled with the nucleus just as the planets are entangled not only gravitationally with the Sun, but also with each other. We call this mechanism “chhanda”, which literally means entanglement.

 Quantum theory now has 12 gauge bosons, only three of which are known to exist, and only one of which has been well linked to the electroweak theory. The eight gluons are completely theoretical and only fill slots in the gauge theory. But we have a different explanation for these. We call these eight “Vasu”. Since interaction requires at least two different units, each of these could interact with the other seven. Thus, we have seven types of “chhandas”. Of these, only three (maa, pramaa, pratimaa) are involved in fixed dimension (dhruva), fluid dimension (dhartra) and dimension-less particles (dharuna). The primary difference between these bodies relates to density (apaam pushpam), which affects and is affected by volume. A fourth “chhanda” (asreevaya) is related to the confining fields (apaam). We will discuss these separately.

We can now review the results of the double-slit experiment and the diffraction experiment in the light of the above discussion. Recall the boat example: the river flows in a specific rhythm like the space weather, and after a boat passes there is a bow-shock effect in the water, the rhythm being temporarily changed till reconnection. The planetary orbits behave in a similar way, and the solar wind interacts with the magnetospheres of planets in a similar way. If we take two narrow angles and keep watch for planets moving past those angles, we will find a particular pattern of planetary movement. If we could measure the changes in the field of the Solar system at those points, we would also note a fixed pattern. It is like boats crossing a bridge with two channels underneath. We may watch the boats passing through a specific channel and the wrinkled surface of the water. As the boats approach the channels, a compressed wave precedes each boat. This wave will travel through both channels. However, if the boats are directed towards one particular channel, the wave will proceed mostly through that channel. The effect on the other channel will be almost nil, showing fixed bands on the surface of the water. If the boats are allowed to move unobserved, they will float through either of the channels, and each channel will have a 50% chance of the boat passing through it. Thus, the corresponding waves will show an interference pattern.
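For comparison, the textbook wave-optics prediction for two narrow slits – the pattern the channel analogy above is meant to mirror – can be sketched in a few lines. The slit separation d and wavelength lam below are arbitrary illustrative numbers, not quantities taken from this discussion:

```python
import math

# Standard two-slit relative intensity: I(theta) = cos^2(pi * d * sin(theta) / lam).
# d and lam are in the same arbitrary length unit; theta is the viewing angle.
def two_slit_intensity(theta, d=5.0, lam=1.0):
    """Relative intensity at viewing angle theta for two narrow slits."""
    phase = math.pi * d * math.sin(theta) / lam
    return math.cos(phase) ** 2

print(round(two_slit_intensity(0.0), 3))              # central bright fringe: 1.0
print(round(two_slit_intensity(math.asin(0.1)), 3))   # path difference lam/2: 0.0
```

Blocking one slit removes the cosine cross-term, and the bands disappear – the analogue of directing all the boats through one channel.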

Something similar happens in the case of electrons and photons. The so-called photon has zero rest mass. Thus, it cannot displace any massive particles, but flows through the particles, imparting only its energy to them. The space between the emitter and the slits is not empty. Thus, the movement of the mass-less photon generates a reaction similar to that of the boats through the channels. Since the light pulse spreads out spherically in all directions, it behaves like a water sprinkler. This creates the wave pattern, as explained below:

Let us consider a water sprinkler in the garden gushing out water. Though the water is primarily forced out by one force, other secondary forces come into play immediately. One is the inertia of motion of the particles pushed out. The second is the interaction between particles that are in different states of motion due to such interactions with other particles. What we see is the totality of such interactions, with components of the stream gushing out at different velocities in the same general direction (not in identical directions, but within a narrow band). If the stream of water falls on a stationary globe, which stops its energy, the globe will rotate. It is because the force is not enough to displace the globe from its position completely; it only partially displaces the surface, which rotates the globe on its fixed axis.

Something similar happens when the energy flows, generating a bunch of radiations of different wavelengths. If it cannot displace the particle completely, the particle rotates in its position, so that the energy “slips out” past it, moving tangentially. Alternatively, the energy moves one particle, which hits the next particle. Since energy always moves objects tangentially, when the energy flows by the particle, the particle is temporarily displaced. It regains its position due to inertia of restoration – the elasticity of the medium – when other particles push it back. Thus, only the momentum is transferred to the next particle, giving the energy flow a wave shape as shown below.
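The momentum hand-off described here can be illustrated with a toy chain of particles coupled to their neighbours by a restoring force. The chain length, stiffness and time step below are assumptions chosen only to make the effect visible, not quantities from the text:

```python
# Toy chain: each particle is pulled back toward the average of its
# neighbours (the "inertia of restoration"), so a kick given to the first
# particle travels down the chain as a wave while each particle merely
# oscillates about its own rest position.
N, K, DT, STEPS = 50, 1.0, 0.05, 400
pos = [0.0] * N          # displacement of each particle from rest
vel = [0.0] * N
vel[0] = 1.0             # kick the first particle only

for _ in range(STEPS):
    acc = [K * ((pos[i - 1] if i > 0 else 0.0)
                + (pos[i + 1] if i < N - 1 else 0.0)
                - 2.0 * pos[i]) for i in range(N)]
    for i in range(N):
        vel[i] += acc[i] * DT   # semi-implicit Euler step
        pos[i] += vel[i] * DT

# After the run, the largest displacement sits far from the kicked end:
# the disturbance moved, the particles did not.
peak = max(range(N), key=lambda i: abs(pos[i]))
print(peak)
```

Only momentum travels down the chain; no particle ends up far from where it started, which is the point the paragraph makes about energy flow taking a wave shape.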

The diffraction experiment can be compared to the boats being divided to pass in equal numbers through both channels. The result would be the same: it will show an interference pattern. Since the electron behaves like the photon, it should be mass-less.

It may be noted that the motion of the wave is always within a narrow band and is directed towards the central line, which is the equilibrium position. This implies that there is a force propelling it towards the central line. We call this force inertia of restoration (sthitisthaapaka samskaara), which is akin to elasticity. The bow-shock effect is a result of this inertia. But after reaching the central line, the wave over-shoots due to inertia of motion. The reason is that systems are probabilistically almost always close to equilibrium, but transient fluctuations to non-equilibrium states can be expected due to inequitable energy distribution in the system and its environment, independently and collectively. Once the system is in a non-equilibrium state, it is highly likely that both after and before that state it was closer to equilibrium. All such fluctuations are confined within a boundary. The electron provides this boundary. The exact position of the particle cannot be predicted, as it is perpetually in motion. But it is somewhere within that boundary only. This is the probability distribution of the particle. It may be noted that the particle is at one point within this band at any given time, and not smeared out over all points. However, because of its mobility, it has the possibility of covering the entire space at some time or other. Since the position of the particle cannot be determined in one reading, a large number of readings are taken. This is bound to give a composite result. But this does not imply that such readings represent the position of the particle at any specific moment or at all times before measurement.
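The distinction drawn here – one definite position per instant, a smeared distribution only across many readings – can be made concrete with a toy sampling model. The uniformly-moving-particle-in-a-band picture is an illustrative assumption, not the essay's actual dynamics:

```python
import random

# Toy model: at each instant the particle occupies one definite point
# inside its confining band; pooling many separate readings yields a
# composite spread over the whole band.
random.seed(42)                # fixed seed so the sketch is repeatable
LO, HI = -1.0, 1.0             # the confining boundary

readings = [random.uniform(LO, HI) for _ in range(10_000)]

# Each single reading is one definite position inside the band ...
assert all(LO <= x <= HI for x in readings)

# ... but together the readings cover nearly the whole band.
print(round(max(readings) - min(readings), 2))
```

No individual reading is "smeared out"; only the ensemble of readings is, which is the paragraph's claim about composite measurement results.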

The “boundary conditions” can be satisfied by many different waves (called harmonics – we call it chhanda) if each of those waves has a position of zero displacement at the right place. These positions, where the value of the wave is zero, are called nodes. (Sometimes two types of waves – traveling waves and standing waves – are distinguished by whether the nodes of the wave move or not.) If electrons are waves, then the wavelength of the electron must “fit” into any orbit that it makes around the nucleus in an atom. This is the “boundary condition” for a one-electron atom. Orbits into which the electron’s wavelength does not “fit” are not possible, because wave interference would rapidly destroy the wave amplitude and the electron would not exist anymore. This “interference” effect leads to discrete (quantized) energy levels for the atom. Since light interacts with the atom by causing transitions between these levels, the color (spectrum) of the atom is observed to be a series of sharp lines. This is precisely the pattern of energy levels observed in the hydrogen atom. Transitions between these levels give the pattern in the absorption or emission spectrum of the atom.


            In view of the above discussion, the Lorentz force law becomes simple. Since division by zero leaves the quantity unchanged, the equation remains valid and does not become infinite. The equation shows the mass-energy requirement for a system to achieve the desired charge density. But what about the radius “a” for the point electron and the 2/3 factor in the equation?
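The equation referred to here appears to have been lost from the text. Assuming it is the standard classical expression for the electromagnetic self-mass of a charged sphere of radius a – which is where the 2/3 factor famously arises – it reads, in SI units:

```latex
% Classical electromagnetic self-mass of a charged sphere of radius a
% (an assumption about which equation the text intends):
m_{\mathrm{em}} \;=\; \frac{2}{3}\,\frac{e^{2}}{4\pi\varepsilon_{0}\,a\,c^{2}}
```

If this identification is right, setting m_em equal to the measured electron mass and solving for a gives the classical electron radius.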

            The simplest explanation for this is that no one has measured the mass or radius of the electron, though its charge has been measured. This has been divided by c² to get the hypothetical mass. As explained above, this mass is not the mass of the electron, but the mass required to achieve a charge density equal to that of an electron, which is different from that of the nucleus and the extra-nucleic field – like the heliosheath, which is the dividing region between the heliosphere and interstellar space. Just as solar radiation rebounds from the termination shock, emissions from the proton rebound from the electron shell, which is akin to the stagnation region of the solar system.

The Voyager 1 spacecraft is now in a stagnation region in the outermost layer of the bubble around our solar system, the heliosheath. Data obtained from Voyager over the last year reveal the region near the termination shock to be a kind of cosmic purgatory. In it, the wind of charged particles streaming out from our Sun has calmed, our solar system’s magnetic field is piled up, and higher-energy particles from inside our solar system appear to be leaking out into interstellar space. Scientists previously reported that the outward speed of the solar wind had diminished to zero, marking a thick, previously unpredicted “transition zone” at the edge of our solar system. During this past year, Voyager’s magnetometer also detected a doubling in the intensity of the magnetic field in the stagnation region. Like cars piling up at a clogged freeway off-ramp, the increased intensity of the magnetic field shows that inward pressure from interstellar space is compacting it. At the same time, Voyager has detected a 100-fold increase in the intensity of high-energy electrons from elsewhere in the galaxy diffusing into our solar system from outside, which is another indication of the approaching boundary.

This is exactly what is happening at the atomic level. The electron is like the termination shock at the heliosheath that encompasses the “giant bubble” around the Solar system, which is the equivalent of the extra-nuclear space. The electron shells are like the stagnation region that stretches between the giant bubble and interstellar space. Thus, the radius a in the Lorentz force law is that of the associated nucleus and not that of the electron. The back reaction is the confining magnetic pressure of the electron on the extra-nucleic field. The factor 2/3 is related to the extra-nucleic field, which contributes to the Hamiltonian HI. The balance of 1/3 is related to the nucleus, which contributes to the Hamiltonian HA. We call this concept “Tricha saama”, which literally means “tripling radiation field”. We have theoretically derived the value of π from this principle. The effect of the electron that is felt outside – like the bow-shock effect of the Solar system – is the radiation effect, which contributes to the Hamiltonian HR. To understand the physical implication of this concept, let us consider the nature of perception.


            Before we discuss the perception of bare charge and bare mass, let us discuss the modern notion of albedo. Albedo is commonly used to describe the overall average reflection coefficient of an object. It is the fraction of solar energy (shortwave radiation) reflected from the Earth or other objects back into space – a measure of the reflectivity of the surface. It is a non-dimensional, unit-less quantity that indicates how well a surface reflects solar energy. Albedo (α) varies between 0 and 1. A value of 0 means the surface is a “perfect absorber” that absorbs all incoming energy. A value of 1 means the surface is a “perfect reflector” that reflects all incoming energy. Albedo generally applies to visible light, although it may involve some of the infrared region of the electromagnetic spectrum.
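The definition just given splits any incident flux into a reflected part α and an absorbed part 1 − α, which is trivial to put in code. The 340 W/m² mean solar input used as a default is a commonly quoted round figure, assumed here only for illustration:

```python
# Split an incident energy flux according to the albedo definition:
# reflected fraction = alpha, absorbed fraction = 1 - alpha.
def split_flux(alpha, incident=340.0):
    """Return (reflected, absorbed) flux in W/m^2 for reflectivity alpha."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("albedo must lie between 0 (perfect absorber) and 1")
    reflected = alpha * incident
    return reflected, incident - reflected

print(split_flux(0.0))   # perfect absorber:  (0.0, 340.0)
print(split_flux(1.0))   # perfect reflector: (340.0, 0.0)
```

The same arithmetic applied to the surface values quoted below (ocean ≈ 0.06, snow ≈ 0.9) shows why snow cover so sharply cuts the energy a surface absorbs.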

Neutron albedo is the probability, under specified conditions, that a neutron entering a region through a surface will return through that surface. Day-to-day variations of cosmic-ray-produced neutron fluxes near the Earth’s surface have been measured using three sets of paraffin-moderated BF3 counters installed at different locations: 3 m above ground, at ground level, and 20 cm underground. The decreases in neutron flux observed by these counters when snow cover exists show that there are upward-moving neutrons, i.e., ground-albedo neutrons, near the ground surface. The amount of albedo neutrons is estimated to be about 40 percent of the total neutron flux in the energy range 1 to 10⁶ eV.

Albedos are of two types: “bond albedo” (measuring the total proportion of electromagnetic energy reflected) and “geometric albedo” (measuring brightness when the illumination comes from directly behind the observer). The geometric albedo is defined as the amount of radiation relative to that from a flat Lambert surface, an ideal reflector at all wavelengths. Such a surface scatters light isotropically – an equal intensity of light is scattered in all directions; it does not matter whether it is measured from directly above the surface or off to the side, a photometer will give the same reading. The bond albedo is the total radiation reflected from an object compared to the total incident radiation from the Sun. The study of albedos – their dependence on wavelength, lighting angle (“phase angle”), and variation in time – comprises a major part of the astronomical field of photometry.

The albedo of an object determines its visual brightness when viewed with reflected light. A typical geometric ocean albedo is approximately 0.06, while bare sea ice varies from approximately 0.5 to 0.7. Snow has an even higher albedo at 0.9. It is about 0.04 for charcoal. There cannot be any geometric albedo for gaseous bodies. The albedos of planets are tabulated below:
[Table of planetary albedos missing: only the column headings “Geometric Albedo” and “Bond Albedo” and a single value, 0.343 ± 0.032, survive from the original.]

The above table shows some surprises. Generally, a change in albedo is related to temperature difference. In that case, it should not be almost equal for Mercury, a hot planet nearer the Sun, and the Moon, a cold satellite much farther from the Sun. In the case of the Moon, it is believed that the low albedo is caused by the very porous first few millimeters of the lunar regolith. Sunlight can penetrate the surface and illuminate subsurface grains, and the light scattered from them can make its way back out in any direction. At full phase, all such grains cover their own shadows; with the dark shadows covered by bright grains, the surface is brighter than normal. (The perfectly full moon is never visible from Earth; at such times, the Moon is eclipsed. From the Apollo missions, we know that the exact sub-solar point – in effect, the fullest possible moon – is some 30% brighter than the fullest moon seen from Earth. It is thought that this is caused by glass beads formed by impacts in the lunar regolith, which tend to reflect light back in the direction from which it comes. This light is therefore reflected back toward the Sun, bypassing Earth.)

The above discussion shows that the present understanding of albedo may not be correct. Ice and snow, which are very cold, show a much higher albedo than ocean water. But Mercury and the Moon show almost the same albedo even though they have much wider temperature variations. Similarly, if porosity is a criterion, ice occupies more volume than water and is hence more porous; then why should ice show a higher albedo than water? Why should the Moon’s albedo be equal to that of Mercury, whose surface appears metallic, whereas the Moon’s surface soil is brittle? The reason is that if we heat up lunar soil, it will look metallic like Mercury. In other words, geologically both the Moon and Mercury belong to the same class, as if they share the same DNA. For this reason, we generally refer to Mercury as the off-spring of the Moon. The concept of albedo also does not take into account bodies that emit radiation.

We can see objects using solar or lunar radiation. But till it interacts with a body, we cannot see the incoming radiation. We see only the reflective radiation – the radiation that is reflected after interacting with the field set up by our eyes. Yet, we can see both the Sun and the Moon that emit these radiations. Based on this characteristic, objects are divided into four categories:
  • Radiation that shows self-luminous bodies as well as other bodies (we call it swa-jyoti). The radiation itself has no color and is not perceptible to the eye. Thus, space is only black or white.
  • Reflected colorless radiation that shows not only the emission from reflecting bodies (not the bodies themselves), but also other bodies (para jyoti), and
  • Reflecting bodies that only show themselves by such radiation in different colors, but cannot reveal other bodies by such reflection (roopa jyoti).
  • Non-reflecting bodies that do not radiate (ajyoti). These are dark bodies.

Of these, the last category has 99 varieties including black holes and neutron stars.


Before we discuss dark matter and dark energy, let us discuss some more aspects of the nature of radiation. Gamma rays and x-rays are clubbed together at the short-wavelength end of the electromagnetic spectrum. However, in spite of some similarities, their origins show a significant difference: while x-rays originate from the electron-shell region, gamma rays originate from the region deep down in the nucleus. We call such emissions “pravargya”.

Black holes behave like a black body – zero albedo. Now, let us apply the photo-electric effect to black holes – particularly those that exist at the centers of galaxies. There is no dearth of high-energy photons all around, and most of them would have frequencies above the threshold limit. Thus, there should be continuous ejection not only of electrons, but also of x-rays. Some such radiations have already been noticed by various laboratories and are well documented. The flowing electrons generate a magnetic field around them, which appears as the sun-spots on the Sun. Similar effects would be noticed in the galaxies also. This shows that the modern notion of black holes needs modification.

We posit that black holes are not caused by gravity, but due to certain properties of heavier quarks – specifically the charm and the strange quarks. We call these effects “jyoti-gou-aayuh” and the reflected sequence “gou-aayuh-jyoti” for protons and other similar bodies like the Sun and planet Jupiter. For neutrons and other similar bodies like the Earth, we call these “vaak-gou-dyouh” and “gou-dyouh-vaak” respectively. We will deal with it separately.

Black holes are identified by the characteristic intense x-ray emission activity in their neighborhood, implying the existence of regions of negative electric charge. The notion of black holes linked to singularity is self-contradictory, as a hole implies a volume containing “nothing” in a massive substance, whereas the concept of volume is not applicable to a singularity. Any rational analysis of a black hole must show that the collapsing star that creates it simply becomes denser. This is possible only if the “boundary” of the star moves towards the center, which implies dominance of negative charge. Since negative charge flows “inwards”, i.e., towards the center, it does not emit any radiation beyond its dimension. Thus, there is no interaction between the object and our eyes or other photographic equipment. The radiation that fills the intermediate space is not perceptible by itself. Hence it appears black. Since space is only black and white, we cannot distinguish it from its surroundings. Hence the name black hole.

Electron shells are a region of negative charge, which always flows inwards, i.e., towards the nucleus. According to our calculation, protons carry a positive charge that is 1/11 less than that of an electron. But the atom appears charge-neutral because the excess negative charge flows inwards. Similarly, black holes, which are surrounded by areas of negative charge, are not visible. Then how are the x-rays emitted? Again we have to go back to the Voyager data to answer this question. The so-called event horizon of the black hole is like the stagnation region in the outermost layer of the bubble around stars like the Sun. Here the magnetic field is piled up, and higher-energy particles from inside appear to be leaking out into interstellar space. The outward speed of the solar wind diminishes to zero, marking a thick “transition zone” at the edge of the heliosheath.

 Something similar happens with a black hole. A collapsing star implies increased density with a corresponding reduction in volume. The density cannot increase indefinitely, because all confined objects have mass, and mass requires volume – however compact. It cannot lead to infinite density and zero volume. There is no need to link these to hypothetical tachyons, virtual particle pairs, quantum leaps or non-linear i-trajectories in 11-dimensional boson-massed fields in parallel universes. On the contrary, the compression of mass gives away its internal energy. The higher-energy particles succeed in throwing radiation out from the region of negative charge in the opposite direction, which appears as x-ray emission. These negative charges, in turn, accumulate positively charged particles from the cosmic rays (we call this mechanism Emusha varaaha) to create accretion discs that form stars and galaxies. Thus, we find black holes inside all galaxies, and perhaps inside many massive stars.

On the other hand, gamma-ray bursts are generated during supernova explosions. In this case, the positively charged core explodes. According to Coulomb’s law, opposite charges attract and like charges repel each other. Hence the question arises: how does the supernova, or for that matter any star or even the nucleus, generate the force to hold the positively charged core together? We will discuss Coulomb’s law before answering this question.


Objects are perceived in broadly two ways by the sensory organs. The ocular, auditory and psychological functions related to these organs apparently follow the action-at-a-distance principle (homogeneous field interaction). We cannot see something very close to the eye. There must be some separation between the eye and the object, because a field is needed to propagate the waves. The tactile, taste and olfactory functions are always contact functions (discrete interaction). This is proved by the functions of “mirror neurons”. Since the brain acts like a CPU joining all databases, the responses are felt in other related fields in the brain also. When we see an event without actually participating in it, our mental activity behaves as if we are actually participating in it. Such behavior of the neurons is well established in medical science and psychology.

In the case of visual perception, the neurons get polarized like the neutral object and create a mirror image impression in the field of our eye (like we prepare a casting), which is transmitted to the specific areas of brain through the neurons, where it creates the opposite impression in the sensory receptacles. This impression is compared with the stored memory of the objects in our brain. If the impression matches, we recognize the object as such or note it for future reference. This is how we see objects and not because light from the object reaches our retina. Only a small fraction of the incoming light from the object reaches our eyes, which can’t give full vision. We don’t see objects in the dark because there is no visible range of radiation to interact with our eyes. Thus, what we see is not the object proper, but the radiation emitted by it, which comes from the area surrounding its confinement - the orbitals. The auditory mechanism functions in a broadly similar way, though the exact mechanism is slightly different.

But when we feel an object through touch, we ignore the radiation, because neither our eyes can touch nor our hands can see. Here the mass of our hand comes in contact with the mass of the object, which is confined. The same principle applies to our taste and smell functions. Until the object itself, and not the field set up by it, touches our tongue or nose (through convection or diffusion, as against radiation for ocular perception), we cannot feel the taste or smell. Mass has the property of accumulation and spread. Thus, it joins with the mass of our skin, tongue or nose to give its perception. This way, what we see is different from what we touch. These two are described differently by the two perceptions. Thus we cannot get accurate inputs to model a digital computer.

From the above description, it is clear that we can weigh and measure the dimensions of mass through touch, but cannot actually see it. This is bare mass. Similarly, we can see the effect of radiation, but cannot touch it. In fact, we cannot see the radiation by itself. This is bare charge. These characteristics distinguish bare charge from bare mass.


Astrophysical observations point to huge amounts of “dark matter” and “dark energy” that are needed to explain the observed large-scale structure and cosmic dynamics. The emerging picture is a spatially flat, homogeneous Universe undergoing the presently observed accelerated phase. Despite the good quality of astrophysical surveys, commonly addressed as Precision Cosmology, the nature of dark energy and dark matter, which should constitute the bulk of cosmological matter-energy, is still unknown. Furthermore, until now, no experimental evidence has been found at the fundamental level to explain the existence of such mysterious components. Let us examine the necessity for assuming the existence of dark matter and dark energy.

The three Friedmann models of the Universe are described by the following equation:

H² = (8πG/3)ρ − kc²/R² + Λ/3, where

H = Hubble’s constant, ρ = matter density of the universe, c = velocity of light, k = curvature of the Universe, G = gravitational constant, Λ = cosmological constant, and R = radius of the Universe. The first term on the right represents matter density, the second curvature, and the third dark energy.

In this equation, ‘R’ represents the scale factor of the Universe, and H is Hubble’s constant, which describes how fast the Universe is expanding. Every factor in this equation has to be determined from observations, not derived from fundamental principles. These observables can be broken down into three parts: gravity (which is treated as the same as matter density in relativity), curvature (which is related to, but different from, topology) and the pressure or negative energy given by the cosmological constant that holds back the speeding galaxies. Earlier it was generally assumed that gravity was the only important force in the Universe and that the cosmological constant was zero. Thus, by measuring the density of matter, the curvature of the Universe (and its future history) was derived as a solution to the above equation. New data have indicated that a negative pressure, called dark energy, exists and that the value of the cosmological constant is non-zero. Each of these parameters can close the expansion of the Universe in terms of turn-around and collapse. Instead of treating the various constants as real numbers, scientists prefer the ratio of each parameter to the critical value between open and closed Universes. For example, if the density of matter exceeds the critical value, the Universe is assumed to be closed. These ratios are called Omega (subscript M for matter, Λ for the cosmological constant, k for curvature). For reasons related to the physics of the Big Bang, the sum of the various Omegas is treated as equal to one. Thus: ΩM + ΩΛ + Ωk = 1.
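The closure relation above can be checked numerically. This is a minimal sketch (not from the post), with illustrative concordance-style values assumed for ΩM and ΩΛ:

```python
# A minimal sketch of the flatness relation Omega_M + Omega_Lambda + Omega_k = 1.
def curvature_omega(omega_m, omega_lambda):
    """Return the Omega_k implied by the closure relation."""
    return 1.0 - omega_m - omega_lambda

def geometry(omega_m, omega_lambda):
    """Classify the geometry from the sign of Omega_k."""
    k = curvature_omega(omega_m, omega_lambda)
    if abs(k) < 1e-9:
        return "flat"
    return "open" if k > 0 else "closed"

# Illustrative values of the kind quoted by surveys (assumptions, not the post's):
print(geometry(0.3, 0.7))    # a flat Universe
print(geometry(0.02, 0.0))   # baryons alone would imply a very open Universe
```

The second call mirrors the baryonic Ω ≈ 0.02 discussed below: without dark matter and dark energy, the relation would leave almost all of Omega in curvature.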

The three primary methods to measure curvature are luminosity, scale length and number. Luminosity requires an observer to find some standard ‘candle’, such as the brightest quasars, and follow them out to high red-shifts. Scale length requires that some standard size be used, such as the size of the largest galaxies. Lastly, number counts are used, where one counts the number of galaxies in a box as a function of distance. To date, all these methods have been inconclusive, because the brightness, size and number of galaxies change with time in ways that cosmologists have not yet figured out. So far, the measurements are consistent with a flat Universe, which is popular for aesthetic reasons. Thus, the curvature Omega is expected to be zero, allowing the rest to be shared between matter and the cosmological constant.

To measure the value of the matter density is a much more difficult exercise. The luminous mass of the Universe is tied up in stars. Stars are what we see when we look at a galaxy, and it is fairly easy to estimate the amount of mass tied up in self-luminous bodies like stars, in planets, satellites and assorted rocks that reflect the light of stars, and in gas that reveals itself by the light of stars. This provides an estimate of what is called the baryonic mass of the Universe, i.e. all the stuff made of baryons: protons and neutrons. When these numbers are calculated, it is found that Ω for baryonic mass is only 0.02, which implies a very open Universe; but that is contradicted by the motion of objects in the Universe. This shows that most of the mass of the Universe is not seen, i.e. dark matter, which makes the estimate of ΩM much too low. So this dark matter has to be properly accounted for in all estimates.
ΩM = Ωbaryons + Ωdark matter

Gravity is measured indirectly, by measuring the motion of bodies and then applying Newton’s law of gravitation. The orbital period of the Sun around the Galaxy gives a mean mass for the amount of material inside the Sun’s orbit. But a detailed plot of the orbital speed of the Galaxy as a function of radius reveals the distribution of mass within the Galaxy. Some scientists describe the simplest type of rotation as wheel rotation. Rotation following Kepler’s 3rd law is called planet-like or differential rotation. In this type of rotation, the orbital speed falls off as one goes to greater radii within the Galaxy. To determine the rotation curve of the Galaxy, stars are not used, due to interstellar extinction. Instead, 21-cm maps of neutral hydrogen are used. When this is done, one finds that the rotation curve of the Galaxy stays flat out to large distances, instead of falling off. This means that the mass of the Galaxy increases with increasing distance from the center.
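The link between a flat rotation curve and mass growing with radius can be sketched with the standard circular-orbit relation v² = GM(&lt;r)/r. The ~220 km/s speed and 8 kpc solar radius below are assumed illustrative values, not figures from the post:

```python
# For a circular orbit, v^2 = G * M(<r) / r, so the enclosed mass is
# M(<r) = v^2 * r / G. A flat curve (constant v) forces M to grow with r.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
kpc = 3.086e19           # metres per kiloparsec

def enclosed_mass(v_m_per_s, r_m):
    """Mass enclosed within radius r for circular speed v."""
    return v_m_per_s**2 * r_m / G

v = 220e3                            # ~flat Milky Way circular speed, m/s (assumed)
m8 = enclosed_mass(v, 8 * kpc)       # inside roughly the Sun's orbit
m16 = enclosed_mass(v, 16 * kpc)     # twice as far out
print(m16 / m8)                       # mass doubles when radius doubles
```

Doubling the radius at constant speed exactly doubles the inferred mass, which is why a flat curve with little visible light implies unseen matter in the halo.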

There is very little visible matter beyond the Sun’s orbital distance from the center of the Galaxy. Hence the rotation curve of the Galaxy indicates a great deal of mass. But there is no light out there indicating massive stars. Hence it is postulated that the halo of our Galaxy is filled with a mysterious dark matter of unknown composition and type.

The equation ΩM + ΩΛ + Ωk = 1 appears tantalizingly similar to Fermi’s description of the three-part Hamiltonian for the atom: H = HA + HR + HI. Here, the total H plays the role of the 1 on the right-hand side. ΩM, which represents matter density, is similar to HA, the bare mass explained earlier. ΩΛ, which represents the cosmological constant, is similar to HR, the radiating bare charge. Ωk, which represents the curvature of the universe, is similar to HI, the interaction. This indicates, as Mr. Mason A. Porter and Mr. Predrag Cvitanovic had found, that the macro and the micro worlds share the same sets of mathematics. Now we will explain the other aberrations.

Cosmologists tell us that the universe is homogeneous on the average, if it is considered on a large scale. The number of galaxies and the density of matter turn out to be uniform over sufficiently great volumes, wherever these volumes may be taken. What this implies is that the overall picture of the receding cosmic system is observed as if “simultaneously”. Since the density of matter decreases because of the cosmological expansion, the average density of the universe can only be assumed to be the same everywhere provided we consider each part of the universe at the same stage of expansion. That is the meaning of “simultaneously”. Otherwise, one part would look denser, i.e., “younger”, and another part less dense, i.e., “older”, depending on the stage of expansion we are looking at. This is because light propagates at a fixed velocity. Depending upon our distance from the two areas of observation, we may actually be looking, at the same time, at objects in different stages of evolution. The uniformity of density could only be revealed if we could take a snap-shot of the universe. But the rays that are used for taking the snap-shot have finite velocities. Thus, they can get the signals from distant points only after a time lag. This time lag between the Sun and the earth is more than 8 minutes. On the scale of the Universe, it would be billions of years. Thus, the “snap-shot” available to us reveals the Universe at different stages of evolution, which cannot be compared for density calculations. By observing the farthest objects, the quasars, we can know what they were billions of years ago, but we cannot know what they look like now.
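The 8-minute lag mentioned above follows directly from t = d/c; a minimal sketch using the standard astronomical-unit and speed-of-light values:

```python
# Light takes t = d / c to reach us, so we see distant objects
# as they were, not as they are.
c = 299_792_458.0        # speed of light, m/s
au = 1.495978707e11      # mean Sun-Earth distance, m

lag_minutes = au / c / 60
print(round(lag_minutes, 1))   # ~8.3 minutes for sunlight
```

Replacing `au` with a quasar distance of billions of light years turns the same arithmetic into a look-back time of billions of years.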

Another property of the universe is said to be its general expansion. In the 1930s, Edwin Hubble obtained a series of observations that indicated that our Universe began with a Creation event. Observations since the 1930s show that clusters and super-clusters of galaxies, at distances of 100-300 megaparsecs (Mpc), are moving away from each other. Hubble discovered that all galaxies have a positive red-shift. Registering the light from the distant galaxies, it has been established that the spectral lines in their radiation are shifted to the red part of the spectrum. The farther the galaxy, the greater the red-shift! Thus, the farther the galaxy, the greater its velocity of recession, creating an illusion that we are right at the center of the Universe. In other words, all galaxies appear to be receding from the Milky Way.

By the Copernican principle (we are not at a special place in the Universe), cosmologists deduce that all galaxies are receding from each other, i.e., that we live in a dynamic, expanding Universe. The expansion of the Universe is described by a very simple equation called Hubble’s law: the velocity of recession v of a galaxy is equal to a constant H times its distance d (v = Hd), where the constant H, called Hubble’s constant, relates distance to velocity and is usually quoted in kilometers per second per megaparsec.
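A minimal numerical sketch of Hubble's law; the value H ≈ 70 km/s/Mpc is an illustrative modern estimate assumed here, not a figure from the text:

```python
# Hubble's law: v = H * d, with H in km/s per megaparsec.
H0 = 70.0                        # assumed illustrative value, km/s/Mpc

def recession_velocity(d_mpc):
    """Recession velocity in km/s for a distance in Mpc."""
    return H0 * d_mpc

print(recession_velocity(100))   # a galaxy 100 Mpc away recedes at 7000 km/s
```

The linearity is the key point: doubling the distance doubles the apparent recession velocity, which is why every observer sees the same pattern.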

The problem of dark matter and dark energy arose after the discovery of receding galaxies, which was interpreted as a sign that the universe is expanding. We posit that all galaxies appear to be receding from the Milky Way because they are moving with different velocities while orbiting the galactic center. Just as some planets in the solar system appear to be moving away at a faster rate than others, due to their motion around the Sun at different distances with different velocities, the galaxies appear to be receding from us. On cosmic scales, the period of observation since the 1930s is negligible and cannot give any true indication of the nature of such recession. Thus, the mass-density calculation of the universe is wrong. As we have explained in various forums, gravity is not a single force, but a composite force of seven. The seventh component closes in the galaxies. The other components work in pairs and can explain the Pioneer anomaly, the deflection of Voyager beyond Saturn’s orbit and the fly-by anomalies. We will discuss this separately.

Extending the principle of bare mass further, we can say that from quarks to “neutron stars” and “black holes”, the particles or bodies that exhibit strong interaction – i.e., where the particles are compressed too close to each other, less than 10⁻¹⁵ m apart – can be called bare-mass bodies. It must be remembered that the strong interaction is charge independent: for example, it is the same for neutrons as for protons. It also varies in strength between quarks and protons-neutrons. Further, the masses of the quarks show wide variations. Since mass is confined field, stronger confinement must be accompanied by stronger back-reaction due to conservation laws. Thus, the outer negatively charged region must emit its signature: intense x-rays in black holes and strangeness in quarks. Since proximity similar to that of protons-neutrons is seen in black holes also, it is reasonable to assume that the strong force has a macro equivalent. We call these bodies “Dhruva” – literally meaning the pivot around which all mass revolves. This is because, be they quarks, nucleons or black holes, they are at the center of all bodies. They are not directly perceptible. Hence they are dark matter. They are also bare mass without radiation.

When the particles are not too close together, i.e., at separations intermediate between those for the strong interaction and the electromagnetic interaction, they behave differently, under the weak interaction. The weak interaction has distinctly different properties. This is the only known interaction where violation of parity (spatial symmetry) and violation of the symmetry between particles and anti-particles have been observed. The weak interaction does not produce bound states (nor does it involve binding energy) – something that gravity does on an astronomical scale, the electromagnetic force does at the atomic level, and the strong nuclear force does inside nuclei. We call these bodies “Dhartra” – literally meaning that which induces fluidity. It is the force that constantly changes the relation between the “inner space” and the “outer space” of the particle without breaking its dimension. Since it causes fluidity, it helps in interactions with other bodies. It is also responsible for radio-luminescence.

            There are other particles that are not confined in any dimension. They are bundles of energy that are intermediate between the dense particles and the permittivity and permeability of free space – bare charge. Hence they are always unstable. Dividing them by c² does not indicate their mass; it indicates their energy density against the permittivity and permeability of the field, i.e., the local space, as distinguished from free space. They can move out from the center of mass of a particle (gati) or move in from outside (aagati), when they are called its anti-particle. As we have already explained, bare mass is not directly visible to the naked eye. The radiation or bare charge per se is also not visible to the naked eye. Only when it interacts with an object does that object become visible. When the bare charge moves in free space, it illuminates space. This is termed light. Since it is not a confined dense particle, but moves through space like a wave moving through water, its effect is not felt on the field. Hence it has zero mass. For the same reason, it is its own anti-particle.

Some scientists link electric charge to permittivity and magnetism to permeability. The permittivity of a medium is a measure of how much charge it can hold at a given voltage, or of how much resistance is encountered when forming an electric field in the medium. Hence materials with high permittivity are used as capacitors. Since addition or release of energy leads the electron to jump to a higher or a lower orbit, permittivity is also linked to the rigidity of a substance. The relative static permittivity, or dielectric constant, of a solvent is a relative measure of its polarity, which is often used in chemistry. For example, water (very polar) has a dielectric constant of 80.10 at 20 °C, while n-hexane (very non-polar) has a dielectric constant of 1.89 at 20 °C. This information is of great value when designing separation processes.
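The practical effect of the dielectric constants quoted above can be illustrated with the standard parallel-plate formula C = εr·ε0·A/d; the plate area and gap below are arbitrary assumptions for the sketch:

```python
# Parallel-plate capacitance C = eps_r * eps_0 * A / d scales directly
# with the dielectric constant of the medium between the plates.
EPS0 = 8.854e-12                     # vacuum permittivity, F/m

def capacitance(eps_r, area_m2, gap_m):
    """Parallel-plate capacitance in farads."""
    return eps_r * EPS0 * area_m2 / gap_m

c_water = capacitance(80.10, 1e-4, 1e-6)    # water-filled gap at 20 C
c_hexane = capacitance(1.89, 1e-4, 1e-6)    # n-hexane-filled gap at 20 C
print(c_water / c_hexane)                    # ~42x more charge storage with water
```

The ratio depends only on the two dielectric constants, which is why high-permittivity materials make good capacitors.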

Permeability of a medium is a measure of the magnetic flux it exhibits when the amount of charge is changed. Since magnetic field lines surround the object, effectively confining it, some scientists remotely relate it to density. This may be highly misleading, as permeability is not a constant. It can vary with the position in the medium, the frequency of the applied field, humidity, temperature, and other parameters, such as the strength of the magnetic field. The permeability of vacuum is treated as 1.2566371×10⁻⁶ H/m, effectively the same as that of hydrogen, even though the volumetric (SI) susceptibility χm of vacuum is treated as 0, while that of hydrogen is treated as −2.2×10⁻⁹. The relative permeability of air is taken as 1.00000037. This implies vacuum is full of hydrogen only.

This is wrong, because only about 81% of the cosmos consists of hydrogen and 18% helium. The temperature of the cosmic microwave background is about 2.73 K, while that of the interiors of galaxies goes to millions of kelvin. Further, molecular hydrogen occurs in two isomeric forms. One has its two proton spins aligned parallel to form a triplet state (I = 1; α1α2, (α1β2 + β1α2)/√2, or β1β2, for which MI = 1, 0, −1 respectively) with a molecular spin quantum number of 1 (½+½). This is called ortho-hydrogen. The other has its two proton spins aligned anti-parallel to form a singlet (I = 0; (α1β2 − β1α2)/√2, MI = 0) with a molecular spin quantum number of 0 (½−½). This is called para-hydrogen. At room temperature and thermal equilibrium, hydrogen consists of 25% para-hydrogen and 75% ortho-hydrogen, also known as the “normal form”.

The equilibrium ratio of ortho-hydrogen to para-hydrogen depends on temperature, but because the ortho-hydrogen form is an excited state and has a higher energy than the para-hydrogen form, it is unstable. At very low temperatures, the equilibrium state is composed almost exclusively of the para-hydrogen form. The liquid and gas phase thermal properties of pure para-hydrogen differ significantly from those of the normal form because of differences in rotational heat capacities. A molecular form called protonated molecular hydrogen, or H3+, is found in the inter-stellar medium, where it is generated by ionization of molecular hydrogen by cosmic rays. It has also been observed in the upper atmosphere of the planet Jupiter. This molecule is relatively stable in the environment of outer space due to the low temperature and density. H3+ is one of the most abundant ions in the Universe. It plays a notable role in the chemistry of the interstellar medium. Neutral tri-atomic hydrogen H3 can only exist in an excited form and is unstable.
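The 75:25 ortho-to-para ratio quoted above, and its collapse at low temperature, can be reproduced with a textbook rigid-rotor Boltzmann sum. This is a standard sketch, not anything derived in the post; the rotational constant B ≈ 87.6 K for H2 is an assumed literature value:

```python
import math

# Rigid-rotor levels E_J = B * J * (J + 1), with B in kelvin.
# Odd J states are ortho (nuclear-spin weight 3); even J are para (weight 1).
B = 87.6   # assumed rotational constant of H2, in kelvin

def ortho_fraction(T, j_max=20):
    """Equilibrium fraction of ortho-hydrogen at temperature T (kelvin)."""
    ortho = para = 0.0
    for j in range(j_max + 1):
        w = (2 * j + 1) * math.exp(-B * j * (j + 1) / T)
        if j % 2:
            ortho += 3 * w     # triplet nuclear-spin states
        else:
            para += w          # singlet nuclear-spin state
    return ortho / (ortho + para)

print(round(ortho_fraction(300), 2))   # ~0.75 at room temperature
print(round(ortho_fraction(20), 3))    # nearly pure para-hydrogen when cold
```

The 3:1 high-temperature limit comes purely from the nuclear-spin degeneracies, while at cryogenic temperatures only the J = 0 para ground state remains populated.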


Coulomb’s law states that the electrical force between two charged objects is directly proportional to the product of the quantities of charge on the objects and inversely proportional to the square of the distance between the centers of the two objects. The interaction between charged objects is a non-contact force which acts over some distance of separation. In equation form, Coulomb’s law is stated as:

F = k · Q1 · Q2 / d²

where Q1 represents the quantity of charge on one object in coulombs, Q2 represents the quantity of charge on the other object in coulombs, and d represents the distance between the centers of the two objects in meters. The symbol k is the proportionality constant known as the Coulomb’s law constant. To find the electric force on one atom, we need to know the density of the electromagnetic field said to be mediated by photons relative to the size of the atom, i.e., how many photons are impacting it each second, and sum up all these collisions. However, there is a difference in this description when we move from the micro field to the macro field. The interactions at the micro level are linear: up and down quarks, or protons and electrons, in equal measure. However, different types of molecular bonding make these interactions non-linear at the macro level. So a charge measured at the macro level is not the same as a charge measured at the quantum level.
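A minimal numerical sketch of the equation above, using the standard value k ≈ 8.99×10⁹ N·m²/C²; the charges and separation are arbitrary illustrative inputs:

```python
# Coulomb's law: F = k * Q1 * Q2 / d^2.
K = 8.99e9                     # Coulomb constant, N*m^2/C^2

def coulomb_force(q1, q2, d):
    """Magnitude of the electrostatic force between two point charges."""
    return K * abs(q1 * q2) / d**2

# Two 1-microcoulomb charges, 10 cm apart:
print(coulomb_force(1e-6, 1e-6, 0.1))   # ~0.9 newton
```

Note that the inverse-square factor means halving the separation quadruples the force, the scaling behaviour discussed further below.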

It is interesting to note that, according to the Coulomb’s law equation, interaction between a charged particle and a neutral object (where either Q1 or Q2 = 0) is impossible, as in that case the equation yields no force. But this goes against everyday experience. Any charged object – whether positively charged or negatively charged – has an attractive interaction with a neutral object. Positively charged objects and neutral objects attract each other, and negatively charged objects and neutral objects attract each other. This also shows that there are no charge-neutral objects: the so-called charge-neutral objects are really objects in which the positive and the negative charges are in equilibrium. Every charged particle is said to be surrounded by an electric field – the area in which the charge exerts a force. This implies that in charge-neutral objects there is no such field, hence no electric force should be experienced. It is also said that particles with nonzero electric charge interact with each other by exchanging photons, the carriers of the electromagnetic force. If there is no field and no force, then there should be no interaction – hence no photons. This presents a contradiction.

Charge in Coulomb’s law has been defined in terms of coulombs. One coulomb is one ampere-second. Electric current is defined as a measure of the amount of electrical charge transferred per unit time through a surface (the cross-section of a wire, for example). It is also defined as the flow of electrons. This means that it is a summed-up force exerted by a huge number of quantum particles, but it is measured at the macro level. Charge itself has not been specifically defined, except that it is a quantum number carried by a particle which determines whether the particle can participate in an interaction process. This is a vague definition. The degree of interaction is determined by the field density. But density is a relative term. Hence in certain cases, where the field density is more than the charge or current density, the charge may not be experienced outside the body. Such bodies are called charge-neutral bodies. Introduction of a charged particle changes the density of the field. The so-called charge-neutral body reacts to such change in field density, if it is beyond a threshold limit. This limit is expressed as the proportionality constant in the Coulomb’s law equation. This implies that a charged particle does not generate an electric field, but only changes the intensity of the field, which is experienced as charge. Thus, charge is the capacity of a particle to change the field density, so that other particles in the field experience the change. Since such changes lead to the combining of two particles, by redistribution of their charge, to create a third particle, we define charge as the creative competence (saamarthya sarva bhaavaanaam).

Current is the time rate of change of charge (I = dQ/dt). Since charge is measured in coulombs and time is measured in seconds, an ampere is the same as a coulomb per second. This is an algebraic relation, not a definition. The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length and negligible circular cross-section, placed one meter apart in vacuum, would produce between these conductors a force equal to 2×10⁻⁷ newton per meter of length. This means that the coulomb is defined as the amount of charge that passes through an almost flat surface (a plane) when a current of one ampere flows for one second. If the so-called circular cross-section is not negligible, i.e., if it is not a plane or a field, this definition will not be applicable. Thus, currents flow in planes or fields only. Current is not a vector quantity, as it does not flow in free space through diffusion or radiation. Current is a scalar quantity, as it flows only through convection – thus within a fixed area, not in any fixed direction. The ratio of current to area for a given surface is the current density. Despite being the ratio of two scalar quantities, current density is treated as a vector quantity, because its flow is dictated according to fixed laws by the density and movement of the external field. Hence it is defined as the product of charge density and velocity for any location in space.
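The 2×10⁻⁷ N/m figure in the ampere definition can be verified from the standard parallel-wire force formula F/L = μ0·I1·I2/(2πd); a sketch, not from the post:

```python
import math

# Force per metre between two long parallel wires carrying currents I1, I2
# separated by distance d: F/L = mu_0 * I1 * I2 / (2 * pi * d).
MU0 = 4 * math.pi * 1e-7         # permeability of free space, H/m

def force_per_metre(i1, i2, d):
    """Attractive force per metre of wire, in newtons per metre."""
    return MU0 * i1 * i2 / (2 * math.pi * d)

# Two 1 A currents, 1 m apart, as in the classical definition of the ampere:
print(force_per_metre(1.0, 1.0, 1.0))   # ~2e-7 newton per metre
```

With μ0 = 4π×10⁻⁷ H/m the π cancels exactly, which is why the definition comes out to the round figure 2×10⁻⁷ N/m.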

The factor d² shows that the force depends on the distance between the two bodies, which can be scaled up or down. Further, since it is a second-order term, it represents a two-dimensional field. Since the field is always analogue, the only interpretation of the equation is that it is an emission field. The implication is that it is a real field with real photons carrying real energy, and not a virtual field with virtual or messenger photons as described by QED and QCD, because that would violate conservation laws: a quantum cannot emit a virtual quantum without first dissolving itself. Also, complex terminology and undefined terms like Hamiltonians, tensors, gauge fields, complex operators, etc., cannot be applied to real fields. Hence either QED and QCD are wrong or Coulomb’s law is wrong. Alternatively, one or the other or both have to be interpreted differently.

            Where the external field remains constant, the interaction between two charges is reflected as the non-linear summation (multiplication) of the effect of each particle on the field. Thus, if one quantity is varied, to achieve the same effect the other quantity must be scaled up or down proportionately. This brings in the scaling constant, termed k, the proportionality constant relative to the macro density. Thus Coulomb’s law gives the correct results. But this equation will work only if the two charges are contained in spherical bodies, so that the area and volume of both can be scaled up or down uniformly by varying the radius of each. Coulomb’s constant depends on the Bohr radius. Thus, in reality, it is not a constant but a variable. This also shows that the charges are emissions in a real field and not mere abstractions. However, this does not prove that same charges repel and opposite charges attract.

It is interesting to note that the charge of the electron has been measured by the oil-drop experiment, but the charges of protons and neutrons have not been measured, as it is difficult to isolate them. Historically, the proton has been assigned a charge of +1 and the neutron a charge of zero on the assumption that the atom is charge neutral. But the fact that most elements exist not as atoms but as molecules shows that atoms are not charge neutral. We have theoretically derived the charges of quarks as −4/11 and +7/11 instead of the generally accepted values of −1/3 and +2/3. This makes the charges of protons and neutrons +10/11 and −1/11 respectively. This implies that both the proton and the neutron have a small amount of negative charge (−1 + 10/11 = −1/11) and the atom as a whole is negatively charged. This residual negative charge is not felt, as it is directed towards the nucleus.
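The proposed quark charges can at least be checked for internal consistency against the standard quark content of the proton (uud) and neutron (udd); this sketch verifies the arithmetic only, not the physics, and compares it with the textbook assignment:

```python
from fractions import Fraction

def baryon_charge(up, down, n_up, n_down):
    """Total charge of a baryon with n_up up quarks and n_down down quarks."""
    return n_up * up + n_down * down

# The post's proposed values:
u, d = Fraction(7, 11), Fraction(-4, 11)
print(baryon_charge(u, d, 2, 1))   # proton (uud):  10/11
print(baryon_charge(u, d, 1, 2))   # neutron (udd): -1/11

# The textbook values, for comparison:
u, d = Fraction(2, 3), Fraction(-1, 3)
print(baryon_charge(u, d, 2, 1))   # proton:  1
print(baryon_charge(u, d, 1, 2))   # neutron: 0
```

Both assignments are internally consistent with uud/udd composition; they differ in what total charge they predict for the nucleons.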

            According to our theory, only same charges attract. Since a proton and an electron combined have the same charge as the neutron, they co-exist as stable structures. We have already described the electron as being like the termination shock at the heliosheath that encompasses the “giant bubble” of the Solar system, which is the macro equivalent of the extra-nuclear space. Thus, the charge of the electron is actually the strength of confinement of the extra-nuclear space. The neutron behaves like the solar system within the galaxy – a star confined by its heliospheric boundary. However, the electric charges (−1/11 for proton + electron and −1/11 for neutron) generate a magnetic field within the atom. This doubling in the intensity of the magnetic field in the stagnation region, i.e., the boundary region of the atom, behaves like cars piling up at a clogged freeway off-ramp. The increased intensity of the magnetic field generates inward pressure from inter-atomic space, compacting it. As a result, there is a 100-fold increase in the intensity of high-energy electrons from elsewhere in the field diffusing into the atom from outside. This leads to 13 different types of interactions that will be discussed separately.

When bare charges interact, they interact in four different ways, as follows:
  • Total (equal) interaction between positive and negative charges does not change the basic nature of the particle, but only increases their mass number (pushtikara).
  • Partial (unequal) interaction between positive and negative charges changes the basic nature of the particle by converting it into an unstable ion searching for a partner to create another particle (srishtikara).
  • Interaction between two negative charges does not change anything (nirarthaka) except increase in magnitude when flowing as a current.
  • Interaction between two positive charges becomes explosive (vishphotaka), leading to a fusion reaction at the micro level or a supernova explosion at the macro level, with its consequent release of energy.

Since both protons and neutrons carry a residual negative charge, they do not explode, but co-exist. In a supernova, however, only positively charged particles are squeezed into a small volume, forcing them to interact. As explained above, this can only end in explosion. But the explosion brings the individual particles into contact with the surrounding negative charge. Thus, the higher elements, from iron onwards, are created in such explosions, which is otherwise impossible.


The micro and the macro replicate each other. Mass and energy are not convertible at the macro and quantum levels, but are inseparable complements. They are convertible only at the fundamental level of creation (we call it jaayaa). Their relative density determines whether the local product is mass or energy. While mass can be combined in various proportions, so that there can be various particles, energy belongs to only one category, but appears differently because of its different interactions with mass. When both are in equilibrium, it represents the singularity. When the singularity breaks, it creates entangled pairs of conjugates that spin. When such conjugates envelop a state resembling singularity, it gives rise to other pairs of forces. These are the five fundamental forces of Nature: gravity, which generates the weak and electromagnetic interactions, which lead to the strong interaction and radioactive disintegration. Separately we will discuss in detail the superposition of states, entanglement, seven-component gravity and fractional (up to 1/6) spin. We will also discuss the correct charge of quarks (the modern value has an error component of 3%) and derive it from fundamental principles. From this we will theoretically derive the value of the fine structure constant (7/960 at the so-called zero-energy level and 7/900 at the 80 GeV level).
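The claimed value 7/960 can be compared numerically with the CODATA fine-structure constant; this sketch only quantifies the difference and takes no position on either derivation:

```python
# Compare the post's claimed zero-energy fine-structure constant, 7/960,
# with the CODATA value alpha = 1/137.035999 (standard reference figure).
claimed = 7 / 960
codata = 1 / 137.035999

rel_diff = abs(claimed - codata) / codata
print(round(claimed, 7))              # 7/960 as a decimal
print(round(codata, 7))               # CODATA alpha as a decimal
print(round(100 * rel_diff, 2))       # percentage difference between the two
```

The two numbers differ by roughly 0.08%, which is far larger than the experimental uncertainty in α but small enough to explain why the fraction looks superficially close.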

