KNOWLEDGE-DRIVEN TECHNOLOGY & MANAGEMENT
THE PROBLEM:
Technology is the application of knowledge for practical purposes. Hence it should be guided by theory. But technological advancement in various sectors has led to data-driven discovery, in the belief that if enough data is gathered, one can achieve a "God's eye view". Data is not synonymous with knowledge. By combining lots of data, we generate something big and different, but unless we have knowledge of the mixing procedure needed to generate the desired effect, it may create Frankenstein's monster - a tale of unintended consequences. Already physics is struggling with misguided concepts like extra dimensions, which remain undiscovered even after a century. The weirdness of concepts such as superposition and entanglement is increasingly being questioned with macroscopic examples. The LHC experiment has finally ruled out super-symmetry. The demand for downgrading the status of Heisenberg's uncertainty postulate is gaining momentum. Yet fantasies like dark energy or vacuum energy, where theory and observation differ by a factor of 10^57 to 10^120, get the Nobel Prize! Theoreticians are vanishing. Technologists are being called scientists. The rise of trial-and-error-based technology that lacks the benefit of foresight is leading to non-green technology growing nonlinearly, necessitating conferences like the Minamata Mercury Convention (for reducing mercury poisoning) to prescribe do's and don'ts for some industries. Technology has become the biggest polluter.
With increasing broadband access,
wireless connectivity and content, dependence on gadgets like smart phones,
tablets, etc., is growing. Apart from their impact on vegetation (browning), birds, and the ecosystem in general, the impact this human-machine bond will have on our lives is yet to be fully assessed. The current trend is to create a product out of an idea (not
necessity), for which technology is invented later. The necessary recommendation
algorithms are compartmentalized in different branches of science. For example,
to find the accelerating expansion of the universe and define the nature of
dark energy, researchers used baryon acoustic oscillations as the yardstick. This yardstick was created from sound waves that rippled through the universe when it was young and hot and became imprinted in the distribution of galaxies as it cooled. In a similar vein, Google+ and Apple's Siri came up with learning algorithms that respond to one's voice. Apple's new iPhone fingerprint sensor
is directed at the machine knowing our bodies. Such devices start by
recognizing one's thumb or voice; then others' voices, the way they move, and so on. If such devices put this information together with information about one's location and engagement calendar, they will become an integral part of our lives. Social
media is changing the kinship diagram through emotionless physical
relationships. Network Administrators and algorithms regulate ‘date’ vetting. Human
beings are increasingly submitting themselves to machines and becoming
mechanized.
As the available resources get
depleted and demand for more intelligent solutions and services using nano-technology increases, there is pressure for more regenerative and 'intelligent' – GREEN and SMART – technologies, emphasizing the need for knowledge collaboration in engineering. Green technology encompasses a continuously evolving group of methods and materials, from techniques for harnessing inexhaustible energy sources like solar, wind or tidal power to non-toxic clean products (based on their production process or supply chain) that are environmentally friendly and biodegradable. It involves
energy efficiency, recycling, safety and health concerns, renewable resources,
etc. Yet, it has to fight the ever-increasing greed for easy money. For example, as world gold prices surge, small-scale 'artisanal' gold mining has become the world's leading source of mercury pollution. Miners use mercury to separate flecks of gold from rock, sediment and slurry, and then dump or burn the excess. This exposes groundwater and air to mercury poisoning. But motivating the miners to adopt green alternatives is nearly impossible. Recycling without knowledge of its adverse side-effects is causing more pollution worldwide. But the greed for higher Return on Investment is eulogized as
prosperity and advancement.
“SMART” stands for “Self-Monitoring, Analysis and Reporting Technology”. It
gets input from somewhere, applies some ‘intelligence’ or ‘brainpower’ to it and
the result is innovative. For example, regular glasses used in spectacles are
shaped in such a way as to bend light for correct vision - to make the world
appear sharper and clearer. Photochromic lenses contain molecules that react to certain kinds of light and change tint in sunshine. Though they seem intelligent, these are just physical reactions. By adding a camera and a computer to a pair of glasses, many innovations can be made. A video camera at the corner of the spectacles, feeding into a tiny pocket computer that lights up parts of an LED array in the lenses, can enable the wearer to see objects in greater detail. It could include optical character recognition for reading newspaper headlines. The glasses use cameras and some software to interpret the data and put zoomed-in images on a screen in front of the wearer's eyes. This is only one example.
Artificial Intelligence (AI) is
the current buzzword. AI is of two types: artificial narrow intelligence (ANI) and artificial general intelligence (AGI). ANI performs one narrow task intelligently, like playing chess or searching the web, and is increasingly ubiquitous in our world. ANI may outsmart humans only in the area in which it is specialized - hence not a big transformative
concern. But AGI, which is potentially intelligent across a broad range
of domains, is a cause for concern. We mix different sensory inputs with our intelligence and apply our free will to determine the net response, but an AGI would probably think or mix them differently, in unexpected ways. If we command a
super-intelligent robot to make us happy, it might cram electrodes into the
pleasure centers of our brains. If we command it to win at chess, it may calculate
all possible moves endlessly. This absurd logic holds because AI lacks our instincts
and the notions of absurdity and justification of mixing inputs. It does what
we program it to do, but without free will. Once an embryo starts breathing, it breathes perpetually till death, but the child also has limited free will and uses its instincts. After being switched on, computers obey commands, but have no free will or instincts. Since these cannot be
preprogrammed, AI can never be
conscious.
Knowledge is not data, but the ‘awareness’
of exposure/result of measurement associated with any object, energy or
interaction stored in memory as an invariant concept that can be retrieved even
in the absence of fresh inputs or impulses. It describes through a language the
defining characteristics of some previously known thing – physical properties
and chemical interactions - by giving it a name that remains the same as a
concept at all times – thus immune to spatiotemporal variations - till it is
modified by fresh inputs. The variations of the object, energy or interaction
under different specific circumstances and the predetermined result thereof
form part of knowledge. In a mathematical format, it depicts the right hand
side of each equation or inequality representing determinism. Once the
parameters represented by the left hand side are chosen and the special
conditions represented by the equality sign are met, the right hand side
becomes deterministic. In ancient times, it was technically covered under the
term Aanwikshiki, which literally
means describable facts about the invariant nature of everything.
Engineering and Management, which deal with the efficient use of objects or persons, are related to the left-hand side of an equation – free will – which presupposes knowledge of the
deterministic behavior of objects or humans that can be chosen or effectively
directed to create something or function in a desired manner in a maximally
economic and regenerative way. This was called Trayi – literally the three aspects of behavior of mass, energy and
radiation in their three states of solid, fluid and plasma in all combinations
– physical and chemical properties (protestation, loyalty and expectation for
humans). The responsive mechanism was called Danda Neeti – principles of inducement through reward and
punishment (essentially material addition or reduction). The regenerative
mechanism was called Vaartaa –
problem solving. These four basic
tenets, equally valid for both technology
and management, are also immutable - invariant in time, space and
culture leading to deterministic consequences. Lack of knowledge of the deterministic
behavior to guide choice of the freewill components has led engineering
and management astray. The fast-changing technology and management principles point to their inherent deficiencies that need immediate
remedy. Knowledge guidance is the
only way out.
There is a pressing need for
knowledge to take the lead for greener technology keeping in view
sustainability, cradle-to-cradle design, source reduction, viability,
innovation, etc. Hence it is necessary that pure science guide technology in
the right direction in ALL sectors. To date, all efforts in this regard have been sector-specific, such as energy, chemicals, medical, real estate, hardware, etc. As a result, green and smart technology has been reduced to transferring problems in a discrete manner – solving a problem in one area (for example by
recycling something) ignoring the effect of the new process or its by-products
on other areas. It is high time to discuss a global strategy to meet the new
challenges.
THE PARADIGM SHIFT:
Earlier, some individual scientists of towering genius developed a postulate and took the lead in universities or research institutions to develop suitable experimental setups to test it. These days,
individual scientists have to network and collaborate across State and National
boundaries to take advantage of State and International funding. They generate incredibly
massive data without any postulate. Communication technology has made the
efforts of individual researchers coalesce into a seamless whole merging
identities of who contributed what. The 2013 Nobel Prize in physics was the result of many ideas floated around in the early 1960s by at least six scientists. At that time lots of new particles were being discovered, and it was a fair bet that some particle would be found in the vacant 124-126 GeV range. Hence it was proposed as a gamble. The model tested at the LHC was not that of Higgs and Englert, who got the prize, but one for which Weinberg and Salam had already won a Nobel! The general mechanism was first postulated by Philip Anderson a couple of years before Higgs and Englert. Already there have been protests against the
decision.
As individual efforts became
obscured and team efforts took-over, more and more data are accumulated making their
storage and analysis a big problem. When subatomic particles are smashed
together at the LHC, they create showers of both known and unknown new
particles whose signatures are recorded by four detectors. The LHC captures 5
trillion bits of data (more information than all of the world’s libraries combined)
every second. After the application of filtering algorithms, more
than 99 percent of those data are discarded, but still the four
detectors produce 25 petabytes (25×10^15 bytes) of data per year that
must be stored and analyzed. These are processed on a vast computing grid of
160 data centers around the world, a distributed network that is capable of
transferring as much as 10 gigabytes per second at peak performance. Are
these data really necessary? Can we be sure that useful data are not being discarded while filtering, particularly when we do not know what we are searching for, or are searching selectively? Is there no other way to formulate theory? Is the outcome cost-effective?
The unstructured streams of digital potpourri are no longer stored in a single computer - they are distributed across multiple computers in large data centers or even in the "cloud". This demands
developing rigorous scientific methodologies and different data-processing
requirements - not only flexible databases, massive computing power and
sophisticated algorithms, but also a holistic (not reductionist) approach to get
any meaningful information. One possible solution to this dilemma is to embrace
a new paradigm. In addition to distributed storage, why not analyze the data in
a distributed manner as well! Each unit (or node) in a network of computers performs a small piece of the computation, and each partial solution is then integrated to find the full result. For example, at the LHC, one complete copy of the raw data (after filtering) is stored at CERN in Switzerland. A
second copy is divided into batches that are then distributed to data centers
around the world. Each center analyzes a chunk of data and transmits the
results to regional computers before moving on to the next batch. But this
lacks the holistic approach. The reports of the six blind men about the body
parts of the elephant are individually correct. But unless someone has seen an
elephant, he cannot make any sense out of it.
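To make the pattern concrete, here is a minimal sketch in Python, with simulated event energies standing in for real detector output and a simple histogram standing in for the actual physics analysis (the names and numbers are illustrative assumptions, not the LHC's software):

import numpy as np
from concurrent.futures import ProcessPoolExecutor

BINS = np.linspace(0, 200, 41)          # illustrative energy bins

def analyze_chunk(chunk):
    # Work done by one "node": histogram only its own share of the events.
    counts, _ = np.histogram(chunk, bins=BINS)
    return counts

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    events = rng.exponential(scale=50.0, size=1_000_000)   # fake event energies
    chunks = np.array_split(events, 8)                      # one chunk per node
    with ProcessPoolExecutor(max_workers=8) as pool:
        partial = list(pool.map(analyze_chunk, chunks))     # distributed analysis
    total = np.sum(partial, axis=0)                         # integrate partial results
    print(total[:5])

The point of the elephant analogy is the last step: only when the partial results are put together does the whole picture emerge, and it is exactly that integration which a reductionist workflow tends to neglect.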
THE BIG-DATA CHALLENGE:
The demand for ever-faster processors, while still present, is no longer the primary focus; the speed of a single processor is no longer the decisive constraint. The challenge is not how to solve
problems with a single, ultra-fast processor, but how to solve them with a
large number of slower processors. Yet, many problems in big-data cannot be
adequately addressed by adding more parallel processing. These problems are
more sequential, where each step depends on the outcome of the preceding step.
Sometimes the work can be split up among a bunch of processors, but that is not always easy. The time taken to complete one task is not always inversely proportional to the number of workers. Often the software is not written to take full advantage of the extra processors. The failure of just-in-time and super-efficiency in management led to a worldwide economic crisis. We may be approaching a similar crisis in the scientific and technological field. The Y2K problem was a
precursor to what could happen.
Addressing the storage-capacity
challenges of big-data involves building more memory and managing fast movement
of data. Identifying correlated dimensions is exponentially more difficult than
looking for a needle in a haystack. When one does not know the correlations one
is looking for, one must compare each of the ‘n’ pieces of data with every
other piece, which takes on the order of n-squared operations. The amount of data roughly doubles every year, a pace comparable to Moore's Law. With such an algorithm, each doubling of data demands four times (two squared) as much computing, and after a second doubling the following year, sixteen times (four squared) as much. Yet by next year our computers will only be about twice as fast, and in two years only four times as fast. Thus, we are falling exponentially behind in our ability to store and analyze the collected data. There are non-technical problems as well. The analytical tools of
the future require not only the right mix of physics, chemistry, biology, mathematics,
statistics, computer science, etc., but also the team leader to take a holistic
approach – free of reductionism. In the big-data scenario, mathematicians and
statisticians should normally become the intellectual leaders. But mathematics is focused on abstract work and does not encourage people to develop leadership skills – it tends to rank people linearly into an individual pecking order, introducing bias. Engineers are used to working on teams focused on
solving problems, but they cannot visualize new theories.
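A toy calculation makes the scaling argument above concrete (the figures are assumed round numbers, not measurements):

data = 1.0                              # relative data volume
speed = 1.0                             # relative hardware speed
for year in range(1, 6):
    data *= 2                           # data doubles every year
    speed *= 2                          # hardware also doubles (optimistic)
    work = data ** 2                    # cost of an n-squared algorithm
    print(f"year {year}: compute needed x{work:.0f}, "
          f"hardware x{speed:.0f}, shortfall x{work / speed:.0f}")

The shortfall itself doubles every year, which is the sense in which we fall exponentially behind.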
Although smaller studies via distributed
processing provide depth and detail at a local level, they are also limited to
a specific set of queries and reflect the particular methodology of the
investigator, which makes the results more difficult to reproduce or reconcile
with broader models. The big impacts on the ecosystem including effects of
global warming cannot be studied with short-term, smaller studies. But in the
big-data age of distributed computing, the most important decision to be taken
is: how to conduct distributed science across a network of researchers - not
merely “interdisciplinary research”, but a state of “trans-disciplinary
research” - free from the reductionist approach? Machines are not going to
organize data-science research. Researchers have to turn petabytes of data into
scientific knowledge. But who is leading data-science right now? There is a
leadership crisis! There is a conceptual crisis!
Today’s big data is noisy,
unstructured, and dynamic rather than static. It may also be corrupted or
incomplete. Many important data are not shared until their theoretical, economic or intellectual-property aspects are fully exploited. Sometimes data are fudged. Ideally, data should consist of vectors – strings of numbers and coordinates. But
now researchers need new mathematical tools, such as text recognition, or data
compression by selecting key words and their synonyms, etc., in order to glean
useful information from and intelligently curate the data-sets. For this either
we need a more sophisticated way to translate it into vectors, or we need to
come up with a more generalized way of analyzing it. Several promising
mathematical tools are being developed to handle this new world of big, multimodal
data.
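As a minimal illustration of what "translating data into vectors" can mean for text (the keyword list here is made up, not taken from any of the tools mentioned above):

from collections import Counter

vocabulary = ["mercury", "gold", "mining", "pollution", "recycling"]   # assumed keywords

def to_vector(text):
    # Count how often each keyword occurs; the counts form the vector.
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

print(to_vector("Gold mining with mercury causes mercury pollution"))
# -> [2, 1, 1, 1, 0]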
THE NEW APPROACH:
One solution suggested is based
on consensus algorithms – a mathematical optimization approach. Algorithms trained on past data are useful for creating an effective spam filter on a single
computer, with all the data in one place. But when the problem becomes too
large for a single computer, a consensus optimization approach works better. In
this process, the data-set is chopped into bits and distributed across several "agents", each of which analyzes its bit and produces a model based on the data it has processed - similar in concept to Amazon's Mechanical Turk crowd-sourcing methodology. The program learns from the feedback, aggregating
the individual responses into its working model to make better predictions in
the future. In this system, the process is iterative, creating a feedback loop.
Although each agent’s model can be different, all the models must agree in the
end - hence “consensus algorithms”. The initial consensus is shared with all
agents, which update their models and reach a second consensus, and so on. The
process repeats until all the agents agree.
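A minimal sketch of this idea, with simulated data, a one-parameter model and a simple averaging step standing in for whatever update rule a real consensus system would use:

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
y = 3.0 * x + rng.normal(scale=0.5, size=x.size)        # true slope is 3

chunks = list(zip(np.array_split(x, 10), np.array_split(y, 10)))
w = np.zeros(10)                                        # one local model per agent

for _ in range(200):
    for i, (xi, yi) in enumerate(chunks):
        grad = np.mean((w[i] * xi - yi) * xi)           # fit only the local chunk
        w[i] -= 0.1 * grad
    w = 0.5 * w + 0.5 * w.mean()                        # pull every agent toward the average

print("consensus slope:", w.mean())                     # converges near 3

Each agent sees only its own bit of data, yet the repeated averaging drives all the local models to agree, which is the "consensus" the name refers to.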
Another prospect is quantum
computing, which is fundamentally different from parallel processing. A classical
computer stores information as bits that can be either 0s or 1s. A quantum
computer could exploit a weird property called superposition of states. If we
flip a regular coin, it will land on heads or tails. There is zero probability
that it will be both heads and tails. But if it is a quantum coin, it is said
to exist in an indeterminate state of both heads and tails until we look to see
the outcome. Thereafter, it collapses - assumes a fixed value. This is a wrong
description of reality. The result of measurement is always related to a time
t, and is frozen for use at later times t₁, t₂, etc., when
the object has evolved further and the result of measurement does not depict
its true state. Thus, we can only know the value that existed at the moment of
observation or measurement. Scientists impose their ignorance of the true state
of the system at any moment on the object or the system and describe the combined
unknown states together as superposition of all possible states. It is
physically unachievable.
Quantum computers, if built, will be best suited to simulating quantum mechanical systems or to factoring large numbers to break codes in classical cryptography. Quantum computing might be able to assist big data by searching very large, unsorted data-sets in a fraction of the time required by parallel processors. However, to really make it work, we
would need a quantum memory that can be accessed while in a quantum superposition,
but the very act of accessing the memory would collapse or destroy the
superposition. Some claim to have developed a conceptual prototype of quantum
RAM (Q-RAM), along with an accompanying program called Q-App (pronounced
“quapp”) targeted to machine learning. The system could find patterns within
data without actually looking at any individual records, thereby preserving the
quantum superposition (questionable idea). One is supposed to access the common
features of billions of items in his database at the same time, without individually
accessing them. With the cost of sequencing human genomes (where a single
genome is equivalent to 6 billion bits) dropping, and commercial genotyping
services rising, there is a great push to create such a database. But knowing
about malaria without knowing who has it is useless for treatment
purposes.
Another approach is integrating
across very different data sets. No matter how much we speed up computers, individually or collectively, the real issues are at the data level. For example, a
raw data-set could include thousands of different tables scattered around the
Web, each one listing similar data, but each using different terminology and
column headers, known as "schemas". The problem can be overcome with a header to
describe the state. We must understand the relationship between the schemas
before the data in all those tables can be integrated. That, in turn, requires
breakthroughs in techniques to analyze the semantics of natural language. What
if our algorithm needs to understand only enough of the surrounding text to
determine whether, for example, a table includes specific data so that it can
then integrate the table with other, similar tables into one common data set? It
is one of the toughest problems in AI. But Panini has already done it with the Pratyaahaara style of the 14 Maheshwari Sootras.
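A toy sketch of the schema-integration step, with hand-made tables and a hand-written synonym map standing in for real semantic analysis of the surrounding text:

SYNONYMS = {"cost": "price", "amount": "price", "item": "product", "name": "product"}   # assumed map

def normalize(header):
    return SYNONYMS.get(header.strip().lower(), header.strip().lower())

def merge_tables(tables):
    # Merge rows from tables whose headers differ only in terminology.
    merged = []
    for headers, rows in tables:
        keys = [normalize(h) for h in headers]
        merged.extend(dict(zip(keys, row)) for row in rows)
    return merged

table_a = (["Item", "Cost"], [["soap", 20]])
table_b = (["product", "amount"], [["oil", 90]])
print(merge_tables([table_a, table_b]))
# -> [{'product': 'soap', 'price': 20}, {'product': 'oil', 'price': 90}]

The hard part, of course, is building that synonym map automatically from natural language, which is the AI problem referred to above.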
One widely used approach is topological data analysis (TDA), an outgrowth of machine learning - a way of
getting structured data out of unstructured data so that machine-learning
algorithms can act directly on it. It is a mathematical version of Occam’s
razor: While there may be millions of possible reconstructions for a fuzzy,
ill-defined image, the sparsest (simplest) version is probably the best fit.
Compressed sensing was born out of this serendipitous discovery. With
compressed sensing, one can determine which bits are significant without first
having to collect and store them all. This allows us to acquire medical images
faster, make better radar systems, or even take pictures with single pixel
cameras. The idea goes back to Euler, who puzzled over a conundrum: is it
possible to walk across seven bridges connecting four geographical regions,
crossing each bridge just once, and yet end up at one’s original starting
point? The relevant issue was the number
of bridges and how they were connected. Euler reduced the four land regions to
nodes connected by the bridges represented by lines. To cross all the bridges
only once, each land region would need an even number of bridges. Since that
was not the case, such a journey was impossible. A similar story is told in
B-Schools. If 32 teams play a knock-out tournament, how many games will be
played totally? One reasoned that in every game, one team will be defeated.
Only one team will remain undefeated till the end. Thus, the total number of
games is 31. This is the essence of compressed sensing. Using compressed
sensing algorithms, it is possible to sample only 100 out of 1000 pixels in an
image, and still be able to reconstruct it in full resolution - provided the
key elements of sparsity (which usually denotes an image’s complexity or lack
thereof) and grouping (or holistic measurements) are present.
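A minimal sketch of that claim: a synthetic sparse signal is sampled with far fewer random measurements than it has entries, and recovered here with a simple greedy matching-pursuit loop (a stand-in for production compressed-sensing solvers):

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 1000, 100, 5                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)     # random measurement matrix
y = A @ x                                    # only 100 measurements of 1000 unknowns

support, residual = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ residual)))            # column best matching the residual
    support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("recovery error:", np.linalg.norm(x_hat - x))       # close to zero

Sparsity is what makes this possible: because only a few entries are non-zero, a handful of holistic random measurements pins them down, just as one defeated team per game pins the total at 31.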
Taking these ideas, mathematicians
are representing big data-sets as a network of nodes and edges, creating an
intuitive map of data based solely on the similarity of data points. This uses
distance as an input that translates into a topological shape or network. The
more similar the data points are, the closer they will be to each other on the
resulting map. The more different they are, the further apart they will be on
the map. This is the essence of TDA. Many of the methods in machine learning
are most effective when working with data matrices, like an Excel spreadsheet,
but what if our data set does not fit that framework?
TDA is all about the connections.
In a social network, relationships between people can be mapped: with clusters
of names as nodes and connections as edges illustrating how they are connected.
There will be clusters relating to family, friends, colleagues, etc. But it is
not always discernible. From friendship to love is not a linear relationship. It
is possible to extend the TDA approach to other kinds of data-sets, such as
genomic sequences. One can lay the sequences out next to each other and count
the number of places where they differ. That number becomes a measure of how
similar or dissimilar they are and one can encode that as a distance function. This
is supposed to reveal the underlying shape of the data. A shape is a collection
of points and distances between those points in a fixed order. But such a map will
not accurately represent the defining features. If we represent a circle by a
hexagon with six nodes and six edges, it may be recognizable as a circular
shape, but we have to sacrifice roundness. A child grows with age, but the rate
of growth is not uniform in every part of the body. Some features develop only
after a certain stage. If a lower-dimensional representation shows topological features, that is not a sure indication that those features exist in the original data. A flat visual representation of the Earth's surface does not reveal its curvature. Topological methods are also a lot like casting a
two-dimensional shadow of a three-dimensional object on the wall: they enable
us to visualize a large, high-dimensional data set by projecting it down into a
lower dimension. The danger is that, as with the illusions created by shadow
puppets, one might be seeing patterns and images that are not really there. There
is a joke that topologists cannot tell the difference between their rear end
and a coffee cup because the two are topologically equivalent.
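A small sketch of the distance function described above, using made-up sequences: count the positions where two sequences differ and link the ones that are close.

sequences = {"s1": "ACGTACGT", "s2": "ACGTACCT", "s3": "TTGTACGA"}   # made-up sequences

def hamming(a, b):
    # Number of positions at which two equal-length sequences differ.
    return sum(p != q for p, q in zip(a, b))

THRESHOLD = 2
names = list(sequences)
edges = [(p, q) for i, p in enumerate(names) for q in names[i + 1:]
         if hamming(sequences[p], sequences[q]) <= THRESHOLD]
print(edges)     # [('s1', 's2')] - only s1 and s2 are similar enough to be connected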
Some researchers emphasize the
need to develop a broad spectrum of flexible tools that can deal with many
different kinds of data. For example, many users are shifting from traditional
highly structured relational databases, broadly known as SQL, which represent
data in a conventional tabular format, to a more flexible format dubbed NoSQL.
It can be as structured or unstructured as we need it to be, depending on the
application. Another method favored by many is the maximal information
coefficient (MIC), which is a measure of two-variable dependence designed
specifically for rapid exploration of many-dimensional data-sets. It was claimed that MIC possesses a desirable mathematical property called equitability that mutual information lacks. This claim has been disputed: critics argue that MIC does not really address equitability and that the more pressing issue is its statistical power. MIC is said to be
less powerful than a recently developed statistic called distance correlation
(dCor) and a different statistic, HHG, both of which have their own problems
and are not satisfactory either.
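For the curious, here is a small sketch of the standard sample distance-correlation (dCor) formula, applied to a made-up, purely nonlinear relationship that ordinary correlation misses:

import numpy as np

def distance_correlation(x, y):
    # Sample distance correlation of two one-dimensional samples.
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])                  # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()    # double-centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = max((A * B).mean(), 0.0)                     # guard against tiny negatives
    dvar = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(dcov2 / dvar) if dvar > 0 else 0.0

x = np.linspace(-1, 1, 200)
print(np.corrcoef(x, x**2)[0, 1])          # ~0: Pearson sees no dependence
print(distance_correlation(x, x**2))       # clearly positive: dCor detects it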
In all these, we are missing the forest for the trees. We do not need massive data – we need theories out of the data. The Higgs boson is said to validate the Standard Model (SM), which does not include gravity and is hence incomplete. The graviton, predicted by quantum theories of gravity and described differently in string theory, is yet to be discovered. That the same experiment rules out supersymmetry (SUSY), which sought to unite gravity with the SM, questions the SM and points to science beyond it.
A report published in July, 2013 in the Proceedings of the National
Academy of Sciences USA, shows that to make healthy sperm, mice must have
genes that enable the sense of taste. Sperm have been shown to host
bitter-taste receptors and smell receptors, which most likely sense chemicals
released by the egg. But the idea that such proteins might function in sperm
development is new. Elsewhere researchers have found taste and smell receptors in
the body that help to sense toxins, pick up messages from gut bacteria or foil
pathogens. This opens up a whole world of alternative uses of these genes. When
we assign functions to genes, it is a very narrow view of biology. Probably for
every molecule that we assign a specific function to, it is doing other things
in other contexts. If anyone bothered to read the ancient system of medicine, Ayurveda, or properly interpret the Mundaka Upanishadic dictum "annaat praano", they would be surprised to rediscover the science indicated by the latest data. We are not discussing it here due to space constraints. There are many such examples. Instead of looking outward to data, let us look inward, study the objects, and develop the theories (many of which have become obsolete) afresh based on currently available data. The mindless data-chase must stop.
THE WAY AHEAD:
Theory
without technology is lame. Technology without theory is blind. Both need each
other. But theory must guide technology and not the opposite. Nature provides
everything for our sustenance. We should try to understand Nature and harmonize
our actions to natural laws. While going for green technology, we must focus on the product that we use rather than on the packaging that we discard. According to a recent study, people in London waste 60% of the food they buy while others go hungry. Necessity, and not a mere idea, should lead to the creation of a product. Minimizing waste is also green. Only products that are not really essential for our living need advertisement. The notion that every business is show business must change. Product-liability laws should be strengthened, specifically in the FMCG sector. But
what is the way out when economic and military considerations drive research? We
propose an approach as follows:
· The cult of incomprehensibility and reductionism that rules science must end, and trans-disciplinary research values must be inculcated.
Theory must get primacy over technology. There should be more seminars to
discuss theory with feedback from technology. Most of the data collected at
enormous cost are neither necessary nor cost effective. This methodology must
change.
· The superstitious belief in 'established
theories’ must end and truth should replace fantasy. We have given alternative
explanations of ten-dimensions, time dilation, wave-particle duality,
superposition, entanglement, dark-energy, dark matter, inflation, etc, before international
scientific forums with macro-examples without cumbersome mathematics, while
pointing out the deficiencies in many ‘established theories’. Those views have
not yet been contradicted.
· To overcome economic and military pressure, international conventions like the Minamata Mercury Convention should be held regularly for other problem areas, under the aegis of UNESCO or similar international bodies.
· There is no need to go high-tech in all fields. We should think out of the box. Traditional knowledge is a very good source of information (most herbal-product companies use it successfully). If we analyze it scientifically without any bias, we can get a lot of useful inputs. The chain of "Amma Canteens" in Tamil Nadu, India, is an excellent example of green technology. It supplies fresh food at cheap rates with minimal infrastructure, storage, transportation, pollution and wastage, and maximum employment. The focus is on the locally available product and not the package. There can be many more such examples and innovations without big data.
· The general educational syllabus must seek to
address day-to-day problems of the common man. Higher education should briefly
integrate other related branches while focusing on specialization.
· Technologist is an honorable term. But stop calling technologists scientists.
N.B.: Here we have used ancient
concepts with modern data.