Morsels of Knowledge Banquets of Ignorance: Scientific Fallacies Exposed



[Humankind] approaches the unattainable truth through a succession of errors. 
– Aldous Huxley, Wordsworth in the Tropics

A state of thoroughly conscious ignorance is the prelude to every real advance of knowledge.
– James Clerk Maxwell

Culture consists, in large measure, of commonly shared sets of assumptions and expectations about reality. It is a kind of lens through which we look at the world, one that is implanted in us in infancy and childhood and that is continually readjusted throughout life. According to the curvature and distortion of our lens, some things appear substantive that are actually only phantoms, while other things that are indeed quite solid and real are, to us, invisible.

We take this cultural astigmatism for granted in religion and politics. In politics, facts are widely understood to be merely incidental to worldviews constructed out of ideological and economic necessity. For example, recently most Americans have been convinced by politicians that crime is the result of too few people being in prisons — this despite the well-known fact that their nation already incarcerates a greater percentage of its citizens than does any other, with no observed effect on the crime rate (unless it is an inverse one). They have likewise become convinced that the poverty of the Third World is due to the sad circumstance that people in certain “backward” countries are “not yet ready for democracy,” are inherently unindustrious, or are overburdened by irrational tradition. Meanwhile nearly everyone (in the U.S., though this is not so much the case elsewhere) studiously ignores the clear fact that the Third World has been — and continues to be — systematically plundered by corporations that routinely use the power of the CIA, the World Bank, and (if necessary) the U.S. military to dominate or destroy indigenous enterprise. While this fact is frequently pointed out by certain “radical” political scientists and by the alternative press, it is rarely mentioned by politicians or by the mainstream media because its widespread acknowledgment would be inimical to the purposes of power. But no one is surprised, because most people believe that this is how politics works — that political worldviews are always shaped more by the self-interest of powerful individuals and groups than by mere facts.

Science is supposed to be different. The goal of the founders of Western science was to create a system of inquiry based on evidence, one in which theories would be continually tested, discarded, and replaced according to the impersonal dictates of fact and reason. Science was meant to stand above culture.

This was, and is, a laudable ideal. But science’s quest for objectivity has always had to contend with two unalterable obstacles: the fact that scientists themselves are human beings with prejudices, fears, and ambitions; and the fact that the practice of science takes place within a cultural context wherein the economic goals of elites, class power relations, and a host of shared unconscious assumptions cast an unavoidable and mostly invisible influence on the proceedings. Science does not stand above culture; it swims in it. In science, as in religion and politics, there are power bases to protect, careers to maintain, masses to convert, empires to build. And so the history of science is full of examples of dogmas constructed and defended, evidence suppressed or twisted, and alternative theories ignored.

Such historical examples as the disbelief of early-19th-century scientists in the existence of meteors, Lord Kelvin’s denunciation of X-rays as a hoax, and the unwillingness of 18th-century chemists to abandon the phlogiston theory make for interesting reading. But we seldom look at contemporary science with the degree of skepticism that such past failings would seem to warrant. Yet in a typical research project, a lengthy series of theoretical presuppositions shapes the concepts that guide experimental and equipment design, and the resulting data must then be processed according to those same presuppositions in order to make sense. It should therefore be clear to us that even many of the “observed facts” of modern science are largely hypothetical.

That’s not the impression one gets when reading science articles in news magazines, or when watching television science documentaries, in which we are told repeatedly that scientists now “know” that the Universe began fifteen billion years ago in a Big Bang; that life on this planet evolved first from terrestrial chemical processes and then by way of competition and natural selection; that atoms are made up of tiny charged particles; and so on.

Recently it occurred to me that it might be helpful to make a brief reconnaissance of the boundaries of our collective ignorance. My objective here is not to denigrate the achievements of those who have expanded the territory of the known, but merely to call attention to — and honor — the great ocean of the unknown in which our collective knowledge floats.


In a previous issue of MuseLetter [“Don’t Enshrine the New Physics… Just Yet,” Number 19, July 1993], we explored briefly some of the difficulties with quantum and relativity theories. As we noted there, physicists are fond of pointing out the limitations of the “Newtonian paradigm,” in which space is Euclidean, the Universe is (in principle) entirely predictable, and matter is made up of billiard-ball atoms. The quantum and relativity theories of the early twentieth century are often hailed as having liberated the human mind from mechanistic and dualistic assumptions, and as confirming the mystical worldviews of Eastern religions.

Unfortunately, physicists hardly ever express to the general public their perplexity and inability to reconcile the fundamental contradictions between current theories — though among themselves they occasionally admit that “Physics is now faced with a crisis in which… further changes will have to take place, which will probably be as revolutionary compared to relativity and the quantum theory as these theories are compared to classical physics.” (David Bohm) In light of a statement like this from one of the most eminent scientists of our century, one cannot help but feel a certain bemused skepticism at the attempts of some science popularizers to create a mythic worldview for the masses out of a “new” physics that is already beginning to look a bit tattered and worn around the edges.


Viktor Schauberger (1885-1958) was an Austrian engineer, inventor, and natural scientist who, through observation and experiment, came to believe that it would be possible to create a life-enhancing system of technology working on principles entirely different from those presently understood. Virtually all of our current technologies are powered by the liberation of energy through the breaking down of complex materials into simpler ones through combustion and explosion, which produce expansion and heat. Schauberger believed that these processes represent only the destructive side of Nature, and that we have ignored Nature’s creative forces — which are characterized by centripetal, hyperbolic, spiral movement, the lowering of temperature, and the creation of new complex forms. He maintained that our technologies should be going with the flow of Nature rather than forcing actions that are contrary to it. Schauberger designed and built an “implosion generator” which was said to have attained “negative friction”; he also invented water purification systems, hydroturbines, and (reputedly) anti-gravity vehicles. However, his work was largely ignored during his lifetime. At present, about a half dozen groups worldwide are seeking to develop and implement technologies based on Schauberger’s pioneering ideas.


One of the greatest difficulties faced by astronomers is that in most cases they cannot directly probe the objects of their study; rather, they must analyze infinitesimal traces of radiation that have presumably traveled thousands or millions of years to arrive here from stars, galaxies, quasars, and even more exotic objects lying at unimaginable distances. With so little to go on, their analyses of these traces must inevitably incorporate some of the very hypotheses they seek to validate. This can lead to problems.

Lying at the foundation of modern cosmology is the observed spectrographic “red shift” of light from distant objects, which has been interpreted to mean that these objects are moving away from us. According to current views, the objects with the greatest red shifts are the most distant and are receding fastest, which means that the Universe is expanding in every direction. Hence it must have originated in a huge explosion — the famous Big Bang.

But not everyone subscribes to this interpretation. H. Arp was formerly listed as one of the top twenty astronomers in the world, until he began cataloging apparently associated celestial objects with differing red shifts. He began to openly suggest that at least some red shifts are not a measure of recessional velocity and distance. His reputation plummeted. I.E. Segal’s chronometric theory of the cosmos predicts a quadratic rather than a linear relationship between red shift and distance, which would do away with the expanding Universe altogether. Some red-shift measurements do indicate such a quadratic relationship. H. Alfven, a Nobel Prize winner in physics, posits a Universe shaped more by electromagnetic than gravitational forces; his theory rules out the possibility that the Universe could ever have had a diameter less than one-tenth its present one. Hence no Big Bang. The upshot of all of this is that we really do not know when or how — or if! — the Universe began; nor do we know what forces are primarily responsible for shaping it; nor do we know for certain how far away distant objects are or whether they are moving toward or away from us.
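The contrast drawn in this paragraph can be made concrete in a few lines of code. The sketch below compares the standard linear Hubble-law relationship with a quadratic relationship of the general kind Segal's theory predicts. The Hubble constant and the quadratic coefficient `k` are illustrative assumptions of my own, not values drawn from either theory; only the linear-versus-quadratic shapes matter here.

```python
# Schematic comparison of the two redshift-distance relationships
# contrasted in the text. Constants are illustrative assumptions only.

H0 = 70.0        # assumed Hubble constant, km/s per megaparsec
C = 299_792.458  # speed of light, km/s

def redshift_linear(d_mpc):
    """Standard interpretation: redshift grows linearly with distance."""
    return H0 * d_mpc / C

def redshift_quadratic(d_mpc, k=1.0e-8):
    """Chronometric-style relation: redshift grows with distance squared.
    (k is an arbitrary illustrative constant.)"""
    return k * d_mpc ** 2

# Under the linear law, doubling the distance doubles the redshift;
# under the quadratic law, doubling the distance quadruples it.
for d in (100, 200):
    print(d, redshift_linear(d), redshift_quadratic(d))
```

The point of the comparison is that the same catalog of redshift measurements can be read against either curve, which is why the observational question remains contested.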

Closer to home, all of the recently popular theories of the Moon’s origin have been discredited, with no new one taking their place; meanwhile, serious questions have been raised about how the Sun generates its energy, where comets come from, why Mars lost its former atmosphere and lakes of water, and whether the “dark matter” between stars and galaxies may contain the seeds of life.


The geological record is formed from layers of rock that geologists liken to pages in a book. Unfortunately, that book is far from complete. In fact, of the ten major geological periods, only five or fewer are represented on two thirds of the Earth’s land surface. In some places the “periods” occur in the wrong order. And most fossils used for dating rock layers span anywhere from a few of the ten periods to all of them. The result: the “geological column” by which we construct Earth history is largely hypothetical.

Most geologists interpret this “column” on the basis of the theory of uniformitarianism, according to which the origin of major land features is to be attributed to mechanisms similar to those we see acting today, with small effects accumulating over vast stretches of time. However, there is growing evidence to suggest that the planet’s surface may have been shaped to a large extent by ancient global cataclysms, some of extraterrestrial origin — that is, by collisions with comets and other interstellar debris. With the dinosaur extinctions now widely attributed to a comet impact, cosmic catastrophism as it applies to the geologic past is on the upsurge. But the idea that similar bombardments could have occurred since the origin of humankind is still officially unthinkable.

Geologists believe that the age of rocks can be found by the precise measurement of the radioactive minerals and decay products embedded within them. A similar analysis of a radioactive isotope of carbon is used by archaeologists to tell the age of less-ancient organic materials. Critics of radiometric dating have questioned the assumptions on which these methods rest. (For example, does the radioactive decay rate remain constant despite changes in temperature, cosmic ray influx, and pressure? Evidence suggests that it does not.) Critics also cite instances in which objects of known age, such as freshly-cooled volcanic rocks or just-felled trees, have yielded wildly inaccurate radiometric dates in the thousands or millions of years. But if the radiometric techniques are essentially useless, then how seriously are we to take the interminably repeated assertion that scientists “know” that the Earth is four-and-a-half billion years old?
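The reasoning behind radiometric dating can be sketched in a few lines. This is a minimal illustration of the standard radiocarbon calculation, assuming a conventional half-life of 5,730 years and a constant decay rate, which are precisely the assumptions the critics quoted above call into question.

```python
import math

# Sketch of the standard radiocarbon-age calculation referred to in the
# text. The half-life value and the constant-decay-rate premise are
# conventional assumptions, not conclusions.

HALF_LIFE_C14 = 5730.0  # years (conventional value)

def radiocarbon_age(remaining_fraction):
    """Age implied by the fraction of original C-14 still present,
    assuming the decay rate has always been constant."""
    decay_constant = math.log(2) / HALF_LIFE_C14
    return -math.log(remaining_fraction) / decay_constant

# Half the original C-14 remaining implies one half-life has elapsed.
print(radiocarbon_age(0.5))   # ≈ 5730 years
print(radiocarbon_age(0.25))  # ≈ 11460 years
```

Note that the formula has no term for temperature, pressure, or cosmic-ray influx; if the decay rate varied with any of these, the computed age would be wrong by an unknown amount.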

The Earth’s magnetic field is collapsing. At the present rate it will fall to zero in about 1200 years. No one knows why.
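The "about 1200 years" figure is a straight-line extrapolation from the measured rate of decline. The sketch below reproduces that style of reasoning with an assumed loss rate (the roughly 5%-per-century figure sometimes quoted for the dipole moment is my assumption here, not a value from the article); a different assumed rate yields a different date.

```python
# Straight-line extrapolation of a declining field strength.
# The loss rate is an illustrative assumption, not a measured value.

current_strength = 1.0    # field strength, normalized to today
loss_per_century = 0.05   # assumed fractional loss per century

def years_to_zero(strength, loss_rate):
    """Centuries until the field reaches zero at a constant rate of loss,
    converted to years."""
    centuries = strength / loss_rate
    return centuries * 100

print(years_to_zero(current_strength, loss_per_century))  # 2000.0 years
```

Such an extrapolation assumes the decline is linear and will continue unchanged; since the cause of the decline is unknown, that assumption is itself unverifiable.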


The problems in biology are so numerous and basic that it is hard to know where to begin, and we cannot hope to do more than name a few of the most glaring ones. Biology is, of course, the science of life; but biologists are generally averse to telling us just what life is. The strategy that is currently popular is to try to erase the conceptual boundary between life and non-life, though even the simplest living cell has characteristics profoundly different from those of any non-living entity. The difficulty comes because many scientists assume that biology should be reducible to chemistry and physics; they abhor the idea that living things might possess some fundamental principle not present in non-living matter. And yet all attempts to generate life out of chemicals (that is, to reproduce the processes that must have — according to theory — brought about the beginnings of life on Earth) have fallen far short of their goal.

A host of difficulties surround Darwinian and neo-Darwinian theories of evolution. In its essence, the word evolution simply means “directional change over time.” There is little question that evolution in this sense has taken place in the biological world. But what kind of evolution, and what has driven it? The idea that chance genetic mutations could add up constructively seems far-fetched, since few if any beneficial mutations have ever been seen to occur in Nature. And then there are structures, like the vertebrate eye, which simply would not have functioned until an entire complex of individual features was in place, though none of these by itself would have conferred any advantage on the organism.

Neo-Darwinian theorists treat the idea of natural selection with a kind of religious awe, but critics point out that it is essentially tautological: we say that the fittest survive, but how are we to define “the fittest,” except as “those who survive?” Moreover, natural selection implies fierce, unending competition. Yet, as entomologist P.S. Messenger puts it, “Actual competition is difficult to see in nature.” Nature instead produces unending examples of cooperation. Differing species, and members of the same species, go well out of their way to avoid competition wherever possible.

The science of genetics has gone a long way toward explaining how the physical characteristics of organisms are passed along through generations. But genetics is still unable to explain cell differentiation in embryos, the ability of simple organisms to regenerate lost limbs and organs, and the transmission of instinct.

The battle of evolutionary biologists with the Bible-based creationists has unfortunately served mostly to harden the ranks of the former against admissions that serious problems such as these exist. Our overwhelming ignorance is masked by sweeping declarations about the creative powers of natural selection, and we are deprived of the insights that might come from an honest assessment of the limits of our knowledge of life’s origin and development. Meanwhile, unorthodox but promising ideas — such as Fred Hoyle’s cosmic evolutionism (the proposal that life was seeded on Earth from comets), Ludwig von Bertalanffy’s theory of living systems, and Rupert Sheldrake’s theory of formative causation — are typically given short shrift.


Human origins are no less mysterious than those of other living things. In the past few decades, new techniques — such as the tracing of mutations in mitochondrial DNA — have offered intriguing clues as to the timing of our early ancestors’ significant migrations. But these techniques are not without difficulties.

Meanwhile, we humans exhibit a host of biological features and behavioral characteristics shared by none of our primate relatives, and these beg for explanations. Why our bipedal stance, limb proportions, smooth skin, brain size, tendency to perspire, and ability to breathe voluntarily? Why are humans uniquely prone to lower back pain, varicose veins, hemorrhoids, sunburn, acne, dandruff, and swollen adenoids?

British science writer Elaine Morgan, in her books The Aquatic Ape: A Theory of Human Evolution, and The Scars of Evolution: What Our Bodies Tell Us About Human Origins, has offered a promising proposal — that during our early evolution we humans passed through a long phase of adaptation to the shallow water of lakes, rivers, and sea coasts. As Morgan points out in her books, the features that separate us from other primates are precisely ones that appear in aquatic mammals such as manatees, dolphins, sea lions, and whales. Lower back pain, varicose veins, and hemorrhoids may derive from our shift from walking in shallow water to walking on land; acne and dandruff from our continuing to secrete fur-waterproofing sebum in an aquatic environment in which we became furless; and swollen adenoids from our descended larynx, characteristic of an aquatic mouth-breather.

Morgan’s hypothesis also goes a long way toward explaining the dearth of early human fossil remains. During past Ice Ages the level of the oceans was up to three hundred feet lower than at present. If human beings were shore-dwellers, most of their remains were likely buried underwater.

Morgan’s proposals are still considered eccentric by the establishment.


For decades archaeologists have maintained that the first people to set foot in the Americas crossed a land bridge across the Bering Strait roughly 12,000 years ago. But in several instances human artifacts or remains have turned up in deposits that are much older. While most experts continue to discount these anomalies, others are quietly beginning to concede that human beings may have been living in the Americas for twenty to fifty thousand years — or longer.

According to standard history, Native American cultures evolved in isolation from the rest of the world. For over a century, renegade archaeologists have theorized that the Mayans and Aztecs were influenced by Egyptian, Phoenician, or Chinese explorers. Most such theories perished for lack of incontrovertible evidence. During the past two decades, however, Barry Fell of Harvard, and others, have published descriptions of coins, petroglyphs, and other artifacts that seem to prove that Celts, Basques, Libyans, Arabians, Romans, Egyptians, Hebrews, and Chinese all visited North America at one time or another. The scientific establishment remains unconvinced.

In recent years amateur Egyptologist John Anthony West has called attention to the remarkable weathering patterns on the Sphinx on the Gizeh plateau. Geologists he has consulted are in general agreement that the weathering was caused by water and indicates an age of seven to ten thousand years. Since no other Egyptian monument shows similar weathering (despite similarity of materials), this would seem to indicate that the Sphinx was built in an era long predating the pharaohs. Professional Egyptologists are adamant that the Sphinx is less than five thousand years old and was constructed by the pharaoh Khephren, and ask: If the Sphinx is an artifact of an earlier civilization, where is the corroborating evidence for that civilization’s existence? Good question. But the archaeologists offer no alternative explanation for the Sphinx’s deep water channels.


Just as biology deals with life by explaining it away, psychology often treats consciousness (the natural object of its study) as an epiphenomenon, or even — in the case of behaviorism — as something to be ignored altogether. Despite some progress in the past decade, we still have no generally accepted theory of consciousness and we still do not know how memories are stored and accessed.

Because psychology is forced to exist within the mechanistic framework of the rest of science (excepting quantum physics), most psychologists avoid consideration of “paranormal” phenomena such as precognition, telepathy, clairvoyance, etc., which resist rationalistic explanations. Nevertheless, both anecdotal and experimental evidence for such psychic phenomena persists, albeit in maddeningly elusive forms.

These days the idea that the mind can influence the body via the immune system is becoming widely accepted. But extraordinary mind-body phenomena are difficult to understand in terms of known biophysical processes. How is it that, in a single individual with multiple-personality disorder, one personality may suffer from severe food allergies to which another personality is immune?

According to conventional views in psychology, higher mental phenomena are supposed to take place in the cerebral cortex, a late evolutionary development. How, then, is one to explain cases of hydrocephalus in which the cortex is only a millimeter or so thick, but the affected individual shows no obvious mental impairment? British neurologist John Lorber, in research published during the 1980s, cited the case of a Sheffield University student with an IQ of 126 who had virtually no brain. What does this do to our beliefs about the relationship between the brain and consciousness?

The notion that human consciousness is affected by geomagnetic fields is still at the fringe of scientific acceptability. Two surveys (Stevenson, 1970; and Braud and Dennis, 1989) suggest that paranormal experiences coincide with days of minimal geomagnetic activity; another (Raps et al., 1992) shows a high correlation between solar activity (which seems to influence the geomagnetic field) and the outbreak of psychiatric illnesses. Given that the Earth’s magnetic field is diminishing, should we prepare ourselves for the widespread occurrence of psychic — and psychiatric — phenomena?


Readers already familiar with the study of scientific anomalies will know that we have hardly scratched the surface. In virtually every field, widely accepted views are plagued by internal contradictions; and in many cases these problems are hardly peripheral, but pertain to bedrock issues. Moreover, they tend to compound one another: a scientist in one discipline (such as astronomy), in order to clear up a problem, will often rely on “facts” from another discipline (such as physics), believing that the conclusions he reaches thereby are solidly supported — when in reality they may be resting upon the flimsiest of foundations. This process snowballs from discipline to discipline, specialist relying upon specialist.

When one begins to see the same pattern of anomaly, dogma, and denial in one field after another, the overall picture one gets is of a scientific world-view that is virtually a house of cards. We have created a system of knowledge consisting of millions of observed facts arranged in such a way as to give an essentially false view of the nature of reality. My point is not that science has made no valuable contributions — it has! — but that we always need to see those contributions in context and to appreciate their limitations and the tradeoffs we have made for their sake. The legendary Lao Tze reputedly wrote, “To know how little one knows is to have genuine knowledge.” Ironically, we in the industrialized world — who pride ourselves on living in an “information society” — are perhaps further from having genuine knowledge than were people in most “primitive” cultures throughout history.

The only sane course of action in this circumstance would be to take an attitude of extreme skepticism with regard to all scientific explanations, admit our ignorance, and adopt a stance of great caution with regard to actions we might take with respect to Nature and society based on scientific theories.

Our technological transformation of Nature and our destruction of traditional cultures during the past five centuries have rested upon three pillars — economic greed, religious zealotry, and scientific hubris. It could be argued that each is a twisted manifestation of (or substitute for) a healthy human drive. Our task in creating a new, life-affirming culture must be to carefully remove all three of these props and to replace them with a single sound taproot reaching deep into the heart of the soil and the soul.

Reprinted from Museletter Number 30, June 1994. 



RICHARD HEINBERG is the author of Memories and Visions of Paradise: Exploring the Universal Myth of a Lost Golden Age (Quest Books: 1995), Celebrate the Solstice: Honoring the Earth’s Seasonal Rhythms Through Festival and Ceremony (Quest Books: 1994), and A New Covenant With Nature. Since the publication of this article in his Museletter (now discontinued) Richard has produced many more publications. His website is

The above article appeared in New Dawn 43 (Jul-Aug 1997)

© New Dawn Magazine and the respective author.
