On scientific models & truth

Figure 1: The Flammarion engraving depicts a traveller who arrives at the edge of a flat earth and sticks his head through the firmament.
Ever since the dawn of civilization, humans have made efforts to understand the events happening around them. Consider the basic agencies operating in nature: events like droughts, floods, earthquakes, eclipses, sunrises and sunsets; forces like rivers, fires and winds; and patterns like the constellations and the shape, scale and motion of the moon, the sun and the earth. Our forefathers either attributed most of these occurrences to supernatural causes or, in some cases, had a very shallow and simplistic understanding of the physical mechanisms involved. Of the latter, the most apt example is the Flat Earth Theory. Many ancient cultures subscribed to the Flat Earth cosmography because all the low-resolution evidence available at length scales much smaller than the radius of the Earth yielded a quite practical and useful flat map projection. The same can be said about the idea of the Geocentric Universe. The apparent motion of the Sun going around the Earth once every 24 hours seems perfectly real from the point of view of someone standing on the Earth. Even though grossly in error, the predictions of these theories, based on the coarse-resolution evidence of the time, provided practical answers to the relevant questions of those times.

As for supernaturalism, which rests on the assumption that supernatural beings, forces and agents exist and interact with the physical world, the examples are myriad. Some ancient cultures interpreted solar eclipses as moments when a demon swallowed the sun, while others thought of eclipses as acts of creation in which the sun and the moon were coupling, spawning more stars. Poetic and beautiful, but not true!

It is not unreasonable to assert that it was not until sometime between the 10th and 14th centuries that people began to notice the fallacies in their explanations, especially in supernaturalism, and to think about nature in a logically coherent way. During this period, Arab scholars (notably Ibn al-Haytham, Latinized as Alhazen) were the prime movers behind the development of the scientific process of inquiry. Alhazen's method was extensively refined by European scientist-philosophers during the Renaissance, leading to the great Scientific Revolution, and with it a new view of nature emerged that was autonomous, methodical, rooted in empiricism and distinct from philosophy. This view propounded and reinforced the concepts of physical cause and effect, and dispelled the notion that supernatural intervention was necessary to understand the natural world. Events now had 'physical' causes, and the method of scientific reduction gained a strong foothold in almost every field of inquiry.
Figure 2: Hevelius's Selenographia, showing Alhazen representing reason, and Galileo representing the senses.

During the Renaissance, the Polish astronomer and mathematician Nicolaus Copernicus posed a serious challenge to the Geocentric model of the universe and laid the foundations of Heliocentrism, which ultimately undermined man's arrogant notions of his own importance and, to a great extent, dispelled the illusion of looking at the world through an anthropocentric lens. In the second half of the 17th century, Newton showed that the 'Heavens' and the 'Earth' are governed by the same set of laws. His work on mechanics, and especially gravitation, reinforced the idea that there was nothing special or supernatural about the heavens. Qualitatively speaking, the same force that pulled down on an apple hanging from a tree was now responsible for the movement of the sun and the moon in the sky. No longer did the devil engage in the inauspicious dance of the sun and the moon; Newton had rendered him jobless! The heavens became part of a vocabulary in which supernaturalism found no mention. In the early 1800s, Dalton's atomic theory proposed that matter is composed of tiny indivisible units called atoms. This idea, coherent with philosophical 'atomism', unleashed with new vigor an era of 'reductionism' in scientific thought. The philosophy of reduction, or reductionism, lies at the core of the modern scientific method.
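
To make the apple-and-moon claim concrete, here is a minimal sketch of Newton's classic 'moon test' (the numbers are standard round figures, not taken from the original post): if gravity falls off as the inverse square of the distance, then the Moon, sitting roughly 60 Earth radii away, should accelerate toward the Earth about 60 squared, i.e. 3600, times more weakly than the falling apple does.

\[
a_{\text{moon}} \approx \frac{g}{60^2} = \frac{9.8\ \mathrm{m/s^2}}{3600} \approx 2.7\times10^{-3}\ \mathrm{m/s^2},
\]
\[
a_{\text{orbit}} = \frac{4\pi^2 r}{T^2} = \frac{4\pi^2 \times 3.84\times10^{8}\ \mathrm{m}}{(2.36\times10^{6}\ \mathrm{s})^2} \approx 2.7\times10^{-3}\ \mathrm{m/s^2}.
\]

The acceleration demanded by the inverse-square law matches the Moon's observed orbital (centripetal) acceleration, which is precisely the sense in which the apple and the Moon obey the same law.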

In the last 200 years, science has progressed at a rapid pace, advancing our fundamental knowledge and technology alike. At present, no one is much bothered about Dalton's atomic theory or the caloric theory of heat. Certain Newtonian principles are also outdated now. For instance, the theory of General Relativity (GR) developed by Einstein changed how we think about gravity. Newton's is a 'law' of universal gravitation; Einstein's is a 'theory'. The reader is cautioned not to read 'theory' as something weaker than a 'law'. In science, a 'theory' sits at the top of the hierarchy: while a 'law' simply states an observed regularity, a 'theory' explains it. Newton's law does not explain what gravity is or how it works; the theory of General Relativity is an effort in that direction. Just as Newton's notion of gravity was replaced by an Einsteinian version, and quantum theory overrode the picture of the atom as a solid, localized sphere, many other examples demonstrate the provisional nature of scientific theories. Moreover, every theory comes with a limited range of validity. For example, in GR events are continuous and deterministic, whereas in Quantum Mechanics (QM) events are produced by interactions of subatomic particles with probabilistic outcomes. The results derived from applying GR at subatomic scales are wrong, and so are the predictions of QM at cosmic scales. Reconciling the two is the holy grail of modern physics.
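
The contrast between 'stating' and 'explaining' can be made concrete with the two standard equations (shown here purely for illustration; the original post gives no formulas). Newton's law quantifies the attractive force between two masses, while Einstein's field equations say why there is an attraction at all: mass and energy curve spacetime, and freely falling bodies follow that curvature. Newton's law then reappears as the weak-field, slow-motion limit of GR, which is one example of a theory's limited range of validity.

\[
F = \frac{G\, m_1 m_2}{r^2} \qquad \text{(Newton: states the force between two masses)}
\]
\[
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} \qquad \text{(Einstein: mass and energy tell spacetime how to curve)}
\]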

Figure 3: Spacetime curvature schematic.
At this point, it is worth posing the central epistemological question about the nature of scientific knowledge: are scientific models true (or approximately true) descriptions of reality? From the instrumentalist perspective, to which I personally adhere, scientific models are no more than conceptual devices for predicting phenomena. In science, nothing is proven right. Throughout history, we have seen that a model or a theory is considered valid as long as the available evidence strongly supports it. Once some new phenomenon comes along that the present theory cannot explain, it is time to move on. But that does not make any of these theories 'right' or 'wrong'. In fact, the question of whether a model is right or wrong is beside the point. The flat map projection derived from flat-earth cosmography, based on the limited, low-resolution evidence available at the time, was as relevant and practical then as the idea of curved spacetime is now. The theoretical entities postulated in a good theory need only be useful in predicting phenomena, not 'real' in the literal sense of the word. The success of a model, therefore, is determined not by its truth but by its utility.

Cite as

Bader, Shujaut H., “On scientific models and truth.” Backscatter, May 6, 2020, www.backscatterblog.blogspot.com/2020/05/on-scientific-models-truth.html

References
  1. Figure 1: The Flammarion Engraving, from Camille Flammarion, L'Atmosphère: Météorologie Populaire (Paris, 1888), p. 163.
  2. Figure 2: Hevelius's Selenographia, Typ 620.47.452, Houghton Library, Harvard University.
  3. Figure 3: Spacetime curvature schematic, Wikipedia article on General Relativity.
