The topic I wish to explore in this paper should be of special interest to Thomists, for St. Thomas's teachings on the relationships between physical science and mathematics are distinctive and relevant to problems faced by scientists and philosophers in the present day. Yet there are difficulties in addressing this topic from the side of both physics and mathematics, since these disciplines bear little resemblance to those that went under the same names in the thirteenth century. Again, natural philosophy in the Thomistic tradition is in a relatively undeveloped state, having suffered from an overdose of metaphysics, even from metaphysical imperialism, in the past half century. So I am forced to build on foundations I myself have laid in my The Modeling of Nature, where I developed a philosophy of science based on an Aristotelian-Thomistic philosophy of nature, and in the paper I read at this Institute last year, where I elaborated on two subjects not treated in that book, the dispositions of protomatter and the use of mathematical models in the study of nature.1
The following is an overview of the thesis I intend to develop. Simply put, it answers the question I propose in my title in the affirmative. Yes, nature is accessible to the mathematical physicist. This can be seen readily by anyone who accepts mathematical physics as a scientia media or "mixed science" in the Aristotelian-Thomistic sense. Such a science enables one to demonstrate properties of natural bodies and to grasp the natures of inorganic substances, the elements and compounds that make up the universe in which we live. Also, mathematical physics enables us to attain scientific knowledge of physical bodies composed of elements and compounds, such as stars and planets, to the extent that they have natures, and even of subatomic entities that enter into their composition. In the case of subatomic entities the qualification "to the extent that they have natures" opens up the problem of what I call "transient natures," which I shall address in the latter part of this paper.2
The primary instrument of a mathematical physics is a demonstrative syllogism composed of two premises, one physical or natural, the other mathematical. The middle term will generally be a metrical concept, that is, one that expresses the result of a measuring process that applies a number or figure to a physical entity, and so pertains to both mathematics and physics. Such a measurement, even though approximate, is regarded as true if its result falls within the limits of error of the measuring process. The middle term may also take on meanings that are partly the same and partly different in the two premises, thus making use of analogous predication. Again, suppositions may be needed for the demonstration, and these require previous assent or verification independently of their use in the demonstration. These qualifications understood, it is possible for the mathematical physicist to secure strict demonstrations and thus to possess true scientific knowledge in the Aristotelian sense.
Aristotle defines nature as "a principle and cause of being moved or of rest in the thing to which it belongs primarily and in virtue of that thing, but not accidentally" (Physics II.1, 192b21-23). He further identifies it with the two essential principles of natural things he earlier uncovered in the Physics, matter or "underlying nature" (191a8), which I call protomatter, and a "natural form" (192b1) that makes the thing be the kind it is. Aristotle uncovers these principles from an analysis of the way in which substances come to be and pass away in the order of nature. His analysis is based not on mathematics but on ordinary experience, through a study of the qualities of objects as presented to the senses. Protomatter is for him a purely potential and indeterminate principle, whereas natural form is an actualizing or determining principle, one that specifies the object to be a particular natural kind.
In the order of the non-living, following an earlier Greek tradition Aristotle recognized four elements as natural kinds: fire, air, water, and earth. These elements he differentiated from one another by combinations of primary qualities (hot and cold, wet and dry) and motive powers (gravity and levity in different degrees). From them as constituents he attempted to explain compounds or "mixed" bodies and their various secondary qualities, such as their shapes and surface characteristics.
The coming-to-be of a natural substance, for Aristotle, could be brought about by the alteration of its sensible qualities. It was in this way that his commentators came to explain the transmutation of the elements, that is, the natural change of one element into another.3 A schema called the symbolum was commonly used to detail how this came about. This is shown in Fig. 1. Here the two pairs of contraries occupy the two diameters of a circle, with protomatter at its center. The contraries are arranged in such a way that each pair has a quality in common with, as well as a quality different from, the pair on either side of it. Thus the hot-dry combination at the top is so related to the cold-dry on the left that hot and cold are the extremes and dry the common intermediate. Similarly, the hot-dry is so related to the hot-wet on the right that dry and wet are the extremes and hot the common intermediate. The corresponding elements, each having its distinctive pair of contraries, can be converted into one another through the underlying substrate in the center, protomatter. Whenever an extrinsic agent so affects the dispositions of protomatter as to cause one or other distinctive pair of primary qualities to become dominant, the element corresponding to that pair of contraries emerges naturally from the potency of protomatter, the previous element recedes back into that potency, and an elemental transformation has taken place.4
For purposes of later reference, it will be convenient to replace the symbolum with the matrix shown in Table I, where the protomatter is not shown but nonetheless is presupposed.
Here the first column lists the elements (Fire, Air, Water, Earth); the second, the presence of heat (1) or its absence, cold (0), in the element; and the third, the presence of moisture (1) or its absence, dryness (0), in the same. Using this matrix and the binary digits it employs one can characterize the four basic elements that exist above the level of protomatter, that are convertible into one another by being reduced back to protomatter, and then are educed from protomatter with a different combination of properties.
How this is done is diagrammed in Fig. 2, slightly different from Fig. 1 but again presupposing protomatter in the background. Here normal transitions occur around the perimeter of the figure when one parameter is varied and the other remains unchanged, as in F to A, A to W, W to E, and E to F. Diagonal transitions, those from F to W and E to A, would generally not be allowed, because both parameters would have to be varied at once, and no sensible quality would be conserved throughout the change.
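The combinatorial rule just described is easily made explicit. The following sketch, a modern illustration of my own and no part of the historical material, encodes each element as the pair of binary digits given in Table I and counts a transition as normal precisely when one digit changes and the other is conserved:

```python
# Aristotle's four elements, encoded by the two primary-quality
# parameters of Table I: (hot, wet), with 1 for presence, 0 for absence.
ELEMENTS = {"Fire": (1, 0), "Air": (1, 1), "Water": (0, 1), "Earth": (0, 0)}

def normal_transition(a: str, b: str) -> bool:
    """A transition is 'normal' (the perimeter of Fig. 2) when exactly
    one primary quality is varied and the other is conserved."""
    changed = sum(x != y for x, y in zip(ELEMENTS[a], ELEMENTS[b]))
    return changed == 1

for a in ELEMENTS:
    for b in ELEMENTS:
        if a != b:
            kind = "normal" if normal_transition(a, b) else "diagonal (disallowed)"
            print(f"{a:5} -> {b:5}: {kind}")
```

Running it lists F-A, A-W, W-E, and E-F (in both directions) as normal, and the two diagonals, F-W and E-A, as disallowed, exactly as in Fig. 2.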
Three observations may be made about this Aristotelian analysis of elementarity. First, the four elements have never actually been observed in the universe, and in fact are unobservable because they would require the primary qualities in a pure and unmixed state, which is never experienced in bodies that come under sense observation. Yet the bridge to the knowledge of these elements is qualitative knowledge. In other words, it is only through the qualities that are known to exist in macroscopic bodies that one is able to reason to the existence of bodies endowed with idealized qualities that, in some way or other, serve to explain the appearances of composed bodies.
Second, both the four elemental bodies and the substrate that is their basic component, protomatter, may be said to be real, although neither is real in the same way as an existent sensible body. Protomatter is real only in the sense of being a potentiality for assuming various natural forms, whereas the four elements are never completely actual in any composite, but are always in some remiss state corresponding to the various degrees of remission of their primary qualities.
Third, qualities observed in the macroscopic domain are explained by idealized qualities, and idealized qualities are explained by a quality-less substrate. The substrate also lacks quantity, and thus is radically unpicturable and unobservable. Yet it is knowable by experience from a knowledge of substance and from an analysis of what happens in substantial change. Thus both the elements and the substrate serve as real explanatory principles of chemical transformation and of the composition of bodies. They also meet the rather stringent requirements of the logic of explanation. Heat is explained by non-heat, quantity by non-quantity, etc., so there is no circularity in the explanatory process.
"Hot-cold" and "wet-dry" may seem to be strange couplets with which to start discussing elementary particles, but they are really not far different from the "up-down" and "top-bottom" couplets used in recent quark theory. To understand this we must move closer to the terminology of modern science. We propose to do so first by sketching in broad strokes the development of two mixed sciences known from antiquity, mechanics and optics, and then the gradual replacement of qualitative terms by their metrical equivalents in these disciplines, thus permitting the use of mathematical equations to provide explanations and predictions of extremely broad ranges of physical and chemical phenomena.
Two Mixed Sciences: Mechanics and Optics
Apart from the science whose proper subject matter is nature, Aristotle recognized a number of mixed sciences, two of which approximate the subject matter of modern physics, namely, mechanics and optics. Mechanics is a science concerned with the forces and weights involved in moving bodies, and also with the study of motion in its quantitative aspects. Optics is a science concerned with the study of light rays, and for purposes here it is taken in a sense broad enough to include astronomy as the study of light rays coming from heavenly bodies. Both make extensive use of Euclidean geometry and of simple number theory involving integers and various types of ratios and proportions.
Mechanics took its beginning from the Mechanical Problems once attributed to Aristotle but now known to have been composed after his death by a member of his school. This was concerned with mathematical analyses of the basic machines used for moving bodies, such as the wheel and the lever, and with general principles these analyses provide. A specialization within this discipline known as statics (including hydrostatics) was developed extensively by Archimedes in the third century BC, on the supposition that mathematical figures can have weight and centers of gravity, and so can arrive at positions of equilibrium. Another specialization known as kinematics, which studies the spatio-temporal relationships involved in the motion of bodies, was developed at Merton College in Oxford in the fourteenth century. This is important for having introduced concepts such as instantaneous velocity, which prepared the way for the calculus. In the early seventeenth century Galileo Galilei experimented with motion and formulated a restricted principle of inertia and laws of falling bodies. Around the same time Johannes Kepler arrived at his three laws of planetary motion. By the end of the seventeenth century Isaac Newton had synthesized all these results in his Principia, the famous Mathematical Principles of Natural Philosophy (1687), which supplied the basic definitions, laws, and propositions on which most of classical mechanics is based.
Classical mechanics underwent further development throughout the eighteenth and nineteenth centuries, mainly through advances in mathematical formulation. Newton had used geometrical constructions throughout the Principia, although the principles of the calculus were behind his work. The problems he addressed were mainly those of motion on earth and in the solar system, where the straight-line motion of a mass point in empty space sufficed for his basic paradigm. Later advances came from employing differential and integral calculus to address problems in hydrodynamics, acoustics, and stress formation in rigid and deformable bodies. Vibratory motions were studied in detail, first in strings, then in membranes, then in three-dimensional solids, and in these areas vector analysis and complex number theory provided needed mathematical tools. Finally, in the early twentieth century, Albert Einstein saw the importance of time in signal transmission, and so introduced it as a fourth dimension in his new relativistic mechanics. This he proposed in both special and general theories, whose elaboration required additional mathematical techniques for dealing with multi-dimensional spaces.5
Like mechanics, optics had its beginnings as a mixed science among the Greeks, first with Euclid, who wrote an Optics concerned with the direct transmission of light rays, then with Ptolemy, whose Optics also treated their refraction by transparent fluids and their reflection by mirrors of various shapes. Eudoxus, Apollonius, and Hipparchus were mostly concerned with light rays coming from heavenly bodies, and worked out advanced geometrical constructions for the geocentric astronomy that was later formulated by Ptolemy in his Almagest. The Middle Ages saw further developments in the study of lenses, of radiant phenomena such as the rainbow, and theories of vision. In the second half of the seventeenth century Newton performed his famous experimentum crucis, showing that sunlight is composed of light rays of different colors with their own angles of refraction. He professed not to know the nature of light, though he seems to have held for a particulate theory. His views were improved on by Christiaan Huygens, who showed how light was propagated through spherical wavelets. Thomas Young advanced beyond this, proposing in the early nineteenth century that light consists of vibrations that are transverse to the direction of light's motion, with two possible modes at right angles to each other.
The next breakthrough occurred in the mid-nineteenth century through Michael Faraday's study of electricity and magnetism, which led ultimately to the electromagnetic theory of light. This was put in mathematical form by James Clerk Maxwell, who improved on Young's suggestion and proposed in 1862 that "light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena." The equations he used to explain polarized light and other poorly understood phenomena were very similar to the partial differential equations and vector operators then being used in advanced mechanics. Physical optics thus came to merge with analytical mechanics. It likewise found ready application in Einstein's relativistic mechanics when this developed in the early decades of the twentieth century. More importantly still, Max Planck's study of black-body radiation around 1900 led to the discovery that radiation is emitted in very small packets of energy called quanta. This marked the beginning of quantum mechanics -- yet a third "mixed science" destined to be dominant throughout the twentieth century and one that poses special problems of interest to scientists and philosophers alike.
Before turning to quantum mechanics, we must now consider a problem that was touched on earlier but has yet to be addressed. The point has been made that Aristotle's analysis in the Physics was based on ordinary experience, not on mathematics, but rather on the study of sensible qualities, that is, the qualities of bodies as presented to the senses. Is there any way in which such qualities can be made susceptible to mathematical treatment and thus enter into the reasoning processes of a mixed science? Within Thomism it would seem that this question can be answered in the affirmative. The basis for this reply is St. Thomas's teaching that there is a hierarchical ordering among the accidents found in a natural body, with quantity being the most fundamental, and with the remaining accidents coming to substance through quantity as an intermediate (quantitate mediante). Being received into substance through quantity, sensible qualities have a quantitative aspect, and it is on this basis that they can be measured. Being measurable, they themselves can take on the formality of metrical concepts, and so serve as middle terms in the syllogisms or demonstrations of the mixed sciences, as mentioned in the preamble to this paper.6
To explain this, it should be noted that physical qualities may be divided into two types depending on their proximity to sense experience. Some qualities are directly sensible, such as heat, color, sound, odor, and taste, all of which can be sensed immediately by the external organs of the body. Others are reductively sensible in that they can be known only through sensible effects; examples of this type are electricity, magnetism, and chemical affinities. Pertaining to this latter division are also motive and resistive powers, such as gravity and resistance to motion, which were already known to Aristotle.
Qualities in all these categories can be said to be quantified simply because they are present in quantified bodies. Their quantity can be measured in two ways, giving rise to two measures usually associated with physical quality, namely, extensive and intensive measurements. Qualities receive extensive quantification from the extension of the body in which they are present; thus there is a greater amount of heat in a large body than in a small body, assuming both to be at the same temperature. They receive intensive quantification from the degree of intensity of a particular quality in the body. If two bodies are at different temperatures, for example, there is a more intense heat in the body at the higher temperature, and this regardless of the size of either. Generally the intensity of a physical quality can be determined either from an effect, or from a cause, or from a quantitative modality the quality produces in the subject in which it is. An effect would be the change it produces in another subject, a cause would be the agent that produces the intensity in the subject, and a quantitative modality would be some concomitant variation between the intensity of the quality and a quantitative aspect of the subject. The subject bodies in such cases are called instruments, and they can be of various types depending on the quality being measured.
Through the use of instruments numbers can be assigned to qualitative intensities and they can thus enter into mathematical calculations. Along with the numbers, however, units of measurement must usually be specified for qualities, just as they are for quantities. For example, in the International System of Units now standard in science, the unit for mass (m) is the kilogram, that for length (l) is the meter, and that for time (t) is the second. Since these are proposed as the primary dimensional units, units for qualities are preferably expressed in terms of them. Such qualitative dimensions are given in terms of the exponents or powers to which the basic quantities (m, l, t) must be raised for their cumulative product to express the proper dimensional unit of the quality being measured. The only requirement put on these exponents is that they be numerical constants; they may assume zero, negative, and fractional values. These are shown in Table II for a few of the dimensional units that are employed in work on mechanics and electromagnetism.
One of the simplest qualitative units in this table is that for heat temperature (6), which can be measured through the expansion of mercury in a thermometer, actually a measurement of length (l). Heat capacity (7), on the other hand, measures the quantity of heat in a body. It has the dimensionality of energy per unit mass per degree of temperature, given in the table as m⁰lt⁻², which reduces to lt⁻² since m⁰ = 1. Energy (5), or force times distance or length, has the dimensional unit ml²t⁻², so when one divides this by mass (m) and by length (l, the dimensional unit for heat temperature), the resulting dimensional unit is lt⁻². Working in this way through the table, by applying the standard definitions of the various terms it is possible to verify the dimensional units that are there attached to them.
However, it turns out that not all qualitative measurements need have a dimensional unit. This can be seen in the last three entries in the table, those for specific heat (15), atomic weight (16), and molecular weight (17). For all of these the exponents are zero, indicating that the dimension in each case is a pure ratio. Specific heat, by definition, is the ratio of the heat capacity of a particular substance to the heat capacity of water. Each heat capacity has the dimensional unit explained above, but when the two are placed in simple ratio, these units cancel out and the result is a pure number. The same thing happens with atomic weight and molecular weight, as we are about to explain. Both of these express ratios to a unit weight taken by convention as one-twelfth the mass of the carbon atom (¹²C = 12.00000). Since the weight or mass units characteristic of various atoms and molecules cancel out when placed in ratio to this unit weight, their respective atomic and molecular weights likewise are pure numerics.
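Because each dimensional unit in Table II is nothing more than a triple of exponents on (m, l, t), the verifications just described reduce to integer arithmetic. The following sketch, my own illustration with the dimensional conventions assumed as reported above, carries out the two checks:

```python
from collections import namedtuple

# A dimensional unit as the triple of exponents on (mass, length, time).
Dim = namedtuple("Dim", ["m", "l", "t"])

def div(a: Dim, b: Dim) -> Dim:
    """Dividing physical magnitudes subtracts their dimensional exponents."""
    return Dim(a.m - b.m, a.l - b.l, a.t - b.t)

MASS = Dim(1, 0, 0)
ENERGY = Dim(1, 2, -2)        # force times length: m l^2 t^-2
TEMPERATURE = Dim(0, 1, 0)    # read as a length, via the mercury column

# Heat capacity: energy per unit mass per degree of temperature.
heat_capacity = div(div(ENERGY, MASS), TEMPERATURE)
print(heat_capacity)          # Dim(m=0, l=1, t=-2), i.e. l t^-2

# Specific heat: a ratio of two heat capacities, hence a pure number.
print(div(heat_capacity, heat_capacity))   # Dim(m=0, l=0, t=0)
```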
With this we are in a position to return to our third "mixed science," which we here examine in its early form, as it was first presented by Niels Bohr and others. This made use of Planck's concept of the quantum plus advances in chemistry and spectroscopic analysis, as well as selected portions of classical mechanics and electromagnetism, to present a unified view of elements and compounds as known at the beginning of the twentieth century.
The four-element theory of Aristotle was, of course, the longest lasting theory in the history of science, enduring from the fourth century BC all the way to the early nineteenth century. The "hot-cold" and "wet-dry" couplets of that theory prompted much work with furnaces and solutions in alchemy, and also with Galenic medicine, up to the Renaissance. By the nineteenth century metrical concepts were even available for dealing with heat and fluidity, as can be seen in Table II. But ultimately they proved inadequate for attacking the element problem and had to give way to "gravity-levity" as the preferred couplet for its solution.
In The Modeling of Nature we used the four forces of modern physics to characterize the natures of inorganic substances, and at this stage we introduce the first two of these, gravitational force and electromagnetic force.7 With hindsight we can say that only two basic instruments were required to investigate the effects of these forces, the mass spectrograph for gravitational forces, and the spectroscope for studying the emission and absorption of electromagnetic radiation. Advances in chemistry were also needed: first, the discovery of various laws of chemical combination, leading to demonstrations of the existence of atoms and molecules; second, the systematic study of chemical combinations, leading to the periodic table of the elements. Related to these were advances in physics: first the discovery of the electron as the unit of electric charge; then of the nucleus of the atom and its basic constituents, the proton and the neutron. Some of the physico-mathematical demonstrations implicit in these advances have been sketched in Modeling, and these may be consulted for details.8
Bohr took all this information and synthesized it in a quantized model of the atom in which electrons move around the nucleus in planetary orbits, structured in various shells. Within these shells he postulated that electrons move in stable orbits without emitting or absorbing radiation, as they would in classical electromagnetic theory. Under the influence of strong electrical fields or other external energy, however, electrons can make stepwise jumps from one shell to another. Bohr speculated that when an electron moves farther from the nucleus in this way it absorbs an amount of electromagnetic energy determined by the different energy levels of the two orbits; when it drops from an outer orbit to an inner one, it emits a similar amount. By formulating a series of rules stating which electron transitions are allowed and which are not, Bohr found that he could explain the emission and absorption spectra of many chemical elements. In effect, he could correlate the wavelength and intensity of the radiation characteristic of a particular element with the jumping of electrons in the atomic model of that element from one orbit or energy state to another.
Further refinements of Bohr's model began with Arnold Sommerfeld's replacement of circular orbits by elliptical orbits, whose differing angular momenta provided a second, azimuthal quantum number. Along with that came the possibility of the electron orbits having various orientations in three-dimensional space, giving a third, magnetic quantum number and additional energy states. Yet another refinement was the introduction of electron spin, that is, the rotation of an electron on its own axis, to make a fourth. In all, therefore, the energy state of each electron in an atom was now characterized by four quantum numbers. Finally came the introduction of a principle by Wolfgang Pauli specifying that no two electrons in an atom could occupy the same energy state at any one time, which was equivalent to stating that no two electrons in an atom can have the same four quantum numbers. How all these developments could be used to provide models of the hydrogen, helium, and sodium atoms respectively has been explained in The Modeling of Nature, from which my next illustration (Fig. 3) is taken.9
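The bookkeeping implicit in Pauli's principle can be made concrete. In the sketch below, offered only as an illustration, the four quantum numbers are given their now-standard ranges (n at least 1; l from 0 to n−1; m from −l to +l; spin ±1/2), and counting the distinct quadruples in each shell recovers the familiar capacities 2, 8, 18, 32, that is, 2n²:

```python
from fractions import Fraction

def shell_states(n: int):
    """Enumerate the distinct quantum-number quadruples (n, l, m, s) of
    shell n.  By Pauli's exclusion principle each quadruple can be held
    by at most one electron, so the count is the shell's capacity."""
    half = Fraction(1, 2)
    return [(n, l, m, s)
            for l in range(n)             # azimuthal: 0 .. n-1
            for m in range(-l, l + 1)     # magnetic: -l .. +l
            for s in (half, -half)]       # spin: two orientations

for n in range(1, 5):
    print(f"shell {n}: {len(shell_states(n))} electrons")   # 2, 8, 18, 32
```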
These advances, it must be noted, were closely tied to studies of the electromagnetic spectra of the different elements in the periodic table. The four quantum numbers were based on calculations of theoretical physicists using classical mechanics and electromagnetic theory. By what seems a remarkable coincidence, counterparts of their results could be found in four different types of spectral lines of the elements identified by spectroscopists working in different portions of the electromagnetic spectrum. These lines were described by them as sharp, principal, diffuse, and fundamental, and designated by four letters (s, p, d, and f), which could be correlated with the quantum numbers of the Bohr-Sommerfeld theory.
To complete the picture provided by the Bohr-Sommerfeld atom it is necessary to mention a device related to the spectroscope, namely, the mass spectrograph, invented by the British physicist F. W. Aston in 1918. This instrument was designed to provide accurate measurements of atomic weights. In it, ions or charged atoms are sorted out through the use of electric and magnetic fields in such a way that their paths of travel, and thus the positions at which they impinge on a screen or photographic plate, provide a measure of their masses. Experiments with the spectrograph show that the atoms of naturally occurring elements, although occupying the same place in the periodic table (and hence called "isotopes"), have nuclei of slightly different masses, depending on the number of neutrons within them. The element chlorine, for example, has an atomic number of 17; this specifies its number of peripheral electrons, which must be balanced by 17 protons in its nucleus, since the atom itself is electrically neutral. Most chlorine atoms have an atomic weight of 35, which means that 18 neutrons must be added to the 17 protons to give the proper weight. Some atoms of chlorine, however, have an atomic weight of 37; in these, therefore, there must be two more neutrons, or 20, to supply the additional weight.
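The chlorine arithmetic generalizes: for any isotope the neutron count is simply the mass number less the atomic number. A trivial sketch:

```python
def neutrons(mass_number: int, atomic_number: int) -> int:
    """Neutrons = mass number (protons + neutrons) - atomic number (protons)."""
    return mass_number - atomic_number

# Chlorine, atomic number 17, in its two naturally occurring isotopes.
print(neutrons(35, 17))  # 18 neutrons in chlorine-35
print(neutrons(37, 17))  # 20 neutrons in chlorine-37
```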
The most remarkable success of the Bohr-Sommerfeld model, however, was the way in which it could explain all of the chemical properties of the various elements in terms of the outermost electronic shells. These accounted for all the known valencies of the chemical elements, and thus gave a theoretical justification for the formulas used to describe chemical changes. The model provided a remarkable heuristic for expanding the science of chemistry, and brought it to the stage of being the most developed of the "mixed sciences" in the twentieth century.
With this I return to Aristotle's concept of nature and take up the second principle he identifies with nature, namely, natural or substantial form. Natural form for him is nature, but it differs from protomatter in that it is a real and actual principle, whereas protomatter, though real, is only a potential principle. Of itself, protomatter is unintelligible, but when actualized by form it becomes intelligible in the substances we know through sense experience. The case is different with natural form, for the human mind grasps it directly and instinctively. Natural form provides the window through which the world of nature is seen and through which many of the natures inhabiting it can be readily understood.10
In the world of the living it is comparatively easy to grasp the natural form of an oak, or a squirrel, or a horse, and so to answer the question "What is it?" in a general way. From this we can proceed to note what it has in common with, and is different from, other organisms, and so formulate its scientific definition, say, in terms of genus and species. Not so with inorganic substances. True enough, we can know some elements in this way, say, carbon, sulphur, lead, and gold. But can we know them as elements in the proper sense, that is, in a way that enables us to differentiate them from, say, water, or salt, or emerald? Before the development of chemistry as a science, and particularly the knowledge provided in the periodic table, the answer would have to be "No." Now, with the aid of the periodic table, it can be "Yes." We can know the natures of inorganic substances, both elements and compounds. This can be seen with the aid of the following two figures.
The first (Fig. 4) presents the periodic table in a novel way, one that shows its relation to early quantum mechanics.11 At the bottom center is shown the nucleus of an element's atom, and rising above it are some ninety energy levels to account for all the different states electrons can occupy in naturally occurring elements ranging from hydrogen to radium and beyond. The various shells are numbered in arabic numerals on the left, and these correspond to the periods of the periodic table in roman numerals on the right. Between the energy levels and the periods on the right are the symbols for all the elements (except the rare earths). Corresponding to these on the right are the divisions of shells into subshells, and then the further divisions of these into the spectral lines through which the energy levels of the different elements are known. Note that these are given in terms of the four quantum numbers, simply using the letters s, p, d, and f of spectroscopic analysis, and saying nothing about the details of electron orbitals. Further shown there is the order in which the various shells and subshells are filled up with electrons, as one proceeds from the lightest element, hydrogen, at the bottom, to the heaviest at the top.
My next transparency (Fig. 5) complements this peripheral electronic structure of the atoms by showing in the upper display the components of the nuclei and isotopes of the first ten elements in the periodic table.12 The lower display exhibits the peripheral electrons and valencies of the Bohr atom for the same elements, and below that some examples of the two types of molecular bonding, ionic and covalent, that result from these valency structures. These provide examples of typical molecules: for ionic bonds, HCl, BeO, NH3, and CaCl2; for covalent bonds, H2, HCl, CCl4, and Cl2.
In The Modeling of Nature I made the case that the information provided by the periodic table of the elements provides us with a knowledge of their natures far superior to any comparable knowledge we have of living organisms.13 All one need do is consult the Handbook of Chemistry and Physics to find all the essential features of the elements and their isotopes, plus their properties, in consummate detail. Tables of their compounds in the same source, both inorganic and organic, provide similar knowledge of their natures. And all of this information is reducible to sense knowledge, through the use of the metrical concepts we have explained above. What is more, we need not base our knowledge on theoretical entities such as elliptical orbits and spinning electrons. We tend to replace "element" with "atom" and "compound" with "molecule," but it is the elements and the compounds that fall directly under sense experience, not the atoms and the molecules. And when we measure the spectral lines that reveal energy levels, we do so in terms of wavelengths in the electromagnetic spectra of elements we handle, spectra that are themselves visible, either directly or reductively. The middle terms in our reasoning processes are thus both physical and mathematical. So, to answer my initial question, the mathematical physicist does have access to the natures of the non-living, and he does so through sense knowledge. Again, details will be found in The Modeling of Nature.14
So, we know the natures of elements and compounds. Do we also know the natures of planets and stars? I discuss this briefly in Modeling and propose that the answer is "No," and this for the simple reason that they do not have natures in the strict sense.15 Of the heavenly bodies, some, such as planets and asteroids, are mainly solids, whereas stars like our sun are principally hot gases. Earth itself is a mixture or aggregate of many different elements and compounds, held together by the force of gravity. Similarly the sun is a mixture of hydrogen and helium in the gaseous state. The unity of a star would seem to be analogous to the unity of the earth: largely a mass of different substances held together by natural forces of one type or another. And if current models of planets and stars are correct, they can go through a process of evolution and can have a history like many plants and animals. Yet, unlike plants and animals, planets and stars have no natural form; there is no unifying or specifying form within them guiding that history toward some perfective state, as there is in the case of organisms. The protomatter that is distributed throughout a star's bulk, for example, is informed by a variety of elementary forms that themselves are replaced by others, as the various potentials latent within the substrate are actualized, until the mass-energy of the star is exhausted and it ceases to exist as such. Much the same fate awaits planets and asteroids, as they break up and dissolve into the elements and compounds that are their basic components.
Let us turn now to later quantum mechanics, which has taken two different forms, wave mechanics and matrix mechanics, both said to be equivalent from the mathematician's point of view and both extremely difficult to explain in non-mathematical terms. Wave mechanics, in particular, raises interesting philosophical questions, possibly because people can visualize wave packets and ponder how they can travel at speeds faster than light, tunnel through barriers, perform spooky actions at a distance, and in general do things offensive to common sense. Related to this development are many discoveries in high-energy physics, which has seen an enormous growth in the number of so-called elementary particles, subatomic wave-particles, beyond the three mentioned thus far: the electron, the proton, and the neutron.
But before discussing strange particles, we should make an important point. Replacing orbiting electrons with wave functions or other theoretical entities in no way calls into question what has already been said. The observational basis for quantum mechanics, and the information it provides of natures in the non-living, remains exactly the same as previously: energy levels, revealed by spectral lines, and the frequency of transitions from one level to another, revealed by the intensity of spectral lines. The changes over the past fifty years are in the way scientists theorize about what goes on in the interiors of atoms and their nuclei, not about the experimental findings that ground their theorizing.
Some idea of wave mechanics may be gained from the materials displayed in my next transparency (Fig. 6).16 At the top is shown the famous wave equation formulated by Erwin Schrödinger early in 1926. Its dependent variable is the Greek letter psi, and so its solution is referred to as the psi-function. What psi stands for, unfortunately, is the subject of much dispute. Schrödinger originally thought it stood for electric charge distribution within the atom, thus giving it a physical meaning, but he later ruled this out as impossible. In 1926 Max Born gave psi a statistical interpretation, saying that it represented the probability of finding an electron at a particular place within the atom. This view was vigorously rejected by Schrödinger. And finally in the early 1950s David Bohm gave psi a realist or deterministic interpretation, holding that its results are determined by potentials within the atom. So we have three views of the wave equation: an actual physical function, a probability function, or a potentiality function. No physicist currently holds the first view, but the second and third are still the subject of vigorous dispute among physicists and philosophers of science.
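Since the transparency cannot be reproduced here, the equation may be recalled in its standard time-independent form, for a single particle of mass m moving in a potential V (the form the figure displays may differ in notation):

$$-\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi + V\psi = E\psi$$

Here ħ is Planck's constant divided by 2π, and the allowed values of the energy E are precisely the quantized energy levels discussed throughout this paper.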
In actual practice the wave equation is most often used by chemists, and for their purposes it suffices to interpret the equation statistically as a probability function. The wave equation for the hydrogen atom can be solved mathematically, and among its solutions are the six functions graphed in the upper left quadrant of the transparency. The ordinate here is probability. The functions are referred to as radial density plots of hydrogen orbitals, and they are labeled from (a) to (f) as the 1s, 2s, 3s, 2p, 3p, and 3d. The important thing to note here is the way in which the orbitals are designated -- in terms of shells (1, 2, 3) and subshells (s, p, d). This is spectroscopic terminology, not the terminology of the Bohr atom. In addition to radial density plots, moreover, it is possible to calculate the boundary contours of the orbitals, to give some idea of their orientation in three-dimensional space. The results for several of these are shown in the series of figures on the right side of the transparency, with the s orbital on the top directly under the wave equation, the three 2p orbitals under that, and the five 3d orbitals at the bottom. Some relationship can be discerned between these diagrams and the elliptical orbits of the Bohr-Sommerfeld atom, but the differences are more marked than the similarities.
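The first of these radial plots is easy to reproduce. A brief sketch, my own illustration rather than anything taken from the transparency, computes the radial probability density of the hydrogen 1s orbital in units of the Bohr radius and locates its peak:

```python
import numpy as np

# Radial probability density of the hydrogen 1s orbital, with r measured
# in multiples of the Bohr radius.  The normalized radial wave function
# is R(r) = 2 exp(-r), so the radial density is P(r) = r^2 R^2 = 4 r^2 exp(-2r).
r = np.linspace(0.0, 6.0, 601)
P = 4.0 * r**2 * np.exp(-2.0 * r)

# The density peaks at exactly one Bohr radius -- the one point of contact
# between the wave-mechanical picture and the old Bohr orbit.
print(r[np.argmax(P)])   # 1.0
```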
The final diagram is that on the bottom left quadrant, which illustrates how chemists use the wave equation to explain the bonding of atoms within a molecule. The molecule here is beryllium chloride, and the atoms are one beryllium atom and two chlorine atoms. The (a) part of the diagram shows the two chlorine atoms and their orbitals on either side of the beryllium atom and its orbitals, before any bonding takes place. The (b) part shows how actual bonding occurs, with the BeCl2 molecular orbitals now serving to explain the binding forces between the component atoms. It seems obvious that calculations of chemical bonds using wave mechanics yield results that are far superior to the much simpler insights provided by the older quantum mechanics of Bohr and Sommerfeld.
Wave mechanics thus is a potent instrument for studying the functions of electrons within atomic and molecular structures. It can also be used to speculate about electrons and photons when they are conceived as individual wave pulses outside the atom. Here the mathematics becomes complex, partly because of the infinities involved. The problem may perhaps be understood by a simpler example that has its roots in Aristotle. In the Posterior Analytics I,13 Aristotle observes that "it belongs to the physician to know that circular wounds heal more slowly [than other kinds], but it belongs to the geometer to know the reasoned fact" (79a15-16). The reason is that healing occurs along the perimeter of a wound, and the circle has the smallest perimeter for any given area; thus it will heal more slowly. Using calculus, one can also calculate the rate of healing. What is involved is a function which starts with the wound's area, A, and decreases exponentially with the passage of time. The fit is remarkable for all points along the curve except at the end. The mathematical function approaches the x-axis asymptotically, which means that it never reaches the x-axis, or, as some say, it meets the x-axis at infinity. But here nature doesn't obey the mathematics. At some point in time nature closes the wound and A goes to zero.
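In symbols, and purely as an illustrative reconstruction of the function just described, one may write

$$A(t) = A_{0}\,e^{-kt},$$

where A₀ is the initial area of the wound and k a positive healing constant. For every finite time t the function gives A(t) > 0, approaching zero only as t goes to infinity; nature, by contrast, closes the wound at some finite time, which is just the divergence between the mathematics and nature noted above.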
In mathematical physics, as noted at the beginning of this paper, two premises are ultimately involved in any proof, one a physical premise, the other a mathematical premise. Which premise should take priority in case of conflict? In my view the physical premise must be regulative over the mathematical. This goes contrary to much contemporary discussion of quantum mechanics, where, it seems to me, mathematics is driving the arguments. If physics is not doing the driving, then the philosophy of nature has little or nothing to contribute to the debate, which perhaps explains why it is consistently ignored.
A final word about matrix mechanics. The basic mathematics behind matrix mechanics is the same as that behind wave mechanics, but it uses a different formulation, one proposed by Werner Heisenberg in 1925. When Schrödinger's wave equation appeared, Heisenberg was troubled by it, because psi was not an observable, and he thought physics should stick to observables. Accordingly, he saw the goal of quantum mechanics to be the computation of two matrices, one a diagonal matrix which would list the observed energy levels of atoms and molecules, the other a related matrix that would list the transition probabilities between the various levels. In both cases one would be concerned with observables or measurements: the wavelengths of spectral lines and their intensities, both of which are available to the physicist. Heisenberg also saw these matrices as grounded in the potentia of Aristotle, his term for protomatter. As I see it, his view of the psi-function was ultimately a potentiality function, although he is often listed as following the Copenhagen interpretation, which sees it as merely a probability function.
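A toy example may help. The sketch below, illustrative only, takes the three lowest levels of a hydrogen-like atom, places them in Heisenberg's diagonal matrix, and derives the spectral-line frequencies from the Bohr frequency condition, the energy difference of two levels divided by Planck's constant; the measured intensities would fill a second, off-diagonal matrix of transition strengths:

```python
import numpy as np

# Three observed energy levels (electron volts), hydrogen-like values
# used here purely for illustration.
levels_eV = np.array([-13.6, -3.4, -1.5])
E = np.diag(levels_eV)              # Heisenberg's diagonal matrix

# Bohr frequency condition: each spectral line joins a pair of levels,
# with frequency equal to their energy difference divided by h.
h_eV_s = 4.135667e-15               # Planck's constant in eV*s
freqs = np.abs(levels_eV[:, None] - levels_eV[None, :]) / h_eV_s

print(np.round(E, 1))
print(np.round(freqs / 1e15, 2))    # line frequencies, in petahertz
```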
Our exposition thus far has been based on two powers distinctive of inorganic natures, gravitational force and electromagnetic force. At this point we must consider the two remaining powers, the strong force and the weak force, for knowledge of these is the main fruit of research in high-energy physics. To put these powers in context, Fig. 7 shows the main models we have been using thus far: at the top, the basic composition of the natural body, with its powers and sensible qualities on the right; in the middle, the Aristotelian model, with the basic couplets, hot-cold, wet-dry, and heavy-light, all directly perceived by the senses; and at the bottom, the modern analogue with the four forces of modern physics. Of those on the left, gravitational force explains mechanical motions, while electromagnetic force, the key concept, along with mechanics, explains the periodic table. Now we turn to the strong force, invoked to explain nuclear reactions, omitting, for lack of time, the weak force, which is required to explain radioactivity.17
The strong force is best explained in terms of the quark hypothesis, introduced in 1964 by Murray Gell-Mann but preceded in 1961 by his "Eightfold Way," which can serve to illuminate his later work. Analyzing the results of many experiments in high-energy physics, Gell-Mann arrived at a number of qualitative dimensions that seem to be conserved in strong interactions.18 Among these, four are important for our purposes, as shown in my next illustration (Fig. 8). They are: atomic weight or mass (A) and electric charge (Q), both of which we have already mentioned, and two other dimensions that are new, hypercharge (Y) and isotopic spin (I). Measuring values of these for all the meson and baryon states then known, Gell-Mann observed the symmetries displayed in the octet diagrams below Table III, that in the middle for mesons, whose atomic mass number is zero, that at the bottom for baryons, whose atomic mass number is one. Isotopic spin can be thought of as the spin of the nucleus of an isotope of an element, and on that basis may be intelligible. More mysterious is hypercharge, which Gell-Mann thought was something like a strong charge, although it produced strange V-shaped effects in cloud chambers. In view of these effects, he later changed the dimension to "strangeness."
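Though not spelled out above, the rule tying these dimensions together may be recorded here for reference; in the standard notation, with I₃ the third component of isotopic spin, the Gell-Mann-Nishijima relation reads

$$Q = I_{3} + \frac{Y}{2},$$

which is just the constraint the octet diagrams display geometrically, charge along one axis and isotopic spin and hypercharge along the others.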
Let us compare Gell-Mann's Eightfold Way to the simple qualitative matrix we used earlier to characterize the transformation of the elements in Aristotelian physics. For Aristotle, changes between the elements are possible only when one primary quality is conserved and the other modified. For Gell-Mann, this is likewise the case, for all particle reactions are based on some qualitative dimensions being conserved while others are modified. The abscissae on the diagrams represent electric charge (Q), while the ordinates are given for both isotopic spin (I) and hypercharge (Y). Notice that a diagonal transition is permitted in Gell-Mann's system, whereas it is not in the Aristotelian case. This seems to be possible because the mass number is conserved for either mesons or baryons, while both Q and I (or Y) are changing. In Aristotle's system, where only two parameters are variable (H and W) and there is no constant such as mass number to be conserved, diagonal transitions are automatically disallowed. But there is no reason to reject the possibility that protomatter lies behind both matrices as the ultimate conservation principle in the ontological order.
One could well question the sense in which "strangeness" can be a qualitative measure of an elementary particle. Apparently encouraged by his introduction of this mysterious attribute, Gell-Mann went on to formulate his quark hypothesis, which added a new entity, the quark, as the fundamental component of both mesons and baryons.19 The quark has many possible attributes, and when these are present in combination there can be thirty-six kinds of quark. According to the Standard Model, quarks come in different "flavors" and "colors," as explained on my next transparency (Fig. 9). As to flavors, there are six to choose from: up, down, strange, charm, bottom, and top. For each combination of these attributes there is an anti-particle with the same mass but opposite charge. As to colors, the choices are red, green, and blue. The names obviously do not mean anything: the pairs could be hot-cold as easily as up-down, or wet-dry as easily as bottom-top. All are related in one way or another to indirect measurements performed on nuclear components when these are subjected to high-energy cosmic rays or to heavy bombardment in particle accelerators.
Of all the possibilities, it is noteworthy that only two quarks, the up and down quarks, are stable; all the others enjoy but a transitory existence. Moreover, the general rule is that all baryons are composed of three quarks and all mesons are composed of a quark and an anti-quark. No one has ever observed a single quark, and so they are presumed, like protomatter, to be incapable of existing by themselves. They are also thought of as permanently "trapped" or confined within the baryons and mesons of which they are parts. And as to the baryons, only two of these are stable particles, the proton (composed of two up-quarks and a down-quark) and the neutron (composed of two down-quarks and an up-quark). All others likewise have a fleeting existence, probably representing transient states of matter at different stages in the formation of the universe.
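The charge bookkeeping behind these compositions is simple fractional arithmetic, using the standard quark charges of +2/3 for up and −1/3 for down. A minimal sketch:

```python
from fractions import Fraction

# Standard Model electric charges of the two stable quark flavors.
CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def baryon_charge(quarks: str) -> Fraction:
    """Total electric charge of a baryon from its three quark flavors."""
    return sum(CHARGE[q] for q in quarks)

print(baryon_charge("uud"))  # proton:  1
print(baryon_charge("udd"))  # neutron: 0
```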
The result of all this investigation, extending over fifty years, is that we are still left with only three basic particles, the proton, the neutron, and the electron. And none of these can be said to have a nature in the strict sense, although they enter into the composition of the elements and compounds whose natures we do know, as already explained. High-energy physicists provide evidence for the existence of mesons and baryons by the hundreds, of bosons such as the photon and gluons of different types, of leptons such as the electron, the positron, and various kinds of neutrinos. All of these are only transient entities, entia vialia, to use the Latin expression.20 We may think of them as transient natures, but this is not the sense in which Aristotle intended the term "nature." On the other hand, he did intend nature to mean protomatter. And here we have the clue to the significance of particle physics. Protomatter is the ultimate substrate, the basic potency that underlies the operations of nature. The different groupings of strange properties fail to yield a particle ultimate, but Aristotle's ultimate is still there, the potential correlate of the natures we have come to know in the world of the inorganic.
1. The full title of the book is The Modeling of Nature: Philosophy of Science and Philosophy of Nature in Synthesis (Washington, D.C.: The Catholic University of America Press, 1996). The paper is entitled "Thomistic Reflections on The Modeling of Nature: Science, Philosophy, and Theology," forthcoming.
2. "Transient natures" are only touched on here. A fuller treatment will be found in "Thomistic Reflections." Also relevant are my essays "St. Thomas on the Beginning and Ending of Human Life," Thomas Aquinas: Doctor humanitatis hodiernae (Rome: Società Internazionale Tommaso d'Aquino, 1995), 394-407, and "Nature, Human Nature, and Norms for Medical Ethics," Catholic Perspectives on Medical Morals: Foundational Issues, ed. E. D. Pellegrino et al. (Dordrecht-Boston-London: Kluwer Academic Publishers, 1989), 23-53.
3. The materials treated here are summarized from my "Elementarity and Reality in Particle Physics," Boston Studies in the Philosophy of Science 3 (1968): 236-271, reprinted in my From a Realist Point of View: Essays in the Philosophy of Science, 2d ed. (Lanham-New York-London: University Press of America, 1983), 185-212.
4. See The Modeling of Nature, 58-63, explained more fully in "Thomistic Reflections."
5. Mathematical models that might help one understand this development were omitted from The Modeling of Nature because of their complexity. They were introduced in Part II of "Thomistic Reflections" and are further developed in what follows.
6. On metrical concepts, see The Modeling of Nature, 239-244 and 409-414; also my "The Measurement and Definition of Sensible Qualities," The New Scholasticism 39 (1965): 1-25, reprinted in From a Realist Point of View, 2d ed., 73-97.
7. See Chapter 2, pp. 35-75.
8. See pp. 292-308 and 364-376.
9. See pp. 38-49.
10. See The Modeling of Nature, pp. 9-12, 31-34, 45-49, and 53-59.
11. This figure is adapted from Philippus Soccorsi, S.J., De vi cognitionis humanae in scientia physica (Rome: Gregorian University Press, 1958), 141.
12. Likewise adapted from Soccorsi, De vi cognitionis humanae, 131.
13. See pp. 291-292.
14. Ibid.
15. See pp. 63-67 and 73-75.
16. These materials are adapted from K. M. Mackay and R. Ann Mackay, Introduction to Modern Inorganic Chemistry, 2d ed. (London: Intertext, 1973), 18, and James E. Huheey, Inorganic Chemistry: Principles of Structure and Reactivity, 3d ed. (New York: Harper and Row, 1983), 12.
17. For modeling the inorganic, see The Modeling of Nature, 53-58 and 70-73.
18. Again see my "Elementarity and Reality in Particle Physics" (note 3 above). Further information is given in G. F. Chew, M. Gell-Mann, and A. H. Rosenfeld, "Strongly Interacting Particles," Scientific American 210 (1964): 74-93. For technical details see M. Gell-Mann and Y. Ne'eman, The Eightfold Way (New York and Amsterdam: W. A. Benjamin, 1964).
19. The basic source here is Murray Gell-Mann, The Quark and the Jaguar: Adventures in the Simple and the Complex (New York: W. H. Freeman and Company, 1994).
20. On transient entities, see note 2 above.