Physical Theories: An Overview

Now that I have described how modern physics was founded by Galileo Galilei as a new science, I will follow the path of physics further and show how the legacy of the pre-Socratics and the ancient mathematicians has borne fruit. After the decline of ancient culture it took about 2,000 years before this legacy could be revived and new heights of knowledge could be reached. The fact that it took so long may have many reasons – if one can speak of reasons in history at all. However, I do not wish to enter into such considerations here.

First, I want to give an overview of the theories that have been developed and established in physics over the course of time. That they always had to prove themselves as the “better ones” in competition with other theories was explained, for example, in (Honerkamp, 2017, p. 111ff). In the next chapters I will show what role the questions raised by the pre-Socratics have played in the development of these theories. In particular, I will discuss how the theories developed and how they relate to each other, and I will trace the path along which theories have repeatedly been merged or unified. Thus, today we speak of only two great theories and seek to recognize these two as parts of a single “theory of everything”. This would answer the noblest question of the pre-Socratics, the question of a “One”, albeit in a completely different way than one could have imagined at that time.

The development and establishment of a physical theory always involved the explanation of phenomena of a certain type. We have already seen that the phenomenon of “motion” was the first theme that interested people in antiquity and again in the Renaissance. It is the most original and probably also the most general phenomenon that we know. 

We encounter the phenomenon of “light” in a similarly direct way. So, it is no wonder that at the beginning of modern physics not only the phenomenon of motion was dealt with, but experiments with light were also carried out, as we know them from Isaac Newton, for example. A century later, people began to study other, seemingly quite different phenomena, electrical and magnetic ones. Finally, at the turn of the 20th century, kinds of radiation were discovered which obviously differed greatly from light rays, and among these new rays there were again different types.

The space of phenomena

In short, the history of physical theories is a history of discoveries in a “space of phenomena” in which certain objects have always been in play. In this space one can identify large areas containing phenomena that seem so similar to each other that one is tempted to invoke the same cause for their explanation. Thus, over time, the idea emerged of fundamental forces acting between planets and the sun, between electrons and other discovered objects. Eventually, four such interactions were distinguished: gravitational, electromagnetic, strong and weak interaction. This distinction is still very helpful for an overview of the set of theories developed in the 400 years since Galileo.

There are, however, two other respects in which one should distinguish the theories. It is not only the interaction or the force that can be decisive for a phenomenon; the length scale on which it appears can be decisive as well. Thus, phenomena can be distinguished according to whether they appear in the world of the smallest dimensions, the largest dimensions or the middle dimensions.

Finally, an aspect will become important that has to do with our cognitive abilities, namely the question of whether we must describe the phenomenon as a complex one, that is, as one for which it is not sufficient, for an understanding, to consider only a few objects with a few important properties. Many or very many objects can form a system that has new properties which are not inherent in the individual objects themselves but only “emerge” through the interaction of the objects. Water, for example, has the property of being liquid. This does not apply to its components, the H2O molecules.

With these two characteristics, spatial size and complexity, we can already consider aspects that allow an overview in the form of a landscape of phenomena. If one enters the characteristic length of some objects that play a role in physical theories into a coordinate system in which this size is plotted against the complexity of the objects, one obtains e.g. Fig.4.2.

Fig.4.2: Rough classification of certain objects according to size and complexity. Objects like planets can appear at different places, depending on how many of their properties one wants to consider (R stands for the order of magnitude, N for the number of degrees of freedom, i.e. the complexity).

In this figure we can show how far we have explored the space of natural phenomena today with our physical theories. It also makes clear that in addition to the physics of fundamental interactions, there is a very large area of complex systems for which fields of physics such as thermodynamics or statistical mechanics, solid-state physics, etc. are responsible. This should be kept in mind, even if we do not deal with it here and mainly consider the wide range of spatial scales in the area of “simple” systems – from 10⁻¹⁵ to 10²⁰ m.

The world of the middle dimensions, in the range of about 10⁻⁴ to 10¹⁰ m, is the one most accessible to us intellectually, because we ourselves, as participants in this world, can have direct experiences with it. Thus, the phenomena of this world are also the subject of the earliest physical theories, which are also called classical theories. The exploration of the space of phenomena thus started in the world of the middle dimensions.

At the beginning of the 20th century, phenomena of the world of the smallest dimensions were discovered. One had to realize that the concepts of the world of the middle dimensions were no longer suitable here. A completely different concept, the “quantum”, replaced the concept of a material object and gave physics on this scale the name “quantum physics”. At the same time a modern cosmology and astrophysics began to emerge. Today we hear of particularly spectacular discoveries in this field of the largest dimensions.

Classical physics, quantum physics and cosmology: this is a classification that can also be described as the physics of the medium, the smallest and the largest dimensions. Cosmology has so far not required a conceptual apparatus of its own, as quantum physics has, which is why it is usually counted as part of classical physics if one wants to emphasize the methodological aspect.

A distinction by spatial size alone does not, of course, exhaust the space of phenomena. There are other quantities which, measured by the conditions of our world of daily experience, can be small or large. Particularly prominent in this context is speed; but the strength of the fundamental forces, in particular of gravity, will also be significant for the nature of physical theories. The landscape sketched in Fig.4.2 should therefore be imagined only as a slice of the whole space of phenomena.

The exploration of this space of phenomena resembles the exploration of our Earth in the time of the great discoveries in the 16th century. People spoke of the discovery of the “world”, although it was only new regions of the planet Earth that were gradually being discovered at that time. Today one knows almost every corner of the Earth and “reaches for the stars”.

Likewise, we know all the laws of nature in the world of the middle dimensions, but only at the fundamental level. The more complex the systems in these dimensions are, the less familiar they are to us today. But the more we limit ourselves to the fundamental side, the further we have advanced into the world of the smallest and also the largest dimensions.

Even if it were possible in some years or decades to establish a theory for all fundamental interactions, physics would not be at its end. In the direction of complex systems there are still many questions waiting for an answer. The transition to chemistry, biology and cognitive science will be fluid. And in the exploration of life and consciousness, too, one will not be able to ignore physical conditions.

The theories of classical physics

Classical physics is dominated by three large phenomenon areas: the phenomenon of motion and the two areas in which we encounter the fundamental forces of gravity and electromagnetism, respectively.

Phenomena from these areas have been known since ancient times. “Nothing is older than motion,” we may quote Galileo once again. For the pre-Socratics, motion or non-motion always played a role; Aristotle distinguished different types of motion and formulated a first kind of theory of motion. Even in the Middle Ages there were always natural philosophers who wanted to get to the bottom of the nature of motion.

Gravity was also an everyday phenomenon. For Aristotle it was a quality that made all bodies of the sublunar world strive towards the centre of the world. The sphericity of the earth was also later explained by such a “natural striving”.

We read less about magnetic and electrical forces in early sources, but magnetic and electrical phenomena were already known in ancient times. If one rubbed a piece of amber, it would attract dust or shreds of wool. Iron was attracted by a lodestone, and it was discovered that splinters of such stones always align themselves in a north-south direction.

The physical theories that today explain all basic phenomena from these three areas are

– for motion: Newton’s and Einstein’s theory of motion,

– for gravitation: Newton’s and Einstein’s theory of gravitation,

– for electrical and magnetic phenomena: Maxwell’s theory of electromagnetism.

Newton’s theories of motion and of gravitation are normally subsumed under the name classical mechanics. Newton’s theory of gravity essentially consists of a law for the force between two material bodies. Newton was able to use this law to explain the motion of the planets within the framework of his theory of gravity.
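
In modern notation (not Newton’s own wording), this force law is usually written as

\[
F = G\,\frac{m_1 m_2}{r^2},
\]

where m₁ and m₂ are the masses of the two bodies, r is their distance and G is the gravitational constant.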

Einstein’s theory of motion is the special theory of relativity; Einstein’s theory of gravity is the general theory of relativity. Both represent extensions of the corresponding Newtonian theories to a larger range of phenomena: for motion to “higher” velocities, for gravitation to “higher” velocities and to “stronger” gravitational forces. It will still be necessary to make precise what the terms “higher” and “stronger” mean in each case.

Maxwell’s theory of electromagnetism serves to explain all electrical and all magnetic phenomena as well as those phenomena in which electrical and magnetic effects are mutually dependent. It is the result of a unification of two earlier theories, one for electricity and one for magnetism.

Gravity and electromagnetic forces act over long distances. Otherwise, we would not feel them all the time. They are therefore called long-range, in contrast to the short-range forces that were only discovered in the world of the smallest dimensions. The latter hold the world “together at its innermost” and their reach does not extend beyond that. Of course, the long-range forces can also act at short distances; electromagnetic forces are in fact quite important for understanding the structure of atoms. Gravitational forces at the level of atoms, however, have not yet been detected. The masses of the building blocks of the atoms are obviously much too small.

The theories of quantum physics

The establishment of Maxwell’s theory, in particular through the discovery of electromagnetic waves in 1886, increasingly drew physicists to the question of how an electric current and how electromagnetic radiation can be generated in matter. In the end, the question of the structure of matter stood at the centre of attention.

From the pre-Socratic Leucippus and his follower Democritus one knew the concept of the atom, a smallest indivisible particle (Greek ἄτομος, indivisible). The chemists of the 19th century used this idea with great profit for explaining the regularities in the reactions of different chemical elements. But there were also vehement opponents, because no one had yet “really seen” an atom, and the idea of indivisibility only raised new questions.

But other questions also arose. In the “golden years of physics” from 1895 to 1898 further rays were discovered, such as X-rays, cathode rays, α-, β- and γ-rays. Finally, there was heat radiation, a phenomenon that had been known for some time: all bodies become red, then bright red and finally yellowish-white when heated steadily; and one feels that heat emanates from them.

The origin and nature of these rays had to be understood. It was a very fruitful time for physics, and during this time the idea of atoms as the building blocks of matter was to establish itself, but only as a milestone on the way to ever smaller building blocks. One could then obtain an explanation for all these rays and thereby gain a consistent picture of the structure of matter and of the atom.

In the attempt to develop this picture into a consistent theory, success was only achieved after physicists dared to describe the relationships discovered between the experimental results with a completely different mathematical conceptual apparatus.

A test case for every proposed theory of the structure of the atom was the calculation of the possible energy states of a hydrogen atom. The success or failure of a mathematical calculation thus now decided the success of a theory in this world of the smallest dimensions. The building blocks of an atom such as electrons, protons or neutrons could then no longer be regarded as particles in the sense of classical physics and of the everyday world. They were soon called “quanta”, like the energy packets Max Planck had talked about in a lecture on 14 December 1900, when he gave an explanation of thermal radiation. Incidentally, the date of this lecture is regarded today as the birthday of quantum physics.
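
For orientation, the energy states of the hydrogen atom mentioned above are, in today’s notation and as first derived by Niels Bohr in 1913,

\[
E_n = -\frac{E_{\mathrm{R}}}{n^{2}}, \qquad E_{\mathrm{R}} \approx 13.6\ \mathrm{eV}, \qquad n = 1, 2, 3, \dots
\]

Every candidate theory of the atom was measured against this series of values.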

The quantum theories responsible for all the fundamental phenomena of the world’s smallest dimensions are first of all

– quantum mechanics, to a certain extent the replacement for classical mechanics

– quantum electrodynamics, the continuation of electrodynamics on the atomic level.

An explanation of the origin and nature of α- and β-rays could only be achieved by introducing two completely new types of forces, the “strong” force, which is responsible for the binding of the building blocks of the atomic nucleus, and the “weak” force, which can cause the transformation of a neutron into a proton, but also serves to describe the decays of other later discovered “particles”. Thus

– theories of weak and of strong interaction

were created. These two theories were constructed according to the model of quantum electrodynamics. This soon led to the desire to describe these three interactions within a unified theory. As an intermediate step the

– theory of electroweak interaction,

a unification of the electromagnetic and weak interaction was found, and finally

– the unified theory of the electromagnetic, weak and strong interaction, the so-called standard model.

Today, this model is regarded as the basic quantum theory. Quantum mechanics now plays the role of a theory for a limited range of phenomena in which relativistic effects do not have to be taken into account and in which there is no decay and no generation of particles.

Figure 4.3 shows the development of the individual theories over the course of time.

Fig.4.3: Timetable for the emergence of physical theories of fundamental interactions

Galileo and the New Science: Experiment and Mathematics

The notion of a rule of inference was the dominant theme of the second part of this blog. Aristotle had already recognized such rules as the decisive tool of thought for dialogues and discourses. An inference, according to him, is “a discourse in which some things are presupposed and then something different […] results from it”. He also realised that the rules themselves and the nature of the assumptions were important. Thus, he distinguished between the logical inference and the dialectical inference.

The logical inference was at the centre of his teaching on such tools of thought. Here, clear rules of inference could already be given at that time. Even today, any introduction to logic begins with an examination of these rules. The further development of Aristotelian logic in the form of propositional logic is the basis for all further studies of human cognitive abilities.

The realization that, in a sea of mysticism and dialectic, there is any possibility at all of transferring the truth of statements to another statement moved me greatly in my youth, once I had become properly aware of it.

What is the use of all this? One can establish a logical order between statements, in which it becomes clear which true statements follow from which other true statements. One can start from true statements and build on them a whole edifice of thought that consists only of true statements.

But – what statements can you start with? That was the big question.

The mathematicians and logicians of antiquity had already demonstrated how this question about a beginning of true knowledge can be answered. Aristotle had shown, as already mentioned in an earlier chapter, that all other syllogisms can be derived from the syllogisms of the first figure. He thus solved the problem of how to arrive at true statements at all by regarding the syllogisms of the first figure as true propositions. These were immediately evident to him.

A few decades later, Euclid of Alexandria logically ordered the knowledge about geometric figures and bodies and thus created the first larger axiomatic-deductive edifice of thought. Here, too, he had to regard a few sentences as true at the beginning. They seemed evident from intuition.

Thus, in term logic and in geometry it had already been demonstrated how knowledge can be extended, with a secure transport of truth, into an axiomatic-deductive system. Throughout the centuries, mathematics has remained an unsurpassed model for such an organization of secure knowledge.

There had been attempts to introduce a similar rigour of argumentation into philosophy and ethics. Such approaches, however, all came to nothing (see Wikipedia: Mathesis universalis). Had these simply been the wrong areas for a rigour of thought on the model of mathematics?

Perhaps axioms did not necessarily have to be immediately evident; perhaps it was more important to find a source of true knowledge at all. Just as Euclid could draw on a large number of mathematical proofs, arrange this material according to logical points of view and supplement it where necessary, a “small” axiomatic-deductive structure of thought might also emerge once some true statements are known, by clarifying the logical relationships between them. Gradually one could then combine these “small buildings” into larger ones.

Galileo Galilei was the first to recognise nature as this source of true knowledge, and also the importance of mathematics for the formulation of such knowledge. He was the first to describe the result of a physical experiment in the language of mathematics.

He certainly saw the implications of this combination of mathematics and experiment. It was immediately clear to him what a revolution such a mathematisation represented for the understanding of science at that time. Thus, he spoke of a “new science” which he had founded. His sentence “The Book of Nature is written in the language of mathematics” bears witness to this, as does a passage from his letter to the Tuscan Secretary of State Vinta in 1610: “Therefore, I take the liberty of calling this a new science discovered by me from its foundations”.

Galileo thus took up the Pythagorean idea again, but in a completely new way. He probably also saw that there is an order, that is, there are regularities in nature which can be expressed in mathematical relations, and he had become acquainted with the rigour of mathematical inference through his study of Euclidean geometry. But he also recognized that one must “question” nature through experiments in order to discover this order of nature, to turn it into true statements in mathematical language and to bring these into a logical order. Not empiricism alone, not mathematics alone, but experiment and mathematics together are the pillars of his new, rigorous science.

We all know the consequences of this discovery, without which our world today would be a completely different one. At some point, however, this “new science” had to be discovered; nature and mathematics – or rather nature and logic – are too close to each other.

When is an implication true?

Why does empiricism play such an important role, why do “inquiries” of nature in the form of experiments matter so much, if one wants a theory after the model of Euclidean geometry, that is, an axiomatic-deductive system? Let us look again at the modus ponens as the prototype of a logical inference:

A, A → B ⊨ B.

In order to deduce a statement that is incontestably true, the premises A and A → B must be true. There is one statement, namely A, which occurs in both premises. The implication forms the bridge to a new statement, namely B, which is then deduced. There must be such “bridges” in every rule of inference, because nothing can be inferred from statements that are completely independent of each other. The syllogisms, too, each have a middle term which occurs in both premises.
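
For readers who like to see this spelled out: the validity of the modus ponens can be checked by brute force over all truth-value assignments, for example with the following small Python sketch (the helper function implies is, of course, our own choice of name):

from itertools import product

# Check that modus ponens is valid: in every assignment of truth values
# in which the premises A and (A -> B) both hold, B holds as well.
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

valid = all(
    b                                   # the conclusion B ...
    for a, b in product([True, False], repeat=2)
    if a and implies(a, b)              # ... in every case where both premises hold
)
print(valid)  # prints: True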

A true implication A → B means that A is sufficient for B: whenever A holds, B holds as well. Where is that the case?

We can find true implications by questioning nature. We then receive the following answers: “If I throw a ball into the air, it falls to the earth” or “If an electric current flows in a wire, there is a magnetic field in its environment”. The experimental physicists are therefore suppliers of true implications, which we then also call laws of nature.

True implications can also be found if we transform the statement “All Greeks are human beings”, for example, into “If x is Greek, then x is human.”

Here we have formed the concepts “Greek” and “human” in such a way that the implication is true. The statement thus becomes true by virtue of how we form the concepts.
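
In the notation of predicate logic, which will reappear later in this blog, such a statement can be sketched as

\[
\forall x\,\bigl(\mathrm{Greek}(x) \to \mathrm{Human}(x)\bigr),
\]

and it is true simply because of the way the predicates “Greek” and “Human” have been defined.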

Beyond that, however, we are already at the end of our rope. All other implications presumably fall to the dialectical inference, i.e. such an implication belongs to the category of sentences about which Aristotle said:

Sentences are credible if they are recognized by all, or by most, or by the wise; and of the latter, by all, or by most, or by the most experienced and credible.

We can add: and what is recognized by “wise men” also depends on the time. Let us only think of the laws of jurisprudence, e.g. §1356 of the German Civil Code (BGB), which until 1977 still read: “The wife runs the household on her own responsibility. She is entitled to be gainfully employed insofar as this is compatible with her duties in marriage and family.”

When it comes to rules for human coexistence, morals, customs and traditions, in short everything that nature does not tell us, there can be no universally acceptable true implications. We are referred to the dialectical inference and thus to a negotiation about which implications are to be posited as true. Here we can only “posit” truth, not find it.

The consequence of this is that the statements of the natural sciences are universally valid, whereas there are countless religions and legal systems. In the natural sciences, too, there is change over time. However, as we will see in the next chapters, this is a kind of evolution, a “finding of the ever better” among basic assumptions, based on ever new discoveries about the behaviour of nature.

For some time, it was believed that rules for human coexistence could also be read off from human nature. Such a doctrine of natural law can be used for the most diverse ideologies. Ultimately, it is always the “wise men” who decree to be true the sentences that actually only seem credible to some. The Catholic Church still adheres to this doctrine today. For centuries, however, one has spoken of a “naturalistic fallacy” when one infers “ought” from “is”. An implication that links statements about what is with statements about what ought to be cannot be read from nature. We owe the first explicit formulation of this insight to the philosopher David Hume (1711 to 1776).

The new science of Galileo Galilei

The “hot” topic of natural research in Galileo’s time was motion. In his Discorsi he says: “Nothing is older than motion, and about it the writings of the philosophers are neither few nor small; nevertheless, I have discovered by experience a great number of its properties, and among them some very much worth knowing.” Motion had already been an issue for the pre-Socratics. Aristotle had distinguished different classes of motions and had found a special explanation for each. Motion is the phenomenon that we encounter most immediately, but it can also be observed in the sky as the course of the stars. If one wanted to learn anything at all from nature, one first had to “understand” motion.

What was the experiment Galileo used to study motion, and what form of mathematics did he use to describe the results? How Galileo approached the problem is remarkable and symptomatic of the course of modern science. He did not focus on “the whole” as the pre-Socratics did, nor did he try to create a general overview like Aristotle. Instead, he started “on a small scale”. He let a small, smoothly polished ball roll down an inclined plane, i.e. an inclined narrow wooden board into which he had cut a channel – child’s play by modern standards.

This turn of view alone demonstrates the independence of his thinking, as is characteristic of a genius. Even in Goethe’s day, philosophers still had to ponder “what holds the world together at its innermost”, and Faust has only mockery for Mephistopheles when the latter labours over human beings: “You can destroy nothing on a large scale, and so you now begin it on a small scale.” Religions know only this question about “the whole”.

In fact, Galileo had taken up the trail of Xenophanes again. If one trusts that a “search for the better” will be possible, one also appreciates “small successes” in the search for knowledge; one looks for a foundation on which one can build. This is how modern science and modern technology work. That is why there is research and also development.

Galileo now had to measure times and distances for each roll of the sphere. How he managed in particular to determine a unit of time, using his feeling for the even beat of a song, is described in detail in (Fölsing, 1983, p. 177ff). In his notes he reports: “… repeated probably a hundred times, we always found that the distances behaved like the squares of the times, and this for every inclination of the plane, that is, of the channel in which the sphere ran.” (Discorsi, after (Fölsing, 1983, p. 174)).

Galileo formulated the result in the form of proportions, of ratios, as was customary at the time; no other way had yet been learned. Time intervals and distances were quantities of different physical dimensions, and it was not yet understood how such quantities could be directly related to one another. Therefore, he wrote down his result not in the form of a statement that the distance is proportional to the square of the time required, but as an equality of the ratios of two distances and of the squares of the corresponding times. In a graph in which the times are plotted against the distances, this presents itself as a semi-parabola, as is indeed found in the Dialogo Quarto of Galileo’s Discorsi in the discussion of thrown bodies (Fig. 1).
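
In modern notation, which Galileo did not yet have at his disposal, his result for two runs over the distances s₁ and s₂ in the times t₁ and t₂ reads

\[
\frac{s_1}{s_2} = \frac{t_1^{2}}{t_2^{2}}, \qquad \text{i.e.} \qquad s \propto t^{2},
\]

which today we would write as s(t) = ½·a·t², with a constant a (the acceleration along the plane) that depends on the inclination.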

Fig. 1: Semi-parabola as drawn by Galileo in the discussion of thrown bodies ((Galileo, 2015, p. 276), after (Simonyi, 1990, p. 200)).

Here something must be said about the state of mathematical knowledge in Galileo’s time. It was hardly higher than what had been known since late antiquity and was presumably taught in this form at the universities of the time in the faculties of the “artes liberales”, the “liberal arts”. In mathematics one thus thought predominantly in geometrical terms, since geometry had always been dominant in antiquity. It was only about a generation after Galileo that René Descartes (1596 to 1650) developed an “analytical geometry” in which geometric relations could be expressed as arithmetic relationships. Geometric problems could thus be analysed within the framework of arithmetic. Afterwards, mathematics became essentially arithmetic and algebra, the doctrine of transforming arithmetic relations. But the fact that the relationship between times and distances in the case of the inclined plane could now be represented by a parabola fitted well into a world in which mathematics consisted for the most part of geometry.
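
To give just one example of what this step made possible: in analytic geometry a circle of radius r around the origin is no longer a drawn figure but the arithmetic relation

\[
x^{2} + y^{2} = r^{2}.
\]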

Galileo had been initiated into the beauty and stringency of Euclid’s geometry by the engineer and geometer Ostilio Ricci. He was thus already “infected” by the idea of having to arrange his experimental statements logically. He was therefore also looking for a principle from which all these statements could be derived. At first, however, he was on the wrong track; only four years later was he able to correct this error (Fölsing, 1983, p. 175ff). Such a “theory” of falling motion would soon have become obsolete anyway. He could not have imagined that at the end of his century a theory would emerge that could explain all motions in the sky and on earth from a few axioms. His falling motion became a small special case within it.

In the development of this theory, the English physicist and mathematician Isaac Newton stood on Galileo’s shoulders. The first axiom of this theory was based on Galileo’s hypothesis, by which he had been guided in his falling experiments. It was the hypothesis that, on a horizontal plane, the motion of the rolling sphere “would continue forever at a uniform speed” if it were not affected by unevenness of the ground (Galilei, 1982, p. 30).

For Aristotle, a motion that gradually comes to rest through friction is the natural, actual motion. Motion is thus for him a process; only by “force” can it be maintained. Rest is then a very special state, “essentially” different from motion.

For Galileo, on the other hand, uniform motion is the natural one, and it is a state. Through external circumstances such as friction a body can come to rest, but rest is only a special case of such a state. This insight stands at the beginning of modern physics.

With which statements can one begin the formulation of an axiomatic-deductive system for a theory of motion? For Newton the answer to this question was obvious: Galileo’s insight, which was later formulated as the law of inertia, had to stand at the beginning of a theory of motion.

Let us take a closer look at which statements have been put at the start in this theory, but also in other physical theories. We will see that this happened in very different ways. But let us first get an overview of these theories in the next chapter.

The Syllogisms of Aristotle

In the last blog post we encountered Aristotle’s three types of inference: the logical inference, the dialectical inference and the fallacious inference. Now we will have to deal with the logical inference, which derives a true conclusion from two true premises.

This is, of course, the most important kind of inference, for it ensures a safe “transport” of the truth of statements. That such a possibility exists at all is the good news, and it cannot be overestimated. The “bad news” is that no truth is gained in this way; it is only passed on. The premises must already be true. The question of how and where one can begin with true statements in practice will concern us greatly later on.

Within the logical inference one can again distinguish several kinds. The individual proofs, as Aristotle also called them, differ in the nature of the premises and of the conclusion, which results from a “summing up” of the premises. In Greek such a “summing up” is “συν-λογισμός” (syllogismos); thus one also speaks of a syllogism, and the doctrine of syllogisms is called syllogistics.

Let us first look at an example of syllogism:

All humans are mortal.
All Greeks are humans.
Therefore: All Greeks are mortal.

So there are three statements: two premises from which one proceeds, and a conclusion in which the premises are “added together”.

The form of sentences and their representation

In this example, all sentences are of the form “All A are B”. The sentence “All A are B” can also be formulated as follows: “B belongs to all A.” This formulation suggests that B is a predicate of all A: B is attributed (predicated) to all A, e.g.: to be mortal belongs to all human beings.

This formulation is also the one that comes closest to the Greek text, so it is the more original form. In scholasticism, however, it was rewritten as “All A are B”: all A have the predicate B, e.g. all humans are mortal.

The above syllogism is then in its original form:

To be mortal belongs to all human beings,
To be a human being belongs to all Greeks.
Therefore: To be mortal belongs to all Greeks.

Moreover, this formulation suggests a method, discovered much later, for illustrating the relationship between two terms. One takes advantage of the fact that each term applies to a set of entities. “Human beings” can be Greeks, but also Egyptians, Thracians, Asians or Europeans. If one represents the set of human beings by a circle or any closed curve (B), and the set of Greeks likewise (A), then the circle for the Greeks lies within the circle for the human beings.

Fig.1: The set of Greeks (A) is a subset of the set of human beings (B). To be a human being (B) belongs to all Greeks (A).

Such pictures are called Venn diagrams, after the mathematician John Venn (1834 to 1923), who introduced them following Leonhard Euler (1707 to 1783). In fact, the philosopher Gottfried Wilhelm Leibniz (1646 to 1716) had already used such diagrams. Venn diagrams illustrate relationships between two sets in general, no matter of what kind the elements are. Within the framework of set theory, we would also write: A ⊂ B, i.e. A is a subset of B.

But this form of sentence is not the only one that occurs in syllogisms. Aristotle obtained an overview of all possible forms of sentences and statements. The result is (Prior Analytics, Book I, 1, see
https://ebooks.adelaide.edu.au/a/aristotle/a8pra/book1.html):

A premiss then is a sentence affirming or denying one thing of another. This is either universal or particular or indefinite.
By universal I mean the statement that something belongs to all or none of something else;
by particular that it belongs to some or not to some or not to all;
by indefinite that it does or does not belong.

Thus, besides universal sentences like “All Greeks are human beings” or “To be human belongs to all Greeks”, there are also the negation (not belonging) and the particular statement. Altogether one obtains the following types of sentences:

B belongs to all A (all A are B),
B doesn’t belong to any A (no A is B),
B belongs to some A (some A are B),
B doesn’t belong to some A (some A are not B).

We can illustrate the other types of statements as follows:

 Fig. 2: Left: B doesn’t belong to any A. Middle: B belongs to some A. Right: B doesn’t belong to some A.

In the scholastic tradition these forms are abbreviated as (A a B), (A e B), (A i B) and (A o B). The letters “a” and “i” are meant to recall “affirmo”, the letters “e” and “o” “nego”.
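
In the set-theoretic picture used for the Venn diagrams above, the four forms correspond (in modern notation, not Aristotle’s own, writing ⊆ for “is contained in” and ∅ for the empty set) to

\[
(A\,a\,B)\colon\ A \subseteq B, \qquad
(A\,e\,B)\colon\ A \cap B = \emptyset, \qquad
(A\,i\,B)\colon\ A \cap B \neq \emptyset, \qquad
(A\,o\,B)\colon\ A \setminus B \neq \emptyset.
\]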

The structure of syllogisms

Two such forms of statement then make up the premises, and one such form the conclusion. Each syllogism can thus be characterized by three letters from the set {a, e, i, o} and by the positions of the three terms that stand for the concepts in the respective sentences.

One of these terms, the so-called middle term, must occur in both premises; it can stand in the first or in the second place in both, or in the first place in one premise and in the second place in the other. This results in four different figures. The first figure is the following (see the example above):

A – B

B – C

A – C.

Aristotle then selects, in each figure, those combinations of two statement forms that necessarily lead to a conclusion. He simply sorts out the combinations for which he finds a counterexample.

Each valid syllogism can then be represented by its figure and a specific combination of the letters a, e, i, o. In order to remember such combinations more easily, they were built into correspondingly peculiar words; for example, one remembers the combination a a a by the word “Barbara” and knows in addition that the 1st figure is involved. This syllogism corresponds exactly to the example above.

Fig.3: The Barbara syllogism

Another important syllogism, also of the 1st figure, is called “Celarent” in this scheme:

Fig.4: The syllogism Celarent
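
As a small illustration, not a proof, both syllogisms can be checked on finite sets in the spirit of the Venn diagrams above. The sets and the “winged” example in the following Python sketch are invented for this purpose:

# Toy check of Barbara and Celarent with finite sets (invented examples).
greeks = {"Socrates", "Plato"}
humans = greeks | {"Cleopatra"}
mortals = humans | {"Bucephalus"}
winged = {"Pegasus"}

# Barbara (a a a, 1st figure)
premise_1 = humans <= mortals           # "To be mortal belongs to all human beings"
premise_2 = greeks <= humans            # "To be a human being belongs to all Greeks"
conclusion = greeks <= mortals          # "To be mortal belongs to all Greeks"
print(premise_1 and premise_2, conclusion)   # True True

# Celarent (e a e, 1st figure)
premise_1 = humans.isdisjoint(winged)   # "To be winged belongs to no human being"
premise_2 = greeks <= humans            # "To be a human being belongs to all Greeks"
conclusion = greeks.isdisjoint(winged)  # "To be winged belongs to no Greek"
print(premise_1 and premise_2, conclusion)   # True True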

In this way Aristotle demonstrated that one can systematically formulate a system of inferences in which the truth of the conclusion follows necessarily from the truth of the premises.

With Aristotle’s systematics one now has a complete overview of all possible inferences. Previously, “the whole art had consisted in a laborious searching around, with great expenditure of time and effort”, as Aristotle wrote in his work On Sophistical Refutations (after Schupp, I 275).

Here one already has a system of statements that is reminiscent of the formal predicate logic to be introduced later. As there, one can “quantify” the predicates, i.e. operate with quantifiers such as “all”, “none” and “some” for the predicates.

However, Aristotle must select the valid inferences “by hand” from the set of all possible combinations, simply by discarding those that he recognizes as invalid with the help of a counterexample. This “recognition” is an intuitive one, made with “common sense”. One does not doubt the correctness of the conclusions, but for a rigorous science in today’s sense this kind of knowledge is not sufficient.

Even if one describes the relations between terms mentioned above with the help of set theory, and thus justifies the inferences within the framework of set theory, the insight remains a mathematical one. The inferences would thus be justified only indirectly in logical terms, because mathematics itself, as we know today, uses the rules of inference that are ultimately established in modern predicate logic. Only in this predicate logic can the inferences be strictly justified, by deriving them from tautologies. We will see about that in a later blog post. Only in this logic does one reach the bedrock on which a true sentence can incontestably be obtained again from true sentences.

We should still comment on Aristotle’s remark concerning the case in which the premises are only probably true or merely credible. Then the conclusion, too, can only be probably true. More could not be said about this at that time. It was not until the beginning of the 20th century that a theory of probability was developed with which one can be more precise in this case. This will be explained in more detail in a later blog post on the topic “How to deal with uncertain knowledge”.

Syllogisms in an axiomatic-deductive system

But Aristotle’s logic has not only shown us how the truth of statements can be safely passed on. He has also shown that “it is possible also to reduce all syllogisms to the universal syllogisms in the first figure”, as he writes in the Prior Analytics, 1st book, 7th chapter (see https://ebooks.adelaide.edu.au/a/aristotle/a8pra/book1.html).

Thus, we already have a concrete axiomatic-deductive system. This logical ordering of the statements of a field of knowledge represented for Aristotle the ideal of a science. In the Posterior Analytics, 1st book, 3rd chapter, he writes:

On the other hand, I maintain that every science must be based on proofs, but that the knowledge of the unmediated principles is not provable. And it is clear that this must necessarily be so. For since knowledge of the earlier propositions from which the proof proceeds is necessary, and since one eventually stops at unmediated propositions, these must necessarily be unprovable. This is my view, and I maintain that there are not only sciences, but also supreme principles of them, through which we learn the concepts of inference. (Here I prefer the translation of a German version at http://www.zeno.org/Philosophie/M/Aristoteles/Organon/Zweite+Analytiken+oder+Lehre+vom+Erkennen/1.+Buch/3.+Kapitel.)

The mathematician Euclid of Alexandria logically arranged the geometric knowledge of his time in this way around 300 BC. This organization of a scientific theory as an axiomatic-deductive system is still a model for any rigorous science today. In every century, people have tried to imitate this organization of an edifice of thought, arranging the knowledge of their science “more geometrico”, that is, in the manner of Euclidean geometry. In “Die Idee der Wissenschaft – Ihr Schicksal in Physik, Rechtswissenschaft und Theologie” I described how successful the attempts to realize this idea have been to this day (Honerkamp, 2017).

It is not clear who first had this idea. Mathematical proof was already known to the Pythagoreans. Once enough statements in a field have been secured by proofs, one will probably at some point consider which statements could be regarded as axioms. This poses the question: which statements do I need as a basis in order to be able to deduce all the others from them? Or also: how do I create a logical order?

Such starting points will then always be special statements with special properties. They can be immediately evident, i.e. “certain in themselves”, as in the theories of mathematics, but also highly abstract and far removed from our intuition, as in the theories of physics.