Extensions of Propositional Logic

In addition to the considerations presented in the last chapter, some important extensions of propositional logic must be mentioned here, so that the reader does not come to believe that propositional logic already covers a large part of logic. The expressive power of propositional logic is still very limited, and anyone who works with it for some time will soon wish for an extension in one direction or another. However, what has been said so far about the origin and meaning of the rules of inference is sufficient for understanding the further considerations in this book.

Predicate Logic

With propositional logic one can find rules of inference that lead necessarily from true statements to true statements; the modus ponens is the prototype of such a rule. But from Aristotelian logic we also know other conclusions that lead necessarily from true to true, e.g. the two sentences “All men are mortal” and “Socrates is a man” lead to “Socrates is mortal”.

In the proposition “All men are mortal” the word “all” appears. One could continue to consider the proposition as a whole, together with its truth value, and thus remain within propositional logic. But one can also look “inside” the sentence and notice that here the elements of a set come into play. This is not always the case, but with a view to expressing oneself as precisely as possible, this possibility should also be accommodated in a formal language. This has led to the extension of propositional logic to so-called predicate logic, in which a special structure is provided for propositions, namely quantification with “quantifiers” such as “all” or “none”.

The decisive factor for such an extension was that the term “predicate”, which we know from the grammar of a natural language, was given a more general meaning. In logic, a predicate is anything that can be meaningfully attributed to an object, i.e. “predicated” of it. So in the sentence “Socrates thinks”, “thinks” is the predicate. Besides properties, predicates can thus also be verbs. Many-place relations, e.g. R(x,y), can be predicates as well; here two objects x and y are assigned to a relation R.

It has proven best to translate such sentences into character strings by treating a one-place predicate as a function of one variable, with the object to which the predicate is attributed as its argument. The function s(x) then means that the predicate s ≔ “is mortal” is predicated of, i.e. attributed to, the object x. One can then easily formulate that the predicate applies to several or all objects from a given set, or to none. This provides the possibility of quantification.

This function s(…) has as its value the truth value of the statement; s(x) is therefore 1, i.e. true, if x is mortal. If we now define the function M(x) with M(…) ≔ “is a human being”, we can formulate:

For all x it holds: if x is a human being, then x is mortal, or

∀x (M(x) → s(x)).

where we have introduced the symbol “∀x” for “for all x”. The symbol “∀” is called a quantifier. Of course, one has to determine beforehand to which set “all” refers.

Also useful is the symbol “∃x” for “there is an x”. In principle, however, the symbol “∀x” alone suffices for all quantifications, since ∃x M(x) can be expressed as ¬∀x ¬M(x).
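If the underlying set is finite, such quantified statements can even be evaluated mechanically. The following sketch in Python (the set of objects and the two predicates are made-up assumptions for illustration) treats predicates as functions returning truth values and reduces “∀” and “∃” to iteration over the set; the last line shows that “∃” can indeed be expressed by “∀” and negation:

    # A finite set of objects over which the quantifiers range (an assumption):
    objects = ["Socrates", "Plato", "a stone"]

    def M(x):  # predicate "x is a human being"
        return x in ("Socrates", "Plato")

    def s(x):  # predicate "x is mortal"
        return x in ("Socrates", "Plato")

    # ∀x (M(x) → s(x)): "all human beings are mortal"
    # (the implication M(x) → s(x) is rewritten as ¬M(x) ∨ s(x))
    print(all((not M(x)) or s(x) for x in objects))   # True

    # ∃x M(x), directly and expressed as ¬∀x ¬M(x):
    print(any(M(x) for x in objects))                 # True
    print(not all(not M(x) for x in objects))         # True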

Predicate logic therefore includes propositional logic. Additional rules of inference in predicate logic are, first:

∀x M(x) ⊨ M(c),

i.e. “if a predicate M is assigned to all x from a given set, then also to a single element c of this set”, and second:

M(c) ⊨ ∃x M(x),

i.e. if an object c has the property M, then there exists an object x that has the property M.

Finally, it does not work without the modus ponens, e.g. in the form

M(c), ∀x (M(x) → s(x)) ⊨ s(c),

where “c” stands for a specific element of a given set.

All syllogisms of Aristotelian logic can also be formulated in predicate logic. The syllogism “Barbara”, for example, here reads:

∀x (G(x) → M(x)), ∀x (M(x) → s(x)) ⊨ ∀x (G(x) → s(x))

where “G” stands for the predicate “is a Greek”. Here a rule of inference is applied which can be derived from the modus ponens. But it can also be obtained directly from the tautology

((A → B) ∧ (B → C)) → (A → C).

So far, quantification has always ranged over objects x, the elements of a given set. One speaks here of a first-order logic. One can also introduce a second-order logic by extending quantification to the predicates as well.

We want to study this with a famous example: let us first consider the statement

P(x) → P(S(x)),

i.e. if the predicate applies to the object x, then also to the object S(x). This makes sense, for example, if x is a number and S(x) = x + 1 is the successor of x. If, say, an equation is fulfilled for x, then it should also be fulfilled for x + 1.

One can require this for all natural numbers, and one gets

∀x (P(x) → P(S(x))).

This is one of the premises of a particularly prominent rule of inference in number theory, “mathematical induction”. (The word “induction” here should not be confused with logical induction; the danger of confusing it with physical induction is probably smaller.) In order to arrive at a conclusion, there must also be an induction base, e.g. P(0), i.e. the property P is assigned to the number 0. Then, with the above implication, we may conclude ∀x P(x), i.e. the property P applies to all natural numbers x.
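Combined into a single formula in the notation used here, induction base and induction step thus yield the rule of mathematical induction:

(P(0) ∧ ∀x (P(x) → P(S(x)))) → ∀x P(x).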

If, on the other hand, one wants to demand this statement ∀x (P(x) → P(S(x))) for all predicates P, then one writes:

∀P [∀x (P(x) → P(S(x)))].

So, this is a statement in the language of second-order logic; the quantifier operates also on a predicate P.

One can also continue predicating, i.e. assigning properties, by attributing properties, i.e. predicates, to properties in turn. If F designates a property that can be attributed to a predicate P, one can, for example, first formulate:

F(P) → P(x),

This means: if a predicate P has the property F, then x has this predicate P. If one requires this for all predicates P of a certain set, one can, for example, formulate:

∃x ∀P (F(P) → P(x)),

i.e. there is an object x such that for all predicates P the following applies: if P has the property F, then x has this predicate P.

If the property F means “is good”, whatever that is, then the proposition says: “There is an object x that has all good properties.” Of course, this does not say anything about whether this statement is true or can be derived from any axioms. One can only formulate it.

There are profound theorems in mathematical logic about the differences between first- and second-order logic with regard to their expressive power, to which one can gain access only through intensive study. Here it could only be shown how the language of propositional logic is extended. It should be noted, however, that the rules of inference of first-order logic are “sufficient as building blocks for all mathematical modes of argumentation” (Ebbinghaus & Thomas, 2018, p. 62).

Modal and Deontic Logic

In propositional and predicate logic, one always restricts oneself to a certain type of proposition, namely those that are either true or false. The law of the excluded middle applies here, i.e. A ∨ ¬A is a tautology.

But often we do not know for sure whether the statement A about a fact is really true. We now also want to take into account that it may be merely possible that a proposition p is true. (In the following we denote propositions by lowercase letters p, q, …, so that the notation remains clear later.) For statements p that we have derived from accepted assumptions according to logically correct rules of inference, on the other hand, we would say that the proposition is necessarily true.

Finally, we also know the situation in which we can only say about a fact that it is possible but not necessary, i.e. that it is “contingent”. If the language of predicate logic is extended by symbols for such terms, one speaks of modal logic.

Analogously, another extension of the language of propositional logic can be developed, one which deals with duties such as obligations or prohibitions. One then obtains a deontic logic (Greek δεῖ = one must). Here, however, one deals not with propositions but with actions: the symbol “p” then stands for an action instead of a proposition. And one can also permit actions, i.e. neither forbid nor command them.

Here we see parallels between the terms “necessary, impossible, contingent” and “obligatory, forbidden, permitted”. Fig. 1 shows them clearly.

Fig. 1: The three basic modal and deontic terms

It turns out that each of the modal terms “necessary”, “impossible” and “contingent” excludes the other two: what is not necessary can be impossible, but also contingent; what is not impossible can be contingent, but even necessary; and what is not contingent is either necessary or impossible. The same applies to the deontic terms: what is not obligatory can be forbidden, but also permitted, etc.

For these three terms one must now expand the formal language of propositional logic by introducing new symbols. Actually, one only needs a symbol for one of the terms, because it turns out that all the others can be expressed by this one symbol with the help of negations.

In modal logic this new symbol shall be the sign “N”, which can also be interpreted as an operator acting on a proposition p: “Np” shall mean “it is necessary that p is true”.

In deontic logic we introduce the operator “O” instead; “Op” means: “it is obligatory that the action p is performed”.

All in all, you get this:

Np: It is necessary that p is true, i.e. it is impossible that p is false,
¬Np: It is not necessary for p to be true, i.e. it is not impossible for p to be false,
N¬p: It is necessary that p is false, i.e. it is impossible that p is true,
¬N¬p: It is not necessary for p to be false, i.e. it is not impossible for p to be true,
¬Np ∧ ¬N¬p: The statement p is contingently true, because it is neither necessary nor impossible for p to be true.
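These definitions can be made concrete with the usual “possible worlds” reading of modal logic, which goes somewhat beyond what is introduced here: a proposition is identified with the set of worlds in which it is true, and Np holds if p is true in all worlds. A minimal sketch, with a made-up set of worlds:

    # A proposition is represented by the set of worlds in which it is true
    # (assumed to be a subset of the set of all worlds).
    worlds = {"w1", "w2", "w3"}        # hypothetical possible worlds
    p = {"w1", "w2"}                   # p is true in w1 and w2, false in w3

    def N(prop):                       # Np: p is true in every world
        return prop == worlds

    def possible(prop):                # ¬N¬p: p is true in at least one world
        return len(prop) > 0

    def contingent(prop):              # ¬Np ∧ ¬N¬p
        return not N(prop) and possible(prop)

    print(N(p), possible(p), contingent(p))   # False True True

The deontic operator O can be read analogously if “all worlds” is replaced by “all deontically ideal worlds”; this, too, is only a sketch of the standard semantics, not part of the text above.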

One can formulate accordingly:

Op: It is obligatory to perform the action p,
¬Op: It is not obligatory to perform the action p,
O¬p: It is obligatory not to perform the action p, i.e. it is forbidden to perform the action p,
¬O¬p: It is not obligatory not to perform the action p. So, it is allowed to perform the action p.
¬Op ∧ ¬O¬p: It is neither obligatory nor forbidden to perform p.

A distinction must be made here between norms that are enacted by a law and propositions about norms. Norms are brought into effect by enactment; propositions about norms can be true or false.

Fig. 2 shows the different operators in the so-called modal or deontic hexagon. With these terms and their symbols, which everyone can understand, all such stipulations can be clearly represented.

Fig. 2: The modal and deontic hexagon (after Joerden, 2010, and Honerkamp, 2015). The arrows indicate an implication (what is obligatory is also permitted), “>-<” a contravalence or contradictory opposition (e.g. what is not forbidden is permitted), “——” an exclusion or contrary opposition (e.g. being obligatory and being forbidden are mutually exclusive), and “…” a disjunction or subcontrary opposition (e.g. between “permitted” and “not obligatory”).

Further explanations such as the discussion of possible axioms and a calculus in connection with the calculus of propositional logic go beyond the scope of this book.

Outlook

However, the list of extensions of propositional logic is by no means complete. There are also so-called non-classical logics in which, for example, the law of the excluded middle does not apply. One then no longer assumes that propositions are only either true or false. In modal logic we still assumed this and only took into account the possibility that one does not know the truth value exactly. Now, however, one renounces the assumption that “ontologically” there is only this alternative. It is then no longer adequate to map the logical combinations of statements onto a two-valued Boolean algebra. There is a wealth of approaches to such many-valued logics; “intuitionistic logic” is a prominent example.

The so-called fuzzy logic must also be mentioned here. It admits nuances of predicates such as “very” or “rather”, but treats them quantitatively. It plays an important role in many control engineering applications.
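To indicate the flavor of fuzzy logic: a statement receives a degree of truth between 0 and 1, conjunction is often interpreted as the minimum of the degrees, and a modifier such as “very” as a function on them. The membership function and the choice of modifiers below are illustrative assumptions, not a fixed standard:

    def warm(t):
        # assumed degree to which a temperature t (in °C) counts as "warm"
        return min(1.0, max(0.0, (t - 10) / 15))

    def very(degree):     # a common convention: "very" squares the degree
        return degree ** 2

    def rather(degree):   # ... while "rather" takes the square root
        return degree ** 0.5

    def fuzzy_and(a, b):  # conjunction as the minimum of the degrees
        return min(a, b)

    t = 20
    print(warm(t))          # 0.666...: 20 °C is fairly warm
    print(very(warm(t)))    # 0.444...: "very warm" holds to a lesser degree
    print(rather(warm(t)))  # 0.816...: "rather warm" holds to a higher degree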

But the most elaborated approach is the “treatment of uncertain knowledge”, which is based on mathematical probability theory. Here a proposition is assigned a measure of the probability that it is true. This can also be seen as a measure of the credibility of the statement.

Thus we can pick up a theme of Aristotle, which he hinted at in his classification of conclusions when he spoke of a dialectical conclusion. Even then, questions arose that could not be answered at the time: What can be said about the conclusion of a dialectical inference? Can one even develop a calculus for propositions that are merely credible? How could conclusions be drawn as strictly as possible in such a calculus? And what would “strict” even mean here? These questions will be addressed later.

Tautologies and Rules of Inference

In the excursus on formal languages in the last blog post, we already became acquainted with the symbols and more general expressions of propositional logic. We saw that there are special character strings, so-called tautologies, which are always true, i.e. independently of the truth values of the individual statement variables. We had also already introduced such a tautology, namely (A ∧ (A → B)) → B.

Here we will show how logically correct rules of inference can be formulated with the help of tautologies. But let us first ask which other simple tautologies there are and how tautologies can be generated in general. Also interesting in this context are character strings that are always false, because from their negation one can again obtain a tautology.

Tautologies

Let us first introduce prominent tautologies:

A ∨ ¬A is in any case true,

because either A is true or ¬A is. There is no third possibility, according to our requirements. This statement is called the law of the excluded middle.

On the other hand, A ∧ ¬A is false in any case, because the statements A and ¬A cannot both be true at the same time. For example, it cannot be raining and not raining at the same time; A and ¬A contradict each other. A compound expression that is false regardless of the truth values of the individual statements is generally called a contradiction. The following then applies to the negation:

¬(A ∧ ¬A) is true in any case.

This statement is called the principle of contradiction.

Here we should now also list the tautology from the last blog post again.

(A ∧ (A → B)) → B is in any case true.

How do we find further tautologies in order to be able to form further rules of inference?

It can be shown that propositional logic can be regarded as an axiomatic-deductive system. As axioms one can take all tautologies of the following forms (after Kutschera & Breitkopf, 2007, p. 69):

 A → (B → A),

(A → (B → C)) → ((A → B) → (A → C)),

(¬A → ¬B) → (B → A)

and as rule of inference the modus ponens, which we already mentioned in an earlier blog post. In a moment we will properly introduce this rule by deriving it from a tautology.

All logical expressions that can be derived from these axioms using the modus ponens are tautologies again. Thus one can set up as many rules of inference as one wants; only a few will be needed.
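Independently of the axiomatic derivation, whether a given expression is a tautology can always be checked mechanically by running through all combinations of truth values, just as in the truth tables below. A small sketch in Python (formulas represented as Boolean functions of their variables):

    from itertools import product

    def implies(a, b):
        return (not a) or b

    def is_tautology(f, n):
        # f is true under all 2**n assignments of its n variables
        return all(f(*v) for v in product([False, True], repeat=n))

    # The three axiom schemas listed above:
    print(is_tautology(lambda a, b: implies(a, implies(b, a)), 2))
    print(is_tautology(lambda a, b, c: implies(implies(a, implies(b, c)),
                       implies(implies(a, b), implies(a, c))), 3))
    print(is_tautology(lambda a, b: implies(implies(not a, not b),
                       implies(b, a)), 2))

    # The tautology behind the syllogism "Barbara" from the previous section:
    print(is_tautology(lambda a, b, c: implies(implies(a, b) and implies(b, c),
                       implies(a, c)), 3))
    # all four print True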

Now we understand why Wittgenstein says: the propositions of logic are tautologies; the propositions of logic therefore say nothing (Wittgenstein, 2006, props. 6.1, 6.11).

From Tautologies to Rules of Inference

Let’s have a look at the truth table for the tautology (A ∧ (A → B)) → B again:

A   B   A → B   A ∧ (A → B)   (A ∧ (A → B)) → B
1   1     1          1                 1
1   0     0          0                 1
0   1     1          0                 1
0   0     1          0                 1

Let us first consider the first row, in which both premises A and A → B are true. From the second column of this row we then infer that B is true. The statement B must therefore necessarily be true if both A and A → B are true; only in this way is the character of a tautology preserved.

This is a conclusion that results from inspecting the truth table of a tautology. No reasoning can be more elementary. It is also the conclusion with which all other rules of inference can be obtained. So here we have before us the origin of logical reasoning, the “mother” of all rules of inference.

With the sign “⊨” for a logical conclusion, one writes:

A ∧ (A → B) ⊨ B,

but frequently also in a form in which the individual premises are separated only by a comma:

A, (A → B) ⊨ B.

The symbol “⊨” is not a sign of propositional logic but an abbreviation, in colloquial language, for “from this follows logically”; otherwise we would have to be able to calculate with this symbol as with “∧” or “∨”. It only expresses the relationship between the statements A ∧ (A → B) and B in the metalanguage, our colloquial language: in the case that the statement A is true, and in the case that the implication A → B is true, B is also true.
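The relation expressed by “⊨” can itself be checked mechanically: B follows logically from given premises if B is true under every assignment of truth values that makes all premises true. A small sketch of this semantic reading (premises and conclusion as Boolean functions, an illustration rather than a standard library routine):

    from itertools import product

    def entails(premises, conclusion, n):
        # under every assignment that makes all premises true,
        # the conclusion must be true as well
        return all(conclusion(*v)
                   for v in product([False, True], repeat=n)
                   if all(p(*v) for p in premises))

    # A, A → B ⊨ B:
    print(entails([lambda a, b: a,               # premise A
                   lambda a, b: (not a) or b],   # premise A → B
                  lambda a, b: b, 2))            # conclusion B: prints True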

You may at first be confused and ask: why all this effort? People have known this for a long time. It is indeed trivial, in the truest sense of the word, for in the Middle Ages the word “trivial” was coined for insights gained in the trivium, the lowest level of education in a monastic school. The trivium, in turn, was named in antiquity after a place to which three paths lead and where many who share the same opinions can gather.

Here, however, in a formal language, what Aristotle had already defined is realized very concretely: “A conclusion is thus a speech in which, with certain assumptions, something other than the presupposed follows with necessity on the basis of the presupposed” (after Schupp, I, p. 267). The emphasis is on “necessity”.

This is the modus ponens. It is the most prominent logical conclusion; it was already known in antiquity to the philosophers of the Stoa and was the subject of many discussions in the Middle Ages (see the section “The Stoics’ Logic” below).

We still want to investigate what this conclusion tells us if one of the premises, or both, are false. In any case the entire premise is then false, because the individual premises are linked by “∧”. We extract the two relevant columns from the table above and arrange them somewhat differently:

A ∧ (A → B)   B
     1        1
     0        0
     0        1
     0        0

So if the total premise is false (rows 2 to 4), B can be true but also false, i.e. nothing can be said about the truth value of B: everything can be deduced from a false premise. At first this is surprising, and in the history of logic it was discussed for a long time. But if one can derive both B and ¬B, then the conclusion is meaningless.

From the derivation of the modus ponens we can learn how to obtain a rule of inference in general: from every tautology of the form

M → B

one can obtain the rule of inference

M ⊨ B,

because it then follows immediately from the truth table for “M → B” that, under the condition that M is true, B must also be true; for the implication M → B is by assumption a tautology and therefore always true.

This is a statement at the meta level, not within the calculus at the syntactic level. However, since M → B is a tautology, we know that we can pass from the character string M to the character string B at the syntactic level without leaving the realm of true statements at the semantic level. We write this in the form

M ⊢ B,

and call this operation a derivation at the syntactic level. What is a conclusion at the semantic level is thus called a derivation at the syntactic level.

But now one knows how to “calculate” at the syntactic level, namely according to the rules for the formation of character strings and according to the rules of inference, by which one may transform certain character strings into others, usually shorter ones. Such a system of calculation rules is called a “calculus”.

The calculus of propositional logic therefore has nothing to do with the meaning of the statements in terms of content. To a certain extent it only provides the tracks on which the truth of statements can be safely transported from the premises to the conclusion. False statements on such tracks lead to arbitrariness; without the truth of the premises, “everything is nothing”. This will occupy us again.

Two remarks are in order here:

Let us consider the premises:

A ≔ “2 + 2 = 4”,

B ≔ “Freiburg is located in the south of Germany”.

The statements A and B are true, and thus so is the implication A → B: if 2 + 2 = 4, then Freiburg lies in the south of Germany. The expression A → B is thus well formed, but meaningless in content. Then the logical conclusion

A, A → B ⊨ B

is also meaningless. That need not be irritating. Even in our colloquial language we can form grammatically correct sentences that are meaningless: “The moon babbles red suit.” The tracks are not responsible for what travels on them. Incorrectly formed sentences and ill-formed expressions are senseless anyway.

One often hears people say: “It’s logical, isn’t it?”, where the speaker means that the conclusion immediately makes sense to him. This feeling, however, probably refers not to the rule of inference but to the implication, which is of course true for the speaker. The speaker thus confuses the logical conclusion with his assumption that his premise is true. He should rather say: “It’s a plausible premise for the modus ponens, isn’t it?” He would be met with respectful understanding.

Important Rules of Inference in Applications

The modus ponens is probably the most prominent rule of inference, even in a generalized form, which reads:

(A1 ∨ A2 ∨ … ∨ An ∨ A) ∧ (¬A ∨ B1 ∨ B2 ∨ … ∨ Bm)

⊨ A1 ∨ A2 ∨ … ∨ An ∨ B1 ∨ B2 ∨ … ∨ Bm.

If the first parenthesis consists only of A and the second only of ¬A ∨ B1, i.e. A → B1, the premise represents exactly the simple modus ponens; accordingly, B1 then appears as the conclusion.

The premise here is a character string in a normal or standard form, the so-called conjunctive normal form, into which every character string can be brought by a systematic transformation procedure. In general, this normal form reads:

(A1 ∨ A2 ∨ … ∨ An) ∧ (B1 ∨ B2 ∨ … ∨ Bm).
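As a small worked example (the formula is my choice): an implication is first rewritten with ¬ and ∨, so the expression (A → B) ∧ (C → B) takes the normal form

(¬A ∨ B) ∧ (¬C ∨ B),

a conjunction of two disjunctions, as required above.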

Such procedures, together with rules of inference like the generalized modus ponens, are implemented in so-called resolution algorithms, by which the length of the character strings is successively reduced. The programming language “Prolog”, for example, executes such an algorithm on a computer.

A particularly popular inference rule in mathematics, already known in ancient times, is proof by contradiction:

If one wants to prove that a statement A implies another statement B, one first assumes for the proof that, besides the premise A, ¬B is also a true premise. If one can then deduce a contradiction from this, ¬B cannot be true; so B must be true, for there is no third possibility. Here the negation of an assumption is reduced to a contradiction; in the Middle Ages this form of proof was therefore also called “reductio ad absurdum”.

The contradiction can show itself, e.g., in the fact that from A ∧ ¬B one can deduce a statement C and also the statement ¬C. In order to show that this strategy can also be represented as a rule of inference, one only needs to find the corresponding form M → B. This is

(((A ∧ ¬B) → C) ∧ ((A ∧ ¬B) → ¬C)) → (A → B).

This is indeed a tautology and therefore yields the rule of inference:

((A ∧ ¬B) → C) ∧ ((A ∧ ¬B) → ¬C) ⊨ (A → B).

In a somewhat different form one uses the proof by contradiction if one wants to know whether a statement B is contained in a knowledge base W and is thus also true. So one asks whether

W ⊨ B

applies. This is the case if W → B, i.e. ¬W ∨ B, is a tautology. Since ¬W ∨ B can be transformed into ¬(W ∧ ¬B), we must therefore ask whether ¬(W ∧ ¬B) is a tautology, i.e. whether W ∧ ¬B is a contradiction. So we see that a statement B can be deduced from a knowledge base W if

W ∧ ¬B

is a contradiction. That is also plausible: if the information of B is contained in W, the contradiction B ∧ ¬B must somehow show up in the evaluation of the expression W ∧ ¬B.

In order now to show that W ∧ ¬B leads to a contradiction, the expression W ∧ ¬B is transformed into the conjunctive normal form within the framework of the calculus, and the resulting expression is then successively reduced with the help of the generalized modus ponens until an expression remains that represents a contradiction – or does not, depending on whether the statement B is contained in the knowledge base W or not.
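The following sketch shows the idea of such a resolution procedure (the clause representation and the simple saturation loop are my own choices for illustration, not a specific textbook algorithm): knowledge base W and negated statement ¬B are given in conjunctive normal form as a set of clauses, and resolution steps are applied until the empty clause, the contradiction, appears – or nothing new can be derived:

    def resolve(c1, c2):
        # all resolvents of two clauses; a literal is an int, negation is "-"
        out = []
        for lit in c1:
            if -lit in c2:
                out.append((c1 - {lit}) | (c2 - {-lit}))
        return out

    def refutes(clauses):
        # saturate under resolution; True if the empty clause can be derived
        clauses = {frozenset(c) for c in clauses}
        while True:
            new = set()
            for c1 in clauses:
                for c2 in clauses:
                    if c1 is not c2:
                        for r in resolve(c1, c2):
                            if not r:
                                return True      # empty clause: contradiction
                            new.add(frozenset(r))
            if new <= clauses:
                return False                     # nothing new: no contradiction
            clauses |= new

    # W = {A, A → B} as clauses {A} and {¬A ∨ B}; query B, so add ¬B:
    print(refutes([{1}, {-1, 2}, {-2}]))         # True: B follows from W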

The proof, going back to Euclid, that there are infinitely many prime numbers is of this form, for example. The statement B is then: there are infinitely many prime numbers. The statement ¬B is: there are only finitely many prime numbers. The knowledge base W consists of the rules of arithmetic for integers.

On the basis of ¬B and with the knowledge of W, one then shows that for every set of finitely many prime numbers one can always find a further prime number; thus ¬B is false, in contradiction to the assumption that W ∧ ¬B is true.
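The core step of this proof can even be carried out mechanically: for any finite set of primes, the product of all of them plus 1 leaves remainder 1 when divided by each of them, so its smallest prime factor is a prime not contained in the set. A small sketch:

    from math import prod

    def new_prime(primes):
        # smallest prime factor of (product of the given primes) + 1;
        # it cannot be one of the given primes, since division by any of
        # them leaves remainder 1
        n = prod(primes) + 1
        d = 2
        while n % d:
            d += 1
        return d

    print(new_prime([2, 3, 5, 7]))   # 211 (here the product + 1 is itself prime)
    print(new_prime([2, 7]))         # 3, a prime factor of 15 not in the set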

Aristotle and the Stoa

If one examines the insights of the ancient thinkers as to whether they can still be regarded as generally valid today, one encounters, apart from many mathematical and some physical statements, Aristotelian logic. Although at the end of the 19th century something “better” was found with modern logic, Aristotle's statements on the laws of thought are still valid and will remain so. It is highly admirable how clearly Aristotle saw the structure of an argumentation and how he worked out the decisive criteria for its reliability.

The first step is a detailed analysis of a conclusion. In a sentence from Book 1, Chapter 1 of Aristotle's Topics:

The conclusion is now a discourse in which some things are presupposed and then something different from them results from it with necessity mediated by those propositions.

The point here is: “some things” are presupposed, and “something different from them” results. The extent to which this results “with necessity” will still have to be discussed.

Let’s have a look at a classic example:

The “some things” that we presuppose are the two sentences, also called antecedents:

First: “All human beings are mortal.”

Second: “Socrates is a human being.”

The “something different” from them, the conclusion, is the sentence:

“Socrates is mortal.”

So here we have three terms: “Socrates”, “human being” and “mortal”. These are put into relation: “human being” is the generic term for “Socrates”, and “mortal” is the generic term for “human being”. If these relations between the terms are correct, the antecedents are true. The conclusion then follows from this dihairesis, i.e. a classification of terms.

The conclusion seems evident to us; for Aristotle it indeed follows “with necessity”. No one would deny it; everything else would be “unreasonable”. In doing so, however, we are still relying on our “common sense”, which is what we call reason.

We will leave it at that and deal with this point again later. It should be noted, however, that the protagonists of this conclusion, i.e. those involved in it, are three terms, or more precisely: two relations, each between two terms, for a total of three terms. For this reason one speaks here of a term logic.

Logical Inference, Dialectical Inference and False Conclusion

However, something is also said about the sentences in which the terms appear, for example that the antecedents are true. With regard to these, Aristotle now makes a decisive case distinction (Aristotle, n.d.):

The conclusion provides a proof or a logical inference if it is derived from true and general superordinate sentences, or from such, which are based on true and superordinate sentences of the science concerned.

Dialectical, on the other hand, is the conclusion derived from credible sentences.

So it depends on whether the first sentences are true or only “credible”. The Greek word translated here as “credible” is often also translated as “probable”. This suggests the idea that one could indicate a degree of probability for whether the statement is true. However, only in the last century was a theory of probability developed with which one can calculate with different degrees of probability. These can be applied just as well if one assumes degrees of credibility. We will use this in later blog posts.

In the case of a proof, where one can speak of a logical inference, we are thus dealing with “certain knowledge”, whereas in the case of a dialectical inference we are dealing only with “uncertain knowledge”. In later blog posts we will deal in detail with what kind of knowledge can be gained from certain and from uncertain knowledge, respectively.

First it is important to distinguish between a “proof”, in which one must assume that the first sentences, the antecedents, are true, and a dialectical conclusion, in which one can only proceed from “credible” antecedents.

About the “true and general superordinate sentences” he says:

True and superordinate propositions are those which are not mediated by others but are certain by themselves. Because for the most fundamental principles of the sciences one must not demand a reason for them, but each of these principles must be certain by itself.

These are principles which were later called “axioms”. They play a major role in the axiomatic-deductive system of Euclid of Alexandria. However, we may assume that there are also sentences “certain by themselves” outside the special sciences, such as: “All human beings are mortal.”

He determines the “credibility” of sentences as follows (ibid.):

Sentences are credible if they are accepted by all, or by most, or by wise men, the latter by all, or by most, or by the most experienced and credible.

Here we are now “in the midst of life”. It is almost always the case that we are dealing with sentences that we can only believe. Even as a scientist one has to believe almost everything, e.g. the statements of scientists in another field, and even those in one's own field if one has not checked the statements oneself or cannot check them directly. “Credibility” is therefore a precious good in a society. Today in particular, it is often not easy to decide whom to believe.

Finally, Aristotle also deals with the fallacy:

A false conclusion is one which is derived from apparently credible sentences without their really being so, or which is derived only apparently from credible sentences or from sentences that only seem to be credible.

The error can therefore lie with the antecedents; that is the trivial case. More interesting is the case in which the rule of inference is not valid, i.e. an inference is only “apparent”, not real.

One such false conclusion, which often remains hidden, is the “fallacy of four terms”. It is demonstrated particularly clearly by the following example (Wikipedia: Fehlschluss):

First: What has a beard can be shaved.
Second: Keys have a beard.
Conclusion: Keys can be shaved.

A shift in meaning has occurred here in the transition from the first antecedent to the second: a “beard” in the first sentence means something different from a “beard” in the second (in German, the bit of a key is called its Bart, “beard”). One should rather speak of two terms “beard1” and “beard2”; then there would be not three but four terms in play, hence the name.

Since we constantly argue with unclear terms in our considerations and discussions, we often fall prey to such false conclusions.

The first major step in the analysis of an argumentation is thus taken: a distinction between a rule of inference and the “presupposed”, a case distinction between true and merely credible antecedents, and an investigation of the possibilities of a false conclusion. Here again the great systematist shines through. In the next blog post we will, again systematically, distinguish between different types of antecedents and rules of inference.

The Stoics’ Logic

About 100 years later, a different approach to logic emerged in the philosophical school of the so-called Stoa. The philosopher Chrysippus of Soli in Cilicia (-276 to -204) was probably the representative of this school who dealt most successfully with logic. According to Diogenes Laertius, his extraordinarily numerous books were very famous in their time (Laertius, 2015, p. 415ff).

In the long run, however, Aristotle's approach was far more influential. In all the centuries up to the time of Gottlob Frege (1848 to 1925), who founded modern mathematical logic, logic was associated with the name of Aristotle; Stoic logic was almost forgotten in the Middle Ages, and its significance was rediscovered only around 1950 by the American philosopher Benson Mates. I think it is still underestimated.

Stoic logic built on the findings that had already been obtained by the Megarian school, which traced itself back to Euclid of Megara. Stilpon, Diodorus Cronus and Philo of Megara were the most prominent representatives of this group. Stoic logic was already a propositional logic in its approach, while Aristotelian logic, as already mentioned, was a term logic. Aristotle had grown up in the Platonic Academy and had therefore probably absorbed the Platonic method of classifying terms (dihairesis); his logic thus became a term logic. The Megarian school was free of such influence and probably saw dialectics more directly as the problem of checking an argumentation for its conclusiveness. For them, statements were thus in the foreground.

Aristotle had already seen that it matters whether the antecedents are true, merely credible, or neither. His rules of inference, however, had to be concerned with the relationships between terms. But if the rules of inference are aimed at the “transport of truth” from the antecedents to the conclusion, why should the protagonists who accomplish this not be the sentences themselves? In a propositional logic it can then only be a matter of whether the antecedents are true or not; terms no longer appear explicitly.

“Categorical judgements” such as “All human beings are mortal”, in which a judgement is made about categories, i.e. in which the category “human being” is set in relation to the category “mortal”, are then also no longer of interest. Such a judgement, which corresponds to a classification of terms, is to be distinguished from the “synthetic judgement”, which in today's language corresponds to the combination “A and B” or “A or B” of two statements A and B. Here, then, sentences A and B are connected in various ways.

A particularly important connection is the “implication”: if A, then B; e.g. “If it rains, then the road is wet”, where A = “it rains” and B = “the road is wet”. So if A is the case, then B is the case. An implication, too, can be true, credible or false. The Stoics were already familiar with this connection: “An implication is true if the consequent is contained in the sense of the antecedent” (Sextus Empiricus: Pyrrhonic Skepticism II, 112, p. 181, after Schupp, I, p. 346). With this statement they excluded the case in which the implication makes no sense, as in the example: “If 2 + 2 = 4, then my friend has a birthday today.” Such cases led to difficulties in formulating certain rules of inference.

The Stoics formulated five “unprovable rules of inference”, and there are also said to have been rules on how more general inference schemas can be traced back to these fundamental “unprovables”. One of these conclusions is identical with the modus ponens, a rule of inference in which the implication plays an important role. It runs:

Let the statement A be true, and let it also be true that:

if A is true, then B is true.

Then it follows: the statement B is true.

Everyone can see this immediately, and so it is not surprising that this rule already belonged to the “unprovable conclusions” of the Stoics.

But there were always great discussions about the modus ponens. Logical conclusions were discussed only with given meanings of the sentences or terms involved. The modus ponens, however, contains the implication as an antecedent, and because this implication can also be senseless, as in the above example “If 2 + 2 = 4, then my friend has a birthday today”, logicians repeatedly doubted the general validity of this rule of inference.

The “unprovable conclusions” long belonged to the school knowledge of late antiquity; writings by Cicero (-106 to -43) and Isidore of Seville (c. 560 to 636) bear witness to this.

Many did not even see the difference between Aristotelian and Stoic logic in late antiquity, as can be seen from the works of Cicero or Galen. The Neoplatonist Porphyrios (234 to 305), however, still compared Stoic with Aristotelian logic in terms of terminology and objective. Boethius (477 to 524) could then only report on Stoic logic. When one later spoke of logic, one always meant Aristotelian logic (Schupp, I, p. 349), referring predominantly to the writings of Boethius. What remained of Stoic logic was the distinction between categorical and synthetic judgments and the knowledge of the modus ponens as a rule of inference, without their Stoic origin being known.

Stoic logic, as an early form of propositional logic, was much closer to modern logic as formulated by Gottlob Frege at the end of the 19th century. What it lacked was a decisive step: the insight that logic, just like mathematics, requires a formal language of its own, so that the laws of logical thought can be formulated independently of the meaning of the statements. Then one can “calculate” as in mathematics, and the correctness of conclusions can be defined and checked at this level. With such a strict separation of syntax and semantics, i.e. of grammar and meaning, the “meaning problem” of the implication described above becomes irrelevant.

Thus we will first have to deal with formal languages before modern propositional logic can be introduced. But before that, for the sake of completeness, we should study the rules of Aristotelian logic explicitly.
