Welcome to the logic pages of Maarten Maartensz.

 


Basic Natural Logic

 

Dec 15 2009: A paper of long ago, still with some interest, if never finished properly. Note: Also some of the ideas involved are not 'classical'. (Read at your own risk.)

I have copied this as a file on its own with the above title from LPA02 with the intention of doing some correcting and extending of it. As it is, it indeed is "still with some interest, if never finished properly" - and original.
 

 

1. On logic and language
         Logic as extension of natural language
2. Naive initial assumptions about written languages
         Letters
         Words
         Terms
         Sentences
         Variables
         Substitutions 
         Formulas
         Inferences
         Validity

3. Formal Grammar for Basic Logic
4. Basic Propositional Logic 
         General Rules of Inference
         Propositional Rules of Inference
5. Basic Logic of Terms
         Rules of substitution
         Rules of valid inference
         Rules of quantification
         Rules of equality
6. Basic Logic
         Elements
         Sequences
         Abstracts
7. Basic Set Theory
         Classes, Sets and Individuals
         Relations, Functions and Mappings         
8. Semantics

         

 

 

 

1. On logic and language

 

Since Boole noted - in the 1840s - a far-reaching parallel between ordinary simple algebra with the values of the variables restricted to 0 and 1 and the reasoning people do with statements in natural language, there have been very many systems of logic of various kinds, and indeed in the 20th century machines were developed which embody the sort of logical algebra Boole conceived - computers.

Also in the 20th century, it has become the received tradition to present formal logics as formal languages, much like natural languages except for being more regular, precise and simple, and developed for some specific purpose, and to assume - usually somewhat tacitly, for the question I am now raising tends not to be clearly raised or answered - that formal languages are languages in their own right, rather as Ancient Greek is a language in its own right, and must be learned as Ancient Greek is learned, as a language on its own, presented and studied in another natural language, such as English.

It is this sort of assumption I wish to reject, and to insist that, instead, formal logics are extensions of natural languages, which arise from natural languages when someone adds variables and formal rules of inference to them, and that while it is grammatically correct and justified to speak of "formal languages", such formal languages are extensions and parts of natural languages, and indeed cannot be understood or constructed without natural languages.

More specifically, what I reject is the received distinction between a "meta-language", such as English, and an "object language", such as propositional or predicate logic. This distinction is not totally misconceived, but it is unclear and leads to mistaken conclusions in semantics. The more correct way of conceiving the difference between a formal and a natural language is that a formal language is an extension of a natural language which involves variables, forms a part of that natural language, and is created for a specific purpose, usually that of reasoning more clearly about some subject-matter than is possible in natural language.

Indeed, I hold the same is true of mathematics, though for trained mathematicians it will be rather natural to fill reams of paper with mathematical formalese, with little or no benefit of English or another natural language, not because the formalese is a complete natural language in its own right, but because at a certain level of competence in mathematics, it becomes more convenient to mostly avoid ordinary English than to use standard English, since what one needs to say one can say more clearly in the artificial fragment of English that was introduced for just that purpose.

Quite the same holds for bio-chemistry and its users: for those much trained in it, at a certain point it becomes clearer and easier to use the special chemical symbolism developed over the last 125 years rather than English.

But neither pure mathematics, nor logic, nor the notation of chemistry are natural languages in their own right, and to assume they are leads to confusions I want to avoid.

For this reason, I shall start with a brief sketch of the sort of assumptions that are sufficient to set up a system of formal logic.

 

 

 

 

 


 

2. Naive initial assumptions about written languages:


We start with a number of naive initial assumptions about written languages - which are initially kept naive, simple and unqualified because we need simple and naive beginnings to develop and qualify once we have them:

 

Naive initial assumptions about written languages:

1: Letters are small drawings.

2: Words are sequences of letters.

3: Words are of some grammatical kind.

4: Terms are sequences of words.

5: Terms are of some grammatical kind.

6: Sentences are sequences of terms.

7: Sentences are of some grammatical kind.

Now I shall comment on these assumptions separately.

1: Letters are small drawings.

This naively characterizes what letters are, and the reader can see many examples of English letters on this page. In a formal system, conversely, letters are instantiated as pictures and assumed to be letters, as will be shown later on.

2: Words are sequences of letters.

The central notion here is "sequence", and the reader can see many examples of sequences of English letters that are English words in this text, and can also see that the sequences of English letters that are English words run from left to right horizontally. Here we take the basic meaning of "sequence" as understood and well-known from e.g. English, but later we shall set up a formal system based around the notion of sequence. In a formal system, a word is  instantiated by writing it down and declaring it to be a word, as will be shown later on (for not each and every sequence of letters is a word).

3: Words are of some grammatical kind.

A grammatical kind is a word that names a class of words. Examples in English of words that are grammatical kind words are "noun", "verb", "statement" and the like. Grammatical kinds are used to write grammars, as will be shown later on.

4: Terms are sequences of words.

Examples of the kind of terms in English that we shall need the counterparts of in formal systems are "the father of John" and "the things which have the property of being green". The first is a functional term, that names some unique individual that fathered John, and the second an abstraction term, that names the class of things that are green. Formal developments of these notions will be given below, and here we only observe that the examples are English, and that English contains many other functional and abstraction terms (in English rendered also in other ways, such as "John's father" and "the things which are green" or "the class of green things").

5: Terms are of some grammatical kind.

This is as for words, and will be used later on. Here it need only be remarked that both terms and words name things, and that the terms "functional term" and "abstraction term" named in the previous paragraph are grammatical kind terms for terms.

6: Sentences are sequences of terms.

Note first that since terms are sequences of words, the simplest statements will be sequences of simple words. Next, a statement differs from terms and words in naming structures and asserting that these structures involve certain relations of things or classes of things. An example is "Romeo loves Julia", which names the two-place structure made from the relation loves, and asserts that Romeo has that relation to Julia. This will be more fully explained below.

7: Sentences are of some grammatical kind.

This is also as for words. Examples in English of the grammatical kinds for sentences are: (declarative) statements and questions, each of which may be sub-divided into further grammatical kinds.

Next, we turn to the assumptions which are basic for formal languages:

Naive initial assumptions about variables and formulas in written formal languages:


8: For all terms of each grammatical kind new words may be introduced that serve as variables of that kind.

9: If variables of a kind are introduced, the non-variables of that kind are called constants, and no variable of any kind is a constant of any kind.

10: A formula is a statement with at least one variable.
 

Here are brief explanations:

8:  For all terms of each grammatical kind new words may be introduced that serve as variables of that kind.

In a normal natural language there are no real variables, apart from an imprecise and incidental use of "the unknown X", borrowed from mathematics, "John Doe", borrowed from jurisprudence, and some other similar examples. It seems that Aristotle was the first to introduce variables explicitly in formal systems, and that he did this for syllogistic logic. Most users of a natural language who can read also can calculate and have at least some experience of variables as used in algebra or geometry.

9: If variables of a kind are introduced, the non-variables of that kind are called constants, and no variable of any kind is a constant of any kind.

This is mainly terminological, and it follows that the words and terms of a natural language are constants. One important point is that no variable is a constant and no constant a variable, which is to prevent possible confusions. 

10: A formula is a statement with at least one variable.

This gives us the basis we need for logic: the generalizations of statements (declarative sentences) that result when one replaces one or more of their terms by variables (of the same kind as the constants they replace). The reason we are interested in formulas is that they allow us to write and consider statements like "if x is a square, x is angular", in which x is a variable, and the whole formula can be seen as a generalization that covers each and every statement that results from it if the variable "x" is replaced by a constant. Likewise, "if x is human, x is capable of being rational".

Having variables and constants, the notion of substituting or replacing occurrences of the one by occurrences of the other, or indeed occurrences of one variable by occurrences of another, becomes necessary. Two related notions we shall need are given below:

Naive initial assumptions about substitutions in written formal languages:

 
11: A uniform substitution of a term T by a term U in a sequence S is a replacement of each occurrence of T in S by U, if T and U are terms of the same kind, and is S if not.

12: A substitution instance of a formula is a uniform substitution of one of its variables by a constant (non-variable) term of that kind in the formula.

Note this states in fairly precise terms something everyone who knows how to write knows how to do: replace terms in written statements with other terms in a systematic way:

11: A uniform substitution of a term T by a term U in a sequence S is a replacement of each occurrence of T in S by U, if T and U are terms of the same kind, and is S if not.

An example may be in place: the result of uniformly substituting "Julia" for "Gertrude" in "Gertrude loves Hamlet while Hamlet likes Gertrude" is "Julia loves Hamlet while Hamlet likes Julia". The main point of a substitution being uniform is that occurrences of the same term are replaced by occurrences of the same term.

12: A substitution instance of a formula is a uniform substitution of one or more of its variables by one or more variables or constant (non-variable) terms of that kind in the formula.

Thus, "Grass is green" is a substitution instance of "x is green" and also of "Grass is z", "x is y" and "x y z".

Note we get formulas from statements by uniformly substituting some constant(s) by variable(s) of the appropriate kind, and we get statements from formulas by uniformly substituting constants for all variables in the formulas.
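The two notions above can be sketched in a few lines of code. This is a minimal illustration, under the simplifying assumption that a written statement is just a sequence (here: a Python list) of word-tokens; the function name is illustrative, not part of the paper's notation.

```python
# A sketch of uniform substitution (assumption 11) and substitution
# instances (assumption 12) over sequences of word-tokens.

def uniform_substitution(term, replacement, sequence):
    """Replace each occurrence of `term` in `sequence` by `replacement`."""
    return [replacement if token == term else token for token in sequence]

# The example from the text:
sentence = ["Gertrude", "loves", "Hamlet", "while", "Hamlet", "likes", "Gertrude"]
print(" ".join(uniform_substitution("Gertrude", "Julia", sentence)))
# Julia loves Hamlet while Hamlet likes Julia

# A substitution instance of the formula "x is green":
formula = ["x", "is", "green"]
print(" ".join(uniform_substitution("x", "Grass", formula)))
# Grass is green
```

Note that uniformity comes for free here: the comprehension visits every token, so every occurrence of the substituted term is replaced, never just some.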

Naive initial assumptions about inference schemes, axioms and arguments in written formal languages:


13: An inference schema is a sequence of n formulas with the property that, for any substitution instance of it, if the instances of all but the last formula have been written as assumptions or conclusions in a formal argument, then the instance of the last formula may be written as a conclusion in that argument, justified by that inference schema and the assumptions and conclusions that are instances of all but the last formula, which are called the premises of the inference schema.

14: Inference schemes may be assumed or not in formal arguments, and may only be used in formal arguments if already assumed.

15: An axiom schema is an inference schema of one formula, i.e. one without premises.

16: A formal argument is a sequence of statements that are either assumptions or conclusions, where the conclusions are substitution instances of conclusions of some assumed inference schema.

Again, anyone who speaks a natural language knows how to make inferences using that language. Here a general mechanism is sketched in terms of formulas that explains part of what is involved in inference:

13: An inference schema is a sequence of n formulas with the property that, when any substitution instance of it has the instances of all but the last formula written as assumptions or conclusions in a formal argument, then the instance of the last formula may be written as a conclusion in that argument, justified by that inference schema and the assumptions and conclusions that are instances of all but the last formulas, which are called the premises of the inference schema.

There will be examples below, and the main point to note here is that the characterization of inference schemes is grammatical, and so far nothing has been assumed that would make an inference schema valid or useful or not.

14: Inference schemes may be assumed or not in formal arguments, and may only be used in formal arguments if already assumed.

The main points here are that inference schemes are like ordinary statements in being capable of being assumed or not, and that at least in formal arguments one cannot infer anything on the strength of inference schemes that have not been already assumed.

15: An axiom schema is an inference schema of one formula, i.e. one without premises.

This accommodates formulas like "x is true or x is not true" with "x" a variable for statements, the instances of which are according to (12) always conclusions in arguments in which the axiom scheme has been assumed.

16: A formal argument is a sequence of statements that are either assumptions or conclusions, where the conclusions are substitution instances of conclusions of some assumed inference schema.

If we have the English inference schema that for any statements p and q, from "p and q" it may be inferred that "q", and we have as assumption "it rains and it is cold", with "p"="it rains" and "q"="it is cold", we may infer "it is cold". There will be examples in formal languages below, but the present example makes it clear that we can state formal arguments in extended English also, for as soon as we have extended English with variables and some inference schemes, we have the wherewithal to state formal arguments. Indeed, this was first done by Aristotle, who added variables and inference schemes to Greek to state, explain and investigate syllogistic reasoning.
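The mechanism just described - an inference schema applied to an argument by substituting its variables - can be sketched as follows. The representation (tuples of tokens, a dictionary as variable binding) and the function name are illustrative assumptions, not the paper's notation.

```python
# A sketch of assumptions (13) and (16): an inference schema as a
# premise-formula plus a conclusion-formula, applied by uniform
# substitution of its variables.

def instantiate(formula, binding):
    """Instantiate a formula (a tuple of tokens) under a variable binding."""
    return tuple(binding.get(token, token) for token in formula)

# The schema of the text: from "p and q" it may be inferred that "q".
premise = ("p", "and", "q")
conclusion = ("q",)

binding = {"p": "it rains", "q": "it is cold"}
argument = [instantiate(premise, binding)]  # the assumption of the argument

# Since the instantiated premise occurs in the argument, the instantiated
# conclusion may be written as a further line of that argument:
if instantiate(premise, binding) in argument:
    argument.append(instantiate(conclusion, binding))

print(argument[-1])  # ('it is cold',)
```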

So far, everything that was assumed only involves languages as sequences of written marks, without any assumption about how words, terms and statements come to mean non-linguistic things, possibilities or structures, or about how statements come to be declared true or not.

Naive initial assumptions about truth-value semantics, logical terms and formal grammars:


17: A truth-value semantics is a set of inference schemes that infers that a formula of a certain form is true or not from assumptions or conclusions that certain formulas are true or not.

18: A componential rule of inference is a rule of inference that infers that a formula of a certain form is true (not true) from assumptions or conclusions that certain components of the formula are true or not.

19: A logical term is a term that has a componential analysis of the formulas in which it is the only constant i.e. that is true or not depending on which of its components are true or not.

 20: A formal grammar is a set of axiom schemes and inference schemes for sequences of grammatical kind terms, the complete instances of which without any free variables are the statements the formal grammar represents.

 Here are comments.

17: A truth-value semantics is a set of inference schemes that infers that a formula of a certain form is true or not from assumptions or conclusions that certain formulas are true or not.

An example in English of such inference schemes: from "(p and q) is true" it follows that "p is true" and "q is true", and conversely. These two inference schemes give a rather good rendering of how "and" is used in English between two statements, as regards attributions with "and" to such statements that they are true or not. Similar inference schemes can be adopted for "or" and "not", and will be presented and considered later on.

At this place it should be carefully noted that a truth-value semantics does NOT explain or define the notion "true", but presupposes and uses it, and that the notion of "true" that is presupposed was first clearly stated by Aristotle: A statement is true if what it says is so in reality, and a statement is not true if what it says is not so in reality.

18: A componential rule of inference is a rule of inference that infers that a formula of a certain form is true (not true) from assumptions or conclusions that certain components of the formula are true or not.

It is not necessary that a rule of inference is componential, but if a rule of inference is componential it has the pleasant property that it contains all the information that is necessary to determine whether its conclusion is true.

19: A logical term is a term that has a componential analysis of the formulas in which it is the only constant i.e. that is true or not depending on which of its components are true or not.

The term "and" is a logical term if it is assumed to satisfy the rules proposed for it under (17). This means essentially that logical terms have the same meaning in any context, and do so because they have a componential analysis.

20: A formal grammar is a set of axiom schemes and inference schemes for sequences of grammatical kind terms, the complete instances of which without any free variables are the statements the formal grammar represents.

Note first that a formal grammar consists of inference schemes that generate conclusions that when fully instantiated are the statements of a formal or natural language, and second that in case the formal grammar lays down a non-natural language it is basically conventional, whereas in case the formal grammar attempts to describe which are the grammatically correct statements of a natural language, this may be a very interesting hypothesis.
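The generative reading of assumption (20) can be illustrated with a toy grammar that produces statements from grammatical-kind terms. The grammar below (Statement, Term, Verb, and the sample words) is an illustrative assumption of mine, not a grammar from the paper.

```python
# A sketch of assumption (20): a formal grammar as rewrite rules over
# grammatical-kind terms, whose fully instantiated results are the
# statements the grammar represents.

import itertools

grammar = {
    "Statement": [["Term", "Verb", "Term"]],
    "Term": [["Romeo"], ["Julia"]],
    "Verb": [["loves"], ["likes"]],
}

def expand(symbol):
    """Yield every sequence of constant words derivable from a grammatical kind."""
    if symbol not in grammar:          # a constant word, fully instantiated
        yield [symbol]
        return
    for production in grammar[symbol]:
        parts = [list(expand(s)) for s in production]
        for combo in itertools.product(*parts):
            yield [word for part in combo for word in part]

statements = [" ".join(s) for s in expand("Statement")]
print(statements)  # 8 statements, among them 'Romeo loves Julia'
```

Read descriptively rather than conventionally, such a grammar is, as the text says, a hypothesis about which sequences of words are grammatically correct statements.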

I conclude this section with an assumption about what makes an inference schema valid:

Naive initial assumption about truth-value semantics and valid inference schemes:


21: A formal inference schema is valid with respect to a truth-value semantics if in every substitution instance of the schema in which the premises are true by the semantics, the conclusion is true by the semantics.

This explains why when reasoning one is interested in the inference schemes used, and in their validity: The inference schemes one uses describe the inferences one makes, and inference schemes that are valid will unfailingly and always lead from true premises to true conclusions. Valid schemes of inference are the basis of valid reasoning.
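For a truth-value semantics of the two-valued kind, assumption (21) can be checked mechanically by exhausting all assignments of truth-values to the variables. A minimal sketch, with illustrative names (formulas are represented as Python functions from an assignment to a truth-value):

```python
# A sketch of assumption (21): an inference schema is valid when every
# truth-value assignment making all premises true also makes the
# conclusion true.

from itertools import product

def valid(premises, conclusion, variables):
    """Check validity by exhausting all truth-value assignments."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# "From p and q, infer q" is valid:
print(valid([lambda e: e["p"] and e["q"]], lambda e: e["q"], ["p", "q"]))  # True

# "From p or q, infer q" is not:
print(valid([lambda e: e["p"] or e["q"]], lambda e: e["q"], ["p", "q"]))  # False
```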

It should be noted carefully that the validity of a formal inference schema is relative to some truth-value semantics, and that there are in the logical literature different semantical systems for the same logical words. Also, a truth-value semantics presupposes and uses the notion of true and false, and leaves much unarticulated that is addressed in a more complicated semantical notion:

Naive initial assumption about structural semantics:

22: A structural semantics for a language L is a set of assumptions that correlates the statements, terms and formulas of language L with structures in some domain D.

Accordingly, this requires a lot more than a truth-value semantics: A language L the terms and statements of which are represented by the structural semantics; some domain D of structures, where the domain is a set of otherwise arbitrary things; and some correlation of the terms and statements in L with structures in D that actually states and contains the semantics for L.

One fundamental problem of a structural semantics, which concerns the relations between language and something else, is that as soon as that something else is something one cannot draw or write on the page, it must itself be represented linguistically.

Below there are examples of both truth-value semantics and structural semantics.

 

 

 


3. Formal Grammar for Basic Logic

Interpunction is made up of the parts of a language that are used to group the other parts of the language. Something like 1/5th to 1/3rd of written text in a natural language consists of interpunction. (In spoken languages interpunction consists of pauses, stresses and intonation.)
 

Interpunction of BL:

 Left bracket             (
 Right bracket            )
 Curly left bracket       {
 Curly right bracket      }
 Straight left bracket    [
 Straight right bracket   ]
 Comma                    ,
 Space                    (a blank)
 Dot                      .
 Double dots              ..


In BL statements are sequences of terms, which are sequences of letters. In BL there are no undefined grammatical kinds other than statement, term and variable terms of both kinds. Statements are indicated with an initial capital letter in BL, and terms with an initial lowercase letter. Since the reader is supposed to know natural language and some basic arithmetic, the very convenient notation of suffixes made from terms for natural numbers is used in BL, and is quite admissible.
 

Free variables, statements and formulas of BL  


A formula of BL is a statement of BL with one or more free variables.

X, Y, Z are statement variables of BL.
If V is a statement variable of BL and i a positive natural number, then Vi is a statement variable of BL.
x, y, z are term variables of BL.
If  v is a term variable of BL and i a positive natural number, then vi is a term variable of BL.


 

This lists the basic logical constants of BL, including their readings. The basic logical constants come in three groups, separated by horizontal lines: the constants for assuming and inferring, both defined in terms of what one may write; the constants for propositional logic; and the constants for general logic. The difference between propositional and general logic is that in propositional logic statements cannot be analysed into their terms, whereas in general logic they can be, and every statement is assumed to be a sequence of terms.

 

 

 Logical Constant      Read as

 |- X                  X may be written
 X ||- Y               if X may be written, then Y may be written
 (X1 .. Xn) ||- Y      if X1 .. Xn may be written, then Y may be written
 -----------------------------------------------------------------------
 ~X                    not X
 X&Y                   X and Y
 XVY                   X or Y
 X-->Y                 if X then Y, X only if Y
 XIFFY                 X if and only if Y
 -----------------------------------------------------------------------
 x=y                   x equals y
 (x1 .. xn)            the sequence of x1 .. xn
 x(y)                  the x of y, y's x
 |y.(Z)                the y (such) that Z
 x|y.(Z)               x is substitutable for y in Z

 

 

 

Note that in fact general logic goes beyond propositional logic in three ways:

  • General logic contains the notion of equality of terms, and indeed the notions of terms and sequences of terms.

  • General logic contains the notion of functional terms.

  • General logic contains explicit abstractions and substitutions.

Finally, there are the basic rules of grammar of BL:

 

Inferences, statements, terms, substitutions, abstracts and quantifiers of BL


Inferences:

If X is a statement, then |-X is a statement.
If X is a statement and Y is a statement, then X||-Y is a statement.
If X1 is a statement and .. and Xn is a statement and Y is a statement, then ((X1 .. Xn) ||-Y) is a statement.

Statements:

If X is a statement, then ~X is a statement.
If X is a statement and Y is a statement, then X&Y is a statement.

If X is a statement and Y is a statement, then XVY is a statement.
If X is a statement and Y is a statement, then X-->
Y is a statement.
If X is a statement and Y is a statement, then XIFFY is a statement.

Terms:

If x is a term and y is a term, then x=y is a statement.
If x1 is a term and ... and xn is a term, then (x1 .. xn) is a term.
If x is a term and y is a term, then x(y) is a term.

Substitutions:

If x is a term and y is a term and z is statement, then x|y.(z) is  a statement.

Abstracts:

If (x1 .. xe .. xi .. xn) is a statement and y is a term, then |xe..xi.((x1 .. xe .. xi .. xn) = y) is a term.

 

This lists the basic grammar of BL in English enriched with the symbols used in BL. Here are some brief notable points:

  • Assumptions to the effect that a statement may be written, or may be written if other statements have been written, are statements as well.
  • The basic logical constants of propositional logic also occur in natural language, and are used there in similar ways as in BL, but with less precision and fewer restrictions.
  • Sequences of terms are terms, and statements are terms.
  • Two terms, say x and y, may be combined into a functional term x(y), read as the x of y or y's x, as in "the father of John", "the neighbor's fridge" and so on.
  • The claim that one term is substitutable for another in a certain statement is a statement.
  • Abstracts are terms and result from statements by replacing some of their terms by variables and - as the phrase goes - binding these by an abstraction-prefix, that starts with | and ends with . and in between lists the terms that occur in the formula that follows the abstraction-prefix, in the order in which they occur in the prefix.
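The grammar rules above can be mirrored as term- and statement-building functions, one constructor per rule. This is a minimal sketch; the Python function names are illustrative assumptions and not part of BL itself.

```python
# A sketch of the grammar of BL: each function mirrors one rule and
# builds the written form of a statement or term as a string.

def neg(X): return f"~{X}"                   # If X is a statement, ~X is a statement
def conj(X, Y): return f"{X}&{Y}"            # X&Y is a statement
def disj(X, Y): return f"{X}V{Y}"            # XVY is a statement
def impl(X, Y): return f"{X}-->{Y}"          # X-->Y is a statement
def iff(X, Y): return f"{X}IFF{Y}"           # XIFFY is a statement
def eq(x, y): return f"{x}={y}"              # If x and y are terms, x=y is a statement
def seq(*xs): return "(" + " ".join(xs) + ")"  # (x1 .. xn) is a term
def func(x, y): return f"{x}({y})"           # x(y): the x of y, y's x
def subst(x, y, Z): return f"{x}|{y}.({Z})"  # x is substitutable for y in Z

print(impl(eq("x", "y"), eq("y", "x")))  # x=y-->y=x
print(func("father", "john"))            # father(john)
```

Building statements only through such constructors guarantees that every string produced is grammatical by the rules above.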

 

 

 


4. Basic Propositional Logic 

Rules of inference for Basic Propositional Logic

 -||-i    (p ||- q) & (q ||- p)   ||-   p -||- q

 -||-e    p -||- q   ||-   p ||- q & q ||- p

 ||-      p ||- q   -||-   |- p-->q

 AE       ((X1 .. Xn-1 Xn) ||- Y)   ||-   |- ((X1& .. &Xn-1) --> (Xn --> Y))

 AI       |- X

 Name     |- (A1 .. An ||- C)
          -||-
          (1) (A1) [_]
           ..
          (n) (An) [_]
          ---------------------------------
          [n+1] (C) [Name,1 .. n]

Given these explanations, here is a set of rules of inference for BPL, with names and abbreviations. The format has been explained above and will be further explained below:

Rules of inference for Basic Propositional Logic

 Name     Rule

 &I       ((X) .. (Y))          ||-   (X&Y)
 &i       (X&Y)                 ||-   (X)
 &e       (X&Y)                 ||-   (Y)
 Vi       (X)                   ||-   (XVY)
 Vi       (Y)                   ||-   (XVY)
 Ve       ((XVY), (~XVY))       ||-   (Y)
 Ii       (Y)                   ||-   (X-->Y)
 Ie       ((X-->Y), (X))        ||-   (Y)
 IFFi     ((X-->Y), (Y-->X))    ||-   (X IFF Y)
 IFFe     (X IFF Y)             ||-   ((X-->Y)&(Y-->X))
 ~i       ((X-->Y), (X-->~Y))   ||-   (~X)
 ~e       (~~X)                 ||-   (X)

This may also be written in a more convenient format, with each premiss on its own line, followed by the conclusion of the premiss(es). The difference between the above table and the following one is only one of notation.

Rules of inference for Basic Propositional Logic

[&i,x,y]
 (x) |- A [_]
 (y) |- B [_]
 ----------------------
 (next) |- (A&B) [&i,x,y]

[&e,x]
 (x) |- (A&B) [_]
 -----------------
 (next) |- A [&e,x]

[&e,x]
 (x) |- (A&B) [_]
 -----------------
 (next) |- B [&e,x]

[Vi,x]
 (x) |- A [_]
 -------------------
 (next) |- AVB [Vi,x]

[Vi,x]
 (x) |- B [_]
 -------------------
 (next) |- AVB [Vi,x]

[Ve,x,y]
 (x) |- AVB [_]
 (y) |- ~AVB [_]
 -------------------
 (next) |- B [Ve,x,y]

[Ii,x]
 (x) |- B [_]
 -------------------
 (next) |- A-->B [Ii,x]

[Ie,x,y]
 (x) |- A-->B [_]
 (y) |- A [_]
 ------------------
 (next) |- B [Ie,x,y]

[IFFi,x,y]
 (x) |- A-->B [_]
 (y) |- B-->A [_]
 ---------------
 (next) |- A IFF B [IFFi,x,y]

[IFFe,x]
 (x) |- A IFF B [_]
 -------------------------
 (next) |- A-->B & B-->A [IFFe,x]

[~i,x,y]
 (x) |- A-->B [_]
 (y) |- A-->~B [_]
 ---------------------
 (next) |- ~A [~i,x,y]

[~e,x]
 (x) |- ~~A [_]
 ------------------
 (next) |- A [~e,x]

[AE,x]
 (x) |- ((A1...An-1 An) ||- C) [_]
 ------------------------------------------
 (next) |- ((A1&...&An-1) --> (An --> C)) [AE,x]

[AI,next]
 (next) |- A [AI,next]

In actual proofs using these rules of inference such rules as are used must be first assumed, as the rules one will argue by.

However, as we shall show below, each stated rule except AI has the property that if the premisses are true by the semantical rules then the conclusion is also true by the semantical rules.

Thus, all stated rules other than AI are logically valid by the stated semantical rules, and can lead from true statements as premisses only to true statements as conclusions. And every assumption introduced by AI can be eliminated by AE in a logically valid way.

All rules are stated in the same format, in which "x" and "y" refer to line-numbers of lines already in the argument; "next" is the line-number of the new and last line to be written on the strength of the rule; and the justifications of the lines that are premisses have not been written out but are indicated by "[_]".

The general format is the statement that "(next) |- F [_]" may be added to any argument as the next line if "(x) |- A1", .. , "(y) |- An" have already been added to the argument.

Here is an elementary example that gives a proof of the principle of contraposition - in words: "If p implies q, then not-q implies not-p" using some of the above rules:

T*0    |- (P-->Q) --> (~Q-->~P)    Theorem

 (1)   |- P-->Q                    [AI,1]
 (2)   |- ~Q                       [AI,2]
 (3)   |- P-->~Q                   [Ii,2]
 (4)   |- ~P                       [~i,1,3]
 (5)   |- ~Q-->~P                  [AE,2,4]
 (6)   |- (P-->Q)-->(~Q-->~P)      [AE,1,5]

This example shows that in fact an assumption about the variables used is needed, namely that they are valid substitutions in the presumed rules of inference, but after merely noting this fact (often missed in elementary introductions to logic), we leave the proof-rules of CPL for the moment.

If any rule of inference other than AI is proposed, it must be shown to be adequate with respect to the supposed semantics, if one wishes to claim that one's proofs are logical or logically valid.

In case of propositional logic the proof of the validity of a rule of inference is usually easily done by a truth-table or by restating a proposed rule "A1,...,An |- C " to the implication " A1&...&An ==> C " and showing [A1&...&An ==> C]=1.
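The truth-table method just described can be carried out mechanically. A minimal sketch, applied to the rule ~i used in the proof above and to the theorem T*0 itself; the helper name `implies` is an illustrative assumption:

```python
# A sketch of the truth-table check: restate a rule as an implication
# and verify that it is true under every truth-value assignment.

from itertools import product

def implies(a, b):
    return (not a) or b

# The rule ~i: from (X-->Y) and (X-->~Y) infer (~X).
# Restated: ((X-->Y) & (X-->~Y)) ==> ~X must hold for all X, Y.
all_true = all(
    implies(implies(X, Y) and implies(X, not Y), not X)
    for X, Y in product([True, False], repeat=2)
)
print(all_true)  # True

# The theorem T*0 proved above, (P-->Q) --> (~Q-->~P), is likewise a tautology:
taut = all(
    implies(implies(P, Q), implies(not Q, not P))
    for P, Q in product([True, False], repeat=2)
)
print(taut)  # True
```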

The reason that the proof of validity of a rule of inference is of importance is that if a rule of inference is not valid, then when used in a proof it may lead from true premisses to false conclusions.

All stated rules except AI are valid with respect to the semantics of CPL, as the reader can easily verify, and AI is only needed to make the necessary assumptions to have any argument, while each and any assumption introduced by AI can be eliminated again by AE, in a valid way.

Proofs then proceed as stated above, on the understanding that the justification indeed corresponds to an instance of the inference rule named in the justification.

 

 


5. Basic Logic of Terms

In propositional logic statements are not analysed into parts smaller than statements. Yet it is obvious from English that most arguments in English also turn on the terms in statements.

In Basic Logic statements are analysed as sequences of terms, and the only presumed logical terms on the level of terms are terms for substitutions and equalities of terms. Thus BL does not depend on rather vague assumptions, borrowed from natural language, to the effect that there are predicate-terms, relation-terms and individual terms.

Here is the group of  axioms  concerning substitution, inference, quantification and equality, that I first state and then briefly discuss. The way of  reading  these axioms is given by the earlier grammatical rules.

Note that the rules of Basic Logic are all definitional axiomatic equivalences, that amount to a spelling out of the meanings of fundamental terms in other fundamental terms by means of equivalences (which automatically serve as introduction and elimination-rules by the properties of equivalences). The terms on the LHS of the equivalences are defined; those on the RHS of the equivalences are the defining terms.

The substitution axioms:

(Sub|) 

  x|y.(z1 .. zn)   

 IFF

   ( x|y.(z1) .. x|y.(zn) )

(Sub+)

  x|y.(z) = x

 IFF

    y = z

(Sub-)

  x|y.(z) = z

 IFF

 ~ y = z

Sub=

  (x1 .. xi .. xn) & xi=yi

 IFF

  (x1 .. yi .. xn)  & yi=xi

Seq=

  (x1 .. xn) = (y1 .. yn) 

 IFF

  x1=y1 & ... & xn=yn 


The resolution axioms:

Res|

 (a1..ai|b1..bi).(x1..xn)  

 IFF

  (a1|b1)..(ai|bi) (x1..xn)

Res

 (a1|b1)..(ah|bh)(ai|bi).(x1..xn) 

 IFF

  (a2|b2)..(ah|bh) ((a1|b1)(x1)..(a1|b1)(xn))

 

The valid inference axiom:

INF0 

 (A1 , .. , An) ||- C  

 IFF

 (v1 .. vn|a1 .. an)(A1&..& An --> C) 

INF1

(A1 , .. , An) ||- C  

IFF

 (v1 .. vn||a1 .. an)(A1&..& An --> C)


The quantifier axioms:

(Ex)

  (Ex).(z) 

 IFF

  ~(|x.(z) = |x.~(x=x))

(x)

  (x).(z)

 IFF

  (|x.~(z) = |x.~(x=x))


The equality axioms:

Def=

  x=y

 IFF

  (Ez).(x=z & y=z)

DefE!

  E!x

 IFF

   x=x

Def∅

  ∅ = |x.~(x=x)

Now for some comments per group of axioms - all of which logically imply introduction and elimination rules by the properties of IFF.

The substitution axioms:

Sub| 

  x|y.(z1 .. zn)                IFF  ( x|y.(z1)  ..  x|y.(zn) )

Sub+

  x|y.(z) = x                   IFF    y = z

Sub-

  x|y.(z) = z                   IFF ~ y = z

Sub=

  (x1 .. xi .. xn) & xi=yi     IFF   (x1 .. yi .. xn)  & yi=xi

Seq=

  (x1 .. xn) = (y1 .. yn)    IFF   x1=y1 & .. & xn=yn 

Sub| says that x is substitutable for y in (z1 .. zn) iff the result of substituting x for y for each of z1 .. zn in (z1 .. zn) is true. Sub| also implies that substitutable terms distribute over the sequences of terms that are statements. The immediate result of Sub| generally is an intermediate term that is simplified by the other substitution axioms and the equality axioms. Note that on the left of Sub| we have x|y prefixed to a sequence of terms that is a statement, while on the right of Sub| all terms in the sequence have the prefix x|y. The terms introduced by Sub| can be removed again by Sub+, Sub- and Sub=.

It should be noted that in (Sub|) the substitutions hold only if the result of the proposed substitutions is a true statement.

Sub+ and Sub- spell out what it means to have x|y prefixed to a term z, and also involve Sub=: x|y.(z) becomes z if y is not equal to z, and becomes x if y is equal to z.

Sub= states the basic substitution axiom for any statement and pair of equal terms, while Seq= states when two n-termed sequences (of any kind, including sequences that are not statements) are equal: precisely if their i-th terms are equal for each i from 1 to n.
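Read over finite sequences, the substitution axioms describe a simple recursive procedure. The following Python sketch (names mine) treats terms as strings and sequences as tuples, with Sub| distributing the prefix and Sub+/Sub- resolving it on atomic terms:

```python
def subst(x, y, term):
    # Sub|: x|y prefixed to a sequence distributes over all its members.
    if isinstance(term, tuple):
        return tuple(subst(x, y, z) for z in term)
    # Sub+ / Sub-: on an atomic term z, x|y.(z) is x if y = z, and z otherwise.
    return x if term == y else term

# x|y.(z1 .. zn) worked out member by member, including a nested sequence:
assert subst("x", "y", ("y", "a", ("y", "b"))) == ("x", "a", ("x", "b"))
```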

The resolution axioms:

Res| 

 (a1..ai|b1..bi)(x1..xn)             IFF  (a1|b1)..(ai|bi) (x1..xn)

Res

 (a1|b1)..(ah|bh)(ai|bi)(x1..xn)  IFF (a2|b2)..(ah|bh)((a1|b1)(x1)..(a1|b1)(xn))

The resolution axioms are mostly procedural: Res| introduces on the LHS convenient abbreviations for the RHS, which has several substitution prefixes. However, Res is a substantial assumption about the order in which the substitution-prefixes of statements are to be worked out: always the left-most first. ("LHS" = "left hand side" and "RHS" = "right hand side".)

The valid inference axiom:

INF0 

 (A1 , .. , An) ||- C      IFF  (v1 .. vn|a1 .. an)(A1 & .. & An --> C) 

INF1

 (A1 , .. , An) ||- C      IFF  (v1 .. vn||a1 .. an)(A1 & .. & An --> C)

The valid inference axioms state and explain valid inference in terms of substitution, in effect as: a principle of inference is valid iff it corresponds to an implication that is true for any appropriate (free) substitution of terms in it. (In general INF1 is safer, because more restrictive, than INF0: INF1 also requires that none of v1 .. vn occur in (A1&..& An --> C).)

The quantifier axioms:

(Ex)

  (Ex).(z) 

 IFF

  ~(|x.(z & x=x) = |x.~(x=x))

(x)

  (x).(z)

 IFF

  (|x.(z & x=x) = |x.(x=x))

The quantifier axioms in effect define the quantifiers "there is" and "for all" in terms of admissible substitutions of any terms of the same kind. Indeed, the term "any" is very appropriate in reading the RHSs of the axioms: (Ex).(z) is true precisely if there is a term x that is freely substitutable in (z) to produce a true statement, while (x).(z) is true precisely if any term that is substitutable for x in (z) produces a true statement. The reason that in this last case the substitution is not required to be free is to accommodate inferences like (x)(y)((x=y) --> (y=y)).

Hence prefixing "(Ex)" to a statement amounts to insisting that a strong form of substitution holds for some x that is in principle substitutable for y occurring in the formula "(Ex)" is prefixed to, while prefixing "(x)" to a statement amounts to insisting that a weak form of substitution holds for any x that is in principle substitutable for y occurring in the formula "(x)" is prefixed to.

It should be noted that in effect both quantifiers are defined in terms of substitutions of any arbitrary terms into statements and that (Ex) is a strengthening and (x) a weakening of the previously assumed substitution axioms.
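On a finite domain this "any substitution" reading of the quantifiers is easy to make concrete. A Python sketch (mine; it ignores the distinction between free and mere substitution, which a finite model does not need):

```python
domain = [0, 1, 2, 3]

def exists(z):
    # (Ex).(z): some term of the domain, substituted for x in z, yields a truth.
    return any(z(x) for x in domain)

def forall(z):
    # (x).(z): every term of the domain, substituted for x in z, yields a truth.
    return all(z(x) for x in domain)

assert exists(lambda x: x * x == 4)    # there is an x with x*x = 4
assert forall(lambda x: x == x)        # everything is self-identical
assert not exists(lambda x: x != x)    # nothing fails to be self-identical
```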

The equality axioms:

Def=

  x=y

 IFF

(Ez)(x=z & y=z)

DefE!

  E!x

 IFF

 x=x

Def∅

  ∅ = |x.~(x=x)

The equality axioms state the basics for equality, and do so in terms of equality (if only for lack of anything else). They supplement the axioms involving = among the substitution axioms, which claim that equals are substitutable "salva veritate" (i.e. truths remain truths after the substitution of equals for equals) and that two sequences are equal iff all their respective terms are equal.

According to Def= two things are equal iff there is some thing each is equal to. This axiom has the interesting and useful consequence that it implies equality is symmetric (i.e. x=y ==> y=x) and transitive (x=y & y=z ==> x=z), but reflexive only conditionally: x=x IFF (Ez)(x=z). This gives us a means to define E!x IFF x=x, and so the equality axioms allow a term expressing existence, identifying existence with being the equal of some thing. And we also have the possibility of defining when x is the equal of nothing: precisely if it doesn't exist.
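These consequences of Def= can be verified by brute force on a small domain. The sketch below (Python, names mine) enumerates every binary relation R on a three-element domain, keeps those satisfying Def= read as "R(x,y) iff there is a z with R(x,z) and R(y,z)", and checks that each is symmetric and transitive, and reflexive exactly on the things that are the equal of something:

```python
from itertools import product

D = range(3)
pairs = [(x, y) for x in D for y in D]

def satisfies_def_eq(R):
    # Def=: x=y IFF (Ez)(x=z & y=z), read over the finite domain D.
    return all(((x, y) in R) == any((x, z) in R and (y, z) in R for z in D)
               for x, y in pairs)

models = []
for bits in product((0, 1), repeat=len(pairs)):
    R = {p for p, b in zip(pairs, bits) if b}
    if satisfies_def_eq(R):
        assert all((y, x) in R for x, y in R)                  # symmetry
        assert all((x, w) in R                                 # transitivity
                   for x, y in R for y2, w in R if y == y2)
        # conditional reflexivity: x=x iff (Ez)(x=z)
        assert all(((x, x) in R) == any((x, z) in R for z in D) for x in D)
        models.append(R)
```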

Note we immediately have intuitive restatements for the quantifier axioms (that correspond more closely to the notation and statements of standard logic): (Ex).Z[x] iff ~(|x.Z[x] = ∅) and (x).Z[x] iff (|x.~Z[x] = ∅). UP

 

 

 

6. Basic Logic

The rules of Basic Logic are all definitional axiomatic equivalences, that amount to a spelling out of the meanings of fundamental terms in other fundamental terms by means of equivalences (which automatically serve as introduction and elimination-rules by the properties of equivalences).

In what follows the axioms for sequences, elements, abstractions and individuals, sets, classes, relations, functions and maps are stated. I first state them and then comment on them:

The sequence axioms:

(E[])

  (Ex)(x=(x1 .. xn))

 IFF

   x1=x1  & .. &  xn=xn

(e[])

  xe(y1 .. yn)

 IFF 

   x=y1&y1=y1   V .. V  x=yn&yn=yn

(*)

  (x1 .. xn) e (y1 .. yn)

 IFF

   x1ey1  & .. &  xneyn


The abstraction axioms:

SAT

(ai .. ak) e |yi .. |yk.(x1 .. yi .. yk .. xn)

 IFF

 (ai .. ak)|(yi .. yk).(x1 .. yi .. yk .. xn)

PRD   (t1)..(vi)..(vk)..(tn)(ET)((t1 .. vi .. vk .. tn) IFF T[vi .. vk])

TRM

(t)(ET)(Evi)..(Evk)(t = |vi .. vk.T[vi .. vk])

Now for my comments, in which I repeat each group of axioms:

The sequence axioms:

(E[])

 (Ex)(x=(x1 .. xn))

 IFF

   x1=x1  & .. &  xn=xn

(e[])

  xe(y1 .. yn)

 IFF 

   x=y1    V .. V   x=yn

(*)

  (x1 .. xn) e (y1 .. yn)

 IFF

   x1ey1  & .. &  xneyn

The existential sequence axiom states that there is a sequence of any terms that exist, while the remaining axioms in this group define when a term is an element of a sequence and when a sequence is an element of a sequence. This last axiom allows the use of Cartesian Products (as they are known).

Note that in the sequence axioms the term e for being an element of is introduced, about which the element axioms then give four existential assumptions complete with proposed English readings that indeed presuppose the axioms for sets and classes.
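The membership axioms (e[]) and (*) can be illustrated directly (a Python sketch, function names mine):

```python
def elem(x, seq):
    # (e[]): x e (y1 .. yn) iff x = y1 V .. V x = yn.
    return any(x == y for y in seq)

def seq_elem(xs, ys):
    # (*): (x1 .. xn) e (y1 .. yn) iff x1 e y1 & .. & xn e yn,
    # i.e. componentwise membership, as in a Cartesian product.
    return len(xs) == len(ys) and all(elem(x, y) for x, y in zip(xs, ys))

assert elem(2, (1, 2, 3))
assert seq_elem((1, "a"), ((1, 2), ("a", "b")))   # (1,"a") e (1,2) x ("a","b")
assert not seq_elem((3, "a"), ((1, 2), ("a", "b")))
```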


The abstraction axioms:

SAT

(ai .. ak) e |yi .. |yk.(x1 .. yi .. yk .. xn)

 IFF

 (ai .. ak)|(yi .. yk).(x1 .. yi .. yk .. xn)

PRD   (t1)..(vi)..(vk)..(tn)(ET)((t1 .. vi .. vk .. tn) IFF T[vi .. vk])

TRM

(t)(ET)(Evi)..(Evk)(t = |vi .. vk.T[vi .. vk])

 

The abstraction axioms specify when a term is in another term; when a sequence of terms satisfies an abstract; when a sequence of terms can be cast in predicative form; and when a term equals an abstract. Also, its two basic axioms IN and ABS introduce the element-term e.

 

IN: x e (t1 .. tn) IFF x=t1 V ... V x=tn: This specifies what it is to be an element of a string, where strings are sequences of terms and terms sequences of characters. All that is required is that the term is identical with one (or more) of the terms of the sequence.

 

ABS: x e |y.(t1 .. tn) IFF x|y.(t1 .. tn): This is the basic definition of abstraction in terms of substitution of sequences of terms into n-ary abstracts. A k-ary abstract is an expression of the form |y1 .. |yk.(x1 .. y1 .. yk .. xn) where |y1 .. |yk is the abstraction and consists of a sequence of terms that occur in that order in the string (x1 .. y1 .. yk .. xn).

 

The net effect of the RHS is that x occurs in (t1 .. tn) if y does. NB that x|y.(t1 .. tn) means the result of substituting x for y in (t1 .. tn) is true i.e. what the resulting string represents is so in the reality presupposed.

 

PRD: (t1)..(vi)..(vk)..(tn)(ET)((t1 .. vi .. vk .. tn) IFF T[vi .. vk]): In effect: for every string that is a statement or formula there is an equivalent string with a predicate-subject structure. The predicate-expressions that are thus introduced correspond to that part of an abstract that does not occur in its abstractions, and the subject-expressions to those parts that do. This axiom allows the abstraction of ordinary predicates of English, as e.g. "loves" in loves[x,y] from "x loves y", but also e.g. "loves*" in loves*[x,y] from "x loves the appearance of y passionately in spite of y's scorn".

TRM: (t)(ET)(Evi)..(Evk)(t = |vi .. vk.T[vi .. vk]): In effect: every term is identical with some abstract. Put otherwise: every term is definable in other terms in the form of an abstract. Note the simplest form here is: t = |y.(y=t), i.e. t is defined to be the y that are equal to t.
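In programming terms an abstract behaves like a predicate, and SAT like application: a term is an element of the abstract precisely if substituting it yields a truth. A small Python analogy (mine, not part of BL's official notation):

```python
def abstract_of(t):
    # TRM's simplest instance: t = |y.(y=t), "t is the y that are equal to t".
    # The abstract |y.(y=t) is rendered as a function from candidate terms
    # to truth-values.
    return lambda y: y == t

a = abstract_of(42)
assert a(42)        # 42 e |y.(y=42), by SAT
assert not a(7)     # 7 is not an element of it
```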

 

 

 

7. Basic Set Theory


The element axioms:

Z1

(Ey)(Ex)(Ez)     (xey & yez & ~(x=y) & ~(y=z) & ~(x=z))

 There is a non-empty set

Z0

(Ey)(Ez)~(Ex)   (xey & yez & ~(x=y) & ~(y=z) & ~(x=z))

 There is an empty set

C1

(Ey)(Ex)~(Ez)   (xey & yez & ~(x=y) & ~(y=z) & ~(x=z))

 There is a non-empty class

C0

(Ey)~(Ex)(Ez)   (xey & yez & ~(x=y) & ~(y=z) & ~(x=z))

 There is an empty class


Individuals, sets, classes, relations and functions

 IND

   xeIND 

 IFF

 ~(Ey) ((yex) & ~(x=y))

 SET

   xeSET

 IFF

 ~(Ey) ((xey) &   (x=y))

 CLS

   xeCLS

 IFF

 ~(Ey) ((xey) & ~(x=y) )

 NUL

   xeNUL

 IFF

   xeSET & ~(Ey)(yex)

 VOID

   xeVOID

 IFF

   xeCLS & ~(Ey)(yex)

 REL

   xeREL

 IFF

   (Eya)..(Eyk)(Ez1)..(Ezn)( x = (|ya .. |yk)(z1  .. zn) )

 FNC

   xeFNC

 IFF

   xeREL & (y1)(y2)(y3)((y1,y2)ex & (y1,y3)ex--> y2=y3)

Now for my comments, in which I repeat each group of axioms:

The element axioms:

Z1

(Ey)(Ex)(Ez)   (xey & yez & ~(x=y) & ~(y=z) & ~(x=z))

 There is a non-empty set

Z0

(Ey)(Ez)~(Ex) (xey & yez & ~(x=y) & ~(y=z) & ~(x=z))

 There is an empty set

C1

(Ey)(Ex)~(Ez) (xey & yez & ~(x=y) & ~(y=z) & ~(x=z))

 There is a non-empty class

C0

(Ey)~(Ex)(Ez) (xey & yez & ~(x=y) & ~(y=z) & ~(x=z))

 There is an empty class

Note that here the term for being an element of is introduced, about which the element axioms then give four existential assumptions complete with proposed English readings that indeed presuppose the axioms for sets and classes.

It should be noted that the general upshot of the element axioms concerns in fact the expression (xey & yez) and specifies in effect that there are terms y that both have and are elements ("non-empty sets"); there are terms y that are but have no elements ("empty sets"); there are terms y that have but are not elements ("non-empty  classes"); and there are terms y that neither are nor have elements  ("empty classes").

Thus in effect there is a three-tier system with respect to being an element of: on the bottom row things that have no elements other than themselves ("individuals"), on the top row things that are no elements other than themselves ("classes"), and in between the two, things that both are and have elements ("sets").

The last group of axioms gives axiomatic equivalences for the fundamental terms to do logic and mathematics with, all defined in terms of e and = and abstraction.

Individuals, sets, classes, relations and functions:

 IND

   xeIND 

 IFF

 ~(Ey) ((yex) & ~(x=y))

 SET

   xeSET

 IFF

 ~(Ey) ((xey) &   (x=y))

 CLS

   xeCLS

 IFF

 ~(Ey) ((xey) & ~(x=y) )

 NUL

   xeNUL

 IFF

   xeSET & ~(Ey)(yex)

 VOID

   xeVOID

 IFF

   xeCLS & ~(Ey)(yex)

 REL

   xeREL

 IFF

   (Eya)..(Eyk)(Ez1)..(Ezn)( x = (|ya .. |yk)(z1  .. zn) )

 FNC

   xeFNC

 IFF

   xeREL & (y1)(y2)(y3)((y1,y2)ex & (y1,y3)ex--> y2=y3)

Individuals are their own elements; sets are proper elements; null-sets are sets without elements; classes are only elements of themselves; void classes are classes without elements; relations are abstractions (so in Basic Logic in the end everything is a relation or a statement about relations, while each and any relation corresponds to an abstract); and functions are relations whose last  terms are unique given their other terms.
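The FNC condition in particular is mechanically checkable for finite relations (a Python sketch, names mine):

```python
def is_function(rel):
    # FNC: a relation is a function iff its last term is unique given the first:
    # (y1,y2)ex & (y1,y3)ex --> y2=y3.
    seen = {}
    for a, b in rel:
        if seen.setdefault(a, b) != b:
            return False
    return True

assert is_function({(1, "a"), (2, "b")})
assert not is_function({(1, "a"), (1, "b")})   # 1 is paired with two distinct values
```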

The reader should realize at this point that what we've constructed, if consistent, should be adequate for most mathematics, since mathematics can be formulated in terms of sets, relations and functions.
 

 


8. Semantics

The most useful simple definition of "semantics" is "theory of meaning". In general terms, it is the end of a semantics to explain how a string of text (or anything else supposedly meaningful) can represent something that's normally neither text nor contained in the text.

There are three immediate problems with any semantics:

1. the differences involved in meaning and denotation
2. the difficulties involved in treating truth linguistically and in stepping outside language
3. the meanings of representing and meaning

These problems are as follows:

1. the differences involved in meaning and denotation:

There are intuitively two different though related meanings of "meaning", which can be illustrated by considering the meanings of the terms "elephant" and "mermaid": Both terms have a meaning according to any qualified speaker of English, in the sense that any qualified speaker can mentally imagine elephants and mermaids, but the terms differ in that there really are elephants while there really are no mermaids.

This is intuitively a quite clear distinction, but it is hard to spell out precisely, for one thing because for some terms one just does not know whether there are any real things they stand for. It is also connected with a number of oppositions, such as "meaning" and "denotation", to mark the difference between representing mere ideas ("meanings") that do not represent anything real and representing ideas that do represent something real ("denotations"). And indeed I will use the terms "meaning" (intuitively: the idea a term represents) and "denotation" (intuitively: whatever things an idea represents) to mark this opposition.

2. the difficulties involved in treating truth linguistically and in stepping outside language:

There is an intuitively correct rendering of what it is to be a true statement that was first clearly stated by Aristotle: A statement is true if and only if it says what is the case. We shall restate this somewhat as:

(*) A statement is true if and only if what it represents represents what is the case.

This rendering has the great merit of explaining why humans would be interested in believing true statements: It provides them knowledge about their real environments and themselves. True statements represent ideas that represent some real fact(s).

The problem with this intuitively correct rendering is that for most true statements one knows, what is the case is not linguistic or a part of language. Therefore, to spell out in language what truth is, one needs some way to represent linguistic terms in language, and some way to represent what linguistic terms denote and mean in language, without mixing up the differences.

In what follows the well-known means of quotation-marks is used to mark the differences between terms for terms, terms for ideas (i.e. what terms mean), and terms for things (i.e. what ideas refer to). To indicate that a term T is a term for terms, T will be included in double quote-marks, and "T" will be read as "the term T". To indicate that a term T is a term for ideas, T will be included in single quote-marks, and 'T' will be read as "the idea of T". To indicate that a term T is a term for things, T will be unquoted, and T will be read as "T" or "the thing T".

3. the meanings of representing and meaning:

The best systematic general way to think about meaning and denotation is in terms of representing, as e.g. a map of England represents the territory of England.

This is an analogy which is helpful in many ways, including that:

 

- the map is usually not the territory (even if it is part of it)
- the map usually does not represent all of the territory but only certain kinds of things occurring in the territory, in certain kinds of relations
- the map usually contains a legend and other instructions to interpret it
- the map usually contains a lot of what is effectively punctuation
- maps are on carriers (paper, screen, rock, sand)
- the map embodies one of several different possible ways of representing the things it does
- the map usually is partial, incomplete and dated
- having a map is better than having no map at all to understand the territory the map is about
- maps may represent non-existing territories and include guesses and declarations to the effect "this is uncharted territory"

 


The problem here is to find a useful central definition of representing that can be used to explain meaning and reference systematically.
Using the tools introduced above: 

Maps, representations and simulations

 

 f maps F

 IFF

  (x,y)eF iff f(x)=y

 

 repr(X,Y,f) 

 IFF

  (EF)(f maps F & (X,Y) inc F)

 

 sim(X,Y,d,e) 

 IFF

  (repr(X,Y,d) & repr(Y,X,e))

The notion of mapping in Basic Logic is defined using functors: A map is a  relation the last terms of which are given by a characteristic functor for their first terms. This makes mappings functional without making functions necessarily mappings, since it may happen that a given function comes without functors for its values. Note that functors were introduced in the grammar of BL as "x(y)" and correspond to the English phrase "the x of y" or "y's x" and that a characteristic functor f for a functional relation F is one  that allows the finding of all y such that (x,y)eF for any x using the functor f.

So functors are things that are capable of responding systematically to other things (when these are presented to them in an appropriate way), while real things in general may be presumed to be functorial in certain ways in that real things respond to other real things in some definite determinate ways.

Maps can be seen as functions with an extra: both f(x) and y are explicitly included, and indeed one can conceive of a map as a functor for a function: having the functor f enables one to find all second terms of F given first terms of F, by presenting these to f. Thus, if F is the relation 'fathered', 'father of' may be taken as a functor for it. That is: with (F = fathered) one has for each x one y that is the father of x, and one has for each x the f that somehow finds or produces the father of x, whoever he may be. (In actual practice such an f may be a passport or a DNA-test.)

One often helpful way of thinking about this is that f(x) is the program to find y. This makes sense already for simple functions like doubling, squaring or taking the logarithm: as soon as the numbers grow largish, the actual results - say the double, the square and the logarithm to 10 places of 1234.567.890 - cannot be found in a lookup table (as is in fact presumed by a mere function!) and have to be somehow calculated, and to calculate them we in fact apply a paper or electronic algorithm or program (that takes a certain time, and itself is a definite entity or thing). And indeed an ordinary handheld calculator contains programs (functors in BL terms) to find values for inputs, which it displays once calculated.
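The contrast between a mere function (a lookup table of pairs) and a functor (a program that produces the value) can be put as follows, with "f maps F" checked extensionally (a Python sketch; names and example data are mine):

```python
# F is the relation 'double of', given extensionally as a table of pairs;
# f is a functor for it: a program that finds the second term from the first.
F = {(x, 2 * x) for x in range(5)}

def f(x):
    return 2 * x

def maps(f, F):
    # f maps F iff (x,y)eF iff f(x)=y, over the terms occurring in F.
    firsts = {x for x, _ in F}
    return all(((x, y) in F) == (f(x) == y) for x in firsts for _, y in F)

assert maps(f, F)
# The program also answers outside the table, which a bare lookup table cannot:
assert f(1000000) == 2000000
```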

And it also should be noted that there are in nature and in experience very many quite ordinary things that work like functors, from light-switches and other switches, to chemical reagents that allow one to test whether a certain substance is gold (or helium etc.), to software and hardware for many kinds of tasks. Also, one is oneself functorial in many ways, including laughing and blushing, both of which are functorial processes that allow others to see that one is affected in a certain way by something.

The notion of representation given here uses mappings. The terms X and Y in it are any terms for classes or sets, and  "(X ,Y) inc F" abbreviates "(x)(y)(xeX & yeY --> (x,y) e F)". Note that the same X and Y may be included in different mappings by another g and G, and that what is mapped in X to what in Y depends on the mapping that is chosen. The general point of a representation of Y by f on X is that one can infer structures and relations in Y from structures and relations in X by way of f and F, just as with an ordinary paper map, where one can infer the lay of the land from a map about it.

Also, with paper maps one has a rather clear instance of "f(x)" in the legend. Thus, on ordinary political maps one can infer the approximate number of inhabitants of a city from its color or shape on the map, where this shape or color then encodes 'the number of inhabitants of'.

The notion of simulation amounts to mutual representation. Convenient names for the functors d and e are resp. "denotes" and "encodes". Note that by the rules for equality we have d(e(y))=y and e(d(x))=x, given that d(x)=y and e(y)=x. Alternative names for this relation of simulation are: "similar", "isomorphic", "analogous" and "mapping". (A simple sketch illustrating the general idea - an image in the original - depicts Ideas on the left and a World on the right, to which one may add a linguistic representation thus: B0(B1, B2(B3)).)

The terms "maps", "represents" and "simulates" represent the structures fit and used for mapping, representing and simulating things and relations - in short: structures - linguistically. And especially simulations are formal analogies: If X and Y do simulate each other - are analogies, similar structures, mappings of each other - using d and e this means that certain kinds of things and relations in X correspond to certain kinds of things and relations in Y and conversely, so that when one knows the denoting functor d one can infer aspects of Y from aspects of X and if one knows the meaning functor e one can infer aspects of X from aspects of Y. (Note that while these undo each other formally, intuitively they differ: The entity a term or idea denotes is related differently to the term or idea than an idea or thing is related to the term or the idea that represents it.)

But none of this explains the differences between "means" and "denotes". Now intuitively, what is meant by a statement or term is some idea of some speaker, if only an idea of the speaker himself, and what is referred to or denoted is some thing(s) or structure(s) in some world, real or fictional, but supposedly such that diverse speakers can find evidence about it and come to agreements with other speakers about what is and is not in it.

For semantics we shall use "[q](L,I,T,m,d)=1", which may be read as "q is an element of L that means something in I via m, and what q means denotes something in T via d", where "[q]" is a conveniently brief variant of a mapping term: "[q]" is read as "the truth-value of q". One reason to introduce language, ideas, worlds and mappings as explicit parameters is that they are often left out as "understood from the context".

It should be noted that this notation comprises quite a lot in a compact way that may be also spelled out somewhat differently - as will indeed be done in later chapters. Here I shall merely sketch the general foundations.

Now suppose that a language L, ideas I and things T are given such that L and I simulate each other and I and T simulate each other, in both cases by the same functors d and e that involve the relations of denoting and encoding that comprise (L,I) and (I,T), i.e. (x)(y)(xeL & yeI --> (x,y)eD & (y,x)eE) and (x)(y)(xeI & yeT --> (x,y)eD & (y,x)eE). Note that by earlier conventions and assumptions this means that there are things or processes that denote and mean.

This general assumption may be fleshed out and qualified in several ways, but  the general idea is that one has the functors meaning and denoting that relate both linguistic items and human ideas and human ideas and entities in some world or domain, in such a way that the structures that are related are similar.

Semantics 

 

 sim(L,I,m) & sim(I,T,d)

 

 [q](L,I,T,m,d)=1

 IFF

  qeL & m(q)≠∅ & d(m(q))≠∅

 

 [q](L,I,T,m,d)=0

 IFF

  qeL & ~( m(q)≠∅ & d(m(q))≠∅ )

 

 [q](L,I,T,m,d)=1 V [q](L,I,T,m,d)=0

 IFF

  (Eta)..(Etk)(Et1)..(Etn)(taeL & .. & tneL
   & (|ta..|tk)(t1 .. ta .. tk .. tn) = q)

 

 [p-->q](L,I,T,m,d)=1

 IFF

  [p](L,I,T,m,d) <= [q](L,I,T,m,d)

 

 [x=y](L,I,T,m,d) =1

 IFF

  xeL & yeL & m(x)=m(y) & d(m(x))=d(m(y))

 

 [E!x](L,I,T,m,d) =1

 IFF

  (Ex')(Ex'')(x'eI & x''eT & xeL & m(x)=x' & d(x')=x'')

 

 [(Ex)(|t.(z))](L,I,T,m,d) =1

 IFF

  xeL & teL & zeL & m(x||t(z))eI & d(m(x||t(z)))eT

 

 [(Ex)~(|t(z))](L,I,T,m,d) =1

 IFF

  xeL & teL & zeL & ~[m(x||t(z))eI & d(m(x||t(z)))eT]

 

 [(x)(|t(z))](L,I,T,m,d) =1

 IFF

  xeL & teL & zeL --> m(x|t(z))eI & d(m(x|t(z)))eT

 

 

 

 

 

 Pos[q](L,I,m,d)=1 

 IFF

  (ET)(qeL & m(q)eI & d(m(q))eT)

 

 Nec[q](L,I,m,d)=1 

 IFF

  ("T)( qeL --> m(q)eI & d(m(q))eT)

 

 

 

 

 

 S = STRUCTURES(L,I,T,m,d)

 IFF

 S = (|q)(qeL & qePROP & m(q)eI)

 

 T = THOUGHTS(L,I,T,m,d)

 IFF

 T = (|t)(teL & teTERM & m(t)eI)

 

 O = OBJECTS(L,I,T,m,d)

 IFF

 O = (|t)(teL & m(t)eI & d(m(t))eT)

 

 F = FACTS(L,I,T,m,d)

 IFF

 F = (|q)(qeL & m(q)eI & d(m(q))eT)

Here are some brief comments:

 

 sim(L,I,m) & sim(I,T,d)

This assumes in effect that languages simulate ideas and ideas simulate things, and that indeed the same functors are involved that are converses of each other. This may be an idealization, but it is convenient (and in fact can be achieved formally in any case). In both cases, the main import of "simulates" is "has the same structures".

Note that what is explicit here in semantics is usually left out: There is a domain of human ideas that is denoted by linguistic structures and that denotes things in some possible world or domain.

 [q](L,I,T,m,d)=1

IFF

 qeL & m(q)≠∅ & d(m(q))≠∅

 [q](L,I,T,m,d)=0

IFF

 qeL & ~( m(q)≠∅ & d(m(q))≠∅ )

Formally, all semantics adds is terminology for explicitly formulating (supposed) truths about some presumed world or domain, but it does so by fitting this within linguistic structures that represent ideas of things.

For intuitively to state a truth is to represent linguistically some idea that represents something in some presumed domain of things of some kind, and this is precisely what the first assumption says, just as the second says that to state a falsehood is to state something linguistic that fails to represent an idea that represents something in the presumed domain. And indeed two reasons to be explicit about both ideas and domains next to languages are that different languages may be used to describe parts of the same domain, and that different domains - fact, fiction, guess, past, future, possibility, indeed anything whatsoever - may be described by the same language, while in any case one describes a domain by describing one's ideas about it.

Note that the import of the first equivalence is that a statement is true iff it both denotes an idea and that idea denotes a fact, all in the presumed domains of human ideas and the world spoken or thought about. Therefore in the second equivalence, which stipulates when a statement is not true, this may be so for several reasons: there is no idea the term represents, or there is no thing the idea represents, or both. That is alternatively: [q](L,I,T,m,d)=0 IFF qeL & [m(q)=∅ V d(m(q))=∅], which expresses that either the term q denotes nothing ("is not meaningful") or else, while the term denotes some idea, that idea denotes nothing in the world T spoken about ("is not true").
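The two truth conditions can be made concrete with a toy language, toy ideas and a toy world (a Python sketch; all the data is mine, chosen to echo the earlier elephant/mermaid example):

```python
# [q](L,I,T,m,d)=1 iff q is in L, m gives it an idea, and d gives that idea a thing.
L = {"there are elephants", "there are mermaids"}
m = {"there are elephants": "idea-of-elephants",
     "there are mermaids": "idea-of-mermaids"}        # both statements are meaningful
d = {"idea-of-elephants": "elephants-in-the-world"}   # but only one idea denotes

def truth_value(q):
    return int(q in L and q in m and m[q] in d)

assert truth_value("there are elephants") == 1
assert truth_value("there are mermaids") == 0   # meaningful, yet not true
```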

  [q](L,I,T,m,d)=1 V [q](L,I,T,m,d)=0          IFF   (Eta)..(Etk)(Et1)..(Etn)(taeL & .. & tneL  
                                                               & (|ta..|tk)(t1 .. ta .. tk .. tn) = q)

  [p-->q](L,I,T,m,d)=1 

 IFF

  [p](L,I,T,m,d) <= [q](L,I,T,m,d)

The first of these assumptions says in effect that any statement is true or not precisely if it is in fact an abstraction. Indeed, this makes statements kinds of abstractions, and this seems a sound insight most people tend to miss most of the time: Even the truths one knows are partial, abstract and selected, and normally what one states even if true states only a small part of what is truly going on.

The second of these assumptions uses the fact that truth-values were assumed to be numbers, and in effect defines an implication to be true iff its consequent is true if its antecedent is true. (This recourse to numbers is not necessary but very convenient, since having numbers one has known structures for and properties of numbers one can use. This also is convenient when logic is extended to probability, as will be done in a later chapter.)

Here and elsewhere in formal semantics the reader should realize that a considerable part  of any good linguistic analysis of a concept consists in defining it in terms of earlier presumed notions, and that definitions take the form of equivalences.

 

 [x=y](L,I,T,m,d) =1

 IFF

 xeL & yeL & m(x)=m(y) & d(m(x))=d(m(y))

 

 [E!x](L,I,T,m,d) =1

 IFF

 (Ex')(Ex'')(x'eI & x''eT & xeL & m(x)=x' & d(x')=x'')

These two postulates state when an equality is true (iff its two terms denote the same idea, which denotes the same thing) and when a statement that an entity exists is true (iff the term denotes some idea that denotes something in the domain).

 

 [(Ex)(|t(z))](L,I,T,m,d) =1

  IFF

  xeL & teL & zeL & m(x||t(z))eI & d(m(x||t(z)))eT

 

 [(Ex)~(|t(z))](L,I,T,m,d) =1

  IFF

  xeL & teL & zeL & ~[m(x||t(z))eI & d(m(x||t(z)))eT]

 

 [(x)(|t(z))](L,I,T,m,d) =1

  IFF

  xeL & teL & zeL --> m(x|t(z))eI & d(m(x|t(z)))eT

Here the truth-values for quantifiers are given in terms of their satisfying abstractions: that there is some x that satisfies the things t that are z is true in T by d iff x, t and z belong to the language L that represents domain T by functor d, while the denotation of freely substituting x for t in z exists in T. Similarly for there NOT being some x that satisfies the things t that are z. The assumption for "for all" involves mere substitution, to accommodate valid inferences like (x)(y)(x=y --> y=y).

 

 Pos[q](L,I,m,d)=1 

 IFF

 (ET)(qeL & m(q)eI & d(m(q))eT)

 

 Nec[q](L,I,m,d)=1 

 IFF

 (T)(qeL --> m(q)eI & d(m(q))eT)

Here logical possibility and logical necessity are defined by means of quantification over domains. The main reason to include these definitions is to show how modalities like possibility and necessity may be dealt with plausibly: By quantifying over several domains (then often called "possible worlds" - which is somewhat misleading for several reasons, one of which is that one may also quantify over domains with supposedly impossible or fictional entities).
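The quantification over domains can be sketched directly: keep the language and ideas fixed, vary the denotation over several domains, and read Pos as "true in some domain" and Nec as "true in every domain". A minimal Python sketch, with both sample statements and both sample "worlds" invented for illustration:

```python
# Toy model: one contingent and one necessary statement, checked
# against two domains ("possible worlds"), each with its own d.
I = {"idea:rain", "idea:2+2=4"}
m = {"it rains": "idea:rain", "2+2=4": "idea:2+2=4"}
domains = [
    {"idea:rain": "fact:rain", "idea:2+2=4": "fact:2+2=4"},  # world 1
    {"idea:2+2=4": "fact:2+2=4"},                            # world 2: dry
]

def true_in(q, d):
    """q is true under denotation d iff its idea denotes something there."""
    return m.get(q) in I and d.get(m.get(q)) is not None

def pos(q):
    """Pos[q] = 1 iff q is true in SOME domain."""
    return 1 if any(true_in(q, d) for d in domains) else 0

def nec(q):
    """Nec[q] = 1 iff q is true in EVERY domain."""
    return 1 if all(true_in(q, d) for d in domains) else 0

print(pos("it rains"), nec("it rains"))  # 1 0: possible, not necessary
print(pos("2+2=4"), nec("2+2=4"))        # 1 1: true in all domains
```

Nothing stops the list of domains from containing fictional or supposedly impossible ones, which is one reason "possible worlds" is a somewhat misleading name for them.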

 

 S = STRUCTURES(L,I,T,m,d)

  IFF

 S = (|q)(qeL & qePROP & m(q)eI)

 

 T = THOUGHTS(L,I,T,m,d)

  IFF

 T = (|t)(teL & teTERM & m(t)eI)

 

 O = OBJECTS(L,I,T,m,d)

  IFF

 O = (|t)(teL & teTERM & m(t)eI & d(m(t))eT)

 

 F = FACTS(L,I,T,m,d)

  IFF

 F = (|q)(qeL & qePROP & m(q)eI & d(m(q))eT)

This uses earlier definitions to define some useful terms: The structures of language L are precisely the ideas represented by the statements of L and the thoughts of a language L are precisely the ideas represented by the terms in L. The facts of a language L related to a domain T by a functor d are precisely the statements in L that do represent some structure(s) in T, while the objects of language L are precisely the terms in L that represent some thing(s) in T.
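The four defined collections translate naturally into set comprehensions over a toy language. The sketch below is mine, with all sample statements, terms, ideas and facts invented: structures and thoughts need only the meaning-map m, while facts and objects also need the denotation-map d into the domain T.

```python
# Toy model: a language L split into statements (PROP) and terms (TERM).
PROP = {"snow is white", "snow is green"}
TERM = {"snow", "pegasus"}
L = PROP | TERM
I = {"idea:snow-white", "idea:snow-green", "idea:snow", "idea:pegasus"}
T = {"fact:snow-white", "thing:snow"}
m = {"snow is white": "idea:snow-white", "snow is green": "idea:snow-green",
     "snow": "idea:snow", "pegasus": "idea:pegasus"}
d = {"idea:snow-white": "fact:snow-white", "idea:snow": "thing:snow"}

# STRUCTURES: statements of L that represent an idea.
structures = {q for q in L if q in PROP and m.get(q) in I}
# THOUGHTS: terms of L that represent an idea.
thoughts = {t for t in L if t in TERM and m.get(t) in I}
# OBJECTS: terms whose idea also denotes something in the domain T.
objects = {t for t in L if t in TERM and m.get(t) in I
           and d.get(m.get(t)) in T}
# FACTS: statements whose idea also denotes something in the domain T.
facts = {q for q in L if q in PROP and m.get(q) in I
         and d.get(m.get(q)) in T}

print(facts)    # {'snow is white'}
print(objects)  # {'snow'}
```

Note that "pegasus" is a thought but not an object, and "snow is green" a structure but not a fact: the maps m and d come apart exactly where meaning and truth come apart.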

Of course, the structures and thoughts of L may have non-empty intersection, and the same holds for the objects and facts of L. And in general these distinctions derive from the language L.

Also, the things that make up L are themselves structures (made from terms); indeed anything whatsoever, if it can be represented by language, is supposed to be a structure, and so the statements and terms of a language are structures as well.

Note that in each case the reference to the domain and the functor is quite essential: Without these nothing true or real is conveyed, whatever the appearances. And note also that the domain and the language may be - and usually are - known to be simplifications of other domains and modes of speech. (The models - descriptions, explanations, representations, diagrams - used in science normally disregard all the possible facts that are supposedly irrelevant to whatever is modelled.)

As the given definitions of logical possibility and necessity likewise convey, the notion of a true statement as defined here is relative in several senses and yet objectively so: it is relative to a language, a domain and a mapping relating these, but once these are given or fixed it is an objective matter whether the specified domain does or does not contain something that corresponds to the statement by the mapping (even if one does not know whether the domain contains a structure represented by the statement).

...................................................

 

Dec 15, 2009: This is copied from LPA02 to start with. It now also is the start of a new version directly called Basic Natural Logic.

Maarten Maartensz

 

 
