Logic

 On the question why human beings cannot be computers 

 


Here follows a fairly simple argument why human beings cannot be computers, which I formulated in 1995. It is similar to an argument by J.R. Lucas, and was sent to him in 1998. Originally, I wrote it to clarify for myself what R. Penrose might have had in mind when writing "The Emperor's New Mind", which attempted to prove on the basis of Gödel's Incompleteness theorems that human beings cannot be computers.

Penrose later produced another book, "Shadows of the Mind", dedicated to the same question. Both it and the earlier book are well written, and by a mathematician, but the logic of the argument is far from clear, and indeed has been faulted by professional mathematical logicians, such as S. Feferman.

J.R. Lucas stated his argument to the same effect and from the same Gödelian premise much earlier, namely in 1961, and was likewise met by mathematical logicians with the objection that he did not properly understand the import of Gödel's ideas.

J.R. Lucas has an interesting website, on which there are many papers and reprints. It so happened that I only read his book "Free Will" in the beginning of 2000, and I hereby recommend it, because its first half is an excellent introduction to the arguments about free will there have been through the ages, and its second half is a clearer and shorter statement of the case than Penrose's later arguments to the same effect from the same premise.

I first give my version of the argument, then an explanation of the logic of it, then discuss some problems with this whole line of arguments, and I end with my own position on whether human beings are computers.

1. The Argument:

Notation:

"f(a)"

 =

"a has the property f"

"aBf(a)"

 =

"a believes the statement that a has the property f"

""f(a)"="g(h(b))""

 =

"the statement that a has the property f is the same as
the statement that b with the property h has the property g"

"F(x) iff F(y)"

 =

"any two statements all the same except for having "x"
where the other has "y" or "y" where the other has "x"
are either both true or both not true"

"Ca"

 =

"a is a computer"

"["f(a)"]"

 =

"a is in the state of belief that corresponds to
the statement that a has the property f"

"p(a,["f(a)"])"

 =

"there is a program that a runs or that runs a that has put a in the state of belief that corresponds to the statement that a has the property f"

Assumptions:

1.   aBf(a) --> ~aB~f(a)

2.   aBaBf(a) --> aBf(a)

3.   aB~aBf(a) --> ~aBf(a)

4.   "f(a)"="g(h(b))" --> F(f(a)) iff F(g(h(b)))

5.   Ca --> aBf(a) iff p(a,["f(a)"])

6.   Ca --> p(a,["f(a)"]) iff ["f(a)"]

7.   Ca --> p(a,["p(a,["f(a)"])"]) --> p(a,["f(a)"])

8.   Ca --> p(a,["~p(a,["f(a)"])"]) --> ~p(a,["f(a)"])

9.   "g(a)"=~p(a,["g(a)"])

10.  aB("g(a)"="~p(a,["g(a)"])")

------------------------Ergo:-------------------------

11.  Ca --> aBaBg(a) --> aBg(a)                          by (2)

12.  Ca --> aBg(a) iff p(a,["g(a)"])                     by (5)

13.  Ca --> aBg(a) --> p(a,["~p(a,["g(a)"])"])           by (3,9,12)

14.  Ca --> aBg(a) --> ~p(a,["g(a)"])                    by (8,13)

15.  Ca --> aBg(a) --> ~aBg(a)                           by (12,14)

16.  Ca --> ~aBg(a)                                      by (15)

17.  Ca --> g(a) iff ~aBg(a)                             by (9,12)

18.  Ca --> g(a)                                         by (16,17)

19.  Ca --> ~aBaBg(a)                                    by (11,16)

20.  Ca --> ~aBaBg(a) & ~aBg(a) & g(a)                   by (16,18,19). Qed.


In short: If one is a computer then one does not believe that one believes that there is no program that puts one into the state of belief that corresponds to this same statement (that there is no program that puts one into the state of belief that corresponds to this same statement) and one does not believe that there is no program that puts one into the state of belief that corresponds to this same statement (that there is no program that puts one into the state of belief that corresponds to this same statement) and indeed there is no program that puts one into the state of belief that corresponds to this same statement (that there is no program that puts one into the state of belief that corresponds to this same statement).


And therefore, since human beings obviously are quite capable of believing that their own beliefs have not been produced by some program (whether or not they are really right), it follows that human beings cannot be computers, since computers cannot have such beliefs (if any).
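For readers who want a mechanical check of the propositional core of steps (11)-(20), here is a minimal sketch in Python (my own addition, not part of the argument): it reads aBg(a), aBaBg(a), p(a,["g(a)"]) and g(a) as plain truth-valued atoms, encodes the assumptions actually used, and verifies by brute force that every valuation satisfying them satisfies (20).

    from itertools import product

    # Brute-force check of the propositional skeleton of steps (11)-(20),
    # on the hypothesis Ca. Readings of the atoms:
    #   B = aBg(a),  BB = aBaBg(a),  P = p(a,["g(a)"]),  G = g(a).
    # By (9) and the substitution licensed by (4), p(a,["~p(a,["g(a)"])"])
    # is represented by P as well.

    def implies(x, y):
        return (not x) or y

    def assumptions_hold(B, BB, P, G):
        return (implies(BB, B)          # (2):  aBaBg(a) --> aBg(a)
            and (B == P)                # (5)/(12): aBg(a) iff p(a,["g(a)"])
            and implies(P, not P)       # (8) with (9): p(a,["g(a)"]) --> ~p(a,["g(a)"])
            and (G == (not P)))         # (9): g(a) iff ~p(a,["g(a)"])

    # Conclusion (20): ~aBaBg(a) & ~aBg(a) & g(a)
    assert all((not BB) and (not B) and G
               for B, BB, P, G in product([True, False], repeat=4)
               if assumptions_hold(B, BB, P, G))
    print("Every valuation satisfying the assumptions satisfies (20).")

Of course this only checks the truth-functional skeleton; the substantive work is done by the assumptions themselves and by the substitution licensed by (4) and (9).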

2. The Explanations

2.1: The notation: The notation above gives the logical expression on the left, with its reading on the right. The logic I use is a classical bivalent predicate logic with some extras. I start by explaining the notation I use.

"f(a)" = "a has the property f"

This is standard predicate logic, where it is assumed that statements can be analysed as subject-predicate structures, and the predicates may be binary relations between two subjects or three-place relations between three subjects. In the argument considered, all subjects are either human beings or computers.

"aBf(a)" = "a believes the statement that a has the property f"

This is an extra compared to standard predicate logic, for it introduces notation for propositional attitudes. Propositional attitudes are terms like "believes", "desires", "fears", "hopes" and very many others that are used in Natural Language to attribute to users of language certain attitudes - beliefs, desires, fears, hopes etc. - towards what is expressed by certain statements.

""f(a)"="g(h(b))"" = "the statement that a has the property f is the same as the statement that b with the property h has the property g"

This conforms to standard predicate logic, though this kind of expression is not commonly treated in textbooks. An example of the intended kind of statement is: "my father (a) has a pension (f)" = "the husband (h) of my mother (b) has an old-age benefit (g)". Note that "the same" in the notation is understood by reference to the meanings of the statements on the left and right hand sides: these are declared to be the same (rightly or wrongly).

"F(x) iff F(y)" = "any two statements all the same except for having "x" where the other has "y" or "y" where the other has "x" are either both true or both not true"

This is again standard logic. The term "F" in both cases refers to all of the statement except the terms "x" or "y". An example is "Mary (x) never married (F)" and "Yonder spinster (y) never married". The term "iff" is the standard logical and mathematical abbreviation for "if and only if", and defined as stated.

"Ca" = "a is a computer"

This is standard notation in logic. Here "a" is a name of something and "C" is the predicate "is a computer".

"["f(a)"]" = "a is in the state of belief that corresponds to the statement that a has the property f"

Here a special convention is introduced by way of the brackets "[" and "]", which are used to indicate that what is contained within them is the state of belief that corresponds to the statement within these brackets; that statement therefore needs to contain some proper name of some supposedly animate entity (man, animal, computer, angel, god, extra-terrestrial intelligence - you name it) and occurs within quotes.

It should be noted that states of belief are very everyday events for human beings, but that the states of belief of other entities, whether one's pets or one's computers, are more dubitable. (Of course, logically speaking we may assume anything we please.)

"p(a,["f(a)"])" = "there is a program that a runs or that runs a that has put a in the state of belief that corresponds to the statement that a has the property f"

This is the last and most complicated bit of notation we need. Formally, it combines the previous bit of notation with the prefix "there is a program that a runs or that runs a that". Informally, it states the claim that some program has given some entity some state of belief that the same entity has a certain property. It does so by explicitly listing the statement used to express that belief, for which reason that statement appears within quotes within the notation.

My reason for writing "a program that a runs or that runs a" is simply to avoid discussing whether a human being or a computer runs the programs it (presumably) uses or is run by the programs it (presumably) uses. (This may be an interesting topic, but not in the present context.)

Also, the reader should be aware that it is an uncontroversial fact that both human beings and computers do use programs for some tasks. What is at issue is not this fact, but whether there is anything more involved in being a human or being an animal than is involved in being a computer (in the sense of: a finite Turing machine).

2.2. The assumptions: Next, we must consider the assumptions we set up using the notation we just discussed.

1.   aBf(a) --> ~aB~f(a)
2.   aBaBf(a) --> aBf(a)
3.   aB~aBf(a) --> ~aBf(a)

These are three assumptions about the propositional attitude "believes". The first says that if one believes one has a property, then one does not believe that one does not have the property. The second says that if one believes that one believes one has a property, then one believes one has the property. And the third one says that if one believes that one does not believe one has a property then one does not believe one has the property.

All three are theorems in the logic of propositional attitudes I have set up elsewhere, and make perfect intuitive sense. It is especially (2) that is used in the argument, and (2) is interesting in that its premise states a conscious belief of a, to the effect that a believes that a believes that a has property f, while its conclusion is that therefore a simply believes that a has property f.

This is interesting because it embodies a way in which one may manipulate and change one's own beliefs: By coming to have conscious beliefs about them.

4.   "f(a)"="g(h(b))" --> F(f(a)) iff F(g(h(b)))

This is basically a standard convention about the substitution of two expressions supposed to represent the same thing, as given in the hypothesis. It is stated as given because we need this fairly complicated expression in the argument. A simpler statement corresponding to it is "a"="b" --> F(a) iff F(b).

One detail that is noteworthy is that the hypothesis is explicit about its talking of expressions: What it says is that the quoted expressions on the left and right sides of "=" represent the same thing. The conclusion, by contrast, uses the expressions that occur quoted in the hypothesis.

An example of (4) is "The father of Cesare Borgia" = "Pope Alexander VI" --> The father of Cesare Borgia was a consummate liar iff Pope Alexander VI was a consummate liar. And since the intent of "a"="b" is that the expressions "a" and "b" represent the same thing, the conclusion without quotation-marks is justified, and both sides of the equivalence must be both true or both false, since they have the same predicate and refer to the same thing.

5.   Ca --> aBf(a) iff p(a,["f(a)"])

So far, we have made assumptions about propositional attitudes and about the usage of quoted expressions. In (5) we find the first application to computers. What (5) says is "If a is a computer, then a believes a has property f iff there is a program that a runs or that runs a that has put a into the state of belief corresponding to the statement that a has property f".

Accordingly, (5) expresses the assumption that the things computers do, including the having of beliefs, are done by programs.

6.   Ca --> p(a,["f(a)"]) iff ["f(a)"]

Next, (6) is more precise about the supposed relation between computers and their beliefs, if any, for (6) says that "a is a computer only if there is a program that a runs or that runs a that has put a into the state of belief corresponding to the statement that a has property f iff a is in the state of belief corresponding to the statement that a has property f."

What this expresses, then, is basically that if a is a - properly working - computer that has states of belief, then these states of belief are produced by a program and conversely that a computer has a state of belief if this is produced by a program the computer runs or is run by.

7.   Ca --> p(a,["p(a,["f(a)"])"]) --> p(a,["f(a)"])

This makes more precise what would be involved in a computer running a program that manufactures its states of belief, if any. What (7) says is that "a is a computer only if there is a program that put a in the state of belief that there is a program that put a in the state of belief that a has property f only if there is a program that put a in the state of belief that a has property f".

In effect, (7) claims for computers and their programs what (2) claims for human beings: That if one has an iterated or conscious attitude, such as a belief that one has a belief that 2+2=4, then one simply has the belief that 2+2=4. (The converse need not hold for computers nor for humans: Humans, at least, have quite a few beliefs they are not always or not ever conscious of.)

8.   Ca --> p(a,["~p(a,["f(a)"])"]) --> ~p(a,["f(a)"])

As (7) corresponds to (2), so (8) corresponds to (3). Hence what (8) says is that "a is a computer only if whenever there is a program that put a in the state of belief that there is no program that put a in the state of belief that a has property f then there is no program that put a in the state of belief that a has property f".

Thus, (8) claims for computers and their programs what (3) claims for human beings: That if one has an iterated or conscious attitude, such as a belief that one does not have a belief that 2+2=22, then one simply does not have the belief that 2+2=22.

The reason (7) and (8) differ from (2) and (3) is simply to be explicit about how computers are supposed to reach their beliefs, if any: By some program that manufactured such states of computed belief of a computer.

What is also noteworthy is that so far the assumptions introduced make intuitive sense for human beings, and also make intuitive sense for computers on the assumption that they generate their own states of belief, if any, by their own programs.

9.   "g(a)"=~p(a,["g(a)"])

Here in effect we introduce the notation that corresponds to a Gödelian diagonalization in Gödelian incompleteness arguments (which the reader may take for granted if he is not familiar with them: good references are the books of Raymond Smullyan, such as "Gödel's Incompleteness Theorems" and "Forever Undecided" - the former requires some knowledge of logic, and the latter is an exquisite book of puzzles around Gödelian themes).

What (9) says and defines is the property g: That something a has the property g refers to the same fact as that there is no program that put a into the state of belief expressed by the statement that a has the property g.

It should be intuitively obvious that many human beings would insist that they have such a property, and that some of their beliefs are not manufactured by a program (but, say, by their own free will, that is no program and cannot be adequately represented by a program).

What may be questioned is whether (9) is a completely correct notation. This is a question I won't enter into apart from noting that the apparent circularity of (9) is apparent only, and corresponds quite well to such properties "g" as "is awkwardly self-conscious" or "is not an automaton" or "has a free will" or "a speaks English iff a understands what "speaks English" means".
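For readers who wonder how a statement can speak about itself without vicious circularity, here is a small toy illustration in Python (my own addition, and only an analogy of the diagonal construction, not the construction itself): the sentence is obtained by applying a template to a quotation of that very template, so nothing circular is presupposed in building it.

    # A toy analogy of non-circular self-reference: the sentence is built by
    # applying a fixed template to a quotation of that very template.
    template = ('there is no program that puts a into the state of belief '
                'corresponding to the result of applying {0!r} to its own quotation')
    sentence = template.format(template)
    print(sentence)
    # The printed sentence describes exactly the operation by which it was
    # produced, and so, indirectly, it speaks about itself.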

10.  aB("g(a)"="~p(a,["g(a)"])")

This is our last assumption and simply uses (9) in the context of a propositional attitude. What (10) says is that something a (whether a man or a computer) believes that the expressions "g(a)" and "~p(a,["g(a)"])" represent the same, as (9) claims.

2.3: The argument: We have arrived at the explanation of the argument.

The idea of the argument is to deduce a statement that humans know to be true of themselves that cannot be true of computers with properties as earlier assumed. Therefore we argue throughout on the hypothesis that a is a computer, and we use the fact that our assumptions were framed about anything to which we might attribute propositional attitudes, including computers.

11.  Ca --> aBaBg(a) --> aBg(a)     by (2)

Accordingly, (11) simply substitutes "g(a)" for "f(a)" in (2)=(aBaBf(a) --> aBf(a)), and adds the hypothesis "Ca". This is simply applying standard logical principles of inference to the assumptions made (as is all of the argument).

12.  Ca --> aBg(a) iff p(a,["g(a)"])     by (5)

Here (5)=(Ca --> aBf(a) iff p(a,["f(a)"])) is used as in step (11).

13.  Ca --> aBg(a) --> p(a,["~p(a,["g(a)"])"])     by (9,12)

This step uses (12) to get Ca --> aBg(a) --> p(a,["g(a)"]) and substitutes (9) = ("g(a)"=~p(a,["g(a)"])) into that to obtain (13).

14.  Ca --> aBg(a) --> ~p(a,["g(a)"])     by (8,13)

This results from (13) by combining it with (8)=Ca --> p(a,["~p(a,["f(a)"])"]) --> ~p(a,["f(a)"]).

15.  Ca --> aBg(a) --> ~aBg(a)     by (12,14)

Here the consequent ~aBg(a) is a direct consequence of (12)'s aBg(a) iff p(a,["g(a)"]) together with (14)'s ~p(a,["g(a)"]).

16.  Ca --> ~aBg(a)     by (15)

The conclusion aBg(a) --> ~aBg(a) in (15) is logically equivalent with ~aBg(a) V ~aBg(a) which is logically equivalent with ~aBg(a).
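For the reader who wants to check this purely truth-functional step mechanically, here is a short Python verification (my own addition) that a statement of the form A --> ~A has the same truth-value as ~A:

    # Check that "A --> ~A" (i.e. ~A V ~A) has the same truth-value as "~A".
    for A in (True, False):
        assert ((not A) or (not A)) == (not A)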

17.  Ca --> g(a) iff ~aBg(a)     by (9,12)

By (12) we have Ca --> ~aBg(a) iff ~p(a,["g(a)"]) whence (17) by (9).

18.  Ca --> g(a)     by (16,17)

From (17) follows Ca --> (~aBg(a) --> g(a)), whence (18) via (16).

19.  Ca --> ~aBaBg(a)     by (11,16)

From (11) follows Ca --> (~aBg(a) --> ~aBaBg(a)), whence (19) via (16).

20.  Ca --> ~aBaBg(a) & ~aBg(a) & g(a)     by (16,18,19). Qed.

This simply gathers previous conclusions and is the result we wanted.

The reason this is the result we wanted is this:

Suppose you are a computer. Then (20) implies that you neither consciously nor unconsciously believe that there is no program that puts you into the state of belief that corresponds to this same statement (that there is no program that puts you into the state of belief that corresponds to this same statement) and indeed there is no program that puts you into the state of belief that corresponds to this same statement.

But clearly you, being human, are quite capable of believing that there is no such program that put you into the state of belief that there is no such program. But then it follows by (20) that as soon as you believe this, i.e. believe that your beliefs are not produced by a program, then you are no computer, for a computer just cannot have such a belief, as has been proved (for a computer, if it believes anything at all about its states of belief, must believe these have been produced by a program, and especially cannot falsely come to believe its program-manufactured beliefs are not program-manufactured).

3. Some problems: One may well ask what an argument such as the one just given really achieves.

According to some, such as the philosopher Lucas and the mathematician Penrose, such an argument plainly proves human beings cannot possibly be computers.

According to others, such as the mathematical logicians Feferman and Boolos (both top in their field) such arguments prove no such thing, for a reason we shall consider below.

According to yet others, including most so-called "cognitive scientists", the matter is open: the purported proofs of Penrose and Lucas are either mistaken or cannot be understood, and the developments in computer technology and programming, including a chess program that has beaten Garry Kasparov, who is a genius at chess, show that the day may be close when computers outperform humans on all tasks at which humans have so far uniquely excelled.

I have given my own version of the Lucas-Penrose line of reasoning, with the remark that, whatever its status, it is clearer and briefer than the versions of Lucas and Penrose. I will return to the merits and demerits of my argument below, after first turning to the other positions I mentioned in the previous paragraph.

The two basic reasons Feferman and Boolos disagree with the Lucas-Penrose argument are that (1) the arguments of both Lucas and Penrose are either not mathematically rigorous or can be faulted and (2) both Feferman and Boolos have rather deep doubts about semantical interpretations of formalisms, particularly of the present kind. Especially Feferman, in his criticism of Penrose, takes a formalist stance, i.e. one that does not go beyond rigorous formal logical proofs, except to express doubts about such a semantical beyond.

Feferman and Boolos are certainly right about (1), though it should be added in fairness to Mr Lucas that he insists that his argument is informal and also must be informal, the last essentially because it involves semantical interpretations.

The formalist stances of Feferman and Boolos are - to my mind - rather odd, and quite similar to one who refuses to pronounce on moral questions apart from the literal statements in books of law. For there certainly is a problem in making sense of semantics, interpretations, etc. but these problems have at least been somewhat resolved by mathematical logic and model theory, and it seems rather prudish to object to their use in interesting cases like the present one.

What remains true is that a supposedly valid logical or mathematical argument should be capable of being given a valid formal statement, and that as long as there is no such valid formal statement there also is no valid proof of it.

Most "cognitive scientists" do not understand much about mathematical logic, and arguments based on what computers can do at present to what computers may be able to do in 10, a 100 or a 1000 years are as safe as predictions about the course of the stock exchange.

Also, it should be noted in the present context that the notion of "the Turing Test" that "cognitive scientists" are prone to appeal to when discussing what computers may and may not do is incoherent.

The idea was first stated by Alan Turing, and comes to this: As soon as a computer is capable of outperforming a human being, e.g. in chess, one (at least: Alan Turing) concludes that "therefore" the computer thinks and plays chess at least as well as a human being does.

The fallacy here is this: even if a computer mirrors all of one's behaviour completely, this is no reason to conclude it produces this behaviour in the same sort of way as a human being does - and indeed, it is an elementary fact that the same result may usually be brought about in very many different ways.

Thus, the computer that beat Garry Kasparov did not really play chess as human grandmasters of chess do (which at present is understood no more than very superficially and partially): it is merely capable of extraordinarily fast processing and searching. And while it seems to play chess, what happens inside it as it produces that behaviour is quite different from what happens in a human being when he or she thinks about chess, for so much is certain.
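To give a concrete impression of what "fast processing and searching" amounts to, here is a minimal sketch in Python (my own addition, and of course not Deep Blue's actual code) of exhaustive game-tree search, applied to the trivial game of Nim rather than chess: the program selects moves by brute-force lookahead over positions plus a terminal evaluation, and nothing resembling human deliberation about the game occurs anywhere in it.

    from functools import lru_cache

    # Exhaustive negamax search on Nim (normal play: the player who takes the
    # last object wins). A position is a tuple of heap sizes.
    @lru_cache(maxsize=None)
    def best_value(heaps):
        """+1 if the player to move can force a win, -1 otherwise."""
        if sum(heaps) == 0:
            return -1  # no objects left: the previous player took the last one and won
        value = -1
        for i, h in enumerate(heaps):
            for take in range(1, h + 1):
                child = tuple(sorted(heaps[:i] + (h - take,) + heaps[i + 1:]))
                value = max(value, -best_value(child))
        return value

    print(best_value((1, 2, 3)))   # -1: losing for the player to move
    print(best_value((1, 2, 4)))   # +1: winning for the player to move

Chess programs add refinements - pruning, handcrafted evaluation functions, opening books - but the principle is the same: search and evaluation, not understanding.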

Hence the Turing Test is about as conclusive as is looking at the image of a man in Madame Tussaud's, and concluding it "must" be a man because it looks externally like a man and "therefore" must be like a man internally as well.

To turn to the argument I outlined above.

The argument I outlined above is also not mathematically rigorous in a strict sense, because many of the details that make a system and a proof into a logically rigorous system and proof have not been given, either because this would be tedious or because this would be difficult.

However, it is formally more rigorous than what Penrose and Lucas offer, and seems to clarify what they intended to prove.

The matters I left out that are difficult are those that relate to propositional attitudes (e.g. assumptions (1)-(3)) and to self-referential statements (assumption (9)). So far, there just is no adequate logic of propositional attitudes, and so far there is no adequate theory of self-reference, though in either case there are promising beginnings and applications of such beginnings.

4. My own position on whether human beings are computers: Personally, I don't believe I am a computer, i.e. a deterministic finite state machine, and I don't believe you are, either, whoever you are, if you can read and understand this.

My fundamental reasons are not the Lucas-Penrose arguments, but the following - and perhaps I should add that, unlike Mr Lucas, I am no religious believer and do not believe I have an immortal soul:

A. There is to this day no computer (Turing Machine) that has, supplies or generates a semantics (for natural language, mathematics, or anything else of interest).
B. Turing Machines are fundamentally very simple things.
C. There is no clear theory of either human qualia or human selves or of meaning.

These arguments are given in order of increasing importance.

Argument (A) has been best stated by J. Searle. I have given my own version of his argument in a treatment I wrote of Leibniz. Here is a link: Searle's argument in a Leibnizian context.

Briefly, the argument is that human beings contribute the meanings needed to understand symbolism, whereas all that computers do is shift about - what are for human beings - symbols, without any understanding of their own, merely on the basis of the understanding that programmers had and incorporated into the programs the computer runs or is run by.

As I pointed out in my treatment, it is not impossible that computers will have some semantical understanding in some sense, but it is true that so far they have none, and what they seem to have is supplied by human beings, either in making the programs of computers or in interpreting the output of computers. (Note that supplying a computer with a ready-made computational semantics hardly counts: the amazing thing about babies is that within a few years they learn to talk without any programmer filling their heads with computational semantics.)

Argument (B) is considerably stronger. The underlying reasons are that (1) the mathematical principles embodied in Turing Machines are the simplest there are and (2) Turing Machines are finite in memory and in speed.

The first point is the most important. It seems as if Nature involves all manner of continuous transformations and operations, and as if Nature as a matter of course solves - or acts according to - the most complicated systems of differential equations. A Turing Machine can mimic such mathematics, but only in a discrete way, based on what are effectively finitely many natural numbers (for "the real numbers" that a computer processes are simply finite lists of integers that approximate real numbers).
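As a small illustration of this point (my own addition, using Python's standard floating-point numbers): the machine's stand-ins for real numbers are finite approximations, so familiar identities of the real numbers fail for them.

    from decimal import Decimal

    # A computer's "real numbers" are finite binary approximations:
    print(0.1 + 0.2 == 0.3)    # False
    print(Decimal(0.1))        # the exact value actually stored for 0.1
    print((0.1 + 0.2) - 0.3)   # a small but non-zero remainder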

If Nature really embodies continuous processes, as it seems it does, a Turing Machine can at best approximate aspects of such processes, but not really represent them. And if the most complex thing known in Nature, such as a human being's brain, essentially involves continuous processes, therefore such a human can only be approximated in some of his or her aspects by a Turing Machine but cannot be fully represented by a Turing Machine.

Reason (2) has a theoretical form that is of some interest: The natural reply of anyone who knows anything about computers is that "the sky is the limit" as regards limitations of speed and memory of computers. This seems to me to be somewhat optimistic, but is not my real point, which is this: It may be that human beings (or bacteria or some other simple form of life) are not finite Turing machines but are infinite Turing machines. Note that for this it is not necessary that they are infinitely large, but merely that they have something like an interval of the real numbers accessible to them as memory, which then can play the role of an infinite tape.

Argument (C) is the strongest, and comes to this, in three steps.

C.1.: One essential set of qualities human beings have is that they have human feelings, desires, ends, fears, hopes etc., while no physical thing as reconstrued by physics (so far) has any feelings, desires, ends, fears or hopes: all that physical things have are charges, speeds, sizes, number, hardness etc. - in short, palpable physical qualities.

At present there just is no physical explanation of the characteristics of human experiences that make them into human experiences: their aspects of being feelings, desires, ends etc. that are in philosophy often referred to as "qualia". And conversely, at present there also is no explanation of physics in terms of qualia, i.e. feelings, desires, ends etc. (although Aristotle believed there was, and some modern philosophers, like Whitehead, followed him in this, without much success).

C.2.: Another essential set of qualities human beings have is that they all (or nearly all) believe they are or have a self, a personality, a character, that is more than is ever given in their momentary experiences, and carries them from the past towards their future ends, across the present, and that is and remains "what they really are" through all manner of bodily changes.

At present there just is no adequate theory of what a human self is, and what makes such a theory quite difficult is that it involves self-reference, levels of meaning, human ends that go far beyond what is given, such as fantasies that function in one's character, and so on.

C.3.: A last essential quality of all sane human beings is that they know how to speak and understand natural language, attribute meanings to marks, interpret gestures, and are capable of symbolizing all manner of possible and impossible things by arbitrary sounds or pictures, and may understand such symbolizations when given them.

At present there just is no adequate theory of what meaning is, beyond the simple level of first order predicate logic, i.e. including self-reference, including reference to universals and abstract entities such as classes, functions and categories, and including a full and clear explanation of human thinking and understanding and its various aspects, such as language, mathematics, music and visual art.

Hence my own position on the question whether human beings are computers is this:

While I am an atheist, and don't believe I am or have an immortal soul, I also do not believe that "therefore" I am such a fundamentally simple thing as is a Turing Machine, essentially because (1) Turing Machines embody so little mathematics, and Nature, including human beings and other living things, seems to embody very complicated mathematics and (2) there are not even on the level of human experience adequate explanations of qualia and of meanings and of selves, and so there is no basis at all to attribute these to computers, and no basis at all to start programming them into computers (for you can't program what you don't really understand, even if it could be programmed in principle when you would understand it).

So I prefer to think that what I and other human beings are is a natural organism, that is not created by any god or higher intelligence, that has evolved naturally, and that embodies continuous mathematics such as can be mimicked but not fully rendered by computers, and that probably has not been fully discovered and is not fully understood by mathematicians, physicists and biologists, at the present levels of their sciences.

Finally, it seems to me that the problems of what is consciousness and of what is life are rather intimately related, and that we know as much about the former as about the latter: A little but by far not all of what there is to know.

December 6, 2000
Copyright
maartens@xs4all.nl


Colophon: Part 1 was written in 1995, and the rest of the argument in 2000. The reference to Feferman is: "Penrose's Gödelian argument - A Review of Shadows of the Mind by Roger Penrose". This was published in "PSYCHE: an interdisciplinary journal of research on consciousness 2(7), May 1995."

Reformatted July 2004, with a few small clarifications and stresses added.
I checked the formatting (and changed some) on Sep 17, 2016.