Learning from experience:
An explanation of how one can learn from theories and their logical consequences when these consequences are verified or falsified. The explanation to be given is closely related to Bayesian Reasoning, but cast in such a way that certain problems with that approach are avoided, and all relevant assumptions are clearly stated.
The start is standard propositional logic (PL) based on & and ~, which suffice to define all standard connectives. This is extended to temporal propositional logic and temporal probability theory as follows, noting that "p(α,Q)" is "person α's probability for Q" ("α" = "alpha").
A1. (P&Q)._{t} IFF (P)._{t} & (Q)._{t}
A2. (p(α,P)=p(α,Q))._{t} IFF p(α,P)._{t} = p(α,Q)._{t}
That is: propositions are temporally relativized by an operator "._{t}" attached to them, read as "at t", and this distributes as one expects over & and =, which are, together with ~, the basic logical operators. The times t are supposed to be discrete and ordered by <=. This will be needed to keep track of the various truth values and probabilities of propositions at various times.
A3. (EP)(Et)(Ex) ( 0 <= p(α,P)._{t}=x <= 1)
A4. (EP)(EQ)(Et) ( (Ex)(p(α,P)._{t}=x) & (Ey)(p(α,Q)._{t}=y) &
~(Ez)(p(α,P&Q)._{t}=z) )
For every person α there are propositions with some personal probability at some t, and since A3 is merely existential it may be that there are for α at t propositions which do not have a personal probability. And for every person α there are at some t pairs of propositions such that both propositions have some probability but their conjunction doesn't.
A3 and A4 entail that there may well be propositions for α that do not
have α's personal probability for some reason, and there may well be
conjunctions for which α does not have a probability even if α has probabilities
for the conjuncts. This differs from nonpersonal probability, where every
logical possibility has some probability always, even if it is unknown.
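As a minimal sketch, not part of the formal system and with made-up proposition names and values, one may model such a partial personal probability function as a map that simply lacks entries for unassigned propositions, including conjunctions whose conjuncts do have values, as A4 allows:

```python
# Illustrative sketch only: alpha's personal probability as a partial map.
# Proposition names and numbers are invented for illustration.
p_alpha = {
    "P": 0.6,   # p(alpha,P) = 0.6
    "Q": 0.3,   # p(alpha,Q) = 0.3
    # no entry for "P&Q": alpha has assigned it no probability (A4)
}

def p(person, proposition):
    """Return the person's probability for the proposition, or None if unassigned."""
    return person.get(proposition)

print(p(p_alpha, "P"))    # 0.6
print(p(p_alpha, "P&Q"))  # None
```

This contrasts with nonpersonal probability, where every logical possibility is assumed to have some value.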
A5. p(α,Q|T)._{t}=x IFF p(α,Q&T)._{t} : p(α,T)._{t}=x
A6. p(α,~Q)._{t}=1-y IFF p(α,Q)._{t}=y
A7. p(α,~Q|T)._{t}=1-x IFF p(α,Q|T)._{t}=x
This defines conditional probability in A5 (where ":" denotes division), and defines the probabilities of ~Q, unconditional and conditional, in A6 and A7. Note this is a personal probability of α: it is up to α to decide what it is, and often, though not necessarily, it is what α believes the real probability is. The reading of "p(α,Q|T)._{t}" is "the probability for α of Q given T at t".
With two conditional probabilities plus p(α,T)._{t} we can calculate all entries for a fundamental table, which lists the probabilities for all logical alternatives. Indeed here is the fundamental table in three different convenient forms, where the ._{t} are left out as understood:

I:
          T          ~T
 Q        a          b
 ~Q       c          d
          p(α,T)     p(α,~T)

II:
          T              ~T
 Q        p(α,Q&T)       p(α,Q&~T)
 ~Q       p(α,~Q&T)      p(α,~Q&~T)
          p(α,T)         p(α,~T)

III:
          T                    ~T
 Q        p(α,Q|T)*p(α,T)      p(α,Q|~T)*p(α,~T)      p(α,Q^{T})
 ~Q       p(α,~Q|T)*p(α,T)     p(α,~Q|~T)*p(α,~T)     p(α,~Q^{T})
          p(α,T)               p(α,~T)                1
From A1-A7 we get something much like standard probability theory, for those propositions that α does have probabilities for, for we can easily prove:
T1. p(α,T)._{t} = p(α,Q&T)._{t} + p(α,~Q&T)._{t}
For p(α,Q&T)._{t} + p(α,~Q&T)._{t}
  = p(α,Q|T)._{t}*p(α,T)._{t} + p(α,~Q|T)._{t}*p(α,T)._{t}   by A5
  = (p(α,Q|T)._{t} + p(α,~Q|T)._{t})*p(α,T)._{t}             by algebra
  = p(α,T)._{t}                                              by A7
Likewise
T2. p(α,Q^{T})._{t} = p(α,Q|T)._{t}*p(α,T)._{t} + p(α,Q|~T)._{t}*p(α,~T)._{t}
since p(α,Q|T)._{t}*p(α,T)._{t} + p(α,Q|~T)._{t}*p(α,~T)._{t} = p(α,Q&T)._{t} + p(α,Q&~T)._{t} by A5. This is written as p(α,Q^{T})._{t} to indicate p(α,Q)._{t} is calculated with respect to p(α,T)._{t} and two of α's conditional probabilities involving Q and T. This will become of some importance below.
Since also by A6
T3. p(α,Q^{T})._{t} + p(α,~Q^{T})._{t} = p(α,T)._{t} + p(α,~T)._{t} = 1
the fundamental table has been justified at this point. (The a, b, c, d entries in it are convenient abbreviations of the four possible logical alternatives.)
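To make the table concrete, here is a small numeric sketch (the values of h, i and j are illustrative, not from the text) that builds form III from h = p(α,Q|T), i = p(α,Q|~T) and j = p(α,T), and checks T1-T3:

```python
# Illustrative values: h = p(alpha,Q|T), i = p(alpha,Q|~T), j = p(alpha,T)
h, i, j = 0.9, 0.2, 0.5

a = h * j              # p(alpha,Q&T)   = p(alpha,Q|T)*p(alpha,T)
b = i * (1 - j)        # p(alpha,Q&~T)  = p(alpha,Q|~T)*p(alpha,~T)
c = (1 - h) * j        # p(alpha,~Q&T)
d = (1 - i) * (1 - j)  # p(alpha,~Q&~T)

# T1: p(alpha,T) = p(alpha,Q&T) + p(alpha,~Q&T)
assert abs((a + c) - j) < 1e-12

# T2: p(alpha,Q^T) = h*j + i*(1-j)
q_T = a + b
assert abs(q_T - (h * j + i * (1 - j))) < 1e-12

# T3: p(alpha,Q^T) + p(alpha,~Q^T) = 1
assert abs((a + b) + (c + d) - 1) < 1e-12
```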
A8. (T ⇒α Q)._{t} IFF p(α,Q|T)._{t}=1 V p(α,T)._{t}=0
A9. (⊨α Q)._{t} IFF p(α,Q)._{t}=1
This defines logical implication for α ("⇒α") and verified formula for α ("⊨α") in terms of personal probability of α. Note that in fact only 1 and 0 are used here, and that thus we have the basis for standard bivalent propositional logic.
T4. (T ⇒α Q)._{t} > p(α,T)._{t} <= p(α,Q)._{t}
For suppose (T ⇒α Q)._{t}, i.e. by A8 p(α,Q|T)._{t}=1 V p(α,T)._{t}=0.
In case p(α,T)._{t}=0, we have p(α,T)._{t} <= p(α,Q)._{t}.
So suppose p(α,T)._{t}>0. Then p(α,Q|T)._{t}=1 and so p(α,~Q&T)._{t}=0, whence by T1 p(α,T)._{t}=p(α,Q&T)._{t} and again p(α,T)._{t} <= p(α,Q)._{t}. Thus T4.
Therefore also, defining (T ⇔α Q)._{t} =def (T ⇒α Q)._{t} & (Q ⇒α T)._{t}
T5. (T ⇔α Q)._{t} > p(α,T)._{t} = p(α,Q)._{t}
which is to say that logical equivalents have equal probabilities. Again, this is like standard probability theory, but relativized to α's judgements.
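T4 can also be checked numerically. In the sketch below the numbers are again illustrative, with h = 1 so that T implies Q for α by A8:

```python
# Illustrative check of T4: with h = p(alpha,Q|T) = 1, A8 makes T imply Q
# for alpha, and then p(alpha,T) <= p(alpha,Q) (computed as p(alpha,Q^T)).
h, i, j = 1.0, 0.3, 0.6

a = h * j              # p(alpha,Q&T)
b = i * (1 - j)        # p(alpha,Q&~T)
c = (1 - h) * j        # p(alpha,~Q&T); zero because h = 1
p_T = a + c            # p(alpha,T), by T1
p_Q = a + b            # p(alpha,Q^T), by T2

assert c == 0.0        # p(alpha,~Q&T) vanishes
assert p_T <= p_Q      # T4
```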
Now we are going to say how one may learn from experience. For given p(α,Q|T)._{t}=h, p(α,Q|~T)._{t}=i and p(α,T)._{t}=j, with h≠i:
A10. p(α,Q|T)._{t+1}=h IFF p(α,Q|T)._{t}=h
A11. p(α,Q|~T)._{t+1}=i IFF p(α,Q|~T)._{t}=i
A12. p(α,T)._{t+1} = p(α,T)._{t} IFF 0 < p(α,Q)._{t} < 1
A13. p(α,T)._{t+1} = p(α,T|Q)._{t} IFF p(α,Q)._{t}=1
A14. p(α,T)._{t+1} = p(α,T|~Q)._{t} IFF p(α,~Q)._{t}=1
Any given set p(α,Q|T)._{t}, p(α,Q|~T)._{t}, p(α,T)._{t} where T is a theory is called a basic theory for α if p(α,Q|T)._{t}≠p(α,Q|~T)._{t}, and A10 and A11 insist that the conditional probabilities in a basic theory remain constant in time. A12 to A14 state how p(α,T)._{t} in a basic theory changes or not, depending on what α verifies about the Q in the set: it remains constant if α verifies neither Q nor ~Q, and changes with Q or ~Q if either of these is verified for α.
To show how this works put p(α,Q|T)._{t}=h, p(α,Q|~T)._{t}=i and p(α,T)._{t}=j. Then suppose p(α,Q)._{t}=1. We have, noting also that ~j=1-j:
(*) p(α,T)._{t+1} = p(α,T|Q)._{t}                                     by A13
                  = (p(α,Q|T)._{t} : p(α,Q^{T})._{t}) * p(α,T)._{t}   by A5 and T5, for p(α,Q&T)._{t}=p(α,T&Q)._{t}
                  = (h : (hj+i~j)) * j                                by adopted conventions
Thus p(α,T)._{t+1} = (p(α,Q|T)._{t} : p(α,Q^{T})._{t}) * p(α,T)._{t}, and so the new theory p(α,T)._{t+1} differs by a multiplicative factor p(α,Q|T)._{t} : p(α,Q^{T})._{t} from the old p(α,T)._{t}. Now clearly
p(α,Q|T)._{t} : p(α,Q^{T})._{t} >= 1 IFF h : (hj+i~j) >= 1
                                     IFF h >= hj+i~j
                                     IFF h~j >= i~j       using ~j=1-j
                                     IFF h >= i           supposing ~j>0
                                     IFF p(α,Q|T)._{t} >= p(α,Q|~T)._{t}
Therefore the direction of the degree of confirmation depends only on the conditional probabilities: the multiplicative factor p(α,Q|T)._{t} : p(α,Q^{T})._{t} equals or exceeds 1 iff p(α,Q|T)._{t} equals or exceeds p(α,Q|~T)._{t}. And as the conditional probabilities remain constant by A10 and A11, this direction remains constant as well.
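The update rules A10-A14 can be sketched numerically as follows; the values of h, i and the starting j are illustrative, not from the text:

```python
# Illustrative sketch of A10-A14: h and i stay constant (A10, A11), and
# p(alpha,T) is updated by (*) each time Q, or ~Q, is verified for alpha.
h, i = 0.9, 0.2   # p(alpha,Q|T) and p(alpha,Q|~T), with h != i
j = 0.5           # p(alpha,T) at the initial t

def updated(j, q_verified):
    """One step: A13 if Q is verified for alpha, A14 if ~Q is verified."""
    q_hat = h * j + i * (1 - j)              # p(alpha,Q^T), by T2
    if q_verified:
        return (h / q_hat) * j               # A13: p(alpha,T|Q)
    return ((1 - h) / (1 - q_hat)) * j       # A14: p(alpha,T|~Q)

for q in (True, True, True):                 # Q verified three times
    j = updated(j, q)

# Since h > i, each verification of Q multiplies p(alpha,T) by a
# factor >= 1, so j has grown from 0.5 towards 1.
assert j > 0.5
```

A verification of ~Q instead would multiply p(α,T) by a factor below 1, as the chain of equivalences above predicts.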
Next, the problem for Bayesian Reasoning that if p(α,Q)._{t}=1, i.e. (⊨α Q)._{t}, then also, by probability theory, p(α,T|Q)._{t}=p(α,T)._{t}, is avoided by the following theorem, formulated for the same basic theory as before, and using the definitions (T αrel Q) =def p(α,Q&T)≠p(α,Q)*p(α,T) and (T αirr Q) =def ~(T αrel Q).
T6. (T αrel Q) > p(α,Q^{T}) < 1
Proof: p(α,Q^{T}) = p(α,Q|T)*p(α,T) + p(α,Q|~T)*p(α,~T)   by T2
                  = hj+i~j                                 by adopted conventions
Now hj+i~j = 1 IFF hj+i-ij = 1            using ~j=1-j
               IFF hj-ij = 1-i
               IFF (h-i)*j = (1-i)
               IFF ((h-i):(1-i))*j = 1    supposing i<1
Lemma: ((h-i):(1-i))*j = 1 IFF h=1 & j=1 & h>i, assuming h, i and j are probabilities.
Proof: Make the assumption about h, i and j. Suppose the RHS of the equivalence. Then ((h-i):(1-i))*j turns into ((1-i):(1-i)). Since h=1 and h>i it follows that ((1-i):(1-i))=1. Next suppose the LHS. Assume h<=i. Then h-i<=0 and so ((h-i):(1-i))*j ≠ 1. So h>i follows from the LHS. Assume h<1. Then ((h-i):(1-i)) < 1 and ((h-i):(1-i))*j ≠ 1. So h=1 follows from the LHS. Since we have proved h=1, ((h-i):(1-i))*j = j = 1, and so j=1. Thus the lemma has been proved.
Now if j=1 then ~j=0, and then p(α,Q&~T)=0, so that p(α,Q)=p(α,Q&T)=h=p(α,Q)*p(α,T): that is, (T αirr Q).
So if (T αrel Q) then j<1 and so hj+i~j < 1 by the lemma. Therefore indeed (T αrel Q) > p(α,Q^{T}) < 1. Qed.
Hence it is quite possible that p(α,Q^{K})._{t}=1 & p(α,Q^{T})._{t} < 1. The probabilities involved in a basic theory need not be the same as those of propositions that are not involved in it, but these may be used to update the probabilities in a basic theory by (*). And indeed one may also write p(α,Q^{TK})._{t} to explicate K as well.
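The point of T6 can be illustrated numerically as well (values again illustrative): when T is relevant to Q, the theory-relative p(α,Q^T) stays below 1, so the multiplicative factor of (*) still differs from 1 even where an unconditional p(α,Q) would already be 1:

```python
# Illustrative sketch of T6: if T is relevant to Q (here h != i, 0 < j < 1),
# then p(alpha,Q^T) < 1, and the factor of (*) can still confirm T.
h, i, j = 0.9, 0.2, 0.5   # p(alpha,Q|T), p(alpha,Q|~T), p(alpha,T)

q_T = h * j + i * (1 - j)        # p(alpha,Q^T) = hj + i~j, by T2
relevant = (h * j) != (q_T * j)  # p(alpha,Q&T) != p(alpha,Q^T)*p(alpha,T)

assert relevant                  # T is relevant to Q for alpha
assert q_T < 1                   # T6: relevance forces p(alpha,Q^T) < 1

factor = h / q_T                 # the multiplicative factor from (*)
assert factor > 1                # verifying Q still raises p(alpha,T)
```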
