
 A sample space (e.g. S={a,c,g,t})
is the set of possible outcomes of some experiment.
 Events A, B, C, ..., H, ...
An event is a subset (possibly a singleton) of the sample space,
e.g. Purine={a,g}.
 Events have probabilities P(A), P(B), etc.
 Random variables X, Y, Z, ...
A random variable X takes values, with certain probabilities,
from the sample space.
 We may write P(X=a), P(a) or P({a}) for the probability that X=a.
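The definitions above can be sketched in a few lines of Python. The probabilities below are made-up numbers for illustration; only the sample space S={a,c,g,t} and the event Purine={a,g} come from the text.

```python
# Sample space, an event, and a random variable's distribution (assumed numbers).
S = {'a', 'c', 'g', 't'}          # sample space
purine = {'a', 'g'}               # an event: a subset of S

# P(X = s) for each outcome s; these probabilities are illustrative only.
P = {'a': 0.3, 'c': 0.2, 'g': 0.3, 't': 0.2}

def prob(event):
    # The probability of an event is the sum over its member outcomes.
    return sum(P[s] for s in event)

print(prob(purine))               # P(Purine) = P(a) + P(g)
```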
Thomas Bayes (1702-1761)
made an early study of probability and games of chance.
Bayes' Theorem
If B_{1}, B_{2}, ..., B_{k}
is a partition of a set B (of causes) then

P(B_{i}|A) = P(A|B_{i}) P(B_{i}) / ∑_{j=1..k} P(A|B_{j}) P(B_{j}),
 i = 1, 2, ..., k
One and only one of the B_{i} must occur
because they are a partition of B.
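The formula above can be written as a short function: given the priors P(B_i) and the likelihoods P(A|B_i), the denominator is the same for every i, so one normalisation suffices. The example numbers are assumptions, not from the text.

```python
# A minimal sketch of Bayes' theorem for a partition B_1, ..., B_k.
# priors[i] = P(B_i), likelihoods[i] = P(A|B_i); the numbers below are assumed.
def bayes(priors, likelihoods):
    joint = [p * l for p, l in zip(priors, likelihoods)]   # P(A|B_i) P(B_i)
    total = sum(joint)                                     # = P(A)
    return [j / total for j in joint]                      # P(B_i|A) for each i

# e.g. three causes with equal priors but different likelihoods:
print(bayes([1/3, 1/3, 1/3], [0.9, 0.5, 0.1]))
```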
Inference
Bayes' theorem is relevant to inference because
we may be entertaining a number of exclusive and exhaustive hypotheses
H_{1}, H_{2}, ..., H_{k}, and
wish to know which is the best explanation of some observed data D.
In that case P(H_{i}|D) is called the posterior probability
of H_{i}, "posterior" because
it is the probability after the data has been observed.
 ∑_{j=1..k} P(D|H_{j}) P(H_{j}) = P(D)

 P(H_{i}|D) = P(D|H_{i}) P(H_{i}) / P(D)   -- the posterior
Note that the H_{i} can even form a countably infinite (enumerable) set.
P(H_{i}) is called the prior probability
of H_{i}, "prior" because
it is the probability before D is known.
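As a concrete (and entirely hypothetical) instance of this prior-to-posterior step, suppose the two exhaustive hypotheses are that a die is fair or loaded towards 6, and the observed datum D is a single roll of 6; all the numbers below are assumptions for illustration.

```python
# Hedged sketch: two exclusive and exhaustive hypotheses about a die.
prior = {'fair': 0.5, 'loaded': 0.5}          # P(H_i), before seeing D
likelihood = {'fair': 1/6, 'loaded': 1/2}     # P(D|H_i), D = "a 6 was rolled"

p_D = sum(prior[h] * likelihood[h] for h in prior)             # P(D)
posterior = {h: prior[h] * likelihood[h] / p_D for h in prior}  # P(H_i|D)
print(posterior)
```

Observing a 6 shifts belief towards the "loaded" hypothesis, but does not settle the matter; more rolls would sharpen the posterior further.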
Notes
 T. Bayes.
An essay towards solving a problem in the doctrine of chances.
Phil. Trans. of the Royal Soc. of London, 53, pp.370-418, 1763.
Reprinted in Biometrika, 45, pp.296-315, 1958.
Conditional Probability
The probability of B given A is written P(B|A).
It is the probability of B provided that A is true;
we do not care, either way, if A is false.
Conditional probability is defined by:
 P(A&B) = P(A).P(B|A) = P(B).P(A|B)

 P(A|B) = P(A&B) / P(B)
 P(B|A) = P(A&B) / P(A)
Rearranged, these rules give Bayes' theorem for k=2, with the partition {B, not B}.
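A quick numerical check of the definitions above, with assumed values for P(A), P(B) and P(B|A):

```python
# The product rule and the definition of conditional probability (assumed numbers).
pA, pB = 0.4, 0.3
pB_given_A = 0.15

pAB = pA * pB_given_A        # P(A&B) = P(A).P(B|A)
pA_given_B = pAB / pB        # P(A|B) = P(A&B) / P(B)

print(pAB, pA_given_B)
```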
There are four combinations for two Boolean variables:

         |  A                     |  not A                        |  margin
 --------+------------------------+-------------------------------+------------------------------
 B       |  A & B                 |  not A & B                    |  (A or not A) & B = B
 not B   |  A & not B             |  not A & not B                |  (A or not A) & not B = not B
 margin  |  A = A & (B or not B)  |  not A = not A & (B or not B) |

We can still ask for the probability of one variable alone, say A:
 P(A) = P(A & B) + P(A & not B)
 P(B) = P(A & B) + P(not A & B)
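Summing out the other variable in this way recovers the margins of the 2x2 table; the joint probabilities below are assumed numbers.

```python
# Marginalisation over an assumed joint distribution for two Boolean variables.
p = {('A', 'B'):     0.10, ('A', 'not B'):     0.30,
     ('not A', 'B'): 0.20, ('not A', 'not B'): 0.40}

pA = p[('A', 'B')] + p[('A', 'not B')]       # P(A) = P(A&B) + P(A & not B)
pB = p[('A', 'B')] + p[('not A', 'B')]       # P(B) = P(A&B) + P(not A & B)

print(pA, pB)   # the margins; here P(A)=0.4 and P(B)=0.3
```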
Independence
A and B are said to be independent
if the probability of A does not depend on B and vice versa.
In that case P(A|B)=P(A) and P(B|A)=P(B) so
 P(A&B) = P(A).P(B)
 P(A & not B) = P(A).P(not B)
 P(not A & B) = P(not A).P(B)
 P(not A & not B) = P(not A).P(not B)
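Under independence the whole joint distribution factors into the two margins, so it is determined by just P(A) and P(B); again the numbers are assumed.

```python
# Building the four joint probabilities for independent A and B (assumed margins).
pA, pB = 0.4, 0.3
joint = {
    ('A', 'B'):         pA * pB,               # P(A&B)       = P(A).P(B)
    ('A', 'not B'):     pA * (1 - pB),         # P(A & not B) = P(A).P(not B)
    ('not A', 'B'):     (1 - pA) * pB,         # P(not A & B) = P(not A).P(B)
    ('not A', 'not B'): (1 - pA) * (1 - pB),   # P(not A & not B)
}
print(joint)   # the four probabilities sum to 1
```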
A Puzzle
I have a die (made it myself, so it might be "tricky")
which has 1, 2, 3, 4, 5 & 6 on different faces.
Opposite faces sum to 7.
The results of rolling the die 100 times (good vigorous rolls on carpet) were:
1-20:
3 1 1 3 3 5 1 4 4 2 3 4 3 1 2 4 6 6 6 6
21-40:
3 3 5 1 3 1 5 3 6 5 1 6 2 4 1 2 2 4 5 5
41-60:
1 1 1 1 6 6 5 5 3 5 4 3 3 3 4 3 2 2 2 3
61-80:
5 1 3 3 2 2 2 2 1 2 4 4 1 4 1 5 4 1 4 2
81-100:
5 5 6 4 4 6 6 4 6 6 6 3 1 1 1 6 6 2 4 5
Can you learn anything about the dice from these results?
What would you predict might come up at the next roll?
How certain are you of your prediction?
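A natural first step is simply to tally the 100 rolls listed above; what (if anything) the frequencies say about the die is left as the puzzle intends.

```python
from collections import Counter

# The 100 rolls, transcribed from the table above.
rolls = [3,1,1,3,3,5,1,4,4,2,3,4,3,1,2,4,6,6,6,6,
         3,3,5,1,3,1,5,3,6,5,1,6,2,4,1,2,2,4,5,5,
         1,1,1,1,6,6,5,5,3,5,4,3,3,3,4,3,2,2,2,3,
         5,1,3,3,2,2,2,2,1,2,4,4,1,4,1,5,4,1,4,2,
         5,5,6,4,4,6,6,4,6,6,6,3,1,1,1,6,6,2,4,5]

counts = Counter(rolls)
for face in range(1, 7):
    print(face, counts[face])
```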
 LA 1999