
END 321 - Probability Review

Week 1

1
Copyright (c) 2004 Brooks/Cole, a division of Thomson Learning, Inc.
Uncertainty
Realistic models of real-world phenomena should take into account the presence of uncertainty (or randomness).

The question is how we should factor uncertainty into models of decision making in a real-world environment.

Probability theory remains the most popular approach to characterizing uncertainty.

Probabilistic thinking goes back to ancient times, when people began to take an interest in games of chance. Through those games, people developed an intuitive sense of likelihood and average behavior.

2
Basic Rules of Probability
• Definition: Any situation where the outcome is
uncertain is called an experiment.
• Definition: For any experiment, the sample space S
of the experiment consists of all possible outcomes
for the experiment.
• Definition: An event E consists of any collection of
points (set of outcomes) in the sample space.
• Definition: A collection of events E1, E2, …, En is said
to be a mutually exclusive collection of events if, for all
i ≠ j (i, j = 1, 2, …, n), Ei and Ej have no
points in common.

3
Basic Rules of Probability
• With each event E, we associate an event Ē. Ē
consists of the points in the sample space that
are not in E.
• With each event E, we also associate a
number P(E), which is the probability that
event E will occur when we perform the
experiment.

4
Basic Rules of Probability
Some examples for sample spaces are as follows:

5
Basic Rules of Probability
Further examples:

6
Basic Rules of Probability
In each of the examples on previous slides, we may also define many events such
as:

Since events are represented as sets, all set operations can be used to generate other events, which are themselves subsets of the sample space S.

7
Basic Rules of Probability
In each of the examples in the previous two slides, we may also define
many events with set operations such as:
E ∪ F: The union of E and F stands for the event that occurs if either E or F occurs.
E ∩ F: The intersection of E and F stands for the event that both E and F occur at the same time.

If E ∩ F is empty, then E and F are said to be mutually exclusive.

For any event E, Ec is its complement event. The complement of the sample space S is the empty set.

If E1, E2, …, En, … is a collection of events, then ∪n En is the event that at least one of E1, E2, …, En, … occurs. Conversely, ∩n En is the event that all of the events occur.
8
Kolmogorov Axioms on Probability
– Rule 1 For any event E, P(E) ≥ 0.
– Rule 2 If E=S (that is, if E contains all points in the
sample space), then P(E) = 1.
– Rule 3 If E1, E2, …, En is a mutually exclusive
collection of events, then

P(E1 ∪ E2 ∪ … ∪ En) = ∑k=1,…,n P(Ek)

– Rule 4 P(Ē) = 1 – P(E)
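
These rules are easy to check on a finite, equally likely sample space. As a minimal sketch (not from the slides), the following Python code models events as subsets of a uniform sample space and verifies Rules 1-4 for the two-dice experiment used on later slides:

from fractions import Fraction
from itertools import product

# Uniform sample space for rolling a fair die twice.
S = set(product(range(1, 7), repeat=2))

def P(event):
    # Probability of an event, given as a subset of S, under equal likelihood.
    return Fraction(len(event), len(S))

E1 = {s for s in S if sum(s) == 6}   # the sum of the two rolls is 6
E2 = {s for s in S if sum(s) == 7}   # the sum of the two rolls is 7 (disjoint from E1)

assert P(E1) >= 0                    # Rule 1
assert P(S) == 1                     # Rule 2
assert P(E1 | E2) == P(E1) + P(E2)   # Rule 3 (E1 and E2 are mutually exclusive)
assert P(S - E1) == 1 - P(E1)        # Rule 4 (complement)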

9
Basic Rules of Probability

A fair coin is a coin that lands heads or tails with equal probability. For a biased coin, this is not the case.

10
Basic Rules of Probability

Probabilities satisfy a nice intuitive property: if our experiment is repeated over and over again, then (with probability 1) the long-run proportion of times that event E occurs will be exactly P(E).
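
This long-run frequency interpretation is easy to illustrate with a short simulation. Below is a minimal Python sketch (not from the slides) estimating P(E) for the event that a fair coin lands heads:

import random

random.seed(0)  # fixed seed for reproducibility

def relative_frequency(n_trials):
    # Simulate n_trials fair coin flips and return the proportion of heads.
    heads = sum(random.random() < 0.5 for _ in range(n_trials))
    return heads / n_trials

for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))   # approaches P(E) = 0.5 as n grows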
11
Complementary Events
For an event E, Ec is the standard notation for its complement event. An event and its complement are mutually exclusive, and together they cover the whole sample space. Then, since E ∪ Ec = S and E ∩ Ec = ∅, Rules 2 and 3 give

P(E) + P(Ec) = P(S) = 1, that is, P(Ec) = 1 − P(E).
12
Union of Two Events
The union of two events E and F is denoted by E ∪ F. The probability of this new event should account for all outcomes that lie in either E or F. Then, what about the following candidate formula for the probability of E ∪ F?

P(E ∪ F) = P(E) + P(F)

Can we accept this formula as valid? Why? Or, if you don't think so, what's the problem here?

Consider the following example where we toss two coins. Does anyone remember the sample space of this experiment?

13
Union of Two Events
The sample space is

S = {(H, H), (H, T), (T, H), (T, T)}

We assume that each outcome is equally likely. In other words, the probability of
each outcome is ¼. Consider the events:

E = {(H, H), (H, T)} and F = {(H, H), (T, H)}

Can you describe in words what these events are?


E is the event that the first coin lands on heads and F is the event
that the second coin lands on heads.

What can we say about P(E) and P(F)?

Both are equal to ½.


14
Union of Two Events
Then if we use the formula, P(E ∪ F) = P(E) + P(F) = ½ + ½ = 1, right? What is the problem here?

The problem is that the event E ∪ F = {(H, H), (H, T), (T, H)} excludes the outcome (T, T), which has a positive probability, and yet we claim P(E ∪ F) = 1. To state it in a different way, in using the formula P(E ∪ F) = P(E) + P(F), we implicitly count twice the outcomes that lie in both E and F at the same time (i.e., (H, H) is counted twice here).

So it seems we need a new formula here. Here it is:

P(E ∪ F) = P(E) + P(F) − P(EF)

where EF is a shorthand notation for the intersection of E and F (i.e., EF = E ∩ F).

If we solve the problem using this formula, we obtain ½ + ½ − ¼ = 1 − ¼ = ¾.

15
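
As a quick check (not part of the slides), this short Python sketch enumerates the two-coin sample space and confirms the corrected formula:

from fractions import Fraction
from itertools import product

# Sample space for tossing two fair coins, all outcomes equally likely.
S = set(product("HT", repeat=2))

def P(event):
    return Fraction(len(event), len(S))

E = {s for s in S if s[0] == "H"}   # first coin lands on heads
F = {s for s in S if s[1] == "H"}   # second coin lands on heads

assert P(E) + P(F) == 1             # the naive formula over-counts (H, H)
assert P(E | F) == P(E) + P(F) - P(E & F) == Fraction(3, 4)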
Union of More Events
What if we are interested in the probability of the union of n events? Accounting for the intersections in this case is a little more complicated. We use a well-known identity from set theory, the inclusion-exclusion formula:

P(E1 ∪ E2 ∪ … ∪ En) = ∑i P(Ei) − ∑i<j P(EiEj) + ∑i<j<k P(EiEjEk) − … + (−1)^(n+1) P(E1E2…En)

We will not use this inclusion-exclusion identity very often in this course.
Nevertheless, it is a useful formula that you may wish to keep for your records.

16
Conditional Probabilities
Conditional probability is one of the most important concepts (maybe the most important) in probability theory when it comes to modeling stochastic systems. It is used to model the impact of incoming information on our probability assessments.
Consider the following example of rolling dice. If we roll the same die twice, there are 36 possible outcomes. If the die is fair, we expect the probability of each outcome in this sample space to be 1/36.
Now, assume that someone tells you that the outcome of the first roll is 4.
Then, what is the probability that the outcome of the original
experiment is (3, 5)?

Zero!

17
Conditional Probabilities
How about the probability that the outcome of the original experiment
is (4, 6)?

1/6
This simple example shows once more that incoming information might change your probability assessments about an experiment. In general, let E be the event whose probability you are after, and let F be the event you can observe before E occurs. Naturally, you might be interested in the conditional probability that E will occur given that F occurs. The standard notation used for this probability is P(E | F).

How can the information that F occurs change our assessment of the
probability of event E?
18
Conditional Probabilities
Well, in one simple way: the sample space of the experiment may be affected by the occurrence of the event F. That is what happened in the previous example. The moment you learn that 4 is indeed the outcome of the first roll, you immediately disregard outcomes such as (3,5), (2,1), (6,6), etc. In fact, a total of 30 possible outcomes are eliminated this way. What is left in the sample space then? Outcomes of the form (4,1), (4,2), …, (4,6). Since all are equally likely, you conclude that the probability of (4,6) is 1/6.
The natural question one might ask is whether there is a formula for P(E | F). You must have seen one in your probability and statistics course. Does anyone remember? It is

P(E | F) = P(EF) / P(F), provided that P(F) > 0.

19
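
As a sketch (not from the slides), the same shrinking-sample-space logic can be coded directly for the two-dice example; P_given below implements the formula P(E | F) = P(EF) / P(F):

from fractions import Fraction
from itertools import product

S = set(product(range(1, 7), repeat=2))   # two rolls of a fair die

def P(event):
    return Fraction(len(event), len(S))

def P_given(E, F):
    # Conditional probability P(E | F) = P(EF) / P(F).
    return P(E & F) / P(F)

F = {s for s in S if s[0] == 4}           # first roll is 4
E = {(4, 6)}                              # the outcome is (4, 6)

print(P_given(E, F))                      # 1/6, matching the shrunken sample space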
Conditional Probabilities
Let's consider an example now: we draw one card at random from ten cards numbered 1 through 10, and we are then told that the number drawn is at least 5. What is the probability that the card drawn is 10?
Originally the sample space is S = {1, 2, …, 10}. One would naturally think that each outcome is equally likely to be drawn. Hence, if we were asked the probability that the card we draw is 10 before being told that the number is at least 5, we would say the probability is 1/10.

When we learn that the outcome is at least 5, the sample space shrinks to S = {5, 6, 7, 8, 9, 10}. Since all outcomes are still equally likely, we would immediately say the desired probability increases to 1/6.

20
Conditional Probabilities
We did not use the formula given in earlier slides, though. If we wish to solve this
problem using the formula, then E denotes the event that the card drawn is 10 and
F is the event that it is at least 5. Then
Solution:

P(E | F) = P(EF) / P(F) = (1/10) / (6/10) = 1/6

which agrees with the shrinking-sample-space argument.
21
Conditional Probabilities
Another example is as follows:

Solution:

22
Conditional Probabilities
One more similar example is:

Let F denote the event that the first ball is black, and E denote the event that the
second ball is black. What is the event in question here?
Solution:

23
Independence of Two Events
In many uncertain situations, we may be faced with the question of assessing the probability of two events that have no relevance to each other. Namely, for two events E and F, knowing that F has occurred gives us no reason to update the probability assessment on E. In this case, we say that events E and F are independent.
A formal definition of independence is as follows: Two events E and F are said to be independent if:

P(EF) = P(E)P(F)

This also implies that, when E and F are independent:

P(E | F) = P(E)

or:

P(F | E) = P(F)
24
Independence of Two Events
Let's consider an example: roll a fair die twice. Let E1 be the event that the sum of the two rolls equals 6, and let F be the event that the first roll is 4.

Are E1 and F independent? How do you check?

We should find P(E1 ∩ F), P(E1), and P(F) separately and then check whether P(E1 ∩ F) = P(E1)P(F) holds.
Let's do so then. First, P(F) = 1/6. Simple! How about P(E1)?

The sum of two rolls is 6 when the outcomes are (1,5), (2,4), (3,3), (4,2),
(5,1). Hence P(E1) = 5/36.

25
Independence of Two Events
How about P(E1 ∩ F)? Which outcome of the two consecutive rolls satisfies (1) a sum of six and (2) a 4 on the first roll? Simple! Only one outcome, namely (4, 2). What is the probability of getting (4, 2)? 1/36. Then P(E1 ∩ F) = 1/36 as well.
One final step before we understand whether E1 and F are independent. Let's compute P(E1)·P(F) = 5/36 × 1/6 = 5/216 < 1/36. Hence these two events are not independent.
Now, consider another event E2. E2 is the event that the sum equals 7 (not 6).
Following the same sequence of steps, let's check whether E2 and F are independent. First, using parallel arguments, one could see that P(E2 ∩ F) = P({(4, 3)}) = 1/36.

The sum will be 7 if we observe (1,6), (2,5), (3,4), (4,3), (5,2), or (6,1). Then P(E2) = 6/36 = 1/6 and P(E2)·P(F) = 1/6 × 1/6 = 1/36. Since this is equal to the number above, E2 and F are independent.
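
As a side note (not from the slides), both independence checks can be reproduced by enumeration in Python:

from fractions import Fraction
from itertools import product

S = set(product(range(1, 7), repeat=2))   # two rolls of a fair die

def P(event):
    return Fraction(len(event), len(S))

F  = {s for s in S if s[0] == 4}          # first roll is 4
E1 = {s for s in S if sum(s) == 6}        # sum is 6
E2 = {s for s in S if sum(s) == 7}        # sum is 7

print(P(E1 & F) == P(E1) * P(F))          # False: E1 and F are dependent
print(P(E2 & F) == P(E2) * P(F))          # True:  E2 and F are independent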
26
Independence of Two Events
Isn't it an interesting conclusion that, in this example, E2 and F are independent while E1 and F are NOT? Can you think of an intuitive explanation for this?
Hint: Think along the lines of how learning about F could change your probability
assessment about the other event.

If F occurs, then E1 will occur only if you get a 2 on the second roll. Hence, once you learn F, your probability for E1 should be 1/6, not the 5/36 we initially computed.
For E2, though, both probabilities are 1/6. Hence, as far as E2 is concerned, F provides no extra information.

27
Bayes Formula
As we mentioned in an earlier example, conditioning is sometimes the key to computing probabilities. This is partly because of the following set-theoretic identity for two events E and F:

E = EF ∪ EFc

Since EF and EFc are mutually exclusive, this identity helps us derive a very useful formula (the law of total probability):

P(E) = P(E | F)P(F) + P(E | Fc)P(Fc)

28
Bayes Formula
Consider the following example:

One piece of information is not clear in the question statement: if the coin comes up heads, the ball is selected from the first urn. Any suggestions on the solution?

Let W be the event that a white ball is selected, and H be the event that the outcome of the coin toss is heads. What is the probability that we are interested in?

29
Bayes Formula
Solution:
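
Whatever the urn compositions given in the question, and assuming the coin is fair (an assumption here), the formula from the previous slide gives

P(W) = P(W | H) · ½ + P(W | Hc) · ½

where P(W | H) and P(W | Hc) are the fractions of white balls in the first and second urns, respectively. Plugging in the urn compositions from the question yields the numeric answer.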

30
Bayes Formula
Another example concerns the result of a lab test:

What events are of interest here? The last sentence gives the answer: D, the
event that the person is actually sick and E, the event that the test result is
positive. Hence, we need P(D|E).

Then it is useful to review the probability information given in the question. Do we have probability assessments conditioned on whether the test is positive, or on whether the person is sick or not?

It’s the health status that matters!


31
Bayes Formula
Following similar steps as in the previous example:
Solution:
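
As a minimal numeric sketch, with hypothetical values in place of the question's figures (a 95% true-positive rate, a 1% false-positive rate, and a 0.5% prevalence, all assumed here for illustration), the computation in Python is:

# P(D | E) via Bayes' formula, with hypothetical inputs.
p_d = 0.005            # P(D): prevalence of the disease (assumed)
p_e_given_d = 0.95     # P(E | D): true-positive rate (assumed)
p_e_given_dc = 0.01    # P(E | Dc): false-positive rate (assumed)

# Law of total probability: P(E) = P(E | D)P(D) + P(E | Dc)P(Dc)
p_e = p_e_given_d * p_d + p_e_given_dc * (1 - p_d)

# Bayes' formula: P(D | E) = P(E | D)P(D) / P(E)
p_d_given_e = p_e_given_d * p_d / p_e
print(p_d_given_e)     # ≈ 0.323: even after a positive test, P(D | E) stays modest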

32
Bayes Formula
We need a generalization of the following form: Suppose F1, F2, …, Fn are mutually exclusive events such that ∪i=1,…,n Fi = S. Then we should have:

E = ∪i=1,…,n EFi

Since the Fi's are mutually exclusive, the EFi's are mutually exclusive as well. Then, it follows that:

P(E) = ∑i=1,…,n P(EFi) = ∑i=1,…,n P(E | Fi)P(Fi)
33
Bayes Formula
The last expression on the previous slide leads us to Bayes' formula:

P(Fj | E) = P(EFj) / P(E) = P(E | Fj)P(Fj) / ∑i=1,…,n P(E | Fi)P(Fi)
34
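
As a final sketch (not from the slides), the general formula translates directly into code:

def bayes_posterior(j, priors, likelihoods):
    # priors[i]      = P(Fi), for mutually exclusive events Fi covering S
    # likelihoods[i] = P(E | Fi)
    # Returns P(Fj | E) by Bayes' formula.
    total = sum(p * l for p, l in zip(priors, likelihoods))  # P(E), by total probability
    return likelihoods[j] * priors[j] / total

# Example: the (hypothetical) lab-test numbers from the earlier sketch,
# with F1 = sick, F2 = healthy.
print(bayes_posterior(0, [0.005, 0.995], [0.95, 0.01]))  # ≈ 0.323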
