Basic Probability

A random experiment is an observation with unpredictable outcomes, where all possible outcomes can be described beforehand. Key concepts include sample points, sample space, events, and types of events (certain, impossible, complementary). Probability definitions and theorems are discussed, along with random variables, their types, and the probability mass function and cumulative distribution function.


Random Experiment

An experiment or observation that may be repeated a large number of times under
very nearly identical conditions, where the outcome of any particular
observation is unpredictable but all possible outcomes can be described prior to its
performance, is known as a Random Experiment.

For example, the experiment of tossing a coin is a random experiment since the
possible outcomes are 'Heads' (H) or 'Tails' (T), but the outcome of a particular toss
cannot be predicted.

Sample Points / Event Points

The outcomes of a random experiment are called sample points or event points.​
For example, in the experiment of tossing a coin, the sample points are:​
H (Head) and T (Tail).

Sample Space

The set of all sample points, i.e., the set of all possible outcomes of a random
experiment, is called the sample space. It is denoted by S.

●​ Example 1: If we throw two coins once, then:​


S = {HH, HT, TH, TT}
●​ Example 2: If we roll a die once, then:​
S = {1, 2, 3, 4, 5, 6}

Event

Any subset of the sample space S of a random experiment is called an Event.​


For example, in the experiment of throwing two coins, let:​
A = {TH, HT}​
Since A is a subset of S, it is an event.

Certain Event

An event that always occurs is called a certain event. Since every set is a
subset of itself, the sample space S is a subset of itself and is therefore an
event; this event is the certain event. E.g.: getting a face from 1 to 6 when
rolling a die.

Impossible Event

An event that contains no sample points is called an impossible event. It is


denoted by ϕ.
●​ Example: In the experiment of throwing a die, the event "Face 7 appears"
is an impossible event, as no face of a die shows the number 7.​
Hence, Face 7 = ϕ.

Complementary Event

For any event A, there is an event containing all the sample points in the
sample space that are not in A.​
This event is called complementary event of A and is denoted by A' or Ā or
Ac.

●​ Example: If A = {TH, HT} and S = {HH, TT, TH, HT}, then:​


A' = {HH, TT} (i.e., the complement of A contains all elements in S
except those in A).

Important Notes:

●​ S' = ϕ & ϕ' = S (the complement of S is the empty set, and vice versa)


●​ (A')' = A (double complement of A gives back A)

Simultaneous Occurrence of Two Events

Let A1 and A2 be two events. The set A1 ∩ A2 represents the simultaneous


occurrence of both events A1 and A2.​
This event is also denoted by A1 A2.

●​ Example: In the experiment of rolling a die:


○​ Let A1 = {Even face} = {2, 4, 6}
○​ Let A2 = {Multiples of three} = {3, 6}
○​ Then, A1 ∩ A2 = {6}, meaning that 6 is the only outcome that
satisfies both events.

At Least One of Two Events

Let A1 and A2 be two events. The set A1 ∪ A2 represents the occurrence of


at least one of A1 or A2.​
This event is also denoted by A1 + A2.
●​ Example: In the experiment of rolling a die:
○​ A1 = {Even face} = {2, 4, 6}
○​ A2 = {Multiple of three} = {3, 6}
○​ Then, A1 ∪ A2 = {2, 3, 4, 6}, meaning that each of these values satisfies
at least one of the two conditions.
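The intersection and union examples above can be checked directly with Python's built-in sets; this is a small illustrative sketch, and the variable names are our own:

```python
# Sample space for rolling a die once
S = {1, 2, 3, 4, 5, 6}

A1 = {2, 4, 6}   # even face
A2 = {3, 6}      # multiple of three

# Simultaneous occurrence: A1 ∩ A2
both = A1 & A2
print(both)                    # {6}

# At least one of the two: A1 ∪ A2
at_least_one = A1 | A2
print(sorted(at_least_one))    # [2, 3, 4, 6]
```

Here `&` and `|` are Python's set intersection and union operators, mirroring ∩ and ∪.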

Disjoint or Mutually Exclusive (m.e.) Events

If two events A1 and A2 have no common sample points, they are called
Mutually Exclusive Events.​
That is, if A1 ∩ A2 = ϕ, then A1 and A2 are mutually exclusive.

●​ Example 1: If A1 = {HH, TT} and A2 = {HT, TH}, then:​


A1 ∩ A2 = ϕ (no common elements).​

●​ Example 2: If rolling a die:​

○​ A1 = {1, 3, 5}
○​ A2 = {2, 4, 6}
○​ A1 ∩ A2 = ϕ, so these two events are mutually exclusive.

Since mutually exclusive events have no overlap, two m.e. events cannot
occur simultaneously.

Pairwise Disjoint Events

Let A1, A2, ..., An be n events. Events Ai (for i = 1, 2, ..., n) are said to be
pairwise disjoint if no two of them have any common event points.​
Mathematically, if Ai ∩ Aj = ϕ for i ≠ j, then Ai and Aj are pairwise disjoint.

●​ Example: Let there be 10 events A1, A2, ..., A10 such that Ai ∩ Aj = ϕ for
every pair i ≠ j (not only consecutive pairs such as A1 ∩ A2 and A2 ∩ A3, but
all pairs, e.g. A1 ∩ A10 as well); then these events are pairwise disjoint.

Exhaustive Events

Two or more events are said to be exhaustive if union of all those events
gives the sample space.​
In other words, the events A1, A2, ..., An are exhaustive if:​
A1 ∪ A2 ∪ A3 ∪ ... = S (where S is the sample space).

●​ Example: In the experiment of throwing two coins once:


○​ A1 = {HH}
○​ A2 = {TT}
○​ A3 = {HT, TH}​
These events are exhaustive because their union forms the
entire sample space:​
S = {HH, HT, TH, TT}, meaning at least one of these events must
occur.

Equally Likely Sample Points

The sample points of a sample space are said to be equally likely if no


particular outcome is more expected than the others.

Classical Definition of Probability

Let us suppose that a random experiment E has a finite sample space S


containing n(S) sample points, all of which are equally likely. Then, the
probability of an event A, which contains n(A) sample points, is defined as:

P(A) = n(A) / n(S)

n(A): the number of favorable outcomes for event A.
n(S): the total number of possible outcomes in the sample space S.
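The classical definition is easy to compute directly. A minimal sketch (the variable names are our own), using `fractions.Fraction` to keep the probability exact:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}   # sample space: one roll of a fair die
A = {2, 4, 6}            # event: an even face appears

# Classical definition: P(A) = n(A) / n(S), all outcomes equally likely
P_A = Fraction(len(A), len(S))
print(P_A)               # 1/2
```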

Criticism of the Classical Definition

1.​ Circular Definition:


○​ To define probability, it assumes that all sample points are
equally likely, meaning each has the same probability. This
makes the definition circular.
2.​ Lack of a Criterion for Equally Likely Outcomes:
○​ The definition does not provide a way to determine whether
different possible outcomes of an experiment are actually equally
likely.
3.​ Unsuitability for Infinite Sample Spaces:
○​ In many experiments, the number of possible outcomes is
infinite, making this definition inapplicable in those cases.
4.​ Limited Applicability:
○​ This definition is useful only in simple and unimportant cases,
such as games of chance.
5.​ Difficulty in Complex Scenarios:
○​ In some real-world problems, it is difficult to count all possible
outcomes and favorable cases. Examples include:
■​ Determining the sex of a newborn child.
■​ Predicting the outcome of an unfair coin toss.

Important Sample Space and Probability Concepts in a 52-Card Deck

Composition of a 52-Card Deck

A standard deck consists of 52 cards divided into four suits (Spades ♠, Hearts
♥, Diamonds ♦, Clubs ♣), each containing 13 cards.

Each suit contains the following 13 ranks:

●​ Number Cards: 2, 3, 4, 5, 6, 7, 8, 9, 10 (9 total)


●​ Face Cards: Jack (J), Queen (Q), King (K) (3 total)
●​ Ace (A)
Axiomatic Definition of Probability

Let E be a random experiment and S be its sample space. Let P be a function


from the class of all events to the set of all real numbers satisfying the
following axioms:

Axiom I (Non-Negative Probability)

P(A) ≥ 0, for every event A.

Axiom II (Probability of the Sample Space)

P(S) = 1 (for a certain event).

Axiom III (Additivity for Mutually Exclusive Events)

If A₁, A₂, ... is a finite or infinite sequence of pairwise mutually exclusive
events, then:

P(A₁ ∪ A₂ ∪ ...) = P(A₁) + P(A₂) + ...

Then for any event A, the real number P(A) is called its probability.

Frequency Definition of Probability

Suppose that we have a random experiment E with sample space S. Let A ⊆ S
be any event. Let us repeat the random experiment n times independently.
Define Xn as the relative frequency of event A (number of times A occurs /
number of trials) during these n trials.

It is observed that as n → ∞, Xn approaches some fixed number. This behavior
is an example of statistical regularity. The limiting fixed number is called the
probability of the event. This is known as the frequency definition of
probability.

P(A) = lim (n→∞) f(A) / n

where f(A) represents the frequency (number of times) the event A occurs in n
independent trials of the experiment.
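This statistical regularity can be illustrated with a quick simulation (a sketch of our own, not part of the notes; the function name is made up): the relative frequency of heads in n fair-coin tosses settles near 1/2 as n grows.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def relative_frequency(n):
    """Relative frequency of 'heads' in n independent tosses of a fair coin."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads / n

# The relative frequency approaches P(heads) = 0.5 as n grows
for n in (100, 10_000, 100_000):
    print(n, relative_frequency(n))
```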

The table below is for tossing a coin 5 times.


Theorems on Probability

Theorem 1

0 ≤ P(A) ≤ 1, for any event A.

Theorem 2

P(S) = 1, P(Ø) = 0​
where S is the certain event, and Ø is the impossible event.

Theorem 3

If the events A1, A2, ..., An are mutually exclusive, then:​


P(A1 ∪ A2 ∪ ... ∪ An) = P(A1) + P(A2) + ... + P(An)

Theorem 4 (Addition Theorem)

For any two events A1 and A2 (which may not be mutually exclusive):​
P(A1 ∪ A2) = P(A1) + P(A2) - P(A1 ∩ A2)

Theorem 5

If A1 and A2 are mutually exclusive events, then:​


P(A1 ∪ A2) = P(A1) + P(A2)

Theorem 6
If A1, A2, A3 are any three events (not necessarily mutually exclusive), then:​
P(A1 ∪ A2 ∪ A3) = P(A1) + P(A2) + P(A3) - P(A1 ∩ A2) - P(A1 ∩ A3) -
P(A2 ∩ A3) + P(A1 ∩ A2 ∩ A3)
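Theorem 6 (inclusion–exclusion for three events) can be verified by brute-force counting on a die; the three events below are our own illustrative choices:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
P = lambda E: Fraction(len(E), len(S))   # classical probability on a fair die

A1, A2, A3 = {2, 4, 6}, {3, 6}, {5, 6}

lhs = P(A1 | A2 | A3)                    # probability of the union, counted directly
rhs = (P(A1) + P(A2) + P(A3)
       - P(A1 & A2) - P(A1 & A3) - P(A2 & A3)
       + P(A1 & A2 & A3))                # inclusion-exclusion formula

print(lhs, rhs, lhs == rhs)             # 5/6 5/6 True
```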

Theorem 7

For any event A, the probability of its occurrence is given by:​


P(A) = 1 - P(A')​
where A' is the complementary event of A.

Theorem 8

If the events A1, A2, ..., An are mutually exclusive and exhaustive, then:

Σᵢ₌₁ⁿ P(Aᵢ) = 1, i.e. P(A1) + P(A2) + ... + P(An) = 1

Theorem 9

For any two events A1 and A2, where A1 ⊆ A2:

(i) P(A1) ≤ P(A2)

(ii) P(A2 - A1) = P(A2) - P(A1)

Independent Events

If for two events A and B, the chance of occurrence of event A is not affected
by the occurrence of event B, then A is said to be independent of B.

●​ The following theorem is the most important characterization of two


independent events.

Theorem:

Two events A and B are stochastically independent or statistically


independent or simply independent if and only if:​
P(A ∩ B) = P(A) . P(B)

Note:

When P(A ∩ B) ≠ P(A) . P(B), the events A and B are said to be dependent.
De Morgan’s laws of set theory: If A and B are two sets, then:
1.​ (A ∪ B)' = A' ∩ B'
2.​ (A ∩ B)' = A' ∪ B'
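The independence criterion P(A ∩ B) = P(A) · P(B) can be checked by enumerating outcomes. A sketch (the events are our own example) with two dice, where A = "first die is even" and B = "second die shows 6":

```python
from fractions import Fraction
from itertools import product

S = set(product(range(1, 7), repeat=2))   # 36 equally likely outcome pairs
P = lambda E: Fraction(len(E), len(S))

A = {s for s in S if s[0] % 2 == 0}       # first die even:   P(A) = 1/2
B = {s for s in S if s[1] == 6}           # second die is 6:  P(B) = 1/6

# P(A ∩ B) = 3/36 = 1/12 = P(A) * P(B), so A and B are independent
print(P(A & B) == P(A) * P(B))            # True
```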

Random Variable​
A random variable is a real-valued function whose domain is the sample
space of a random experiment.

●​ It assigns a numerical value to each outcome (sample point) of the


sample space according to a specific rule.
●​ For more details, refer to Class 12 Notes.

Types of Random Variables

1.​ Discrete Random Variable


○​ A random variable X is said to be discrete if the spectrum (set of
values it can take) is finite or countably infinite (i.e., an infinite
sequence of distinct values).​

2.​ Continuous Random Variable


○​ A random variable X is said to be continuous if it can assume
every value in an interval (uncountably infinite set of values).

Probability Mass Function (PMF)

●​ Let X be a discrete random variable.


●​ For every value of X, the probability of that value is called the
probability mass function.
●​ It is denoted by:​
fᵢ = P(X = i)​
where i represents the distinct values that X can assume.

Note:
●​ For this type of question, we need to write the probability distribution
of the random variable (refer to Class 12 notes).
●​ The sum of all probabilities (fᵢ) must satisfy the condition:​
Σ fᵢ = 1

Distribution Function or Cumulative Distribution Function (CDF)

The distribution function (also called the cumulative distribution function,


or CDF) of a random variable X is defined as:

F(x) = P(-∞ < X ≤ x) [for -∞ < x < ∞]

●​ x represents the values taken by the random variable X.

Example:​
Suppose in a random experiment of rolling a die, the random variable X is
the face number that appears.

Now, the probability distribution is:

X 1 2 3 4 5 6

fi = P(X = x) 1/6 1/6 1/6 1/6 1/6 1/6

Now the distribution function F(x) is given by:

●​ If -∞ < x < 1, F(x) = 0​

●​ If 1 ≤ x < 2, F(x) = f₁ = 1/6​

●​ If 2 ≤ x < 3, F(x) = f₁ + f₂ = 2/6​

●​ If 3 ≤ x < 4, F(x) = f₁ + f₂ + f₃ = 3/6​

●​ If 4 ≤ x < 5, F(x) = f₁ + f₂ + f₃ + f₄ = 4/6​


●​ If 5 ≤ x < 6, F(x) = f₁ + f₂ + f₃ + f₄ + f₅ = 5/6​

●​ If 6 ≤ x < ∞, F(x) = f₁ + f₂ + f₃ + f₄ + f₅ + f₆ = 6/6 = 1
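The step function F(x) built above can be computed directly from the PMF. A small sketch (the function name `F` and dictionary layout are our own):

```python
from fractions import Fraction

# PMF of the face shown by one roll of a fair die
pmf = {i: Fraction(1, 6) for i in range(1, 7)}

def F(x):
    """Cumulative distribution function F(x) = P(X <= x)."""
    return sum(p for i, p in pmf.items() if i <= x)

print(F(0))     # 0   (below the smallest value)
print(F(3))     # 1/2 (= f1 + f2 + f3 = 3/6)
print(F(3.5))   # 1/2 (F is constant between jump points)
print(F(6))     # 1
```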

Prove a Function is a Distribution Function (CDF):

To verify whether a given function is a valid distribution function, we must


show:

1.​ It is non-decreasing​

2.​ F(-∞) = 0 and F(∞) = 1​

3.​ F(x) is right-continuous at every point (for a discrete random variable,
F has jumps, so it need not be continuous everywhere)

To Convert a Distribution Function to PMF:

Use the following formula:

P(X = a) = lim (x→a⁺) F(x) − lim (x→a⁻) F(x)

Three Important Properties of the Cumulative Distribution Function


(CDF):

1.​ P(a < X ≤ b) = F(b) - F(a)
2.​ P(a < X < b) = F(b) - F(a) - P(X = b)
3.​ P(a ≤ X < b) = F(b) - F(a) - P(X = b) + P(X = a)

Expectation or Mean of a Discrete Random Variable:

Let X be a discrete random variable whose distribution is:

X 1 2 3 4 5 6

fi = P(X = x) 1/6 1/6 1/6 1/6 1/6 1/6


Then the mean or expectation or expected value of X, denoted by E(X) or
m(X), is defined as:

E(X) = 1 × 1/6 + 2 × 1/6 + 3 × 1/6 + 4 × 1/6 + 5 × 1/6 + 6 × 1/6 = 21/6 = 3.5

Note:

E(X²) = 1² × 1/6 + 2² × 1/6 + 3² × 1/6 + 4² × 1/6 + 5² × 1/6 + 6² × 1/6

E(2X) = 2·1 × 1/6 + 2·2 × 1/6 + 2·3 × 1/6 + 2·4 × 1/6 + 2·5 × 1/6 + 2·6 × 1/6

Properties of Expectation:

1.​ E(a) = a, where a is any constant


2.​ E(ax + b) = aE(x) + b, where a and b are constants
3.​ E(a·g(x) + b) = aE(g(x)) + b

Variance of a Random Variable:

Var(X) = E((X - m)²), where m = E(X)

Standard Deviation (σ):

σ = +√Var(X)

Properties of Variance:

1.​ Var(X) = E(X²) - (E(X))²


2.​ Var(ax + b) = a²Var(X)​
(Can be proved using the definition Var(X) = E((X - m)²))
3.​ Var(k) = 0, where k is a constant
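For the die distribution used above, E(X), E(X²) and Var(X) via property 1 can be computed exactly (a sketch; the variable names are our own):

```python
from fractions import Fraction

pmf = {i: Fraction(1, 6) for i in range(1, 7)}   # fair die

E_X  = sum(x * p for x, p in pmf.items())        # E(X)  = 21/6 = 7/2
E_X2 = sum(x**2 * p for x, p in pmf.items())     # E(X^2) = 91/6
var  = E_X2 - E_X**2                             # property 1: Var(X) = E(X^2) - (E(X))^2

print(E_X)    # 7/2
print(var)    # 35/12
```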

Bernoulli Distribution

A discrete random variable X is said to have a Bernoulli Distribution if its


probability mass function (p.m.f.) is given by:
●​ f(x) = p, if x = 1 (Success)
●​ f(x) = 1 - p, if x = 0 (Failure)
●​ f(x) = 0, otherwise

Here, p ∈ (0, 1) is a parameter.​


Such a random variable can take only two values: 0 and 1.​
We write: X ~ Bernoulli(p), which means X is a Bernoulli variate with
parameter p.

Bernoulli Trial

If a random experiment has only two possible outcomes, 0 and 1, each
performance of the experiment is called a Bernoulli trial; if it is repeated
independently, the resulting sequence of trials is called a Bernoulli
Sequence of trials.

The probability of r (≤ n) successes in a Bernoulli sequence of n trials is
given by:

ⁿCᵣ · pʳ · (1 − p)ⁿ⁻ʳ

where p is the probability of success in a single trial, and r, the number of
successes, is itself a random variable.

Note: Many random experiments with exactly two possible outcomes can be
modeled as Bernoulli trials.

In this type of question, we will be given the number of trials (n) and will
have to find the probability of a given number of successes (r).

Example Case:

Q: If we roll a die 5 times, what will be the probability of getting face 3 two
times?

Ans: If we try to solve this type of question without the binomial formula, we
will have to count each possible combination, which can be very lengthy. The
best way to solve it is:

⁵C₂ · (1/6)² · (1 − 1/6)⁵⁻²
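The computation above can be sketched in Python using `math.comb` for ⁿCᵣ (the helper name `binom_pmf` is our own):

```python
from fractions import Fraction
from math import comb

def binom_pmf(n, r, p):
    """P(exactly r successes in n Bernoulli trials): C(n,r) * p^r * (1-p)^(n-r)."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

# Probability of getting face 3 exactly twice in 5 rolls of a fair die
p = binom_pmf(5, 2, Fraction(1, 6))
print(p)           # 625/3888
print(float(p))    # ≈ 0.1608
```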
Binomial Distribution:

Poisson Distribution:
