INT354 Unit 1 Part2

Probably Approximately Correct (PAC) Learning is a framework that analyzes how much data a learning algorithm needs to perform well on unseen data with high probability and low error. A concept class is PAC-learnable if there exists an algorithm that outputs a low-error hypothesis given sufficiently many labeled examples. The goal is to develop algorithms that can learn any function from a hypothesis space effectively, as illustrated by the example of determining whether a person is of medium build based on height and weight.


Probably Approximately Correct (PAC) Learning
• PAC learning is a framework that:
• Provides a formal analysis of how much data a learning algorithm needs in order to perform well on unseen data.
• Ensures that, with high probability (1 − δ), the learned model will have low error (ϵ) on new data.
Definition
• A concept class C is PAC-learnable if there exists a learning algorithm such that, for every ϵ > 0, δ > 0, and every distribution D:
• The algorithm outputs a hypothesis h ∈ H that is ϵ-accurate with probability at least 1 − δ.
• The number of labeled examples required is polynomial in 1/ϵ, 1/δ, and the description size of c (the target concept).
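For a finite hypothesis class H, a standard sample-complexity bound (not derived in these slides, but consistent with the definition above) states that m ≥ (1/ϵ)(ln|H| + ln(1/δ)) examples suffice for a consistent learner in the realizable case. A minimal sketch:

```python
import math

def sample_complexity(h_size, epsilon, delta):
    """Number of examples sufficient for a consistent learner over a
    finite hypothesis class H (of size h_size) to output an
    epsilon-accurate hypothesis with probability at least 1 - delta:
        m >= (1/epsilon) * (ln|H| + ln(1/delta))
    This is the standard realizable-case bound, assumed here."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / epsilon)

# e.g. |H| = 2**20 hypotheses, epsilon = 0.1, delta = 0.05
m = sample_complexity(2**20, 0.1, 0.05)
```

Note how the bound grows only logarithmically in |H| and 1/δ, but linearly in 1/ϵ: halving the tolerated error roughly doubles the data requirement.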
Goal of PAC Learning
• To find a learning algorithm that can learn any function from a given hypothesis space with high probability and low error, given enough training data.
Example: Medium-Build Person
• Training set: height and weight of m individuals.
• Target: whether the person is of medium build.
• The learner must learn the concept from a small sample of the instance space; the hypothesis is approximately correct if its error is at most ϵ.
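The medium-build example is commonly modeled as learning an axis-aligned rectangle over (height, weight) pairs. A hypothetical sketch (the function names and data are illustrative, not from the slides) of the classic tightest-fit learner, which outputs the smallest rectangle enclosing the positive examples:

```python
def learn_rectangle(examples):
    """Tightest-fit learner: given (height, weight, label) triples,
    return the smallest axis-aligned rectangle enclosing every
    positive point.  If the target concept really is a rectangle,
    this hypothesis never misclassifies a negative example."""
    pos = [(h, w) for h, w, label in examples if label]
    if not pos:
        return None  # no positives seen: predict "not medium build" everywhere
    heights = [h for h, _ in pos]
    weights = [w for _, w in pos]
    return (min(heights), max(heights), min(weights), max(weights))

def predict(rect, height, weight):
    """Classify a new (height, weight) point with the learned rectangle."""
    if rect is None:
        return False
    h_lo, h_hi, w_lo, w_hi = rect
    return h_lo <= height <= h_hi and w_lo <= weight <= w_hi
```

Because the learned rectangle sits inside the true one, its error is the probability mass of the four boundary strips it misses, which shrinks as more positive examples are seen.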
Approximately Correct
• A hypothesis h is approximately correct if its error is at most ϵ, where 0 ≤ ϵ ≤ ½.
• i.e. P(c Δ h) ≤ ϵ, where c Δ h denotes the set of instances on which the target concept c and the hypothesis h disagree.
Probably Approximately Correct
• The goal is to achieve low generalization error with high probability:
Pr(error(h) ≤ ϵ) ≥ 1 − δ
i.e. Pr(P(c Δ h) ≤ ϵ) ≥ 1 − δ
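The guarantee Pr(error(h) ≤ ϵ) ≥ 1 − δ can be checked empirically by repeating the learning experiment many times. A sketch assuming a simple setup not taken from the slides: a threshold concept on [0, 1] under the uniform distribution, with a learner that places its threshold at the smallest positive example seen.

```python
import random

def trial(m, t=0.3, rng=random):
    """One PAC trial (illustrative, assumed setup): the target concept is
    c(x) = 1 iff x >= t on uniform [0, 1]; the learner sets its threshold
    at the smallest positive example.  Returns the generalization error,
    which here equals the probability mass between t and the learned
    threshold (the region t <= x < h is misclassified)."""
    xs = [rng.random() for _ in range(m)]
    positives = [x for x in xs if x >= t]
    h = min(positives) if positives else 1.0
    return h - t

def empirical_confidence(m, epsilon, trials=2000, seed=0):
    """Fraction of trials in which the learned hypothesis is
    epsilon-accurate; PAC learning predicts this exceeds 1 - delta
    once m is large enough."""
    rng = random.Random(seed)
    good = sum(trial(m, rng=rng) <= epsilon for _ in range(trials))
    return good / trials
```

In this setup the error exceeds ϵ only if no sample lands in [t, t + ϵ], an event of probability (1 − ϵ)^m ≤ e^(−ϵm), so m ≥ (1/ϵ) ln(1/δ) examples suffice, matching the polynomial dependence in the definition above.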
Instance Space
(Slide figure illustrating the height–weight instance space; not reproduced here.)
