
Machine Learning:

Machine learning programs have been developed that successfully learn to recognize spoken words (Waibel 1989; Lee 1989), predict recovery rates of pneumonia patients (Cooper et al. 1997), detect fraudulent use of credit cards, drive autonomous vehicles on public highways (Pomerleau 1989), and play games such as backgammon at levels approaching the performance of human world champions (Tesauro 1992, 1995). We are also beginning to obtain initial models of human and animal learning and to understand their relationship to learning algorithms developed for computers (e.g., Laird et al. 1986; Anderson 1991; Qin et al. 1992; Chi and Bassock 1989; Ahn and Brewer 1993). Here we define learning broadly, to include any computer program that improves its performance at some task through experience. Put more precisely,

Definition: A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
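
To make the terms in this definition concrete, here is a small illustrative sketch (not from the text): the task T is recognizing handwritten digits, the performance measure P is the fraction of test images classified correctly, and the experience E is a growing set of labeled training images. The use of scikit-learn's bundled digits dataset and a logistic regression model is an assumption made purely for illustration.

# Illustrative sketch: performance P improves with experience E on task T.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 800):                    # increasing experience E
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])     # learn from n labeled examples
    print(n, model.score(X_test, y_test))   # performance measure P (accuracy)
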
 Learning to recognize spoken words.
All of the most successful speech recognition systems employ machine learning in some form. For example, the SPHINX system (e.g., Lee 1989) learns speaker-specific strategies for recognizing the primitive sounds (phonemes) and words from the observed speech signal. Neural network learning methods (e.g., Waibel et al. 1989) and methods for learning hidden Markov models (e.g., Lee 1989) are effective for automatically customizing to individual speakers, vocabularies, microphone characteristics, background noise, etc. Similar techniques have potential applications in many signal-interpretation problems.

 Learning to drive an autonomous vehicle.


Machine learning methods have been used to train computer-controlled vehicles to steer correctly when driving on a variety of road types. For example, the ALVINN system (Pomerleau 1989) has used its learned strategies to drive unassisted at 70 miles per hour for 90 miles on public highways among other cars. Similar techniques have possible applications in many sensor-based control problems.

 Learning to classify new astronomical structures.


Machine learning methods have been applied to a variety of large databases to learn general regularities implicit in the data. For example, decision tree learning algorithms have been used by NASA to learn how to classify celestial objects from the second Palomar Observatory Sky Survey (Fayyad et al. 1995). This system is now used to automatically classify all objects in the Sky Survey, which consists of three terabytes of image data.
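
As a rough, hypothetical illustration of the kind of learner involved (this is not NASA's system, and the feature names, data, and labelling rule below are invented placeholders), a decision tree can be fit to tabular features describing each object and then used to classify new objects:

# Hypothetical sketch of decision tree learning on synthetic object features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))      # e.g., brightness, ellipticity, area (invented)
y = (X[:, 1] > 0.2).astype(int)    # 0 = star, 1 = galaxy (synthetic labelling rule)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.predict(X[:5]))         # classify objects from their features
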

Learning to play world-class backgammon.


The most successful computer programs for playing games such as backgammon are based on machine learning algorithms. For example, the world's top computer program for backgammon, TD-GAMMON (Tesauro 1992, 1995), learned its strategy by playing over one million practice games against itself. It now plays at a level competitive with the human world champion. Similar techniques have applications in many practical problems where very large search spaces must be examined efficiently.
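
The sketch below is a deliberately simplified, assumption-laden illustration of the temporal-difference idea behind such self-play training; the real TD-GAMMON used TD(lambda) with a neural network evaluation function, whereas this toy keeps a table of state values and applies a TD(0) update.

# Toy TD(0) value update; states, rewards, and hyperparameters are assumed.
values = {}                       # state -> estimated value
alpha, gamma = 0.1, 1.0           # learning rate and discount factor (assumed)

def td_update(state, next_state, reward):
    v = values.get(state, 0.0)
    v_next = values.get(next_state, 0.0)
    # Move the estimate toward reward + discounted value of the next state.
    values[state] = v + alpha * (reward + gamma * v_next - v)

td_update("s0", "s1", 0.0)
td_update("s1", "terminal", 1.0)  # a won game yields reward 1
print(values)
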

Issues in Machine Learning:

 What algorithms exist for learning general target functions from specific training examples? In what settings will particular algorithms converge to the desired function? Which algorithms perform best for which types of problems and representations?
 How much training is sufficient? What general bounds can be found to relate the confidence in learned hypotheses to the amount of training experience and the character of the learner's hypothesis space?
 When and how can prior knowledge held by the learner guide the process of generalizing from examples? Can prior knowledge be helpful even when it is only approximately correct?
 What is the best strategy for choosing a useful next training experience, and how does the choice of this strategy alter the complexity of the learning problem?
 What is the best way to reduce the learning task to one or more function approximation problems? Put another way, what specific functions should the system attempt to learn? Can this process itself be automated?
 How can the learner automatically alter its representation to improve its ability to represent and learn the target function?

Types of Machine Learning:


Supervised Learning: Supervised learning is a type of machine learning that uses labeled data to train machine learning models. In labeled data, the output is already known; the model just needs to map the inputs to the respective outputs.

An example of supervised learning is training a system to identify the image of an animal.

Supervised learning algorithms are generally used for solving classification and regression problems.
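
A minimal supervised-learning sketch follows; scikit-learn and its bundled iris dataset are used here only as convenient stand-ins for the labeled data described above.

# Supervised learning: labeled examples, model maps inputs to known outputs.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)                # labeled data: outputs are known
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = KNeighborsClassifier().fit(X_tr, y_tr)     # learn the input-to-output mapping
print("test accuracy:", clf.score(X_te, y_te))   # evaluate on unseen labeled examples
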

Unsupervised Learning:

Unsupervised learning is a type of machine learning that uses unlabeled data to train machines. Unlabeled data doesn't have a fixed output variable. The model learns from the data, discovers the patterns and features in the data, and returns the output.

Consider, for example, an unsupervised learning technique that uses images of vehicles to determine whether each one is a bus or a truck. The model learns by identifying the parts of a vehicle, such as the length and width of the vehicle, the front and rear end covers, the roof and hood, and the types of wheels used. Based on these features, the model classifies each vehicle as a bus or a truck.

Unsupervised learning finds patterns and trends in the data to discover the output, so the model tries to label the data based on the features of the input data.

The training process used in unsupervised learning techniques does not need any supervision to build models. The models learn on their own and predict the output.

Unsupervised learning is used for solving clustering and association problems.
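
A minimal clustering sketch in the same spirit is shown below; the two synthetic "vehicle" groups and their length/width features are invented purely for illustration.

# Unsupervised learning: no labels, points are grouped by similarity.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic "vehicle" groups described only by length and width (metres).
X = np.vstack([rng.normal([12.0, 2.5], 0.5, (50, 2)),   # bus-like
               rng.normal([7.0, 2.0], 0.5, (50, 2))])   # truck-like

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters[:10])   # cluster ids discovered without any labels
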

Reinforcement Learning

Reinforcement learning trains a machine to take suitable actions and maximize its rewards in a particular situation. It uses an agent and an environment to produce actions and rewards. The agent has a start state and an end state, but there might be different paths for reaching the end state, as in a maze. In this learning technique, there is no predefined target variable.

An example of reinforcement learning is training a machine to identify the shape of an object, given a list of different objects: the model tries to predict the shape of each object, such as a square, and is rewarded when it predicts correctly.

Reinforcement learning follows trial-and-error methods to get the desired result. After accomplishing a task, the agent receives a reward. An analogy is training a dog to catch a ball: if the dog learns to catch the ball, you give it a reward, such as a biscuit.

Reinforcement learning methods do not need any external supervision to train models.

Reinforcement learning algorithms are widely used in the gaming industry to build games. They are also used to train robots to do human tasks.
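
As a toy sketch of this trial-and-error process (the corridor environment, the reward of 1 at the goal, and all hyperparameters are assumptions made for illustration), a tabular Q-learning agent can learn that moving right along a short corridor reaches the rewarded end state:

# Tabular Q-learning on a 5-state corridor; reaching state 4 yields reward 1.
import random

n_states, goal = 5, 4
Q = [[0.0, 0.0] for _ in range(n_states)]   # Q[state][action]; 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):                        # practice episodes (trial and error)
    s = 0
    while s != goal:
        explore = random.random() < epsilon
        a = random.randrange(2) if explore else Q[s].index(max(Q[s]))
        s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s2 == goal else 0.0      # reward only at the end state
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([q.index(max(q)) for q in Q[:goal]])  # learned action per state (1 = right)
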

CONCEPT LEARNING AS SEARCH:


Concept learning can be viewed as the task of searching through a large space of hypotheses implicitly defined by the hypothesis representation. The goal of this search is to find the hypothesis that best fits the training examples. It is important to note that by selecting a hypothesis representation, the designer of the learning algorithm implicitly defines the space of all hypotheses that the program can ever represent and therefore can ever learn. Consider, for example, the instances X and hypotheses H in the EnjoySport learning task. Given that the attribute Sky has three possible values, and that AirTemp, Humidity, Wind, Water, and Forecast each have two possible values, the instance space X contains exactly 3 · 2 · 2 · 2 · 2 · 2 = 96 distinct instances. A similar calculation shows that there are 5 · 4 · 4 · 4 · 4 · 4 = 5120 syntactically distinct hypotheses within H. Notice, however, that every hypothesis containing one or more "∅" symbols represents the empty set of instances; that is, it classifies every instance as negative. Therefore, the number of semantically distinct hypotheses is only 1 + (4 · 3 · 3 · 3 · 3 · 3) = 973. Our EnjoySport example is a very simple learning task, with a relatively small, finite hypothesis space. Most practical learning tasks involve much larger, sometimes infinite, hypothesis spaces.
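
The counts quoted above follow directly from multiplying the number of choices per attribute, as this quick check illustrates (the attribute value counts are taken from the text).

# Arithmetic check of the EnjoySport space sizes.
instance_space = 3 * 2 * 2 * 2 * 2 * 2                # 96 distinct instances
syntactic_hyps = 5 * 4 * 4 * 4 * 4 * 4                # 5120 syntactically distinct hypotheses
semantic_hyps = 1 + 4 * 3 * 3 * 3 * 3 * 3             # 973 semantically distinct hypotheses
print(instance_space, syntactic_hyps, semantic_hyps)  # 96 5120 973
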

General-to-Specific Ordering of Hypotheses:

Many algorithms for concept learning organize the search through the hypothesis space by relying on a very useful structure that exists for any concept learning problem: a general-to-specific ordering of hypotheses. By taking advantage of this naturally occurring structure over the hypothesis space, we can design learning algorithms that exhaustively search even infinite hypothesis spaces without explicitly enumerating every hypothesis. To illustrate the general-to-specific ordering, consider the two hypotheses

h1 = (Sunny, ?, ?, Strong, ?, ?)
h2 = (Sunny, ?, ?, ?, ?, ?)

Now consider the sets of instances that are classified positive by h1 and by h2. Because h2 imposes fewer constraints on the instance, it classifies more instances as positive. In fact, any instance classified positive by h1 will also be classified positive by h2. Therefore, we say that h2 is more general than h1.

This intuitive "more general than" relationship between hypotheses can be defined more precisely as follows. First, for any instance x in X and hypothesis h in H, we say that x satisfies h if and only if h(x) = 1. We now define the more-general-than-or-equal-to relation in terms of the sets of instances that satisfy the two hypotheses: given hypotheses hj and hk, hj is more-general-than-or-equal-to hk if and only if any instance that satisfies hk also satisfies hj.
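
The relation can be stated operationally for conjunctive hypotheses of this form, as in the sketch below; the tuple representation with "?" as a wildcard is taken from the examples above, while the helper names are my own.

# Sketch of the more-general-than-or-equal-to relation for EnjoySport-style hypotheses.
def satisfies(x, h):
    # Instance x satisfies hypothesis h iff every constraint of h matches x.
    return all(hc == "?" or hc == xc for hc, xc in zip(h, x))

def more_general_or_equal(hj, hk):
    # hj >=g hk iff every instance satisfying hk also satisfies hj; for
    # conjunctions this holds when each constraint of hj is "?" or equals hk's.
    return all(cj == "?" or cj == ck for cj, ck in zip(hj, hk))

h1 = ("Sunny", "?", "?", "Strong", "?", "?")
h2 = ("Sunny", "?", "?", "?", "?", "?")
print(satisfies(("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), h1))  # True
print(more_general_or_equal(h2, h1))   # True: h2 is more general than h1
print(more_general_or_equal(h1, h2))   # False
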
NAIVE BAYES CLASSIFIER:
One highly practical Bayesian learning method is the naive Bayes learner, often called the naive Bayes classifier. In some domains its performance has been shown to be comparable to that of neural network and decision tree learning. This section introduces the naive Bayes classifier; the next section applies it to the practical problem of learning to classify natural language text documents.

The naive Bayes classifier applies to learning tasks where each instance x is described by a conjunction of attribute values and where the target function f(x) can take on any value from some finite set V. A set of training examples of the target function is provided, and a new instance is presented, described by the tuple of attribute values (a1, a2, ..., an). The learner is asked to predict the target value, or classification, for this new instance.

The Bayesian approach to classifying the new instance is to assign the most probable target value, vMAP, given the attribute values (a1, a2, ..., an) that describe the instance. Under the naive assumption that the attribute values are conditionally independent given the target value, the naive Bayes classifier outputs

vNB = argmax_{vj in V} P(vj) ∏_i P(ai | vj)
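
A from-scratch sketch of this rule over discrete attributes follows; the tiny dataset is invented, and the unsmoothed relative-frequency estimates are a simplification (a practical implementation would typically add smoothing, such as an m-estimate).

# Naive Bayes: pick the class maximizing P(vj) * product_i P(ai | vj).
from collections import Counter, defaultdict

def train(examples):
    # examples: list of (attribute_tuple, target_value) pairs.
    class_counts = Counter(v for _, v in examples)
    attr_counts = defaultdict(Counter)            # per class: (position, value) -> count
    for attrs, v in examples:
        for i, a in enumerate(attrs):
            attr_counts[v][(i, a)] += 1
    return class_counts, attr_counts, len(examples)

def classify(x, class_counts, attr_counts, n):
    best_v, best_p = None, -1.0
    for v, cv in class_counts.items():
        p = cv / n                                # estimate of P(vj)
        for i, a in enumerate(x):
            p *= attr_counts[v][(i, a)] / cv      # estimate of P(ai | vj)
        if p > best_p:
            best_v, best_p = v, p
    return best_v

data = [(("Sunny", "Warm"), "Yes"), (("Rainy", "Cold"), "No"),
        (("Sunny", "Cold"), "Yes"), (("Rainy", "Warm"), "No")]
model = train(data)
print(classify(("Sunny", "Warm"), *model))        # -> "Yes"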
