Machine Learning - Classification
Prof. Dr. Dewan Md. Farid, Dept. of Computer Science & Engineering, United International University
Outline
- k-NN classifier
- Naïve Bayes Classifier
- Decision Tree Induction
- Tree Pruning & Scalability
- Evaluating Classifiers' Performance
k-NN classifier
Classification
Classification Accuracy
Data Instance
Figure: Nearest-neighbour classification using the 11-NN rule; the point denoted by a "star" is assigned to the majority class among its 11 nearest neighbours.
Euclidean Distance
The Euclidean distance between two points, x_1 = (x_{11}, x_{12}, ..., x_{1n}) and x_2 = (x_{21}, x_{22}, ..., x_{2n}), is shown in Eq. 1.

\[ \mathrm{dist}(x_1, x_2) = \sqrt{\sum_{i=1}^{n} (x_{1i} - x_{2i})^2} \tag{1} \]
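As a quick illustration, here is a minimal Python sketch of Eq. 1 (the function name and sample points are ours, not from the slides):

```python
import math

def euclidean_distance(x1, x2):
    """Euclidean distance between two equal-length points (Eq. 1)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))

# Example: distance between (1, 2) and (4, 6) is sqrt(9 + 16) = 5.
print(euclidean_distance((1.0, 2.0), (4.0, 6.0)))  # 5.0
```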
Distance Measures
The distance between two points in the plane with coordinates (x, y) and (a, b) is given by:

\[ \mathrm{EuclideanDistance}\big((x, y), (a, b)\big) = \sqrt{(x - a)^2 + (y - b)^2} \tag{2} \]
- Using the sample data from Table 2 and the Output classification as the training-set output value, we classify the instance (Pat, F, 1.6).
- Only the height is used for the distance calculation, so the Euclidean and Manhattan distance measures yield the same results; the distance is simply the absolute value of the difference between the height values.
- Suppose that K = 5 is given. The K nearest neighbours to the input instance are then (Kristina, F, 1.6), (Kathy, F, 1.6), (Stephanie, F, 1.7), (Dave, M, 1.7), and (Wynette, F, 1.75).
- Of these five items, four are classified as short and one as medium. Thus, k-NN classifies Pat as short (see the sketch below).
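A minimal sketch of the majority vote above; the class label attached to each neighbour is assumed from the worked example (four short, one medium; we take Wynette, the tallest, as the medium one):

```python
from collections import Counter

# The K = 5 nearest neighbours of (Pat, F, 1.6): (name, height, assumed label).
neighbours = [
    ("Kristina", 1.60, "short"),
    ("Kathy", 1.60, "short"),
    ("Stephanie", 1.70, "short"),
    ("Dave", 1.70, "short"),
    ("Wynette", 1.75, "medium"),
]

# k-NN assigns the majority class among the K neighbours.
votes = Counter(label for _name, _height, label in neighbours)
print(votes.most_common(1)[0][0])  # short
```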
Advantages of NB classifier
The naïve Bayes (NB) classifier has several advantages:
1. It is easy to use.
2. Only one scan of the training data is required.
3. It can handle missing attribute values.
4. It can handle continuous data.
5. It gives high classification performance.
NB classifier
The NB classifier predicts that the instance X belongs to the class C_i if and only if P(C_i|X) > P(C_j|X) for 1 ≤ j ≤ m, j ≠ i. The class C_i for which P(C_i|X) is maximized is called the maximum a posteriori hypothesis.
NB classifier (cont.)
Computing P(X|C_i) in a dataset with many attributes is extremely computationally expensive. Thus, the naïve assumption of class-conditional independence is made to reduce the computation in evaluating P(X|C_i): the attributes are assumed to be conditionally independent of one another, given the class label of the instance. Thus, Eq. 5 and Eq. 6 are used to produce P(X|C_i).

\[ P(X|C_i) = \prod_{k=1}^{n} P(x_k|C_i) \tag{5} \]

\[ P(X|C_i) = P(x_1|C_i) \times P(x_2|C_i) \times \cdots \times P(x_n|C_i) \tag{6} \]
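A small sketch of Eq. 5 for categorical attributes, assuming the training data is a list of (attribute-dict, label) pairs (our own representation, not from the slides):

```python
def class_conditional(instance, data, cls):
    """P(X | cls) under class-conditional independence (Eq. 5).

    P(xk | Ci) is estimated as the fraction of class-cls instances
    whose attribute Ak takes the value xk.
    """
    in_class = [attrs for attrs, label in data if label == cls]
    p = 1.0
    for attr, value in instance.items():
        matches = sum(1 for attrs in in_class if attrs[attr] == value)
        p *= matches / len(in_class)
    return p
```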
NB classifier (cont.)
Moreover, the attributes in training datasets can be categorical or continuous-valued. If the attribute A_k is categorical, then P(x_k|C_i) is the number of instances of the class C_i ∈ D having the value x_k for A_k, divided by |C_{i,D}|, the number of instances belonging to the class C_i ∈ D. If A_k is continuous-valued, then A_k is typically assumed to follow a Gaussian distribution with mean µ and standard deviation σ, whose density g is given in Eq. 8.

\[ g(x, \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \tag{8} \]

Here µ_{C_i} is the mean and σ_{C_i} the standard deviation of the values of the attribute A_k for all training instances in the class C_i. Plugging these two quantities into Eq. 8, together with x_k, gives the estimate P(x_k|C_i) = g(x_k, µ_{C_i}, σ_{C_i}).
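A sketch of Eq. 8 in Python; the attribute value, mean, and standard deviation below are hypothetical:

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian density g(x, mu, sigma) from Eq. 8."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Hypothetical: P(age = 35 | Ci) with class mean 38 and standard deviation 12.
print(gaussian(35, 38, 12))  # ~0.032
```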
NB classifier (cont.)
To predict the class label of instance X, P(X|C_i)P(C_i) is evaluated for each class C_i ∈ D. The NB classifier predicts that the class label of instance X is the class C_i if and only if P(X|C_i)P(C_i) > P(X|C_j)P(C_j) for 1 ≤ j ≤ m, j ≠ i.
Laplace Correction
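The body of this slide was lost in extraction; what follows is the standard Laplace (add-one) correction, stated as an assumed reconstruction. It avoids zero probabilities, such as P(Outlook = Overcast | Play = No) = 0/5 in the tables below, by adding one to each count:

\[ P(x_k|C_i) = \frac{|x_{k,C_i}| + 1}{|C_{i,D}| + v} \]

where |x_{k,C_i}| is the number of class-C_i instances with value x_k for attribute A_k, and v is the number of distinct values of A_k.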
Table: Prior probabilities for each class generated using the playing-tennis dataset

Probability      Value
P(Play = Yes)    9/14 = 0.643
P(Play = No)     5/14 = 0.357

Table: Conditional probabilities for Outlook calculated using the playing-tennis dataset

Probability                          Value
P(Outlook = Sunny | Play = Yes)      2/9 = 0.222
P(Outlook = Sunny | Play = No)       3/5 = 0.600
P(Outlook = Overcast | Play = Yes)   4/9 = 0.444
P(Outlook = Overcast | Play = No)    0/5 = 0.000
P(Outlook = Rain | Play = Yes)       3/9 = 0.333
P(Outlook = Rain | Play = No)        2/5 = 0.400
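Using the priors and Outlook conditionals above, a sketch of the maximum a posteriori decision for an instance with Outlook = Sunny (the other attributes are ignored here purely for brevity):

```python
priors = {"Yes": 9 / 14, "No": 5 / 14}
p_sunny = {"Yes": 2 / 9, "No": 3 / 5}

# P(X | Ci) * P(Ci) for an instance described only by Outlook = Sunny.
scores = {c: p_sunny[c] * priors[c] for c in priors}
print(scores)                       # {'Yes': 0.142..., 'No': 0.214...}
print(max(scores, key=scores.get))  # 'No' is the maximum a posteriori class
```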
Tree
Decision Tree
Decision tree (DT) induction is the learning of decision trees from class-labeled training instances; it is a top-down, recursive, divide-and-conquer algorithm. The goal of a DT is to create a model (classifier) that predicts the value of the target class for an unseen test instance based on its input attribute values. DTs have various advantages:
1. Simple to understand.
2. Easy to implement.
3. Require little prior knowledge.
4. Able to handle both numerical and categorical data.
5. Robust.
6. Able to deal with large and noisy datasets.
7. Nonlinear relationships between features do not affect tree performance.
ID3 (cont.)
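The body of this slide did not survive extraction; for reference, the standard ID3 definitions that the gain calculation later in these slides relies on are:

\[ \mathrm{Info}(D) = -\sum_{i=1}^{m} p_i \log_2 p_i \]

\[ \mathrm{Info}_A(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} \times \mathrm{Info}(D_j) \]

\[ \mathrm{Gain}(A) = \mathrm{Info}(D) - \mathrm{Info}_A(D) \]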
C4.5
Quinlan later presented C4.5 (a successor of the ID3 algorithm), which became a benchmark in supervised learning. C4.5 uses an extension of information gain known as the gain ratio. It applies a kind of normalisation of the information gain using the split information, defined analogously to Info(D) as shown in Eq. 13.

\[ \mathrm{SplitInfo}_A(D) = -\sum_{j=1}^{n} \frac{|D_j|}{|D|} \times \log_2 \frac{|D_j|}{|D|} \tag{13} \]

\[ \mathrm{GainRatio}(A) = \frac{\mathrm{Gain}(A)}{\mathrm{SplitInfo}_A(D)} \tag{14} \]
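A small Python sketch of Eqs. 13 and 14; the partition sizes correspond to the Outlook split of the play-tennis data, and Gain(Outlook) = 0.246 is taken from the gain calculation later in these slides:

```python
import math

def split_info(sizes):
    """SplitInfo_A(D) from Eq. 13; `sizes` holds the partition sizes |Dj|."""
    total = sum(sizes)
    return -sum(s / total * math.log2(s / total) for s in sizes if s)

def gain_ratio(gain, sizes):
    """GainRatio(A) from Eq. 14."""
    return gain / split_info(sizes)

# Outlook splits the 14 play-tennis instances into partitions of 5, 4 and 5.
print(round(split_info([5, 4, 5]), 3))         # 1.577
print(round(gain_ratio(0.246, [5, 4, 5]), 3))  # 0.156
```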
Gini Index
The Gini Index is used in the Classification and Regression Trees (CART) algorithm, which generates a binary classification tree for decision making. It measures the impurity of D, a data partition or set of training instances, as shown in Eq. 15, where p_i is the probability that an instance in D belongs to class c_i, estimated by |c_{i,D}|/|D|, and the sum is computed over the m classes.

\[ \mathrm{Gini}(D) = 1 - \sum_{i=1}^{m} p_i^2 \tag{15} \]

\[ \mathrm{Gini}_A(D) = \frac{|D_1|}{|D|}\,\mathrm{Gini}(D_1) + \frac{|D_2|}{|D|}\,\mathrm{Gini}(D_2) \tag{16} \]
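A minimal sketch of Eqs. 15 and 16 (the label list is the play-tennis class distribution; the helper names are ours):

```python
def gini(labels):
    """Gini(D) from Eq. 15."""
    n = len(labels)
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def gini_split(d1, d2):
    """Gini_A(D) for a binary split of D into D1 and D2 (Eq. 16)."""
    n = len(d1) + len(d2)
    return len(d1) / n * gini(d1) + len(d2) / n * gini(d2)

labels = ["yes"] * 9 + ["no"] * 5  # play-tennis class distribution
print(round(gini(labels), 3))      # 0.459
```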
For each attribute A_j, each of the possible binary splits is considered. The A_j that maximises the reduction in impurity is selected as the splitting feature, as shown in Eq. 17.

\[ \Delta \mathrm{Gini}(A) = \mathrm{Gini}(D) - \mathrm{Gini}_A(D) \tag{17} \]
Gain Calculation
\[ \mathrm{Info}(D) = -\frac{9}{14}\log_2\frac{9}{14} - \frac{5}{14}\log_2\frac{5}{14} = 0.940 \]

\[ \mathrm{Info}_{Outlook}(D) = \frac{5}{14}\left(-\frac{2}{5}\log_2\frac{2}{5} - \frac{3}{5}\log_2\frac{3}{5}\right) + \frac{4}{14}\left(-\frac{4}{4}\log_2\frac{4}{4}\right) + \frac{5}{14}\left(-\frac{3}{5}\log_2\frac{3}{5} - \frac{2}{5}\log_2\frac{2}{5}\right) = 0.694 \]

Therefore, the gain in information from such a partitioning would be:

\[ \mathrm{Gain}(Outlook) = \mathrm{Info}(D) - \mathrm{Info}_{Outlook}(D) = 0.940 - 0.694 = 0.246 \]
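The same numbers can be checked with a few lines of Python (the `entropy` helper is ours):

```python
import math

def entropy(counts):
    """Info(D) for a list of class counts, e.g. [9, 5]."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

info_d = entropy([9, 5])  # 0.940
# Outlook partitions: Sunny (2 yes, 3 no), Overcast (4, 0), Rain (3, 2).
info_outlook = sum(sum(p) / 14 * entropy(p) for p in ([2, 3], [4, 0], [3, 2]))
print(round(info_d - info_outlook, 3))  # Gain(Outlook) = 0.246
```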
Decision Tree
Tree Pruning
Pre-pruning
- In pre-pruning, a tree is pruned by halting its construction early, e.g. by deciding not to further split the training instances at a given node; the node then becomes a leaf.
- The leaf may hold the most frequent class among the subset instances or the probability distribution of those instances.
Post-pruning
- In post-pruning, subtrees are removed from a fully grown tree, replacing each pruned subtree with a leaf.
- The leaf is labeled with the most frequent class among the instances of the subtree being replaced.
Pruning Set
- Pessimistic pruning does not require the use of a prune set. Instead, it uses the training set to estimate error rates. Recall that an estimate of accuracy or error based on the training set is overly optimistic and, therefore, strongly biased.
RainForest Tree
- The trees are examined and used to construct a new tree, T′, that turns out to be very close to the tree that would have been generated if all of the original training data had fit in memory.
- BOAT can use any attribute selection measure that selects binary splits and that is based on the notion of purity of partitions, such as the Gini index.
BOAT (cont.)
Classification accuracy
The classification accuracy of a classifier on a given test set is the percentage of test-set instances that are correctly classified. In the pattern-recognition literature, this is also referred to as the overall recognition rate of the classifier; that is, it reflects how well the classifier recognises instances of the various classes. The classification accuracy is measured by Eq. 18, Eq. 19, or Eq. 20.

\[ \mathrm{accuracy} = \frac{TP + TN}{P + N} \tag{18} \]

\[ \mathrm{accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{19} \]

\[ \mathrm{accuracy} = \frac{\sum_{i=1}^{|X|} \mathrm{assess}(x_i)}{|X|}, \quad x_i \in X \tag{20} \]
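A one-function sketch of Eq. 19; the confusion-matrix counts below are hypothetical:

```python
def accuracy(tp, tn, fp, fn):
    """Eq. 19: fraction of test instances classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

print(accuracy(tp=90, tn=85, fp=15, fn=10))  # 0.875
```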
Error rate
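The body of this slide is missing; the standard definition, the complement of accuracy, is:

\[ \text{error rate} = \frac{FP + FN}{P + N} = 1 - \mathrm{accuracy} \]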
Sensitivity
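The body of this slide is missing; the standard definition (the true positive rate) is:

\[ \mathrm{sensitivity} = \frac{TP}{P} \]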
Specificity
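The body of this slide is missing; the standard definition (the true negative rate) is:

\[ \mathrm{specificity} = \frac{TN}{N} \]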
The precision and recall measures are also widely used in classification. Precision can be thought of as a measure of exactness (i.e., what percentage of instances labeled as positive are actually positive), whereas recall is a measure of completeness (what percentage of positive instances are labeled as such). If recall seems familiar, that's because it is the same as sensitivity (the true positive rate).

\[ \mathrm{precision} = \frac{TP}{TP + FP} \tag{24} \]

\[ \mathrm{recall} = \frac{TP}{TP + FN} = \frac{TP}{P} \tag{25} \]
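A short sketch of Eqs. 24 and 25, reusing the hypothetical counts from the accuracy example:

```python
def precision(tp, fp):
    """Eq. 24: exactness of the positive predictions."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Eq. 25: completeness; identical to sensitivity."""
    return tp / (tp + fn)

print(round(precision(tp=90, fp=15), 3))  # 0.857
print(recall(tp=90, fn=10))               # 0.9
```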
k-fold Cross-Validation
- In k-fold cross-validation, the initial data are randomly partitioned into k mutually exclusive subsets or "folds", D_1, D_2, ..., D_k, each of approximately equal size.
- Training and testing are performed k times.
- In iteration i, the partition D_i is reserved as the test set, and the remaining partitions are collectively used to train the classifier.
- 10-fold cross-validation breaks the data into 10 sets, each of size N/10.
- The classifier is trained on 9 of the sets and tested on the remaining one. This is repeated 10 times, and the mean accuracy is taken.
- For classification, the accuracy estimate is the overall number of correct classifications from the k iterations, divided by the total number of instances in the initial dataset (see the sketch below).
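A minimal sketch of the procedure; `train_fn` is assumed to fit a classifier on a list of (features, label) pairs and return a predict function (our own interface, chosen for illustration):

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Randomly partition n instance indices into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(train_fn, data, k=10):
    """Overall accuracy over k train/test iterations (k-fold CV)."""
    folds = k_fold_indices(len(data), k)
    correct = 0
    for i, test_idx in enumerate(folds):
        train = [data[j] for f in folds[:i] + folds[i + 1:] for j in f]
        predict = train_fn(train)  # fit on the remaining k - 1 folds
        correct += sum(predict(data[j][0]) == data[j][1] for j in test_idx)
    return correct / len(data)  # correct classifications over all instances
```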