
Machine Learning Vapnik-Chervonenkis (VC) Dimension

The document provides an overview of key concepts in machine learning, including definitions of machine learning, inductive learning, linear regression, and clustering. It discusses various algorithms, techniques such as gradient descent and ensemble methods, and the advantages of Naive Bayes classification. Additionally, it highlights the role of linear algebra and the concept of PAC learning in machine learning applications.


1. What is machine learning?

A computer program is said to learn from experience E with respect to
some class of tasks T and performance measure P, if its performance at
tasks in T, as measured by P, improves with experience E.

2. What is inductive learning?

In inductive learning, the learner is given a hypothesis space H from
which it must select an output hypothesis, together with a set of
training examples
D = {(x1, f(x1)), ..., (xn, f(xn))}, where f(xi) is the target value for
the instance xi. The desired output of the learner is a hypothesis h
from H that is consistent with these training examples.
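A minimal sketch of this selection process, using an illustrative hypothesis space of 1-D threshold classifiers and made-up training data:

```python
# Training examples D = {(xi, f(xi))}: here the target labels x >= 3 as 1
D = [(1, 0), (2, 0), (3, 1), (4, 1)]

# Illustrative hypothesis space H: thresholds h_t(x) = 1 iff x >= t
H = [lambda x, t=t: int(x >= t) for t in range(6)]

# The learner outputs any h in H consistent with every example in D
consistent = [h for h in H if all(h(x) == y for x, y in D)]
h = consistent[0]
```

Here only the threshold t = 3 agrees with all four examples, so the learner returns that hypothesis.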

3. What are the benefits of linear algebra in ML?


Linear algebra provides the mathematical foundation for data
handling, model building, and optimization in machine learning.
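As a small illustration (the data and weights are made up), a linear model's predictions are just the matrix-vector product Xw, one of the core linear-algebra operations behind model building:

```python
X = [[1.0, 2.0],    # each row holds one example's features
     [3.0, 4.0]]
w = [0.5, 0.5]      # illustrative model weights

def matvec(M, v):
    # Matrix-vector product: one prediction per row of M
    return [sum(m * vj for m, vj in zip(row, v)) for row in M]

preds = matvec(X, w)
```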

4. Compare sample error and true error.

The true error errorD(h) of hypothesis h with respect to the target
function f and data distribution D is the probability that h will
misclassify an instance drawn at random according to D.

The sample error errorS(h) of hypothesis h with respect to the target
function f and data sample S is the proportion of examples in S that h
misclassifies.
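The sample error can be computed directly from a sample; the hypothesis and sample below are illustrative (the true error, by contrast, would require access to the distribution D):

```python
def sample_error(h, S):
    # Fraction of examples in S that h misclassifies
    return sum(1 for x, y in S if h(x) != y) / len(S)

h = lambda x: int(x > 0)                 # illustrative hypothesis
S = [(-2, 0), (-1, 0), (1, 1), (2, 0)]   # last example is misclassified
err = sample_error(h, S)
```

Here h misclassifies 1 of the 4 examples, giving a sample error of 0.25.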

5. What is linear regression?


Linear regression is a statistical method that allows us to summarize
and study relationships between two continuous (quantitative)
variables.
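A minimal sketch using the standard least-squares closed form for a single predictor (the data are illustrative):

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # illustrative data: exactly y = 1 + 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least-squares slope and intercept for y ~ a + b*x
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x
```

On this noise-free data the fit recovers slope 2 and intercept 1.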

6. Define shattering.
In machine learning, shattering refers to the ability of a hypothesis
class (a set of functions or models) to perfectly classify every possible
labeling of a given set of points. It is the key concept behind the
Vapnik-Chervonenkis (VC) dimension, which measures the capacity of a
model.
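A brute-force check of shattering for an illustrative class of 1-D threshold classifiers (which can shatter one point but not two, so its VC dimension is 1):

```python
from itertools import product

def shatters(H, points):
    # H shatters the points if every labeling is realized by some h in H
    return all(
        any(all(h(x) == y for x, y in zip(points, labels)) for h in H)
        for labels in product([0, 1], repeat=len(points))
    )

# Illustrative hypothesis class: thresholds h_t(x) = 1 iff x >= t
H = [lambda x, t=t: int(x >= t) for t in [0, 1.5, 2.5, 10]]

one_point = shatters(H, [2])      # thresholds shatter any single point
two_points = shatters(H, [1, 2])  # but cannot realize the labeling (1, 0)
```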

7. What are the types of gradient descent?


1. Batch gradient descent
2. Stochastic gradient descent
3. Mini-batch gradient descent
8. Define cluster.
A cluster is a group of objects that belong to the same class. In other
words, similar objects are grouped in one cluster and dissimilar
objects in another.

9. Define clustering.
Clustering is the process of partitioning a set of data into a set of
meaningful subclasses. Every item in a subclass shares a common
trait. It helps a user understand the natural grouping or structure in a
data set.
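One assignment step of a k-means-style algorithm illustrates this partitioning (the data and cluster centers are made up):

```python
data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]
centers = [1.0, 9.5]   # illustrative cluster centers

# Assign each point to its nearest center (one k-means assignment step)
clusters = {0: [], 1: []}
for x in data:
    nearest = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
    clusters[nearest].append(x)
```

The points split into two natural groups, one near 1 and one near 9.5.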

10. Define hypothesis.


A hypothesis is a supposition or proposed explanation made on the
basis of limited evidence or assumptions. It is a guess based on some
known facts but has not yet been proven. A good hypothesis is
testable, which results in either true or false.

11. What is hypothesis space?

The hypothesis space is the set of all candidate hypotheses that the
learning algorithm is allowed to consider; the learner selects its
output hypothesis from this space.

12. State Bayes' theorem.

Bayes' theorem is a method to revise the probability of an event given
additional information. It calculates a conditional probability called
the posterior, or revised, probability:
P(A|B) = P(B|A) P(A) / P(B).
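A worked example with illustrative numbers, in the style of a diagnostic-test calculation:

```python
p_a = 0.01            # prior P(A), e.g. P(disease)
p_b_given_a = 0.90    # likelihood P(B|A), e.g. P(positive | disease)
p_b_given_not_a = 0.05

# Total probability of the evidence B
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' theorem: posterior P(A|B) = P(B|A) P(A) / P(B)
posterior = p_b_given_a * p_a / p_b
```

Even with a 90% true-positive rate, the low prior keeps the posterior near 15%, which is exactly the kind of revision Bayes' theorem captures.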

13. What is gradient descent?


Gradient descent is a first-order optimization algorithm. To find a local
minimum of a function using gradient descent, one takes steps
proportional to the negative of the gradient of the function at the
current point.
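A minimal sketch on f(x) = x^2, whose gradient is f'(x) = 2x (the starting point and learning rate are illustrative):

```python
x = 5.0      # starting point
lr = 0.1     # illustrative learning rate
for _ in range(100):
    grad = 2 * x      # gradient of f(x) = x**2
    x -= lr * grad    # step proportional to the negative gradient
```

The iterates shrink toward the minimum at x = 0.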

14. What are simple ensemble techniques?

- Max Voting
- Averaging
- Weighted Averaging
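The techniques above can be sketched as follows (the predictions and weights are illustrative):

```python
from collections import Counter

def max_vote(predictions):
    # Max voting: the most common class prediction wins
    return Counter(predictions).most_common(1)[0][0]

def weighted_average(predictions, weights):
    # Weighted averaging of numeric predictions (equal weights = plain averaging)
    return sum(p * w for p, w in zip(predictions, weights)) / sum(weights)

vote = max_vote(["cat", "dog", "cat"])
avg = weighted_average([1.0, 2.0, 3.0], [1, 1, 2])
```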
15. List any three Advanced Ensemble techniques
- Stacking
- Blending
- Bagging
- Boosting

16. What are unsupervised learning algorithms?


Unsupervised learning is a type of machine learning where the model
learns from unlabeled data, meaning there are no predefined output
labels. The algorithm identifies patterns, structures, or
relationships in the data without explicit supervision.

Common unsupervised learning algorithms:

1. Clustering
2. Association rule mining
17. What is PAC?
A concept class C is said to be PAC (probably approximately correct)
learnable using a hypothesis class H if there exists a learning
algorithm L such that, for every concept in C and every instance
distribution D on an instance space X, L outputs with high probability
a hypothesis in H whose error under D is small.

18. List some applications of ML.


1. Pattern recognition
2. Face recognition
3. Image recognition
4. Traffic prediction

19. What are the different types of linear regression?


1. Simple Linear Regression – Uses one independent variable to
predict a dependent variable.
2. Multiple Linear Regression – Uses multiple independent variables for
prediction.
3. Polynomial Regression – Captures non-linear relationships by adding
polynomial terms.
4. Ridge Regression – Uses L2 regularization to reduce overfitting.

20. Compare simple ensemble techniques and advanced ensemble
techniques.

Simple ensemble techniques like Voting and Averaging combine
multiple models by averaging or majority voting, making them easy to
implement with low computational cost.
Advanced ensemble techniques like Boosting and Stacking use
iterative learning or meta-models, providing higher accuracy but
requiring more computation and careful tuning.

21. What are the advantages of Naive Bayes classification?


The main advantages of Naive Bayes classification are its simplicity,
computational efficiency, ease of implementation, fast prediction
speed, ability to handle large datasets, and interpretability.

22. Define artificial intelligence.


Artificial Intelligence (AI) is the ability of machines to think, learn, and
make decisions like humans. It helps computers solve problems,
recognize patterns, and understand language.
