Logistic Regression in Machine Learning
Last Updated: 03 Jun, 2025
Logistic Regression is a supervised machine learning algorithm used for classification problems. Unlike linear regression, which predicts continuous values, it predicts the probability that an input belongs to a specific class. It is most often used for binary classification, where the output is one of two possible categories such as Yes/No, True/False or 0/1. It uses the sigmoid function to convert inputs into a probability value between 0 and 1. In this article, we will see the basics of logistic regression and its core concepts.
Types of Logistic Regression
Logistic regression can be classified into three main types based on the nature of the dependent variable:
- Binomial Logistic Regression: This type is used when the dependent variable has only two possible categories. Examples include Yes/No, Pass/Fail or 0/1. It is the most common form of logistic regression and is used for binary classification problems.
- Multinomial Logistic Regression: This is used when the dependent variable has three or more possible categories that are not ordered. For example, classifying animals into categories like "cat," "dog" or "sheep." It extends the binary logistic regression to handle multiple classes.
- Ordinal Logistic Regression: This type applies when the dependent variable has three or more categories with a natural order or ranking. Examples include ratings like "low," "medium" and "high." It takes the order of the categories into account when modeling.
Assumptions of Logistic Regression
Understanding the assumptions behind logistic regression is important to ensure the model is applied correctly. The main assumptions are listed below, followed by a short sketch that checks two of them:
- Independent observations: Each data point is assumed to be independent of the others, meaning there should be no correlation or dependence between the input samples.
- Binary dependent variable: The model assumes the dependent variable is binary, i.e. it can take only two values. For more than two categories, the softmax function is used instead (see multinomial logistic regression below).
- Linear relationship between independent variables and log odds: The model assumes a linear relationship between the independent variables and the log odds of the dependent variable, which means the predictors affect the log odds in a linear way.
- No outliers: The dataset should not contain extreme outliers as they can distort the estimation of the logistic regression coefficients.
- Large sample size: It requires a sufficiently large sample size to produce reliable and stable results.
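As a quick illustration, here is a minimal sketch (NumPy only; the toy data and the 3-standard-deviation cutoff are assumptions chosen for illustration) that checks two of these assumptions on a feature matrix: extreme outliers and sample size.
Python
import numpy as np

# Toy feature matrix: 200 samples, 3 features (illustrative data only)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))

# Outlier check: flag rows with any value more than 3 standard deviations from the mean
z_scores = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
n_outlier_rows = int((z_scores > 3).any(axis=1).sum())
print(f"Rows containing extreme outliers: {n_outlier_rows}")

# Sample size check: a common rule of thumb is at least 10 samples per predictor
n_samples, n_features = X.shape
print(f"Samples per predictor: {n_samples / n_features:.0f}")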
Understanding Sigmoid Function
1. The sigmoid function is an important part of logistic regression: it converts the raw output of the model into a probability value between 0 and 1.
2. This function takes any real number and maps it into the range 0 to 1, forming an "S"-shaped curve called the sigmoid or logistic curve. Because probabilities must lie between 0 and 1, the sigmoid function is well suited for this purpose.
3. In logistic regression, we use a threshold value, usually 0.5, to decide the class label.
- If the sigmoid output is at or above the threshold, the input is classified as Class 1.
- If it is below the threshold, the input is classified as Class 0.
This approach helps to transform continuous input values into meaningful class predictions.
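As a small illustration, here is a minimal sketch (NumPy; the input values and the 0.5 threshold are chosen for illustration) of how sigmoid outputs are turned into class labels:
Python
import numpy as np

def sigmoid(z):
    # Map any real number into the range (0, 1)
    return 1 / (1 + np.exp(-z))

z = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
probs = sigmoid(z)
labels = (probs >= 0.5).astype(int)  # Class 1 if probability is at or above 0.5

print(probs)   # approximately [0.047 0.378 0.5 0.622 0.953]
print(labels)  # [0 0 1 1 1]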
How does Logistic Regression work?
The logistic regression model transforms the continuous output of the linear regression function into a categorical output using the sigmoid function, which maps any real-valued combination of the independent variables to a value between 0 and 1. This function is known as the logistic function.
Suppose we have input features represented as a matrix:
X = \begin{bmatrix} x_{11} & ... & x_{1m}\\ x_{21} & ... & x_{2m} \\ \vdots & \ddots & \vdots \\ x_{n1} & ... & x_{nm} \end{bmatrix}
and the dependent variable is Y, which takes only binary values, i.e. 0 or 1.
Y = \begin{cases} 0 & \text{ if } Class\;1 \\ 1 & \text{ if } Class\;2 \end{cases}
Then we apply the multi-linear function to the input variables X:
z = \left(\sum_{i=1}^{n} w_{i}x_{i}\right) + b
Here x_i is the i-th observation of X, w = [w_1, w_2, w_3, \cdots, w_m] is the vector of weights (coefficients) and b is the bias term, also known as the intercept. This can be written compactly as the dot product of the weights with the input, plus the bias:
z = w\cdot X +b
At this stage, z is a continuous value from the linear function. Logistic regression then applies the sigmoid function to z to convert it into a probability between 0 and 1, which is the predicted y:
\sigma(z) = \frac{1}{1+e^{-z}}
As shown in the sigmoid curve above, the sigmoid function converts the continuous variable z into a probability between 0 and 1:
- \sigma(z) tends towards 1 as z\rightarrow\infty
- \sigma(z) tends towards 0 as z\rightarrow-\infty
- \sigma(z) is always bounded between 0 and 1
The probability of each class can then be written as:
P(y=1) = \sigma(z) \\ P(y=0) = 1-\sigma(z)
Logistic Regression Equation and Odds
Logistic regression models the odds of the event occurring, which is the ratio of the probability of the event to the probability of it not occurring:
\frac{p(x)}{1-p(x)} = e^z
Taking the natural logarithm of the odds gives the log-odds or logit:
\begin{aligned}\log \left[\frac{p(x)}{1-p(x)} \right] &= z \\ \log \left[\frac{p(x)}{1-p(x)} \right] &= w\cdot X +b\\ \frac{p(x)}{1-p(x)}&= e^{w\cdot X +b} \;\;\cdots\text{Exponentiate both sides}\\ p(x) &=e^{w\cdot X +b}\cdot (1-p(x))\\ p(x) &=e^{w\cdot X +b}-e^{w\cdot X +b}\cdot p(x)\\ p(x)+e^{w\cdot X +b}\cdot p(x)&=e^{w\cdot X +b}\\ p(x)\left(1+e^{w\cdot X +b}\right) &=e^{w\cdot X +b}\\ p(x)&= \frac{e^{w\cdot X +b}}{1+e^{w\cdot X +b}}\end{aligned}
The final logistic regression equation is then:
p(X;b,w) = \frac{e^{w\cdot X +b}}{1+e^{w\cdot X +b}} = \frac{1}{1+e^{-(w\cdot X +b)}}
This formula represents the probability of the input belonging to Class 1.
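As a quick numerical check (a minimal sketch; the weight vector, bias and input are assumed for illustration), both forms of the equation give the same probability:
Python
import numpy as np

w = np.array([0.5, -0.25])  # assumed weights, for illustration only
b = 0.1                     # assumed bias
x = np.array([2.0, 1.0])    # a single input sample

z = np.dot(w, x) + b
p1 = np.exp(z) / (1 + np.exp(z))  # e^z / (1 + e^z)
p2 = 1 / (1 + np.exp(-z))         # 1 / (1 + e^{-z})
print(p1, p2)                     # both print the same P(Class 1), about 0.70 here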
Likelihood Function for Logistic Regression
The goal is to find weights w and bias b that maximize the likelihood of observing the data.
For each data point i:
- for y_i=1, the predicted probability is: p(X;b,w) = p(x_i)
- for y_i=0, the predicted probability is: 1-p(X;b,w) = 1-p(x_i)
The likelihood of observing the whole dataset is the product over all data points:
L(b,w) = \prod_{i=1}^{n}p(x_i)^{y_i}(1-p(x_i))^{1-y_i}
Taking natural logs on both sides:
\begin{aligned}\log(L(b,w)) &= \sum_{i=1}^{n} y_i\log p(x_i)\;+\; (1-y_i)\log(1-p(x_i)) \\ &=\sum_{i=1}^{n} y_i\log p(x_i)+\log(1-p(x_i))-y_i\log(1-p(x_i)) \\ &=\sum_{i=1}^{n} \log(1-p(x_i)) +\sum_{i=1}^{n}y_i\log \frac{p(x_i)}{1-p(x_i)} \\ &=-\sum_{i=1}^{n} \log\left(1+e^{w\cdot x_i+b}\right) +\sum_{i=1}^{n}y_i (w\cdot x_i +b) \end{aligned}
This is known as the log-likelihood function.
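The final form above translates directly into code. Here is a minimal sketch (NumPy; the toy data are assumptions for illustration):
Python
import numpy as np

def log_likelihood(w, b, X, y):
    # log L = sum_i [ y_i * (w . x_i + b) - log(1 + e^(w . x_i + b)) ]
    z = X @ w + b
    return np.sum(y * z - np.log1p(np.exp(z)))

# Toy data: with w = 0 and b = 0, every p(x_i) = 0.5, so log L = n * log(0.5)
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]])
y = np.array([0, 0, 1])
print(log_likelihood(np.zeros(2), 0.0, X, y))  # -2.079... = 3 * log(0.5)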
Gradient of the log-likelihood function
To find the best w and b, we use gradient ascent on the log-likelihood function. The gradient with respect to each weight w_j is:
\begin{aligned} \frac{\partial \log L(b,w)}{\partial w_j}&=-\sum_{i=1}^{n}\frac{e^{w\cdot x_i+b}}{1+e^{w\cdot x_i+b}}\, x_{ij} +\sum_{i=1}^{n}y_{i}x_{ij} \\&=-\sum_{i=1}^{n}p(x_i;b,w)\,x_{ij}+\sum_{i=1}^{n}y_{i}x_{ij} \\&=\sum_{i=1}^{n}\left(y_i -p(x_i;b,w)\right)x_{ij} \end{aligned}
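Putting this gradient to work, below is a minimal from-scratch training sketch using gradient ascent (the toy data, learning rate and iteration count are assumptions for illustration):
Python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy binary dataset: label is 1 when the two features sum to a positive value
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
b = 0.0
lr = 0.1  # learning rate (assumed)

for _ in range(1000):
    p = sigmoid(X @ w + b)            # p(x_i; b, w) for every sample
    error = y - p                     # the (y_i - p(x_i)) term from the gradient above
    w += lr * (X.T @ error) / len(y)  # ascent step for the weights
    b += lr * error.mean()            # ascent step for the bias

print("learned weights:", w, "bias:", b)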
Terminologies involved in Logistic Regression
Here are some common terms involved in logistic regression:
- Independent Variables: These are the input features or predictor variables used to make predictions about the dependent variable.
- Dependent Variable: This is the target variable that we aim to predict. In logistic regression, the dependent variable is categorical.
- Logistic Function: This function transforms the independent variables into a probability between 0 and 1 which represents the likelihood that the dependent variable is either 0 or 1.
- Odds: This is the ratio of the probability of an event happening to the probability of it not happening. It differs from probability because probability is the ratio of occurrences to total possibilities.
- Log-Odds (Logit): The natural logarithm of the odds. In logistic regression, the log-odds are modeled as a linear combination of the independent variables and the intercept.
- Coefficient: These are the parameters estimated by the logistic regression model; they show how strongly each independent variable affects the dependent variable.
- Intercept: The constant term in the logistic regression model which represents the log-odds when all independent variables are equal to zero.
- Maximum Likelihood Estimation (MLE): This method is used to estimate the coefficients of the logistic regression model by maximizing the likelihood of observing the given data.
Implementation for Logistic Regression
Now, let's see the implementation of logistic regression in Python. Here we will be implementing two main types of Logistic Regression:
1. Binomial Logistic regression:
In binomial logistic regression, the target variable can have only two possible values, such as "0" or "1" or "pass" or "fail". The sigmoid function is used for prediction.
We will use the scikit-learn library and the breast cancer dataset to implement a logistic regression model for classification.
Python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load the breast cancer dataset (binary target: malignant vs benign)
X, y = load_breast_cancer(return_X_y=True)

# Hold out 20% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=23)

# Train a logistic regression classifier
clf = LogisticRegression(max_iter=10000, random_state=0)
clf.fit(X_train, y_train)

# Evaluate accuracy on the held-out test set
acc = accuracy_score(y_test, clf.predict(X_test)) * 100
print(f"Logistic Regression model accuracy: {acc:.2f}%")
Output:
Logistic Regression model accuracy: 96.49%
This code uses logistic regression to classify whether a sample from the breast cancer dataset is malignant or benign.
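Because logistic regression outputs probabilities before thresholding, the fitted model above can also report them directly via scikit-learn's predict_proba (continuing the snippet above):
Python
# Continuing the example above: columns are P(y=0) and P(y=1) for each sample
print(clf.predict_proba(X_test[:3]))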
2. Multinomial Logistic Regression:
The target variable can have 3 or more possible categories which are not ordered, i.e. the categories have no quantitative significance, like "disease A" vs "disease B" vs "disease C".
In this case, the softmax function is used in place of the sigmoid function. Softmax function for K classes will be:
\text{softmax}(z_i) =\frac{ e^{z_i}}{\sum_{j=1}^{K}e^{z_{j}}}
Here K represents the number of classes (the number of elements in the vector z) and i, j iterate over the elements of the vector.
Then the probability for class c will be:
P(Y=c | \overrightarrow{X}=x) = \frac{e^{w_c \cdot x + b_c}}{\sum_{k=1}^{K}e^{w_k \cdot x + b_k}}
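Here is a minimal sketch of the softmax function itself (NumPy; the raw scores are chosen for illustration):
Python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; the result sums to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # raw scores z for K = 3 classes
print(softmax(scores))              # approximately [0.659 0.242 0.099]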
Below is an example of implementing multinomial logistic regression using the Digits dataset from scikit-learn:
Python
from sklearn.model_selection import train_test_split
from sklearn import datasets, linear_model, metrics

# Load the digits dataset (10 classes: digits 0-9)
digits = datasets.load_digits()
X = digits.data
y = digits.target

# Hold out 40% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=1)

# Train a multinomial logistic regression classifier
reg = linear_model.LogisticRegression(max_iter=10000, random_state=0)
reg.fit(X_train, y_train)

# Evaluate accuracy on the held-out test set
y_pred = reg.predict(X_test)
print(f"Logistic Regression model accuracy: {metrics.accuracy_score(y_test, y_pred) * 100:.2f}%")
Output:
Logistic Regression model accuracy: 96.66%
This model is used to predict one of 10 digits (0-9) based on the image features.
How to Evaluate Logistic Regression Model?
Evaluating the logistic regression model helps assess its performance and ensure it generalizes well to new, unseen data. The following metrics are commonly used; a code sketch computing them follows the list:
- Accuracy: Accuracy provides the proportion of correctly classified instances.
Accuracy = \frac{True \, Positives + True \, Negatives}{Total}
- Precision: Precision focuses on the accuracy of positive predictions.
Precision = \frac{True \, Positives }{True\, Positives + False \, Positives}
- Recall (Sensitivity or True Positive Rate): Recall measures the proportion of correctly predicted positive instances among all actual positive instances.
Recall = \frac{ True \, Positives}{True\, Positives + False \, Negatives}
- F1 Score: F1 score is the harmonic mean of precision and recall.
F1 \, Score = 2 * \frac{Precision * Recall}{Precision + Recall}
- Area Under the Receiver Operating Characteristic Curve (AUC-ROC): The ROC curve plots the true positive rate against the false positive rate at various thresholds. AUC-ROC measures the area under this curve which provides an aggregate measure of a model's performance across different classification thresholds.
- Area Under the Precision-Recall Curve (AUC-PR): Similar to AUC-ROC, AUC-PR measures the area under the precision-recall curve, providing a summary of a model's performance across different precision-recall trade-offs.
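As referenced above, here is a minimal sketch computing these metrics with scikit-learn on the breast cancer example (the same dataset and split as earlier; average_precision_score is used as scikit-learn's standard summary of the precision-recall curve):
Python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, average_precision_score)

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=23)

clf = LogisticRegression(max_iter=10000).fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_prob = clf.predict_proba(X_test)[:, 1]  # P(y=1), needed for the curve-based metrics

print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall   :", recall_score(y_test, y_pred))
print("F1 score :", f1_score(y_test, y_pred))
print("AUC-ROC  :", roc_auc_score(y_test, y_prob))
print("AUC-PR   :", average_precision_score(y_test, y_prob))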
Differences Between Linear and Logistic Regression
Logistic regression and linear regression differ in their application and output. Here's a comparison:
| Linear Regression | Logistic Regression |
|---|---|
| Used to predict a continuous dependent variable from a given set of independent variables. | Used to predict a categorical dependent variable from a given set of independent variables. |
| Used for solving regression problems. | Used for solving classification problems. |
| Predicts the value of a continuous variable. | Predicts the value of a categorical variable. |
| Finds the best-fit straight line. | Finds an S-shaped curve. |
| Least squares estimation is used to estimate the parameters. | Maximum likelihood estimation is used to estimate the parameters. |
| The output is a continuous value, such as price or age. | The output is a categorical value, such as 0 or 1, Yes or No. |
| Requires a linear relationship between the dependent and independent variables. | Does not require a linear relationship between the dependent and independent variables (only between the predictors and the log-odds). |
| There may be collinearity between the independent variables. | There should be little to no collinearity between the independent variables. |