Activation Functions in Neural Networks
An activation function is a mathematical function applied to the output of a neuron. It introduces non-
linearity into the model, allowing the network to learn and represent complex patterns in the data.
Without this non-linearity feature, a neural network would behave like a linear regression model, no
matter how many layers it has.
A neuron first computes the weighted sum of its inputs and adds a bias term; the activation function is then applied to this value to decide whether the neuron should be activated. This helps the model make complex decisions and predictions by introducing non-linearity into the output of each neuron.
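As a minimal sketch of this computation (using NumPy and a sigmoid activation, which is described below; the weights, inputs, and bias are made-up illustration values):

import numpy as np

def sigmoid(z):
    # Squashes any real value into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # inputs to the neuron (illustrative values)
w = np.array([0.4, 0.7, -0.2])   # weights (illustrative values)
b = 0.1                          # bias term

z = np.dot(w, x) + b             # weighted sum of inputs plus the bias
a = sigmoid(z)                   # activation applied to decide the neuron's output
print(z, a)                      # weighted sum ≈ -1.14, activation ≈ 0.24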
Non-linearity means that the relationship between input and output is not a straight line. In simple
terms, the output does not change proportionally with the input.
Imagine you want to classify apples and bananas based on their shape and color.
• If we use a linear function, it can only separate them using a straight line.
• But real-world data is often more complex (e.g., overlapping colors, different lighting).
• By adding a non-linear activation function (like ReLU, Sigmoid, or Tanh), the network can create curved decision boundaries to separate them correctly.
The linear activation function resembles a straight line defined by y = x. No matter how many layers the neural network contains, if they all use linear activation functions, the output is simply a linear combination of the input.
• The linear activation function is used in just one place, i.e. the output layer (for example, in regression tasks).
• Using linear activation across all layers limits the network’s ability to learn complex patterns.
Linear activation functions are useful for specific tasks but must be combined with non-linear functions
to enhance the neural network’s learning and predictive capabilities.
The linear activation function, or identity function, simply returns the input as the output: A(x) = x.
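A quick sketch of why purely linear layers collapse (the weight matrices below are random illustration values, not from the article): stacking two layers with the identity activation is equivalent to a single linear layer.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # weights of the first "layer"
W2 = rng.normal(size=(2, 4))   # weights of the second "layer"
x = rng.normal(size=3)

# Two layers with the identity (linear) activation...
h = W1 @ x
y = W2 @ h

# ...are equivalent to one linear layer with weights W2 @ W1
y_single = (W2 @ W1) @ x
print(np.allclose(y, y_single))   # True: extra linear layers add no expressive power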
1. Sigmoid Function
The sigmoid function is defined as A(x) = 1 / (1 + e^(-x)). This formula ensures a smooth and continuous output that is essential for gradient-based optimization methods.
• It allows neural networks to handle and model complex patterns that linear equations cannot.
• The output ranges between 0 and 1, hence useful for binary classification.
• The function exhibits a steep gradient when x values are between -2 and 2. This sensitivity
means that small changes in input x can cause significant changes in output y, which is critical
during the training process.
[Figure: Sigmoid (Logistic) activation function graph]
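A small NumPy sketch of the sigmoid and its gradient (the derivative sigmoid(x) * (1 - sigmoid(x)) is the standard formula, included here for illustration):

import numpy as np

def sigmoid(x):
    # A(x) = 1 / (1 + e^(-x)), output in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # Derivative: sigmoid(x) * (1 - sigmoid(x)); largest near x = 0
    s = sigmoid(x)
    return s * (1.0 - s)

x = np.array([-5.0, -2.0, 0.0, 2.0, 5.0])
print(sigmoid(x))       # values squashed between 0 and 1
print(sigmoid_grad(x))  # steepest gradients for x between -2 and 2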
Tanh function (hyperbolic tangent function) is a scaled and shifted version of the sigmoid whose output spans both sides of the y-axis. It is defined as tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x)) and maps inputs to the range (-1, 1).
• Use in Hidden Layers: Commonly used in hidden layers due to its zero-centered output,
facilitating easier learning for subsequent layers.
[Figure: Tanh activation function graph]
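A minimal sketch of tanh using NumPy's built-in implementation; the check uses the standard identity tanh(x) = 2 * sigmoid(2x) - 1, which shows tanh is a rescaled sigmoid:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(np.tanh(x))                                              # zero-centered output in (-1, 1)
print(np.allclose(np.tanh(x), 2.0 * sigmoid(2.0 * x) - 1.0))   # True: tanh is a rescaled sigmoid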
ReLU (Rectified Linear Unit) activation is defined by A(x) = max(0, x). This means that if the input x is positive, ReLU returns x; if the input is negative, it returns 0.
• Value Range: [0, ∞), meaning the function only outputs non-negative values.
• Nature: It is a non-linear activation function, allowing neural networks to learn complex patterns
and making backpropagation more efficient.
• Advantage over other Activations: ReLU is less computationally expensive than tanh and sigmoid because it involves simpler mathematical operations. Only a few neurons are activated at any given time, which makes the network sparse and therefore efficient and easy to compute.
[Figure: ReLU activation function graph]
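A minimal NumPy sketch of ReLU and its gradient (taking the gradient to be 0 at x = 0 is an implementation choice made here, not something the article specifies):

import numpy as np

def relu(x):
    # A(x) = max(0, x): passes positive inputs through, zeroes out negatives
    return np.maximum(0.0, x)

def relu_grad(x):
    # Gradient is 1 for positive inputs and 0 otherwise (0 chosen at x = 0)
    return (x > 0).astype(float)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))       # negative inputs become 0, giving sparse activations
print(relu_grad(x))  # constant gradient of 1 for active neurons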
1. Softmax Function
The Softmax function is designed to handle multi-class classification problems. It transforms raw output scores (logits) from a neural network into probabilities, computing softmax(z_i) = e^(z_i) / Σ_j e^(z_j). It squashes the output value for each class into the range 0 to 1, while ensuring that the sum of all probabilities equals 1.
• The Softmax function ensures that each class is assigned a probability, helping to identify which
class the input belongs to.
[Figure: Softmax activation function]
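A short sketch of a numerically stable softmax in NumPy (subtracting the maximum score before exponentiating is a common stability trick, not part of the definition itself):

import numpy as np

def softmax(z):
    # softmax(z_i) = e^(z_i) / sum_j e^(z_j); subtracting max(z) avoids overflow
    z = z - np.max(z)
    e = np.exp(z)
    return e / np.sum(e)

scores = np.array([2.0, 1.0, 0.1])   # raw class scores (illustrative values)
probs = softmax(scores)
print(probs)                          # roughly [0.659, 0.242, 0.099]
print(np.sum(probs))                  # probabilities sum to 1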
2. SoftPlus Function
The SoftPlus function is defined as A(x) = ln(1 + e^x). This equation ensures that the output is always positive and differentiable at all points, which is an advantage over the traditional ReLU function (ReLU is not differentiable at 0).
• Range: The function outputs values in the range (0, ∞), similar to ReLU, but without the hard zero threshold that ReLU has.
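A minimal sketch of SoftPlus in NumPy (np.log1p is used for better numerical accuracy, an implementation detail rather than part of the definition):

import numpy as np

def softplus(x):
    # A(x) = ln(1 + e^x): a smooth, everywhere-differentiable relative of ReLU
    return np.log1p(np.exp(x))

x = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print(softplus(x))          # always positive, approaches ReLU for large |x|
print(np.maximum(0.0, x))   # ReLU values shown for comparison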
The choice of activation function has a direct impact on the performance of a neural network in several
ways:
1. Convergence Speed: Functions like ReLU allow faster training by avoiding the vanishing gradient
problem, while Sigmoid and Tanh can slow down convergence in deep networks.
2. Gradient Flow: Activation functions like ReLU ensure better gradient flow, helping deeper layers learn effectively. In contrast, Sigmoid can lead to small gradients, hindering learning in deep layers (see the sketch after this list).
3. Model Complexity: Activation functions like Softmax allow the model to handle complex multi-class problems, whereas simpler functions like ReLU or Leaky ReLU are typically used in the hidden layers.
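As a rough illustration of the gradient-flow point (a toy comparison, not a benchmark): sigmoid gradients shrink quickly as inputs grow, while ReLU keeps a gradient of 1 for any positive input.

import numpy as np

def sigmoid_grad(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

def relu_grad(x):
    return (x > 0).astype(float)

x = np.array([0.5, 2.0, 5.0, 10.0])
print(sigmoid_grad(x))  # shrinks toward 0 as inputs grow: ~0.235, 0.105, 0.0066, 0.000045
print(relu_grad(x))     # stays at 1 for every positive input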
Activation functions are the backbone of neural networks, enabling them to capture non-linear
relationships in data. From classic functions like Sigmoid and Tanh to modern variants like ReLU and
Swish, each has its place in different types of neural networks. The key is to understand their behavior
and choose the right one based on your model’s needs.