
UNIT 5 NEURAL NETWORKS AND TYPES OF LEARNING

22CS503 MACHINE LEARNING CSE DEPARTMENT R.M.D ENGINEERING COLLEGE


SYLLABUS

Biological Neuron – Artificial Neuron – Types of Activation function – Implementations of ANN – Architectures of Neural Networks – Learning Process in ANN – Back propagation – Deep Learning – Representation Learning – Active Learning – Instance based Learning – Association Rule Learning – Ensemble Learning Algorithm – Regularization Algorithm – Reinforcement Learning – Elements – Model-based – Temporal Difference Learning
ARTIFICIAL NEURAL NETWORKS

◼ Artificial Neural Networks contain artificial neurons, which are called units.

◼ These units are arranged in a series of layers that together constitute the whole Artificial Neural Network in a system.

◼ Commonly, an Artificial Neural Network has an input layer, an output layer, and one or more hidden layers.

◼ The input layer receives data from the outside world which the neural network needs to analyze or learn about.

◼ Then this data passes through one or multiple hidden layers, which transform the input into data that is valuable for the output layer.

◼ Finally, the output layer provides an output in the form of the network's response to the input data provided.
ARTIFICIAL NEURAL NETWORKS

◼ Each of these connections has weights that determine the influence of one unit on another unit.
◼ As the data transfers from one unit to another, the neural network learns more and more about the data
which eventually results in an output from the output layer.
ARTIFICIAL NEURON

◼ Each neuron has three major components: a set of synapses, a summation junction, and a threshold activation function.

1. A set of ‘i’ synapses, each having a weight wi. The value of a weight wi may be positive or negative. A positive weight has an excitatory effect, while a negative weight has an inhibitory effect on the output of the summation junction, y.

2. A summation junction, which acts as a linear combiner or adder of the input signals, each weighted by its respective synaptic weight. With bias b, the output of the summation junction, y, can be expressed as

y = Σi wi xi + b

or, equivalently, treating the bias as an extra weight w0 applied to a constant input x0 = 1,

y = Σi wi xi  (summing from i = 0)
ARTIFICIAL NEURON

3. A threshold activation function (or simply activation function, also called a squashing function), which produces an output signal only when the input signal exceeds a specific threshold value. It is similar in behavior to the biological neuron, which transmits a signal only when the total input signal meets the firing threshold.
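Pulling the three components together, here is a minimal sketch in Python (NumPy assumed; the inputs, weights, bias, and the default threshold of 0 are illustrative values, not part of the material):

```python
import numpy as np

def artificial_neuron(x, w, b, theta=0.0):
    """A single artificial neuron: a weighted sum of the inputs plus
    bias (the summation junction), passed through a threshold
    activation function."""
    y = np.dot(w, x) + b               # summation junction output
    return 1.0 if y >= theta else 0.0  # fires only above the threshold

# Two inputs: one excitatory (+0.8) and one inhibitory (-0.4) weight
x = np.array([1.0, 0.5])
w = np.array([0.8, -0.4])
print(artificial_neuron(x, w, b=0.1))  # weighted sum 0.7 >= 0 -> 1.0
```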
TYPES OF ACTIVATION FUNCTION

1. Identity function
2. Threshold/step function
3. ReLU (Rectified Linear Unit) function
4. Sigmoid function
5. Hyperbolic tangent function
TYPES OF ACTIVATION FUNCTION

1. Identity function is used as an activation function for the input layer.

◼ It is a linear function having the form f(x) = x.

◼ As is obvious, the output remains the same as the input.

2. Step function gives 1 as output if the input is either 0 or positive; if the input is negative, it gives 0 as output:

f(x) = 1 if x ≥ 0, else 0
TYPES OF ACTIVATION FUNCTION

◼ The threshold function is almost like the step function, with the only difference being that θ is used as the threshold value instead of 0: f(x) = 1 if x ≥ θ, else 0.
TYPES OF ACTIVATION FUNCTION

◼ ReLU (Rectified Linear Unit) function

ReLU is the most popularly used activation function in the areas of convolutional neural networks and deep learning. It is defined as f(x) = max(0, x): f(x) is zero when x is less than zero, and f(x) is equal to x when x is greater than or equal to zero.
4. Sigmoid function
There are two types of sigmoid function:
1. Binary sigmoid function
2. Bipolar sigmoid function
Binary sigmoid function

A binary sigmoid function is of the form

f(x) = 1 / (1 + e^(−kx))

where k is the steepness or slope parameter of the sigmoid function. It has a range of (0, 1).
TYPES OF ACTIVATION FUNCTION

◼ Bipolar sigmoid function is of the form

f(x) = (1 − e^(−kx)) / (1 + e^(−kx))

where k is the steepness or slope parameter of the sigmoid function. It has a range of (−1, +1).
TYPES OF ACTIVATION FUNCTION

◼ Hyperbolic tangent function is another continuous activation function, which is bipolar in nature:

f(x) = (e^x − e^(−x)) / (e^x + e^(−x))

It is a widely adopted activation function for a special type of neural network known as the backpropagation network.
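For reference, the activation functions above can be written in a few lines of NumPy. This is a sketch only; the default slope k = 1 and the sample inputs are arbitrary choices:

```python
import numpy as np

def identity(x):             return x                        # f(x) = x
def step(x, theta=0.0):      return np.where(x >= theta, 1, 0)
def relu(x):                 return np.maximum(0, x)         # max(0, x)
def binary_sigmoid(x, k=1):  return 1 / (1 + np.exp(-k * x)) # range (0, 1)
def bipolar_sigmoid(x, k=1):
    e = np.exp(-k * x)
    return (1 - e) / (1 + e)                                 # range (-1, +1)
def tanh(x):                 return np.tanh(x)               # range (-1, +1)

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(relu(x))            # [0. 0. 0. 1. 2.]
print(binary_sigmoid(x))  # values strictly between 0 and 1
```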
ARCHITECTURES OF NEURAL NETWORK

◼ An ANN is a computational system consisting of a large number of interconnected units called artificial neurons. The connections between artificial neurons can transmit a signal from one neuron to another.

◼ Types

◼ Single-layer feed forward network

◼ Multi-layer feed forward ANNs

◼ Competitive network

◼ Recurrent network
SINGLE LAYER FEED FORWARD NETWORK

◼ It consists of only two layers – the input layer and the output layer.
◼ The input layer consists of a set of ‘m’ input neurons X1, X2, …, Xm connected to each of the ‘n’ output neurons Y1, Y2, …, Yn. The connections carry weights w11, w12, …, wmn.
◼ The input layer of neurons does not conduct any processing – it simply passes the input signals on to the output neurons.
◼ The computations are performed only by the neurons in the output layer.
◼ So, though it has two layers of neurons, only one layer performs computation; this is why the network is known as single-layer in spite of having two layers of neurons.
◼ Also, the signals always flow from the input layer to the output layer.
◼ Hence, this network is known as feed forward.
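In code, the whole single-layer computation reduces to one matrix product; a minimal sketch (the random weights, step activation, and layer sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
m, n = 3, 2                      # m input neurons, n output neurons
W = rng.standard_normal((m, n))  # weights w11 ... wmn
b = np.zeros(n)                  # one bias per output neuron

def single_layer_forward(x, W, b):
    """Input neurons do no processing; only the output layer
    computes a weighted sum and applies an activation."""
    return np.where(x @ W + b >= 0, 1, 0)  # step activation

x = np.array([0.5, -1.0, 2.0])        # input signals X1, X2, X3
print(single_layer_forward(x, W, b))  # outputs Y1, Y2
```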
MULTI-LAYER FEED FORWARD ANNS

◼ The multi-layer feed forward network is quite similar to the single-layer feed forward network, except for the fact that there are one or more intermediate layers of neurons between the input and the output layers. Hence, the network is termed multi-layer.
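A minimal forward pass through such a network (the layer sizes, tanh activation, and random weights are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, weights, biases):
    """Forward pass of a multi-layer feed forward network: each
    intermediate (hidden) layer transforms its input before the
    signal reaches the output layer."""
    a = x
    for W, b in zip(weights, biases):
        a = np.tanh(a @ W + b)   # tanh activation at every layer
    return a

# 3 inputs -> one hidden layer of 4 neurons -> 2 outputs
weights = [rng.standard_normal((3, 4)), rng.standard_normal((4, 2))]
biases  = [np.zeros(4), np.zeros(2)]
print(mlp_forward(np.array([1.0, 0.0, -1.0]), weights, biases))
```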
COMPETITIVE NETWORK
◼ The competitive network is almost the same in structure as the single-layer feed forward network. The only difference is that the output neurons are connected with each other (either partially or fully).
RECURRENT NETWORK

◼ In the case of recurrent neural networks, there is a feedback loop from the neurons in the output layer to the input layer neurons. There may also be self-loops.
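A sketch of one time step of such a feedback loop (the state size, tanh activation, and random weights are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def recurrent_step(x_t, h_prev, W_in, W_rec, b):
    """One step of a simple recurrent network: the previous output
    h_prev is fed back in alongside the new input x_t."""
    return np.tanh(x_t @ W_in + h_prev @ W_rec + b)

W_in  = rng.standard_normal((2, 3))   # input-to-state weights
W_rec = rng.standard_normal((3, 3))   # feedback (recurrent) weights
b     = np.zeros(3)

h = np.zeros(3)                          # initial state
for x_t in rng.standard_normal((5, 2)):  # a sequence of 5 inputs
    h = recurrent_step(x_t, h, W_in, W_rec, b)
print(h)
```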
LEARNING PROCESS IN ANN

◼ There are four major aspects which need to be decided:


1. The number of layers in the network
Example: Single layer Neural Networks, Multiple Layer Neural Networks
2. The direction of signal flow
Example: Feed forward Network, Recurrent network
3. The number of nodes in each layer
The number of output nodes will depend on possible outcomes,
e.g. number of classes in the case of supervised learning.
4. The value of weights attached with each interconnection between neurons
BACKPROPAGATION ALGORITHM

◼ The backpropagation algorithm is applicable for multi-layer feed forward networks.

◼ It is a supervised learning algorithm which keeps adjusting the weights of the connected neurons with the objective of reducing the deviation of the output signal from the target output.

◼ This algorithm consists of multiple iterations, also known as epochs.

◼ Each epoch consists of two phases – a forward phase and a backward phase.

◼ A forward phase, in which the signals flow from the neurons in the input layer to the neurons in the output layer through the hidden layers. The weights of the interconnections and the activation functions are used during the flow. In the output layer, the output signals are generated.
BACKPROPAGATION ALGORITHM
◼ A backward phase, in which the output signal is compared with the expected value. The computed errors are propagated backwards from the output to the preceding layers. The errors propagated back are used to adjust the interconnection weights between the layers.

◼ The iterations continue till a stopping criterion is reached.
BACKPROPAGATION ALGORITHM

◼ One main part of the algorithm is adjusting the interconnection weights. This is done using a technique termed gradient descent.
◼ In simple terms, the algorithm calculates the partial derivative of the cost function with respect to each interconnection weight to identify the ‘gradient’, or extent of change of the weight, required to minimize the cost function.
◼ If t is the target output and y is the actual output signal, the cost function defined as the squared error of the output layer is given by

E = ½ (t − y)²

◼ So, as a part of the gradient descent algorithm, the partial derivative of the cost function E has to be taken with respect to each of the interconnection weights.
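Putting the forward phase, the backward phase, and the gradient descent update together, here is a minimal sketch. The XOR data, sigmoid activations, 4 hidden neurons, and learning rate of 0.5 are illustrative assumptions, not prescribed by the material:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# A tiny 2-4-1 network; cost is the squared error E = 0.5 * (t - y)^2
W1, b1 = rng.standard_normal((2, 4)), np.zeros(4)
W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets
eta = 0.5                                        # learning rate

for epoch in range(10000):
    # Forward phase: signals flow input -> hidden -> output
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward phase: dE/dy = (y - t), chained through sigmoid' = y(1 - y)
    delta_out = (y - T) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    # Gradient descent: step each weight against its partial derivative
    W2 -= eta * h.T @ delta_out; b2 -= eta * delta_out.sum(axis=0)
    W1 -= eta * X.T @ delta_hid; b1 -= eta * delta_hid.sum(axis=0)

print(np.round(y.ravel(), 2))  # should approach the XOR targets [0 1 1 0]
```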
TYPES OF LEARNING

◼ Representation learning
◼ Active learning
◼ Instance-based Learning
◼ Association rule
◼ Ensemble learning.
ASSOCIATION RULE LEARNING
