LIET III-II CSE AIML IV UNIT Previous Yrs QN Papers Qns and Answers
Syllabus
UNIT-IV: INTRODUCTION TO NEURAL NETWORK
Previous yrs Qn Papers Qns and Answers
1. What is a Neural Network? Discuss the working of an
artificial Neuron.
A Neural Network (Artificial Neural Network, ANN) is an information-processing system inspired by the way biological nervous systems, such as the brain, process information. It is composed of a large number of highly interconnected processing elements (neurons) working together to solve a problem.
Each neuron has an internal state, called an activation signal. Output signals, produced by combining the input signals and the activation rule, may be sent to other units.
Biological Neuron
A nerve cell (neuron) is a special biological cell that processes information. According to one estimate, the human brain contains a huge number of neurons, approximately 10^11, with numerous interconnections, approximately 10^15.
Schematic Diagram
Dendrites − Tree-like branches that receive information from other neurons.
Soma − The cell body of the neuron; it processes the information received from the dendrites.
Axon − A cable-like projection through which the neuron sends the information.
Synapses − The connections between an axon and the dendrites of other neurons.
Before looking at the differences between an Artificial Neural Network (ANN) and a Biological Neural Network (BNN), let us look at the similarities in terminology between the two −
BNN         ANN
Soma        Node
Dendrites   Input
Axon        Output
Synapse     Weights
Perceptron
Developed by Frank Rosenblatt by using McCulloch and Pitts model, perceptron is
the basic operational unit of artificial neural networks. It employs supervised
learning rule and is able to classify the data into two classes.
Operational characteristics of the perceptron: it consists of a single neuron with an arbitrary number of inputs along with adjustable weights, and the output of the neuron is 1 or 0 depending upon the threshold. It also has a bias input, which is always 1 and has an adjustable weight. The following figure gives a schematic representation of the perceptron.
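The perceptron just described can be sketched in code. This is a minimal illustration only; the AND-gate data, learning rate, and epoch count below are assumptions chosen for the example, not taken from the notes.

```python
# Minimal perceptron sketch: a single neuron with adjustable weights and a
# bias input fixed at 1, output thresholded to 1 or 0, trained with the
# perceptron learning rule. AND-gate data is an assumed example.

def predict(weights, bias, x):
    # Net input = w . x + bias; threshold at 0 gives an output of 1 or 0.
    net = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if net >= 0 else 0

def train(samples, targets, lr=0.1, epochs=20):
    weights = [0.0, 0.0]   # small initial values
    bias = 0.0             # weight on the fixed bias input of 1
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            err = t - predict(weights, bias, x)
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err          # bias update uses the constant input 1
    return weights, bias

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                # AND gate: linearly separable
w, b = train(samples, targets)
print([predict(w, b, x) for x in samples])  # → [0, 0, 0, 1]
```

Because the AND function is linearly separable, the perceptron convergence theorem guarantees the rule settles on a correct set of weights.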
3. Discuss the Back Propagation Algorithm
Back Propagation Neural Networks
A Back Propagation Network (BPN) is a multilayer neural network consisting of an input layer, at least one hidden layer and an output layer. As its name suggests, back propagation takes place in this network. The error calculated at the output layer, by comparing the target output and the actual output, is propagated back towards the input layer.
Architecture
As shown in the diagram, the architecture of BPN has three interconnected layers with weights on them. The hidden layer as well as the output layer also has a bias input, which is always 1. As is clear from the diagram, BPN works in two phases: one phase sends the signal from the input layer to the output layer, and the other phase back propagates the error from the output layer to the input layer.
Training Algorithm
For training, BPN uses the binary sigmoid activation function. The training of BPN has the following three phases.
Phase 1 − Feed Forward Phase
Phase 2 − Back Propagation of error
Phase 3 − Updating of weights
All these steps are combined in the following algorithm −
Step 1 − Initialize the following to start the training −
Weights
Learning rate α
For easy calculation and simplicity, take some small random values for the weights.
Step 2 − Continue steps 3−11 while the stopping condition is not true.
Step 3 − Continue steps 4−10 for every training pair.
Phase 1
Step 4 − Each input unit receives an input signal xi and sends it to the hidden units, for all i = 1 to n.
Step 5 − Calculate the net input at each hidden unit zj (j = 1 to p), apply the activation function, and send the result to the output layer −
zinj = b0j + Σ (i = 1 to n) xi vij
zj = f(zinj)
Here vij are the input-to-hidden weights and b0j the hidden-unit biases.
Step 6 − Calculate the net input at each output layer unit yk (k = 1 to m) and apply the activation function −
yink = b0k + Σ (j = 1 to p) zj wjk
yk = f(yink)
Here wjk are the hidden-to-output weights and b0k the output-unit biases.
Phase 2
Step 7 − Compute the error-correcting term at each output unit from the corresponding target tk −
δk = (tk − yk) f′(yink)
Step 8 − Each hidden unit sums its delta inputs from the output units and multiplies by the derivative of its activation −
δinj = Σ (k = 1 to m) δk wjk
δj = δinj f′(zinj)
Phase 3
Step 9 − Each output unit (yk, k = 1 to m) updates its weights and bias as follows −
wjk(new) = wjk(old) + α δk zj
b0k(new) = b0k(old) + α δk
Step 10 − Each hidden unit (zj, j = 1 to p) updates its weights and bias as follows −
vij(new) = vij(old) + α δj xi
b0j(new) = b0j(old) + α δj
Step 11 − Check the stopping condition, which may be a number of epochs reached or the target output matching the actual output.
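The three phases above can be sketched in code. This is a minimal illustration with one hidden layer and the binary sigmoid; the layer sizes, learning rate α = 0.5, and the single training pair are assumptions for the example.

```python
import math, random

# Sketch of the three BPN phases: feed forward, back propagation of error,
# and weight update, for one hidden layer with the binary sigmoid.

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_step(x, t, v, b_v, w, b_w, alpha=0.5):
    # Phase 1: feed forward
    z_in = [b_v[j] + sum(x[i] * v[i][j] for i in range(len(x))) for j in range(len(b_v))]
    z = [sigmoid(s) for s in z_in]
    y_in = [b_w[k] + sum(z[j] * w[j][k] for j in range(len(z))) for k in range(len(b_w))]
    y = [sigmoid(s) for s in y_in]
    # Phase 2: back propagation of error; for the sigmoid, f'(s) = f(s)(1 - f(s))
    delta_k = [(t[k] - y[k]) * y[k] * (1 - y[k]) for k in range(len(y))]
    delta_in = [sum(delta_k[k] * w[j][k] for k in range(len(y))) for j in range(len(z))]
    delta_j = [delta_in[j] * z[j] * (1 - z[j]) for j in range(len(z))]
    # Phase 3: weight update
    for j in range(len(z)):
        for k in range(len(y)):
            w[j][k] += alpha * delta_k[k] * z[j]
    for k in range(len(y)):
        b_w[k] += alpha * delta_k[k]
    for i in range(len(x)):
        for j in range(len(z)):
            v[i][j] += alpha * delta_j[j] * x[i]
    for j in range(len(z)):
        b_v[j] += alpha * delta_j[j]
    return y

random.seed(0)
v = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]  # input -> hidden
b_v = [0.0] * 3
w = [[random.uniform(-0.5, 0.5)] for _ in range(3)]                    # hidden -> output
b_w = [0.0]
x, t = [1.0, 0.0], [1.0]
errs = []
for _ in range(200):
    y = train_step(x, t, v, b_v, w, b_w)
    errs.append((t[0] - y[0]) ** 2)
print(errs[0] > errs[-1])  # → True: the squared error shrinks over training
```

The updates follow the Step 9 and Step 10 formulas directly; note that all deltas are computed before any weight is changed, so Phase 2 always uses the old weights.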
4. Write a Short Note On Adaptive Resonance theory
Adaptive Resonance Theory
This network was developed by Stephen Grossberg and Gail Carpenter in 1987. It is based on competition and uses an unsupervised learning model. Adaptive Resonance Theory (ART) networks, as the name suggests, are always open to new learning (adaptive) without losing the old patterns (resonance). Basically, an ART network is a vector classifier which accepts an input vector and classifies it into one of the categories depending upon which of the stored patterns it resembles the most.
Operating Principle
The main operation of ART classification can be divided into the following phases
−
Recognition phase − The input vector is compared with the classification
presented at every node in the output layer. The output of the neuron
becomes “1” if it best matches with the classification applied, otherwise it
becomes “0”.
Comparison phase − In this phase, the input vector is compared with the comparison-layer vector. The condition for reset is that the degree of similarity is less than the vigilance parameter.
Search phase − In this phase, the network checks the reset and the match obtained in the above phases. If there is no reset and the match is good, the classification is over. Otherwise, the process is repeated and another stored pattern must be tried to find the correct match.
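The comparison-phase reset test can be sketched for binary vectors (in the ART1 style introduced below): the match ratio between the input and a stored pattern is compared with the vigilance parameter. The vectors and vigilance values here are assumed purely for illustration.

```python
# Sketch of the ART comparison-phase test for binary vectors (ART1 style):
# the match ratio |x AND t| / |x| is compared with the vigilance parameter.
# The vectors and vigilance value below are assumed for illustration.

def match_ratio(x, t):
    # x: binary input vector, t: stored (top-down) pattern
    overlap = sum(xi & ti for xi, ti in zip(x, t))
    ones = sum(x)
    return overlap / ones if ones else 1.0

def reset_needed(x, t, vigilance=0.8):
    # A reset fires when similarity falls below the vigilance parameter,
    # so the cluster is not allowed to learn this pattern.
    return match_ratio(x, t) < vigilance

x = [1, 1, 0, 1]
close = [1, 1, 0, 0]   # shares 2 of the 3 active bits of x
far = [0, 0, 1, 0]     # shares none
print(reset_needed(x, close, vigilance=0.5))  # → False (match 2/3 >= 0.5)
print(reset_needed(x, far, vigilance=0.5))    # → True  (match 0/3 < 0.5)
```

A higher vigilance makes the network pickier: more resets, and therefore more, finer-grained categories.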
ART1
It is a type of ART designed to cluster binary vectors. We can understand it with the help of its architecture.
Architecture of ART1
It consists of the following two units −
Computational Unit − It is made up of the following −
Input unit (F1 layer) − It further has the following two portions −
o F1(a) layer (input portion) − In ART1, there is no processing in this portion; it simply holds the input vectors. It is connected to the F1(b) layer (interface portion).
o F1(b) layer (interface portion) − This portion combines the signal from the input portion with that of the F2 layer. The F1(b) layer is connected to the F2 layer through bottom-up weights bij, and the F2 layer is connected to the F1(b) layer through top-down weights tji.
Cluster Unit (F2 layer) − This is a competitive layer. The unit having the largest net input is selected to learn the input pattern. The activations of all other cluster units are set to 0.
Reset Mechanism − The work of this mechanism is based upon the similarity between the top-down weight and the input vector. If the degree of this similarity is less than the vigilance parameter, then the cluster is not allowed to learn the pattern, and a reset happens.
Supplement Unit − The issue with the reset mechanism is that the layer F2 must be inhibited under certain conditions and must also be available when some learning happens. That is why two supplemental units, namely G1 and G2, are added along with the reset unit R. They are called gain control units. These units receive signals from, and send signals to, the other units present in the network. ‘+’ indicates an excitatory signal, while ‘−’ indicates an inhibitory signal.
5. Explain Auto Associative Memory.
Auto Associative Memory
This is a single layer neural network in which the input training vector and the
output target vectors are the same. The weights are determined so that the network
stores a set of patterns.
Architecture
As shown in the following figure, the architecture of Auto Associative memory
network has ‘n’ number of input training vectors and similar ‘n’ number of output
target vectors.
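The idea above can be sketched with Hebbian weight determination (the weight matrix accumulates the outer product of each stored pattern with itself, with self-connections zeroed) and recall by thresholding the net input. Bipolar patterns and the stored vector below are assumptions for the example.

```python
# Sketch of an auto-associative memory: input training vector and output
# target vector are the same. Weights via the Hebb rule, W = sum of s s^T
# with the diagonal zeroed; recall takes the sign of each net input.
# Bipolar (+1/-1) patterns are assumed for this illustration.

def train(patterns):
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for s in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:              # no self-connection
                    W[i][j] += s[i] * s[j]
    return W

def recall(W, x):
    # Each unit outputs the sign of its net input.
    return [1 if sum(W[i][j] * x[j] for j in range(len(x))) >= 0 else -1
            for i in range(len(W))]

stored = [1, -1, 1, -1]
W = train([stored])
noisy = [1, -1, 1, 1]                   # stored pattern with one flipped bit
print(recall(W, noisy) == stored)       # → True: the pattern is recovered
```

This shows the "stores a set of patterns" property concretely: a corrupted version of a stored vector is mapped back to the original.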
6. Explain the Difference between Supervised and Unsupervised Learning.
Supervised Learning
As the name suggests, this type of learning is done under the supervision of a
teacher. This learning process is dependent.
During the training of an ANN under supervised learning, the input vector is presented to the network, which produces an output vector. This output vector is compared with the desired output vector. An error signal is generated if there is a difference between the actual and the desired output. On the basis of this error signal, the weights are adjusted until the actual output matches the desired output.
Unsupervised Learning
As the name suggests, this type of learning is done without the supervision of a
teacher. This learning process is independent.
During the training of ANN under unsupervised learning, the input vectors of
similar type are combined to form clusters. When a new input pattern is applied,
then the neural network gives an output response indicating the class to which the
input pattern belongs.
There is no feedback from the environment as to what the desired output should be or whether it is correct. Hence, in this type of learning, the network itself must discover the patterns and features in the input data, and the relationship between the input data and the output.
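The unsupervised behaviour described above can be sketched with a simple winner-take-all rule: similar inputs are grouped around prototypes, the nearest prototype wins each input, and a new input is assigned to the cluster of its nearest prototype. The data points, prototype count, and learning rate below are assumptions for the example.

```python
# Sketch of unsupervised learning as described above: input vectors of
# similar type form clusters; a new input is assigned to the nearest one.
# No teacher or error signal is involved, only competition among prototypes.

def nearest(prototypes, x):
    # Index of the prototype closest to x (squared Euclidean distance).
    dists = [sum((p - xi) ** 2 for p, xi in zip(proto, x)) for proto in prototypes]
    return dists.index(min(dists))

def train(data, prototypes, lr=0.3, epochs=10):
    for _ in range(epochs):
        for x in data:
            k = nearest(prototypes, x)          # competition: winner takes all
            prototypes[k] = [p + lr * (xi - p)  # move the winner toward the input
                             for p, xi in zip(prototypes[k], x)]
    return prototypes

data = [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]]
protos = train(data, [[0.2, 0.2], [0.8, 0.8]])
print(nearest(protos, [0.05, 0.05]))  # → 0 (assigned to the first cluster)
print(nearest(protos, [0.95, 0.95]))  # → 1 (assigned to the second cluster)
```

Note that no target vectors appear anywhere; the grouping emerges from the data alone, which is exactly the contrast with the supervised case.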
7. What is a self-organizing map? Discuss the algorithm and features of Kohonen's map.
Rectangular Grid Topology
In this topology, around the winning unit (indicated by # in the figure), the distance-1 grid contains 8 nodes, the distance-2 grid contains 16 nodes, and the distance-3 grid contains 24 nodes; each successive rectangular grid thus adds 8 more nodes.
Architecture
The architecture of KSOM is similar to that of the competitive network. With the
help of neighborhood schemes, discussed earlier, the training can take place over
the extended region of the network.
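The KSOM training just described, competition followed by an update over a neighbourhood of the winner, can be sketched as follows. The 1-D map size, neighbourhood radius, and learning-rate schedule are assumptions chosen for the example.

```python
import random

# Sketch of Kohonen SOM training: find the winning unit (smallest distance
# between weight vector and input), then move the winner and its grid
# neighbours toward the input. A 1-D map of 5 units is assumed here.

random.seed(1)
m, dim = 5, 2                                    # 5 map units, 2-D inputs
weights = [[random.random() for _ in range(dim)] for _ in range(m)]

def winner(x):
    d = [sum((w - xi) ** 2 for w, xi in zip(weights[j], x)) for j in range(m)]
    return d.index(min(d))

def train(data, epochs=50, lr=0.5, radius=1):
    for e in range(epochs):
        alpha = lr * (1 - e / epochs)            # decaying learning rate
        for x in data:
            c = winner(x)
            # Update the winner and its neighbours within the given radius.
            for j in range(max(0, c - radius), min(m, c + radius + 1)):
                weights[j] = [w + alpha * (xi - w)
                              for w, xi in zip(weights[j], x)]

data = [[0.0, 0.0], [0.1, 0.1], [0.9, 0.9], [1.0, 1.0]]
train(data)
# After training, inputs from the two regions map to different winning units.
print(winner([0.05, 0.05]) != winner([0.95, 0.95]))  # → True
```

The neighbourhood update is what distinguishes KSOM from plain competitive learning: nearby grid units come to respond to nearby inputs, producing a topology-preserving map.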
The output of each neuron should be the input of the other neurons, but not an input to itself.
Weight/connection strength is represented by wij.
Connections can be excitatory as well as inhibitory: excitatory if the output of a neuron is the same as its input, otherwise inhibitory.
Weights should be symmetrical, i.e. wij = wji.