LIET III-II CSE AIML IV UNIT Previous Yrs QN Papers Qns and Answers
LORDS INSTITUTE OF ENGINEERING & TECHNOLOGY

Approved by AICTE / Affiliated to Osmania University / Estd. 2002.


Department of Computer Science and Engineering- AIML
Course Title: SOFT COMPUTING

Course Code: PC604CSM

Syllabus
UNIT-IV: INTRODUCTION TO NEURAL NETWORK

PREVIOUS YRS QN PAPERS QNS AND ANSWERS IN SOFT COMPUTING (FROM IV UNIT)

1. Introduction to Neural Network,
2. Fundamental Concept, Evolution of Neural Networks,
3. Models of Neural Networks & Important Technologies, Applications,
   McCulloch-Pitts Neuron, Linear Separability & Hebb Network,
4. Supervised Learning Network: Perceptron Networks,
5. Adaptive Linear Neurons,
6. Back Propagation Network,
7. Radial Basis Network,
8. Unsupervised Learning Networks: Kohonen Self-Organizing Feature Maps,
   Learning Vector Quantization, Counter Propagation Networks,
9. Adaptive Resonance Theory Network.

SUGGESTED READINGS (SR):


SR1 J.S.R. Jang, C.T. Sun and E. Mizutani, “Neuro-Fuzzy and Soft Computing”, Pearson Education, 2004.
SR2 S.N. Sivanandam and S.N. Deepa, “Principles of Soft Computing”, Second Edition, Wiley Publication.

Previous yrs Qn Papers Qns and Answers
1. What is a Neural Network? Discuss the working of an
artificial Neuron.


 Neural networks are parallel computing devices, which are basically an attempt to make a computer model of the brain.
 The main objective is to develop a system to perform various
computational tasks faster than the traditional systems.
 These tasks include pattern recognition and classification,
approximation, optimization, and data clustering.
 An Artificial Neural Network (ANN) is an efficient computing system whose central theme is borrowed from the analogy of biological neural networks.
 ANNs are also named as “artificial neural systems,” or “parallel
distributed processing systems,” or “connectionist systems.”
 ANN acquires a large collection of units that are interconnected in
some pattern to allow communication between the units. These
units, also referred to as nodes or neurons, are simple processors
which operate in parallel.
 Every neuron is connected to other neurons through a connection link. Each connection link is associated with a weight that carries information about the input signal.
 This is the most useful information for neurons to solve a
particular problem because the weight usually excites or inhibits
the signal that is being communicated.

 Each neuron has an internal state, which is called an activation
signal. Output signals, which are produced after combining the
input signals and activation rule, may be sent to other units.
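The working just described can be sketched in a few lines of code. This is a minimal illustration (the function name, weights, and threshold are illustrative, not from the notes): the neuron combines its weighted inputs with a bias and applies an activation rule.

```python
# A minimal sketch of a single artificial neuron: it combines weighted
# inputs and a bias, then applies a step activation rule.

def artificial_neuron(inputs, weights, bias, threshold=0.0):
    """Weighted sum of inputs plus bias, passed through a step activation."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if net >= threshold else 0

# Example: one excitatory (+0.6) and one inhibitory (-0.2) weight
print(artificial_neuron([1, 1], [0.6, -0.2], bias=0.0))  # net = 0.4, so output 1
```

Here the weights play the role of the connection links described above: a positive weight excites the signal and a negative weight inhibits it.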
Biological Neuron
A nerve cell (neuron) is a special biological cell that processes information. According to one estimate, there is a huge number of neurons, approximately 10^11, with numerous interconnections, approximately 10^15.

Schematic Diagram

Working of a Biological Neuron


As shown in the above diagram, a typical neuron consists of the following four
parts with the help of which we can explain its working –

 Dendrites − They are tree-like branches responsible for receiving information from the other neurons the neuron is connected to. In that sense, they act like the ears of the neuron.
 Soma − It is the cell body of the neuron and is responsible for processing the information received from the dendrites.

 Axon − It is just like a cable through which the neuron sends information.
 Synapses − These are the connections between the axon and the dendrites of other neurons.

ANN versus BNN

Before taking a look at the differences between an Artificial Neural Network (ANN) and a Biological Neural Network (BNN), let us take a look at the similarities in terminology between the two.

Biological Neural Network (BNN)    Artificial Neural Network (ANN)

Soma                               Node

Dendrites                          Input

Synapse                            Weights or interconnections

Axon                               Output

2. Examine the Role of a Perceptron for Creating a Neural Network
Perceptron Learning Rule
 This rule, introduced by Rosenblatt, is an error-correcting supervised learning algorithm for single-layer feedforward networks with a linear activation function.
 Basic Concept − As being supervised in nature, to calculate the error, there
would be a comparison between the desired/target output and the actual
output. If there is any difference found, then a change must be made to the
weights of connection.
 Mathematical Formulation − To explain its mathematical formulation, suppose we have a finite set of N input vectors x(n), along with their desired/target output vectors t(n), where n = 1 to N.

Perceptron
Developed by Frank Rosenblatt using the McCulloch-Pitts model, the perceptron is the basic operational unit of artificial neural networks. It employs a supervised learning rule and is able to classify data into two classes.
Operational characteristics of the perceptron: It consists of a single neuron with an arbitrary number of inputs along with adjustable weights, but the output of the neuron is 1 or 0 depending upon the threshold. It also has a bias whose weight is always 1. The following figure gives a schematic representation of the perceptron.

Perceptron thus has the following three basic elements −


 Links − It has a set of connection links that carry weights, including a bias that always has weight 1.
 Adder − It adds the inputs after they are multiplied by their respective weights.
 Activation function − It limits the output of the neuron. The most basic activation function is the Heaviside step function, which has two possible outputs: it returns 1 if the input is positive, and 0 for any negative input.
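The three elements above can be put together in a short sketch of perceptron training on the AND function, which is linearly separable. The function names, learning rate, and epoch count below are illustrative assumptions, not taken from the notes; the update rule follows the error-correcting idea described earlier (compare target and actual output, then adjust weights).

```python
# A hedged sketch of the perceptron learning rule on AND data.
# Names, learning rate and epoch count are illustrative assumptions.

def step(net):                     # Heaviside activation
    return 1 if net >= 0 else 0

def train_perceptron(samples, targets, lr=0.1, epochs=20):
    w = [0.0, 0.0]                 # adjustable weights
    b = 0.0                        # bias (its input is always 1)
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = step(w[0] * x[0] + w[1] * x[1] + b)
            err = t - y            # desired output minus actual output
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err          # weights change only when err != 0
    return w, b

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 0, 0, 1]                   # logical AND is linearly separable
w, b = train_perceptron(X, T)
print([step(w[0]*x[0] + w[1]*x[1] + b) for x in X])  # -> [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on a separating line; the same code would never converge on XOR.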

3. Discuss the Back Propagation Algorithm
Back Propagation Neural Networks
A Back Propagation Network (BPN) is a multilayer neural network consisting of an input layer, at least one hidden layer, and an output layer. As its name suggests, back-propagation takes place in this network: the error calculated at the output layer, by comparing the target output with the actual output, is propagated back towards the input layer.
Architecture
As shown in the diagram, the architecture of a BPN has three interconnected layers with weights on them. The hidden layer and the output layer also have a bias, whose weight is always 1. As is clear from the diagram, the working of a BPN has two phases: one phase sends the signal from the input layer to the output layer, and the other phase back-propagates the error from the output layer to the input layer.

Training Algorithm
For training, the BPN uses the binary sigmoid activation function. The training of a BPN has the following three phases.
 Phase 1 − Feed Forward Phase
 Phase 2 − Back Propagation of error
 Phase 3 − Updating of weights

All these steps are collected in the algorithm as follows.
Step 1 − Initialize the following to start the training −
 Weights
 Learning rate α
For easy calculation and simplicity, take some small random values.
Step 2 − Continue steps 3-11 while the stopping condition is not true.
Step 3 − Continue steps 4-10 for every training pair.
Phase 1
Step 4 − Each input unit receives input signal x_i and sends it to the hidden units, for all i = 1 to n.
Step 5 − Calculate the net input at each hidden unit z_j (j = 1 to p) and apply the activation function −
z_inj = b_0j + Σ_{i=1 to n} x_i v_ij,   z_j = f(z_inj)
Step 6 − Calculate the net input at each output unit y_k (k = 1 to m) and apply the activation function −
y_ink = b_0k + Σ_{j=1 to p} z_j w_jk,   y_k = f(y_ink)

Phase 2
Step 7 − Compute the error-correcting term δ_k at each output unit, in correspondence with the target pattern t_k, as follows −
δ_k = (t_k − y_k) f′(y_ink)
Step 8 − Each hidden unit sums the delta inputs it receives from the output units and multiplies by the derivative of its activation −
δ_inj = Σ_{k=1 to m} δ_k w_jk,   δ_j = δ_inj f′(z_inj)

Phase 3
Step 9 − Each output unit y_k (k = 1 to m) updates its weights and bias as follows −
Δw_jk = α δ_k z_j,   Δb_0k = α δ_k
Step 10 − Each hidden unit z_j (j = 1 to p) updates its weights and bias as follows −
Δv_ij = α δ_j x_i,   Δb_0j = α δ_j
Step 11 − Check for the stopping condition, which may be either the number of epochs reached or the target output matching the actual output.
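The three phases above can be sketched compactly with NumPy. This is an illustrative sketch, not the notes' own program: the layer sizes, learning rate, epoch count, and the choice of XOR as training data are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch of the three BPN phases (feed-forward, error
# back-propagation, weight update) using the binary sigmoid, on XOR.
# Layer sizes, learning rate and variable names are assumptions.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # binary sigmoid activation

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

V = rng.uniform(-0.5, 0.5, (2, 4))    # input -> hidden weights (Step 1)
W = rng.uniform(-0.5, 0.5, (4, 1))    # hidden -> output weights
bh = np.zeros(4); bo = np.zeros(1)
alpha = 0.5                           # learning rate

for epoch in range(5000):             # Steps 2-3: repeat over training pairs
    Z = sigmoid(X @ V + bh)           # Phase 1: hidden activations (Step 5)
    Y = sigmoid(Z @ W + bo)           # Phase 1: output activations (Step 6)
    dk = (T - Y) * Y * (1 - Y)        # Phase 2: output error term (Step 7)
    dj = (dk @ W.T) * Z * (1 - Z)     # Phase 2: hidden error term (Step 8)
    W += alpha * Z.T @ dk; bo += alpha * dk.sum(axis=0)   # Phase 3 (Step 9)
    V += alpha * X.T @ dj; bh += alpha * dj.sum(axis=0)   # Phase 3 (Step 10)

print(np.round(Y.ravel(), 2))         # trained outputs for the four XOR patterns
```

Note that the derivative terms Y(1 − Y) and Z(1 − Z) are exactly f′ for the binary sigmoid, matching the δ formulas in Steps 7 and 8.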

4. Write a Short Note On Adaptive Resonance theory
Adaptive Resonance Theory
This network was developed by Stephen Grossberg and Gail Carpenter in 1987. It is based on competition and uses an unsupervised learning model. Adaptive Resonance Theory (ART) networks, as the name suggests, are always open to new learning (adaptive) without losing old patterns (resonance). Basically, an ART network is a vector classifier which accepts an input vector and classifies it into one of the categories depending upon which stored pattern it resembles most.

Operating Principle
The main operation of ART classification can be divided into the following phases −

 Recognition phase − The input vector is compared with the classification
presented at every node in the output layer. The output of the neuron
becomes “1” if it best matches with the classification applied, otherwise it
becomes “0”.
 Comparison phase − In this phase, the input vector is compared with the comparison-layer vector. The condition for reset is that the degree of similarity is less than the vigilance parameter.
 Search phase − In this phase, the network checks for a reset as well as the match obtained in the phases above. If there is no reset and the match is good, the classification is over. Otherwise, the process is repeated and another stored pattern must be tried to find the correct match.
ART1
It is a type of ART designed to cluster binary vectors. We can understand it through its architecture.
Architecture of ART1
It consists of the following two units −
Computational Unit − It is made up of the following −
 Input unit (F1 layer) − It further has the following two portions −

o F1(a) layer (input portion) − In ART1, there is no processing in this portion; it simply holds the input vectors. It is connected to the F1(b) layer (interface portion).
o F1(b) layer (interface portion) − This portion combines the signal from the input portion with that of the F2 layer. The F1(b) layer is connected to the F2 layer through bottom-up weights b_ij, and the F2 layer is connected to the F1(b) layer through top-down weights t_ji.
 Cluster Unit (F2 layer) − This is a competitive layer. The unit having the largest net input is selected to learn the input pattern. The activations of all the other cluster units are set to 0.
 Reset Mechanism − The work of this mechanism is based upon the similarity between the top-down weight and the input vector. If the degree of this similarity is less than the vigilance parameter, the cluster is not allowed to learn the pattern and a reset occurs.
Supplement Unit − The issue with the reset mechanism is that the F2 layer must be inhibited under certain conditions and must also be available when some learning happens. That is why two supplemental units, G1 and G2, are added along with the reset unit R. They are called gain control units. These units receive and send signals to the other units present in the network. ‘+’ indicates an excitatory signal, while ‘−’ indicates an inhibitory signal.
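The reset test at the heart of the comparison phase can be sketched on its own. This is not a full ART1 network, only an illustration of the vigilance check; the function name, the match measure |x AND t| / |x|, and the handling of an all-zero input are assumptions chosen for the example.

```python
# Illustrative sketch of the ART reset test only (not a full ART1 network):
# the match degree between a binary input x and a cluster's top-down
# weight vector t is compared with the vigilance parameter rho.

def passes_vigilance(x, t, rho):
    """Return True when the degree of similarity reaches the vigilance rho."""
    norm_x = sum(x)
    if norm_x == 0:
        return True                       # empty input: trivial match (assumption)
    match = sum(xi & ti for xi, ti in zip(x, t)) / norm_x
    return match >= rho

x = [1, 1, 0, 1]                          # binary input vector
t = [1, 0, 0, 1]                          # stored top-down pattern
print(passes_vigilance(x, t, rho=0.5))    # match = 2/3 >= 0.5 -> True
print(passes_vigilance(x, t, rho=0.8))    # match = 2/3 < 0.8 -> reset occurs
```

A larger vigilance value therefore forces finer categories: more inputs fail the test and are sent to search for (or create) another cluster.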

5. Explain Bidirectional Associative Memory


Associative Memory Network
These kinds of neural networks work on the basis of pattern association, which means they can store different patterns and, at the time of giving an output, produce one of the stored patterns by matching it with the given input pattern. These types of memories are also called Content-Addressable Memory (CAM). Associative memory makes a parallel search through the stored patterns as data files.
The following are the two types of associative memories we can observe −
 Auto Associative Memory
 Hetero Associative memory

Auto Associative Memory
This is a single-layer neural network in which the input training vector and the output target vectors are the same. The weights are determined so that the network stores a set of patterns.
Architecture
As shown in the following figure, the architecture of an auto-associative memory network has n input training vectors and n corresponding output target vectors.
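A minimal auto-associative memory can be sketched with the outer-product (Hebb) rule. This is an illustrative sketch, not the notes' own construction: a single bipolar pattern is stored for clarity, the diagonal is zeroed to forbid self-connections, and all names are assumptions.

```python
import numpy as np

# A minimal auto-associative memory sketch: the outer-product (Hebb) rule
# stores a bipolar pattern, and recall maps a noisy input back onto it.

stored = np.array([1, -1, 1, -1])

W = np.outer(stored, stored)      # Hebbian storage: W = x x^T
np.fill_diagonal(W, 0)            # no self-connections

def recall(x):
    """One update step: sign of the net input at each unit."""
    return np.where(W @ x >= 0, 1, -1)

noisy = np.array([1, -1, 1, 1])   # stored pattern with the last bit flipped
print(recall(noisy))              # -> [ 1 -1  1 -1], the stored pattern
```

Because the input training vector and the output target vector are the same, the network acts as a content-addressable memory: a corrupted version of the pattern retrieves the clean one.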
6. Explain the Difference between Supervised and Unsupervised Learning.
Supervised Learning
As the name suggests, this type of learning is done under the supervision of a
teacher. This learning process is dependent.
During the training of ANN under supervised learning, the input vector is
presented to the network, which will give an output vector. This output vector is
compared with the desired output vector. An error signal is generated, if there is a
difference between the actual output and the desired output vector. On the basis of
this error signal, the weights are adjusted until the actual output is matched with
the desired output.

Unsupervised Learning
As the name suggests, this type of learning is done without the supervision of a
teacher. This learning process is independent.
During the training of ANN under unsupervised learning, the input vectors of
similar type are combined to form clusters. When a new input pattern is applied,
then the neural network gives an output response indicating the class to which the
input pattern belongs.
There is no feedback from the environment as to what should be the desired output
and if it is correct or incorrect. Hence, in this type of learning, the network itself
must discover the patterns and features from the input data, and the relation for the
input data over the output.

7. What is a self-organizing map? Discuss the algorithm and features of Kohonen's map.

Kohonen Self-Organizing Feature Maps

Suppose we have patterns of arbitrary dimensions, but we need them in one or two dimensions. The process of feature mapping is then very useful for converting the wide pattern space into a typical feature space. Now the question arises: why do we require a self-organizing feature map? The reason is that, along with the capability to convert arbitrary dimensions into 1-D or 2-D, it must also have the ability to preserve the neighbor topology.

Neighbor Topologies in Kohonen SOM


There can be various topologies; however, the following two topologies are used the most −

Rectangular Grid Topology
This topology has 24 nodes in the distance-2 grid, 16 nodes in the distance-1 grid,
and 8 nodes in the distance-0 grid, which means the difference between each
rectangular grid is 8 nodes. The winning unit is indicated by #.

Hexagonal Grid Topology


This topology has 18 nodes in the distance-2 grid, 12 nodes in the distance-1 grid, and 6 nodes in the distance-0 grid, which means the difference between each hexagonal grid is 6 nodes. The winning unit is indicated by #.

Architecture
The architecture of the KSOM is similar to that of the competitive network. With the help of the neighborhood schemes discussed earlier, training can take place over an extended region of the network.
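The training described above (find the winning unit, then pull it and its neighbors toward the input) can be sketched for a 1-D line of units mapping 2-D data. The data, map size, neighborhood radius, and decay schedules below are all assumptions made for the example.

```python
import numpy as np

# A hedged sketch of Kohonen SOM training: 2-D inputs, a 1-D line of 10
# units. Neighborhood radius and learning-rate decay are assumptions.

rng = np.random.default_rng(1)
data = rng.random((200, 2))               # input patterns in [0, 1]^2
weights = rng.random((10, 2))             # 10 map units, 2-D weight vectors

for t in range(500):
    x = data[rng.integers(len(data))]     # pick a random input pattern
    lr = 0.5 * (1 - t / 500)              # decaying learning rate
    radius = max(1, int(3 * (1 - t / 500)))   # shrinking neighborhood
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))  # best match
    for j in range(len(weights)):
        if abs(j - winner) <= radius:     # units inside the neighborhood
            weights[j] += lr * (x - weights[j])   # move toward the input

print(np.round(weights, 2))               # final weight vectors of the map
```

Because neighboring units are updated together, units that are adjacent on the line end up with similar weight vectors, which is exactly the topology-preserving property the text emphasizes.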

8. Draw the Architecture of a Hopfield Network

Artificial Neural Network - Hopfield Networks


Hopfield neural network was invented by Dr. John J. Hopfield in 1982. It consists
of a single layer which contains one or more fully connected recurrent neurons.
The Hopfield network is commonly used for auto-association and optimization
tasks.
Discrete Hopfield Network
A discrete Hopfield network operates in a discrete-time fashion; in other words, its input and output patterns are discrete vectors, which can be either binary (0, 1) or bipolar (+1, −1) in nature. The network has symmetrical weights with no self-connections, i.e., wij = wji and wii = 0.
Architecture
Following are some important points to keep in mind about discrete Hopfield
network −
 This model consists of neurons with one inverting and one non-inverting
output.

 The output of each neuron should be an input to the other neurons, but not an input to itself.
 Weight/connection strength is represented by wij.
 Connections can be excitatory as well as inhibitory. A connection is excitatory if the output of the neuron is the same as the input, otherwise inhibitory.
 Weights should be symmetrical, i.e., wij = wji.

The outputs from Y1 going to Y2, Yi and Yn have the weights w12, w1i and w1n respectively. Similarly, the other arcs have their weights.
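The properties listed above (bipolar states, symmetrical weights wij = wji, zero diagonal wii = 0) can be demonstrated with a small recall sketch. The stored pattern, update schedule, and step count are illustrative assumptions.

```python
import numpy as np

# Illustrative discrete Hopfield sketch with bipolar states: symmetrical
# weights (w_ij = w_ji), zero diagonal (w_ii = 0), asynchronous updates.

stored = np.array([1, -1, 1, -1, 1])

W = np.outer(stored, stored).astype(float)
np.fill_diagonal(W, 0)                    # no self-connections

def recall(x, steps=10):
    y = x.copy()
    for _ in range(steps):
        for i in range(len(y)):           # asynchronous unit-by-unit update
            y[i] = 1 if W[i] @ y >= 0 else -1
    return y

noisy = np.array([1, -1, -1, -1, 1])      # stored pattern with one bit flipped
print(recall(noisy))                      # -> [ 1 -1  1 -1  1]
```

With symmetrical weights and asynchronous updates, each unit flip can only lower the network's energy, so the state settles into a stored pattern; this is what makes the Hopfield network useful for the auto-association task mentioned above.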
