
ARTIFICIAL NEURAL NETWORK
By Dr. Anju Bhandari Gandhi, Professor, CSE

UNIT-1
INTRODUCTION TO ARTIFICIAL NEURAL NETWORKS (ANN)
Outline

• Definition; why and how neural networks are being used in solving problems
• Human biological neuron
• Artificial neuron
• Applications of ANN
• Comparison of ANN vs conventional AI methods
The idea of ANNs..?

NNs learn the relationship between cause and effect, or organize large volumes of data into orderly and informative patterns.

[Illustration: shown an unknown object and asked "What is that?", the network chooses among the classes frog, lion and bird and answers "It's a frog".]
Neural networks to the rescue…

• Neural network: an information processing paradigm inspired by biological nervous systems, such as our brain.
• Structure: a large number of highly interconnected processing elements (neurons) working together.
• Like people, they learn from experience (by example).
Definition of ANN

"A data processing system consisting of a large number of simple, highly interconnected processing elements (artificial neurons) in an architecture inspired by the structure of the cerebral cortex of the brain."
The typical workflow for building a neural network (a small code sketch follows this list):

1. Collect data.
2. Create the network (Create Neural Network Object).
3. Configure the network (Configure Shallow Neural Network Inputs and Outputs).
4. Initialize the weights and biases.
5. Train the network (Neural Network Training Concepts).
6. Validate the network.
7. Use the network.
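As a hedged illustration of these steps (not part of the original slides), the Python/NumPy sketch below builds and trains a single-neuron network on a made-up dataset; the dataset, learning rate and number of epochs are all assumptions chosen for brevity.

```python
import numpy as np

# 1. Collect data (a toy, assumed dataset: the logical AND function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 0.0, 0.0, 1.0])

# 2-4. Create and configure a one-neuron network; initialize weights and bias.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 5. Train with batch gradient descent on the cross-entropy loss.
lr = 0.5
for epoch in range(2000):
    y = sigmoid(X @ w + b)        # forward pass
    grad = y - t                  # gradient of the loss w.r.t. the net input
    w -= lr * X.T @ grad / len(X)
    b -= lr * grad.mean()

# 6. Validate: the trained outputs should fall below 0.5 for the first three
#    patterns and above 0.5 for the last one.
print(np.round(sigmoid(X @ w + b), 2))

# 7. Use the trained network on a new input.
print(sigmoid(np.array([1.0, 1.0]) @ w + b) > 0.5)
```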


Inspiration from Neurobiology
Human Biological Neuron

Biological Neural Networks

A biological neuron has three main components: dendrites, the soma (or cell body), and the axon.

• Dendrites receive signals from other neurons.
• The soma sums the incoming signals. When sufficient input is received, the cell fires; that is, it transmits a signal over its axon to other cells.
Artificial Neurons

An ANN is an information processing system that has certain performance characteristics in common with biological nets.

Several key features of the processing elements of an ANN are suggested by the properties of biological neurons:

1. The processing element receives many signals.
2. Signals may be modified by a weight at the receiving synapse.
3. The processing element sums the weighted inputs.
4. Under appropriate circumstances (sufficient input), the neuron transmits a single output.
5. The output from a particular neuron may go to many other neurons.
Artificial Neurons

• From experience: examples / training data.
• The strength of the connection between two neurons is stored as a weight value for that specific connection.
• Learning the solution to a problem = changing the connection weights.

[Figures: a physical neuron; an artificial neuron]
Artificial Neurons

ANNs have been developed as generalizations of mathematical models of neural biology, based on the assumptions that:

1. Information processing occurs at many simple elements called neurons.
2. Signals are passed between neurons over connection links.
3. Each connection link has an associated weight which, in a typical neural net, multiplies the signal transmitted.
4. Each neuron applies an activation function to its net input to determine its output signal.
Artificial Neuron

[Figures: the four basic components of a human biological neuron; the components of a basic artificial neuron]
Model Of A Neuron

[Diagram: inputs X1, X2, X3 with connection weights Wa, Wb, Wc feed a summing junction Σ and an activation function f(), producing the output Y]

• Input units (dendrites)
• Connection weights (synapses)
• Summing function (soma)
• Output computation (axon)
A neural net consists of a large number of simple processing elements called neurons, units, cells or nodes.

Each neuron is connected to other neurons by means of directed communication links, each with an associated weight.

The weights represent the information being used by the net to solve a problem.
• Each neuron has an internal state, called its activation or activity level, which is a function of the inputs it has received. Typically, a neuron sends its activation as a signal to several other neurons.

• It is important to note that a neuron can send only one signal at a time, although that signal is broadcast to several other neurons.

• Neural networks are configured for a specific application, such as pattern recognition or data classification, through a learning process.

• In a biological system, learning involves adjustments to the synaptic connections between neurons; the same is true for artificial neural networks (ANNs).
Artificial Neural Network

[Diagram: inputs x1 and x2 arrive over the dendrites with synaptic weights w1 and w2, are summed in the nucleus, and the output y is sent along the axon]

Net input: y_in = x1·w1 + x2·w2

Activation function: f(y_in) = 1 if y_in >= θ, and f(y_in) = 0 otherwise.

• A neuron receives inputs, determines the strength (weight) of each input, calculates the total weighted input, and compares that total with a threshold value θ.
• The output value is in the range 0 to 1.
• If the total weighted input is greater than or equal to the threshold, the neuron produces an output (fires); if it is less than the threshold, no output is produced. (A small code sketch of this unit follows below.)
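As an illustrative sketch (not taken from the slides), the threshold unit just described can be written directly in Python; the example weights and threshold used here are assumptions.

```python
def threshold_neuron(x1, x2, w1, w2, theta):
    """Binary threshold neuron: fires (returns 1) only when the
    total weighted input reaches the threshold theta."""
    y_in = x1 * w1 + x2 * w2          # total weighted input
    return 1 if y_in >= theta else 0

# Assumed example values: weights 0.6 and 0.4, threshold 0.5.
print(threshold_neuron(1, 0, w1=0.6, w2=0.4, theta=0.5))  # 1, since 0.6 >= 0.5
print(threshold_neuron(0, 1, w1=0.6, w2=0.4, theta=0.5))  # 0, since 0.4 < 0.5
```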
History

• 1943: McCulloch-Pitts neurons
• 1949: Hebb's law (learning rule)
• 1958: Perceptron (Rosenblatt)
• 1960: Adaline, better learning rule (Widrow, Hoff)
• 1969: Limitations (Minsky, Papert)
• 1972: Kohonen nets, associative memory
• 1977: Brain State in a Box (Anderson)
• 1982: Hopfield net, constraint satisfaction
• 1985: ART (Carpenter, Grossberg)
• 1986: Backpropagation (Rumelhart, Hinton, McClelland)
• 1988: Neocognitron, character recognition (Fukushima)
Characterization

Architecture: a pattern of connections between neurons
• Single Layer Feedforward
• Multilayer Feedforward
• Recurrent

Strategy / Learning Algorithm: a method of determining the connection weights
• Supervised
• Unsupervised
• Reinforcement

Activation Function: a function to compute the output signal from the input signal
Single Layer Feedforward NN

[Diagram: input-layer units x1 and x2 connect directly to output-layer units ym and yn through weights w11, w12, w21, w22]

Examples: ADALINE, AM, Hopfield, LVQ, Perceptron, SOFM
Multilayer Neural Network

[Diagram: input-layer units x1, …, xm connect through weights V11, …, Vmn to hidden-layer units z1, …, zn, which connect through weights w11, w12, … to output-layer units y1, y2, …]

Examples: CCN, GRNN, MADALINE, MLFF with BP, Neocognitron, RBF, R2C3E
Recurrent NN

[Diagram: inputs feed hidden nodes whose connections include feedback loops, producing the outputs]

Examples: ART, BAM, BSB, Boltzmann Machine, Cauchy Machine, Hopfield, RNN
Strategy / Learning Algorithm

Supervised Learning

• Learning is performed by presenting each pattern together with its target.
• During learning, the produced output is compared with the desired output.
  – The difference between the two outputs is used to modify the weights according to the learning algorithm (a small sketch follows below).
• Applications: recognizing hand-written digits, pattern recognition, etc.
• Neural network models: perceptron, feed-forward, radial basis function, support vector machine.
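One minimal sketch of this idea (not from the slides) is the perceptron learning rule, in which the error between the target and the produced output directly adjusts the weights; the dataset, learning rate and number of epochs below are assumptions.

```python
import numpy as np

# Assumed toy supervised dataset: patterns with targets (logical OR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 1, 1, 1])

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate

for epoch in range(20):
    for xi, ti in zip(X, t):
        y = 1 if xi @ w + b >= 0 else 0   # produced output of a threshold unit
        error = ti - y                    # difference from the desired output
        w += lr * error * xi              # weight update driven by the error
        b += lr * error

print(w, b)
print([1 if xi @ w + b >= 0 else 0 for xi in X])   # matches the targets after training
```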
Unsupervised Learning

• Targets are not provided.
• Appropriate for clustering tasks.
  – Examples: finding similar groups of documents on the web, content-addressable memory, clustering (a minimal winner-take-all sketch follows below).
• Neural network models: Kohonen self-organizing maps, Hopfield networks.
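As a hedged illustration (not from the slides), simple competitive (winner-take-all) learning of the kind used in Kohonen-style networks can cluster unlabeled data; the toy dataset, number of prototypes and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed unlabeled data: two loose groups of 2-D points, no targets given.
data = np.vstack([rng.normal([0, 0], 0.1, (20, 2)),
                  rng.normal([1, 1], 0.1, (20, 2))])

# Two prototype vectors (cluster "neurons"), randomly initialized.
prototypes = rng.normal(0.5, 0.2, (2, 2))
lr = 0.1

for epoch in range(50):
    for x in data:
        winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))  # closest unit wins
        prototypes[winner] += lr * (x - prototypes[winner])         # move it toward x

print(np.round(prototypes, 2))   # the prototypes typically end up near the two group centres
```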
Reinforcement Learning

• Feedback (a reward or penalty) is provided, but the desired output itself is absent.
• The net is only given guidance indicating whether the produced output is correct or not.
• Weights are modified in the units that have errors.
Activation Functions (implementations are sketched below)

• Identity: f(x) = x
• Binary step: f(x) = 1 if x >= θ; f(x) = 0 otherwise
• Binary sigmoid: f(x) = 1 / (1 + e^(-sx))
• Bipolar sigmoid: f(x) = -1 + 2 / (1 + e^(-sx))
• Hyperbolic tangent: f(x) = (e^x - e^(-x)) / (e^x + e^(-x))
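These functions translate directly into code. The NumPy sketch below is added here for illustration; the steepness parameter s and threshold theta are exposed as assumed arguments.

```python
import numpy as np

def identity(x):
    return x

def binary_step(x, theta=0.0):
    # 1 where x >= theta, 0 otherwise
    return np.where(x >= theta, 1.0, 0.0)

def binary_sigmoid(x, s=1.0):
    # range (0, 1)
    return 1.0 / (1.0 + np.exp(-s * x))

def bipolar_sigmoid(x, s=1.0):
    # range (-1, 1); equals 2 * binary_sigmoid(x, s) - 1
    return -1.0 + 2.0 / (1.0 + np.exp(-s * x))

def hyperbolic_tangent(x):
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0])
print(binary_step(x), binary_sigmoid(x), hyperbolic_tangent(x))
```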
Exercise

• 2-input AND:
  x1 x2 | y
   1  1 | 1
   1  0 | 0
   0  1 | 0
   0  0 | 0

• 2-input OR:
  x1 x2 | y
   1  1 | 1
   1  0 | 1
   0  1 | 1
   0  0 | 0
[Diagram: inputs x1 and x2 with weights w1 = 0.5 and w2 = 0.3 feed a summing unit with a binary step activation, producing the output y]

Net input: y_in = x1·w1 + x2·w2

Activation function: binary step with θ = 0.5; f(y_in) = 1 if y_in >= θ, and f(y_in) = 0 otherwise. (A worked evaluation follows below.)
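As an illustrative check (not part of the slides), the sketch below evaluates this unit on all four input patterns so the result can be compared against the AND and OR truth tables above; the weights and threshold are the ones given in the exercise.

```python
def step_unit(x1, x2, w1=0.5, w2=0.3, theta=0.5):
    y_in = x1 * w1 + x2 * w2          # net input
    return 1 if y_in >= theta else 0  # binary step activation

for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(x1, x2, "->", step_unit(x1, x2))
# (1,1) -> 1, (1,0) -> 1, (0,1) -> 0, (0,0) -> 0:
# with these particular weights the unit matches neither AND nor OR exactly;
# adjusting the weights or threshold is the point of the exercise.
```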
Where can neural network systems help…

• When we can't formulate an algorithmic solution.
• When we can get lots of examples of the behaviour we require: "learning from experience".
• When we need to pick out the structure from existing data.
Who is interested?...

• Electrical Engineers – signal processing, control theory
• Computer Engineers – robotics
• Computer Scientists – artificial intelligence, pattern recognition
• Mathematicians – modelling tool when explicit relationships are unknown
Problem Domains

• Storing and recalling patterns
• Classifying patterns
• Mapping inputs onto outputs
• Grouping similar patterns
• Finding solutions to constrained optimization problems
Classification

[Figure: a neural net receives a stream of input patterns (e.g. 00, 01, 10, 11) at its input layer and emits sorted patterns at its output layer; the slide labels mention coronary disease and a STOP sign as example classification targets]

Clustering

[Figure: similar input patterns (e.g. 00, 01, 10, 11) are grouped together into clusters]
ANN Applications

• Medical applications
• Information searching & retrieval
• Chemistry
• Education
• Business & management
Applications of ANNs

• Signal processing
• Pattern recognition, e.g. handwritten characters or face identification
• Diagnosis, or mapping symptoms to a medical case
• Speech recognition
• Human emotion detection
• Educational loan forecasting
Abdominal Pain Prediction

[Figure: a network whose inputs are Male, Age, Temp, WBC, Pain Intensity and Pain Duration (example pattern: 1, 20, 37, 10, 1, 1), connected through adjustable weights to diagnostic outputs such as Appendicitis, Diverticulitis, Perforated Duodenal Ulcer, Non-specific Pain, Cholecystitis, Small Bowel Obstruction and Pancreatitis]
Voice Recognition

Educational Loan Forecasting System
Advantages Of NN

• NON-LINEARITY: it can model non-linear systems.
• INPUT-OUTPUT MAPPING: it can derive a relationship between a set of input and output responses.
• ADAPTIVITY: the ability to learn allows the network to adapt to changes in the surrounding environment.
• EVIDENTIAL RESPONSE: it can provide a confidence level for a given solution.
Advantages Of NN

• CONTEXTUAL INFORMATION: knowledge is represented by the structure of the network. Every neuron in the network is potentially affected by the global activity of all other neurons, so contextual information is dealt with naturally.
• FAULT TOLERANCE: the distributed nature of the NN gives it fault-tolerant capabilities.
• NEUROBIOLOGY ANALOGY: it models the architecture of the brain.
Comparison of ANN with conventional AI methods

Linear Separability Problem

• If two classes of patterns can be separated by a decision boundary represented by the linear equation

  b + Σ_{i=1..n} xi·wi = 0

  then they are said to be linearly separable, and the simple (single-layer) network can correctly classify any such patterns.
• The decision boundary (i.e., W, b or θ) of linearly separable classes can be determined either by some learning procedure or by solving systems of linear equations based on representative patterns of each class.
• If such a decision boundary does not exist, then the two classes are said to be linearly inseparable.
• Linearly inseparable problems cannot be solved by the simple network; a more sophisticated architecture is needed.
• Examples of linearly separable classes

  – Logical AND function (bipolar patterns):

    x1  x2 | y
    -1  -1 | -1
    -1   1 | -1
     1  -1 | -1
     1   1 |  1

    Decision boundary: w1 = 1, w2 = 1, b = -1, θ = 0, i.e. -1 + x1 + x2 = 0
    (x: class I, y = 1; o: class II, y = -1)

  – Logical OR function (bipolar patterns):

    x1  x2 | y
    -1  -1 | -1
    -1   1 |  1
     1  -1 |  1
     1   1 |  1

    Decision boundary: w1 = 1, w2 = 1, b = 1, θ = 0, i.e. 1 + x1 + x2 = 0
    (x: class I, y = 1; o: class II, y = -1)
• Examples of linearly inseparable classes

  – Logical XOR (exclusive OR) function (bipolar patterns):

    x1  x2 | y
    -1  -1 | -1
    -1   1 |  1
     1  -1 |  1
     1   1 | -1

    (x: class I, y = 1; o: class II, y = -1)

    No line can separate these two classes, as can be seen from the fact that the following system of linear inequalities has no solution:

    b - w1 - w2 < 0    (1)
    b - w1 + w2 >= 0   (2)
    b + w1 - w2 >= 0   (3)
    b + w1 + w2 < 0    (4)

    Adding (1) and (4) gives 2b < 0, so b < 0, while adding (2) and (3) gives 2b >= 0, so b >= 0, which is a contradiction. (A small verification sketch follows below.)
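To tie the separable and inseparable examples together, here is a small illustrative check (not part of the slides) written in Python; the grid search at the end merely demonstrates, and does not prove, that XOR is linearly inseparable.

```python
import numpy as np
from itertools import product

def classify(x1, x2, w1, w2, b):
    """Bipolar threshold unit with theta = 0: 1 if b + x1*w1 + x2*w2 >= 0, else -1."""
    return 1 if b + x1 * w1 + x2 * w2 >= 0 else -1

patterns = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
and_targets = [-1, -1, -1, 1]
or_targets  = [-1, 1, 1, 1]
xor_targets = [-1, 1, 1, -1]

# The weights given on the slides classify AND and OR correctly.
print([classify(*p, w1=1, w2=1, b=-1) for p in patterns] == and_targets)  # True
print([classify(*p, w1=1, w2=1, b=1)  for p in patterns] == or_targets)   # True

# Illustrative only: a brute-force search over a grid of weights finds no
# single threshold unit that reproduces XOR.
grid = np.linspace(-2, 2, 21)
found = any(
    [classify(*p, w1=w1, w2=w2, b=b) for p in patterns] == xor_targets
    for w1, w2, b in product(grid, grid, grid)
)
print(found)  # False: XOR is linearly inseparable
```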
