Artificial Intelligence: Artificial Neural Networks - Introduction
There is a close analogy between the structure of a biological neuron and that of an artificial neuron.
A biological neuron has three components:

The Dendrites
They receive (from other neurons) an electrical impulse transmitted across a synaptic gap by means of a chemical transmitter, which modifies the incoming signals in a manner similar to the action of the weights in an ANN.
The Cell Body (Soma)
It sums the incoming signals and, when sufficient input is received, it fires (transmits a signal) over its axon to other cells. A cell either fires or does not at any instant of time, so that transmitted signals can be treated as binary.

The Axon
It carries the fired signal from the cell body to the dendrites of other neurons.
Figure: Model of an artificial neuron — input signals $x_1, \ldots, x_n$ are scaled by synaptic weights $w_{1k}, \ldots, w_{nk}$ and combined with a bias $b_k$ at a summing junction; an activation function $f(\cdot)$ then produces the output $y_k$.
Sets of inputs are applied, each representing the output of another neuron. Each input is multiplied by a corresponding weight, and all of the weighted inputs are then summed to determine the activation level of the neuron. If this summation exceeds a certain threshold, the neuron responds by issuing a new pulse, which propagates along its output connection. Otherwise, the neuron remains inactive.
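As a sketch, this weighted-sum-and-threshold behavior can be written in a few lines of Python; the input, weight, and threshold values here are illustrative assumptions, not taken from the text:

    import numpy as np

    def threshold_neuron(x, w, threshold=0.0):
        """Fire (1) if the weighted sum of inputs exceeds the
        threshold, otherwise stay inactive (0)."""
        activation = np.dot(x, w)          # summing junction
        return 1 if activation > threshold else 0

    # Example: three inputs from other neurons, illustrative weights
    x = np.array([1.0, 0.0, 1.0])
    w = np.array([0.5, -0.2, 0.3])
    print(threshold_neuron(x, w, threshold=0.5))   # -> 1  (0.8 > 0.5)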
Fig (1): Activation signal functions — the binary step function, the hard limiter function, the log-sigmoid function, and the tangent (tanh) function.
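The four activation signal functions of Fig (1) can be sketched directly in Python; the function names here are illustrative, not standard API names:

    import numpy as np

    def binary_step(x):
        """Binary step: 0 for x < 0, +1 otherwise."""
        return np.where(x < 0, 0.0, 1.0)

    def hard_limiter(x):
        """Hard limiter (signum): -1 for x < 0, +1 otherwise."""
        return np.where(x < 0, -1.0, 1.0)

    def log_sigmoid(x):
        """Log-sigmoid: smooth squashing of x into (0, 1)."""
        return 1.0 / (1.0 + np.exp(-x))

    def tangent(x):
        """Hyperbolic tangent: smooth squashing of x into (-1, +1)."""
        return np.tanh(x)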
Some specific properties of neural networks are responsible for the interest that they arouse today.
Fig (2): An MLFF (multilayer feed-forward) network, with weighted connections $w_{ij}$ between the units of successive layers.
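A forward pass through a small MLFF network like the one in Fig (2) might look as follows; the 3-5-2 layer sizes and the random weights are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Illustrative 3-5-2 network: 3 inputs, 5 hidden units, 2 outputs
    W_ih = rng.normal(size=(3, 5))   # input-to-hidden weights w_ij
    W_ho = rng.normal(size=(5, 2))   # hidden-to-output weights w_jk

    def forward(x):
        h = sigmoid(x @ W_ih)        # hidden signals h_j = f(sum_i x_i w_ij)
        y = sigmoid(h @ W_ho)        # outputs y_k = f(sum_j h_j w_jk)
        return h, y

    h, y = forward(np.array([1.0, 0.5, -0.3]))
    print(y)                         # two output signals in (0, 1)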
NEURAL NETWORK LEARNING
An artificial neural network's learning ability is its most intriguing property. Like the biological system, networks modify themselves as a result of experience to produce a more desirable behavior pattern. Adjusting the weights is commonly called “training”, and the network is said to “learn”.
Training is accomplished by sequentially applying input vectors while adjusting the network weights according to a predetermined procedure. During training, the network weights gradually converge to values such that each input vector produces the desired output vector.
Learning can be classified as:
Supervised Learning
A teacher is assumed to be present during the learning process, and each example pattern used to train the network includes an input pattern together with a target or desired output pattern, the correct answer.
During the learning process, a comparison can be made between the output computed by the network and the correct output to determine the error. The error can be used to change network parameters, which results in an improvement in the performance of the network.
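As a minimal sketch, this comparison between desired and computed outputs can be expressed as a sum-of-squared-errors measure; the vectors shown are illustrative:

    import numpy as np

    def sse(desired, computed):
        """Sum of squared errors between target and network output."""
        return np.sum((np.asarray(desired) - np.asarray(computed)) ** 2)

    print(sse([1.0, 0.0], [0.8, 0.3]))   # -> 0.13  (0.2^2 + 0.3^2)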
Unsupervised Learning
The network has no feedback on the desired or correct output; there is no teacher to present target patterns. It is also referred to as self-organization: the network is expected to organize itself into some useful configuration, modifying the weights to produce output vectors that are consistent.
Methods of Learning:
Hebbian learning rule.
Perceptron learning rule.
Widrow-Hoff (Adaline) rule.
Delta learning rule.
Winner-take-all rule.
Error back propagation (EBP).
The Error Back Propagation
The error back propagation (EBP) method is the most effective and most widely used learning method for training multilayer perceptrons (MLP); it requires differentiable activation functions and supervised training. One of the most common activation functions used in EBP is the sigmoid function, defined as:
$$f(x) = \frac{1}{1 + e^{-x}}$$

and its derivative is

$$f'(x) = f(x)\,\big(1 - f(x)\big)$$
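A quick numerical check of this identity, comparing the analytic derivative with a central-difference estimate:

    import numpy as np

    def f(x):
        return 1.0 / (1.0 + np.exp(-x))

    x = 0.7
    analytic = f(x) * (1.0 - f(x))
    numeric = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6   # central difference
    print(analytic, numeric)   # both approximately 0.2217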
The EBP weight updates, with learning rate $\eta$ and momentum $\alpha$, proceed as follows. Change the hidden-to-output weights by

$$\Delta w_{jk}(n) = \eta\,\delta_k\,h_j + \alpha\,\Delta w_{jk}(n-1)$$

Sum the delta inputs for each hidden unit and calculate its error term:

$$\delta_j = \Big(\sum_k \delta_k\,w_{jk}\Big)\,f'\Big(\sum_i x_i\,w_{ij}\Big)$$

Update the input-to-hidden weights:

$$w_{ij}(n) = w_{ij}(n-1) + \Delta w_{ij}(n)$$

Compute the sum of squared errors over all patterns $p$ and output units $k$:

$$SSE = \sum_p \sum_k \big(d_k^p - y_k^p\big)^2$$

If $SSE < 10^{-4}$, stop.
Figure: An MLP trained with EBP. Inputs $x_1, x_2, x_3, \ldots, x_i$ feed hidden units $h_1, h_2, \ldots, h_j$, which in turn feed the outputs $y_k$. The annotated equations are:

Hidden activations:
$$h_j = f\Big(\sum_i x_i\,w_{ij}\Big)$$

Outputs:
$$y_k = f\Big(\sum_j h_j\,w_{jk}\Big)$$

Output error terms:
$$\delta_k = (d_k - y_k)\,f'\Big(\sum_j h_j\,w_{jk}\Big) = (d_k - y_k)\,y_k\,(1 - y_k)$$

Hidden error terms:
$$\delta_j = f'\Big(\sum_i x_i\,w_{ij}\Big)\sum_k \delta_k\,w_{jk}$$

Weight updates (input-to-hidden and hidden-to-output):
$$\Delta w_{ij}(n) = \eta\,\delta_j\,x_i + \alpha\,\Delta w_{ij}(n-1), \qquad w_{ij}(n) = w_{ij}(n-1) + \Delta w_{ij}(n)$$
$$\Delta w_{jk}(n) = \eta\,\delta_k\,h_j + \alpha\,\Delta w_{jk}(n-1), \qquad w_{jk}(n) = w_{jk}(n-1) + \Delta w_{jk}(n)$$
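Putting these equations together, here is a minimal EBP training sketch in Python; the XOR task, the 2-3-1 layer sizes, and the values of $\eta$ and $\alpha$ are illustrative assumptions, not taken from the text:

    import numpy as np

    def f(x):                    # log-sigmoid activation
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs x_i
    D = np.array([[0], [1], [1], [0]], dtype=float)              # desired d_k

    W_ij = rng.normal(scale=0.5, size=(2, 3))  # input-to-hidden weights
    W_jk = rng.normal(scale=0.5, size=(3, 1))  # hidden-to-output weights
    dW_ij = np.zeros_like(W_ij)                # previous updates (momentum)
    dW_jk = np.zeros_like(W_jk)
    eta, alpha = 0.5, 0.9                      # learning rate, momentum

    for epoch in range(20000):
        h = f(X @ W_ij)                        # h_j = f(sum_i x_i w_ij)
        y = f(h @ W_jk)                        # y_k = f(sum_j h_j w_jk)
        delta_k = (D - y) * y * (1 - y)        # output error terms
        delta_j = (delta_k @ W_jk.T) * h * (1 - h)    # hidden error terms
        dW_jk = eta * h.T @ delta_k + alpha * dW_jk   # Delta w_jk(n)
        dW_ij = eta * X.T @ delta_j + alpha * dW_ij   # Delta w_ij(n)
        W_jk += dW_jk
        W_ij += dW_ij
        if np.sum((D - y) ** 2) < 1e-4:        # stop when SSE < 10^-4
            break

    print(epoch, y.round(3))

Each epoch performs one forward pass, computes the error terms $\delta_k$ and $\delta_j$, and applies the momentum-based weight updates until the SSE stopping criterion is met.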