
Neural Network

What is a Neural Net ?

• A neural net is an artificial representation of the human brain that tries to simulate its learning process. An artificial neural network (ANN) is often called a "Neural Network" or simply a Neural Net (NN).

• Traditionally, the term neural network referred to a network of biological neurons in the nervous system that process and transmit information.

• An artificial neural network is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing, based on a connectionist approach to computation.

• Artificial neural networks are made of interconnected artificial neurons which may share some properties of biological neural networks.

• An artificial neural network is a network of simple processing elements (neurons) which can exhibit complex global behavior, determined by the connections between the processing elements and the element parameters.
Introduction

• Neural computers mimic certain processing capabilities of the human brain.

- Neural computing is an information processing paradigm, inspired by biological systems, composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems.

- Artificial Neural Networks (ANNs), like people, learn by example.

- An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process.

- Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.
Biological Neuron Model

Dendrites are branching fibers that extend from the cell body or soma.

Soma, the cell body of a neuron, contains the nucleus and other structures, and supports chemical processing and the production of neurotransmitters.

Axon is a single fiber that carries information away from the soma to the synaptic sites of other neurons (dendrites and somas), muscles, or glands.

Synapse is the point of connection between two neurons, or between a neuron and a muscle or a gland. Electrochemical communication between neurons takes place at these junctions.
Information Flow in a Neural Cell

■ Dendrites receive activation from other neurons.
■ The soma processes the incoming activations and converts them into output activations.
■ Axons act as transmission lines to send activation to other neurons.
■ Synapses, the junctions, allow signal transmission between the axons and dendrites.
■ The process of transmission is by diffusion of chemicals called neurotransmitters.
Artificial Neuron Model

• An artificial neuron is a mathematical function conceived as a simple model of a real (biological) neuron.

The McCulloch-Pitts Neuron

• This is a simplified model of real neurons, known as a Threshold Logic Unit.
■ A set of input connections brings in activations from other neurons.
■ A processing unit sums the inputs, and then applies a non-linear activation function (i.e. a squashing / transfer / threshold function).
■ An output line transmits the result to other neurons.
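The three steps above (sum the inputs, apply a threshold, transmit the result) can be sketched in a few lines. The weights, threshold, and input values here are illustrative assumptions, not taken from the slides:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Threshold Logic Unit: fire (1) if the weighted input sum reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Illustrative: a 2-input unit with unit weights and threshold 2 behaves like logical AND.
print(mcculloch_pitts([1, 1], [1, 1], 2))  # fires: 1
print(mcculloch_pitts([1, 0], [1, 1], 2))  # does not fire: 0
```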
Functions

• The function y = f(x) describes a relationship, an input-output mapping, from x to y.
• Threshold or sign function sgn(x), defined here in binary form as: sgn(x) = 1 if x >= 0, else 0.
• Sigmoid function sigmoid(x), defined as a smoothed (differentiable) form of the threshold function: sigmoid(x) = 1 / (1 + e^-x).
Artificial Neuron - Basic Elements

• A neuron consists of three basic components - weights, thresholds, and a single activation function.

Weighting Factors w

• The values w1, w2, . . . , wn are weights that determine the strength of the input vector X = [x1, x2, . . . , xn]^T. Each input is multiplied by the associated weight of the neuron connection, X^T W. A positive weight excites and a negative weight inhibits the node output.
Threshold Φ

• The node's internal threshold Φ is the magnitude offset. It affects the activation of the node output y as:

    y = f( Σ xi wi − Φ ),  for i = 1 . . . n

• To generate the final output Y, the sum is passed on to a non-linear filter f, called the activation function (also transfer function or squash function), which releases the output Y.
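A minimal sketch of this neuron, combining the weighted sum, the threshold offset Φ, and a non-linear filter f. The choice of sigmoid for f and all numeric values are assumptions for illustration:

```python
import math

def neuron_output(x, w, phi):
    """Weighted input sum minus threshold phi, passed through a sigmoid activation."""
    net = sum(xi * wi for xi, wi in zip(x, w)) - phi
    return 1.0 / (1.0 + math.exp(-net))  # sigmoid squashes net into (0, 1)

# Example with assumed inputs, weights, and threshold:
y = neuron_output([0.5, 1.0], [0.8, -0.2], 0.1)
```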
Activation Function

• An activation function f performs a mathematical operation on the signal output. The most common activation functions are:
• Linear function,
• Piecewise linear function,
• Hyperbolic tangent function,
• Threshold function,
• Sigmoidal (S-shaped) function.
The activation function is chosen depending upon the type of problem to be solved by the network.
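The five functions listed above can be sketched as follows; the clipping range of the piecewise linear function is an assumed convention:

```python
import math

# Sketches of the common activation functions listed above.
def linear(x):            return x
def piecewise_linear(x):  return max(-1.0, min(1.0, x))    # linear in [-1, 1], clipped outside
def tanh_fn(x):           return math.tanh(x)              # hyperbolic tangent, range (-1, 1)
def threshold(x):         return 1 if x >= 0 else 0        # binary hard limiter
def sigmoid(x):           return 1.0 / (1.0 + math.exp(-x))
```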
Threshold Function

• A threshold (hard-limiter) activation function is either a binary type or a bipolar type, as shown below.

Bipolar threshold

Sigmoidal Function (S-shaped function)

• The nonlinear curved S-shaped function is called the sigmoid function. This is the most common type of activation function used to construct neural networks. It is mathematically well behaved: a differentiable and strictly increasing function.
Example :

• The neuron shown consists of four inputs with their weights.
• The output I of the network, prior to the activation function stage, is the weighted sum of the inputs, I = X^T W.
Single Layer Feed-forward Network

• The single layer feed-forward network consists of a single layer of weights, where the inputs are directly connected to the outputs via a series of weights. The synaptic links carrying weights connect every input to every output, but not the other way; for this reason it is considered a network of the feed-forward type. The sum of the products of the weights and the inputs is calculated in each neuron node, and if the value is above some threshold (typically 0) the neuron fires and takes the activated value (typically 1); otherwise it takes the deactivated value (typically -1).
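The forward pass just described can be sketched as follows, using the bipolar convention from the text (fire = 1, otherwise -1). The weight matrix and inputs are assumed values:

```python
def single_layer_forward(x, W, threshold=0.0):
    """Single layer of weights: every input connects to every output neuron.
    Each output fires +1 if its weighted sum exceeds the threshold, else -1 (bipolar)."""
    outputs = []
    for weights in W:                       # one weight row per output neuron
        s = sum(xi * wi for xi, wi in zip(x, weights))
        outputs.append(1 if s > threshold else -1)
    return outputs

# Two inputs feeding two output neurons, with made-up weights:
result = single_layer_forward([1.0, -0.5], [[0.4, 0.9], [0.2, 0.3]])
```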
Multi Layer Feed-forward Network

• As the name suggests, it consists of multiple layers. The architecture of this class of network, besides having the input and output layers, also has one or more intermediary layers called hidden layers. The computational units of the hidden layers are known as hidden neurons.

- The hidden layer does intermediate computation before directing the input to the output layer.
- The input layer neurons are linked to the hidden layer neurons; the weights on these links are referred to as input-hidden layer weights.
- The hidden layer neurons are linked to the output layer neurons; the weights on these links are referred to as hidden-output layer weights.
- A multi-layer feed-forward network with ℓ input neurons, m1 neurons in the first hidden layer, m2 neurons in the second hidden layer, and n output neurons in the output layer is written as (ℓ - m1 - m2 - n).
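A forward pass through such an (ℓ - m1 - m2 - n) network can be sketched by chaining the layers; the sigmoid activation and all weight values are illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(x, W):
    """Apply one weight layer (one row of weights per neuron) followed by a sigmoid."""
    return [sigmoid(sum(xi * wi for xi, wi in zip(x, row))) for row in W]

def forward(x, weight_layers):
    """Forward pass through a multi-layer feed-forward network given as a list of weight matrices."""
    for W in weight_layers:
        x = layer(x, W)
    return x

# A tiny (2 - 3 - 2 - 1) network with made-up weights:
W1 = [[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]]   # input -> first hidden layer (3 neurons)
W2 = [[0.5, -0.3, 0.2], [0.1, 0.2, -0.4]]     # first -> second hidden layer (2 neurons)
W3 = [[0.7, -0.6]]                            # second hidden layer -> output (1 neuron)
y = forward([1.0, 0.5], [W1, W2, W3])
```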
Recurrent Networks

• Recurrent networks differ from the feed-forward architecture.
• A recurrent network has at least one feedback loop.
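The feedback loop can be sketched with a single recurrent unit whose previous output is fed back in as an extra input at each step; the tanh activation and the weight values are illustrative assumptions:

```python
import math

def recurrent_step(x, h, w_in, w_rec):
    """One update of a single recurrent unit: the previous output h is fed back
    through the recurrent weight w_rec, forming the feedback loop."""
    return math.tanh(w_in * x + w_rec * h)

# Feed a short input sequence through the loop, carrying the state forward:
h = 0.0
for x in [1.0, 0.5, -0.5]:
    h = recurrent_step(x, h, w_in=0.8, w_rec=0.3)
```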
Learning Methods in Neural Networks

• The learning methods in neural networks are classified into three basic types:
- Supervised learning,
- Unsupervised learning, and
- Reinforced learning.
Supervised Learning

• A teacher is present during the learning process and presents the expected output.
• Every input pattern is used to train the network.
• The learning process is based on a comparison between the network's computed output and the correct expected output, generating an "error".
• The "error" generated is used to change network parameters, resulting in improved performance.
Unsupervised Learning

• No teacher is present.
• The expected or desired output is not presented to the network.
• The system learns on its own by discovering and adapting to the structural features in the input patterns.
Reinforced Learning

- A teacher is present but does not present the expected or desired output; it only indicates whether the computed output is correct or incorrect.
- The information provided helps the network in its learning process.
- A reward is given for a correct computed answer and a penalty for a wrong answer.
Single Layer Perceptron

• Definition: An arrangement of one input layer of neurons feeding forward to one output layer of neurons is known as a Single Layer Perceptron.

Algorithm of the Perceptron Network

Flowchart for the Perceptron Network
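The algorithm and flowchart referenced above are not reproduced here; a common form of the perceptron learning rule, which they typically describe, can be sketched as follows. The learning rate, epoch count, and training data are assumptions:

```python
def train_perceptron(samples, n_inputs, lr=0.1, epochs=20):
    """Perceptron learning rule: after each sample, nudge the weights
    by lr * error * input, where error = target - predicted output."""
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:                 # target is 0 or 1
            y = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b >= 0 else 0
            err = target - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn the (linearly separable) AND function:
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data, n_inputs=2)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop finds a separating set of weights.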
ADAptive LINear Element (ADALINE)

• An ADALINE consists of a single neuron of the McCulloch-Pitts type, whose weights are determined by the normalized least mean square (LMS) training law. The LMS learning rule is also referred to as the delta rule. It is a well-established supervised training method that has been used over a wide range of diverse applications.

Architecture of a simple ADALINE

Algorithm of the Adaline Network

Flowchart for the Adaline training process
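The Adaline algorithm and flowchart are not reproduced above; the LMS (delta) rule they describe can be sketched as follows. Unlike the perceptron rule, the weight update uses the error of the *linear* output, before any threshold is applied. The learning rate, epoch count, and bipolar training data are assumptions:

```python
def train_adaline(samples, n_inputs, lr=0.05, epochs=100):
    """LMS (delta) rule: update weights from the error of the linear output net,
    i.e. w <- w + lr * (target - net) * x, minimizing the squared error."""
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            net = sum(xi * wi for xi, wi in zip(x, w)) + b   # linear output, no threshold
            err = target - net
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Fit bipolar targets of a simple linearly separable problem (AND in bipolar coding):
data = [([-1, -1], -1), ([-1, 1], -1), ([1, -1], -1), ([1, 1], 1)]
w, b = train_adaline(data, n_inputs=2)
```

After training, the sign of the linear output is used for classification.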
Applications of Neural Networks

Clustering:
• A clustering algorithm explores the similarity between patterns and places similar patterns in a cluster. The best known applications include data compression and data mining.

Classification / Pattern recognition:
• The task of pattern recognition is to assign an input pattern (such as a handwritten symbol) to one of many classes. This category includes algorithmic implementations such as associative memory.

Function approximation:
• The task of function approximation is to find an estimate of an unknown function subject to noise. Various engineering and scientific disciplines require function approximation.

Prediction systems:
• The task is to forecast future values of time-sequenced data. Prediction has a significant impact on decision support systems. Prediction differs from function approximation by considering the time factor: the system may be dynamic and may produce different results for the same input data depending on the system state (time).
