ML Unit 5
Intro to Artificial Neural Network

(Figure: model of an artificial neuron: inputs are multiplied by weights, summed into the net input, compared against a threshold value, and passed through an activation/transfer function to produce the output.)
Inputs (x1, x2, ..., xn)
• Inputs are the raw data or features sent to the neuron.
• Formula: net_j = Σ xi · wij (the net input is the weighted sum of the inputs).

Threshold Value (θj)
• A fixed value the total input must reach or exceed.

Activation Function
• Converts the input into the final output value (usually between 0 and 1 or another range).

Output (oj)
• The final result produced by the neuron.
• Can represent decisions like YES/NO, Class A/B, or numerical values.
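To make these pieces concrete, here is a minimal sketch of a single artificial neuron in Python (the values, and helper names like step_activation, are made up for illustration):

```python
import numpy as np

def step_activation(net, threshold):
    # Fires (outputs 1) only when the net input reaches the threshold value
    return 1 if net >= threshold else 0

def neuron_output(x, w, threshold):
    # Net input: the weighted sum net_j = sum(x_i * w_ij)
    net = np.dot(x, w)
    return step_activation(net, threshold)

x = np.array([0.5, 0.9, 0.2])   # inputs x1, x2, x3
w = np.array([0.4, 0.7, 0.1])   # one weight per input
print(neuron_output(x, w, threshold=0.5))   # net = 0.85 >= 0.5, so output is 1
```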
In an ANN:
1. Input Layer:
Takes the raw data (like image pixels or numbers from a file).
2. Hidden Layer(s):
These layers process the data. There can be one or many hidden layers. Each layer helps the network "learn" better.
3. Output Layer:
Gives the final result or prediction (e.g., yes/no, cat/dog, etc.).
• Example:
If you're feeding in a photo of a fruit, the input layer might take in the image pixels, the hidden layers process the color and shape, and the output layer might say "This is an Apple."
• Each neuron also adds a bias to its weighted sum before applying the activation function.
• The ANN learns by adjusting the weights using a process called backpropagation during training.
Applications:
• Medical diagnosis
• Self-driving cars
• Language translation
Intro to Neural Network
• Neural Networks are inspired by the human brain and its way of thinking.
• They help solve complex problems like image detection, speech recognition, and prediction.
• Neural Networks learn from examples rather than following fixed rules.
• They are structured in three types of layers: Input Layer, Hidden Layers, and Output Layer.
• Weights control the strength of connections and are adjusted while learning.
• Activation functions decide whether to pass a signal to the next neuron or not.
(Figure: biological neuron (nucleus) compared with an artificial neuron: weighted inputs feed a linear function and then an activation function to produce the output y.)
Basic Concepts of Neural Networks:
• Neurons: Basic units that receive inputs, process them, and pass an output onward.
• Weights: Control the strength of each connection and are adjusted while learning.
• Activation Function: Decides whether a signal is passed to the next neuron.
• Backpropagation: Sends the prediction error backward through the network so the weights can be corrected.
• Training: Giving data to the network and adjusting weights based on the prediction error.
1. Input Layer
• This is the first layer.
• It takes the raw data like numbers, images, or text.
• Example: If you're predicting if a person is healthy, the input layer may take height, weight, and age.
2. Hidden Layers
• These are the middle layers that do the "thinking".
• Each neuron takes inputs, does math (using weights and bias), and passes output to the next layer.
• More hidden layers = deep neural network.
3. Output Layer
• This gives the final result of the network.
• Example: It might say "Healthy" or "Not Healthy", or it could be a number like a price prediction.
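As a rough sketch (all weights and numbers below are invented, not from these notes), the three layers for the "healthy person" example could look like this in Python:

```python
import numpy as np

def sigmoid(z):
    # Squashes any number into the range 0..1
    return 1 / (1 + np.exp(-z))

# Input layer: height (m), weight (scaled), age (scaled)
x = np.array([1.7, 0.70, 0.25])

# Hidden layer: each neuron computes weights * inputs + bias, then activation
W1 = np.array([[0.5, -0.3, 0.8],
               [0.1,  0.9, -0.4]])   # 2 hidden neurons, 3 inputs each
b1 = np.array([0.1, -0.2])
h = sigmoid(W1 @ x + b1)

# Output layer: one neuron giving the probability of "Healthy"
W2 = np.array([[1.2, -0.7]])
b2 = np.array([0.05])
y = sigmoid(W2 @ h + b2)
print("Healthy" if y[0] > 0.5 else "Not Healthy")
```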
Summary:
• Neural Networks use layers of neurons to learn and solve problems like humans.
• Backpropagation and Training help the network learn from its mistakes.
• Learning improves the network over time, making it smarter with more data.
# Appropriate Problems for Neural Networks
Neural Networks are super smart models that can learn from complex, messy, and large
data. They're perfect for tasks where regular rules don't work well. Let's look at where they
really shine:
1. Image Recognition
Problem: Identifying what is in a picture.
Why NN is great: It learns from pixels and patterns that are hard to describe with rules.
2. Speech Recognition
Problem: Understanding what someone is saying from their voice.
3. Predicting Trends
Problem: Forecasting stock prices, weather, or sales.
Why NN is great: It finds hidden patterns in data that changes over time.
4. Medical Diagnosis
Problem: Detecting diseases from scans and patient data.
5. Self-Driving Cars
Problem: Making the car understand roads, traffic signs, people.
Why NN is great: It processes visuals, sensors, and situations quickly and smartly.
6. Fraud Detection
Problem: Finding fake transactions or cheating behaviors.
Why NN is great: It learns the difference between normal and suspicious behavior.
• Perceptron:
The perceptron is the simplest type of neural network: a single neuron that makes yes/no decisions.
1. It adds up the weighted inputs, then checks if the total is high enough.
2. Perceptrons use an activation function (like a step function) to decide the output.
3. It can only solve simple problems (where the data can be separated by a straight line).
4. It learns using training data, by adjusting the weights until the output becomes correct.
5. Introduced by Frank Rosenblatt in 1958, so it's been around a long time but is still important!
Simple Example:
If you're trying to decide whether to wear a jacket:
• Input: Is it cold? (1 or 0)
• Weight: how strongly the cold matters to your decision.
If the weighted input reaches the threshold, the perceptron outputs 1: wear the jacket. (A small code sketch of this follows the figure below.)
(Figure: single-layer perceptron: a constant bias input with weight w0 plus the other weighted inputs feed a single output unit.)
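A tiny sketch of the jacket decision as code (the second input and all weights are invented for illustration):

```python
def perceptron(inputs, weights, threshold):
    # Weighted sum of the inputs
    total = sum(x * w for x, w in zip(inputs, weights))
    # Step activation: output 1 (wear the jacket) if the total reaches the threshold
    return 1 if total >= threshold else 0

cold, windy = 1, 0   # is it cold? is it windy? (1 = yes, 0 = no)
print(perceptron([cold, windy], weights=[0.7, 0.3], threshold=0.5))   # -> 1
```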
Backpropagation (definition):
Backpropagation is how the network learns from its mistakes: the error at the output is sent backward through the network, and the weights at each neuron are adjusted to reduce that error.
How it Works (Simple Steps):
1. Forward Pass:
Data enters the input layer, passes through hidden layers, and reaches the output layer to give an initial prediction.
2. Error Calculation:
The network checks how wrong the prediction is by comparing it to the correct answer (called the "error").
3. Backward Pass:
The error is sent backward through the network, and each weight is adjusted to reduce it.
4. Repeat:
This cycle repeats many times until the network learns to give correct outputs.
(Figure: backpropagation: the error, the difference between the predicted output and the actual output, is sent back to each neuron in the backward direction, and the gradient of the error is calculated with respect to each weight.)
1. Initialize Weights
• Assign random small values to all weights and biases in the network.
2. Forward Pass
• Compute the network's output for the training input.
3. Calculate Error
• Compare the predicted output with the actual output.
4. Backpropagate Error
• Send the error backward and compute the gradient of the error with respect to each weight.
5. Update Weights
• Adjust the weights using: w_new = w_old - η · (∂E/∂w), where η is the learning rate.
6. Repeat
• Repeat steps (2) to (5) for multiple iterations (called epochs) until the error is minimized.
Done! This is the full algorithm in simple, clear steps: easy to write and understand.
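Here is one possible NumPy sketch of the whole loop on a toy problem (XOR); the network size, learning rate, and epoch count are arbitrary choices, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy dataset: learn XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

# Step 1: initialize weights and biases with small random values
W1, b1 = rng.normal(0, 0.5, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 0.5, (4, 1)), np.zeros(1)
lr = 0.5   # the learning rate (eta in the update rule)

for epoch in range(10000):           # step 6: repeat for many epochs
    h = sigmoid(X @ W1 + b1)         # step 2: forward pass
    y = sigmoid(h @ W2 + b2)
    error = y - t                    # step 3: predicted minus actual
    # Steps 4-5: backpropagate gradients, then apply w_new = w_old - lr * dE/dw
    d_y = error * y * (1 - y)            # gradient at the output layer
    d_h = (d_y @ W2.T) * h * (1 - h)     # gradient at the hidden layer
    W2 -= lr * (h.T @ d_y); b2 -= lr * d_y.sum(axis=0)
    W1 -= lr * (X.T @ d_h); b1 -= lr * d_h.sum(axis=0)

print(np.round(y.ravel(), 2))   # should approach [0, 1, 1, 0]
```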
DEEP LEARNING

Introduction:
• Deep learning is a subfield of machine learning.
• It uses artificial neural networks to learn from raw data through many hidden layers.
• Common architectures:
- CNN (for images)
- RNN (for sequences, text)
- GAN (for creating data)
• Used in real-life applications.
• It improves with experience: the more data it has, the smarter it becomes.
Why Use Deep Learning?
Because it can handle complex tasks like:
• Recognizing faces
• Self-driving cars

Deep learning works the same way as human learning: it learns in steps or layers, from simple to complex.
• Input Layer: where the raw data comes in
• Hidden Layers: where the step-by-step learning happens
• Output Layer: where the answer comes out (like "It's a cat!")
Each layer passes the knowledge to the next one, like students passing notes in class.

Here are the main types of Deep Learning Architectures with easy examples:
• Feed Forward Neural Network: How it works: Data flows in one direction, from input through hidden layers to output.
• Convolutional Neural Network (CNN): How it works: It scans images in parts to find shapes, edges, and patterns.
• Recurrent Neural Network (RNN): How it works: It remembers previous inputs to make better predictions for the next one.
• Generative Adversarial Network (GAN): How it works: One part tries to make fake data (like images), the other tries to catch it. Example: Making new human faces that look real.
• Autoencoders
What it is: A network that learns to compress and then rebuild data.
How it works: It learns to find important patterns and ignore useless details.
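A minimal autoencoder sketch in PyTorch; the 784/32 sizes are assumptions (a flattened 28x28 image), not from the notes:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress 784 pixels down to 32 numbers (the "important patterns")
        self.encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())
        # Decoder: try to rebuild the original 784 pixels from those 32 numbers
        self.decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.rand(1, 784)                    # a fake flattened image
loss = nn.MSELoss()(model(x), x)          # reconstruction error to minimize
print(loss.item())
```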
• Feed Forward Neural Network (FFNN)
Small Description:
The simplest type of network: information flows only forward, from input to output, with no loops.
Structure:
• Hidden Layers: Layers between input and output that process the input using weighted connections.
Characteristics:
• One-Way Flow: Data moves in a single direction; there are no cycles or memory.
Use Cases:
• Simple classification and prediction tasks.
(Figure: feed forward neural network with input, hidden, and output layers.)
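A minimal FFNN sketch in PyTorch (layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

# Data flows strictly forward: input -> hidden -> hidden -> output
model = nn.Sequential(
    nn.Linear(3, 8),   # input layer (3 features) -> first hidden layer
    nn.ReLU(),
    nn.Linear(8, 8),   # second hidden layer
    nn.ReLU(),
    nn.Linear(8, 2),   # output layer: scores for two classes
)
x = torch.rand(1, 3)   # one sample with 3 features
print(model(x))        # raw class scores
```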
• Convolutional Neural Network (CNN)
Small Description:
A specialized network designed to process grid-like data such as images by automatically
detecting spatial features.
Structure:
• Convolutional Layers: Apply filters to extract features such as edges, textures, etc.
Characteristics:
• Weight Sharing: Uses shared filters across the image, which reduces the number of
parameters.
Use Cases:
• Image Classification: e.g., recognizing objects in images.
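A minimal CNN sketch in PyTorch, assuming 28x28 grayscale images and 10 classes (both assumptions, not from the notes):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # shared filters find edges/shapes
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample 28x28 -> 14x14
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),                 # features -> 10 class scores
)
x = torch.rand(1, 1, 28, 28)   # one grayscale 28x28 image
print(model(x).shape)          # torch.Size([1, 10])
```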
• Recurrent Neural Network (RNN)
Small Description:
A neural network designed to handle sequential data like time-series, speech, or text, by
using its internal memory to store past information.
Structure:
• Hidden Layers: Each unit processes data sequentially, passing information from
previous steps (memory).
• Output Layer: Produces output for each time step or the final output after processing
the sequence.
Characteristics:
• Memory: RNNs can store information from previous time steps, useful for sequential
tasks.
• Sensitive to Input Order: The order of the data matters, which is key for tasks like
time-series analysis.
Use Cases:
• Natural Language Processing: Tasks like sentiment analysis, text generation, and
machine translation.
(Figure: recurrent neural network (RNN): between the input layer and the output layer, the hidden layer feeds back into itself, connecting each time step to the next.)
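A minimal RNN sketch in PyTorch (the feature, hidden, and class sizes are made up); the final hidden state acts as the memory of the whole sequence:

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=4, hidden_size=8, batch_first=True)
head = nn.Linear(8, 2)        # e.g., positive/negative sentiment

x = torch.rand(1, 5, 4)       # 1 sequence, 5 time steps, 4 features per step
outputs, h_last = rnn(x)      # outputs holds the hidden state at every step
print(head(h_last[0]).shape)  # torch.Size([1, 2]): prediction from the last state
```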
Generative Adversarial Network (GAN)
Small Description:
A framework where two networks, a Generator and a Discriminator, compete to create realistic synthetic data from noise.
Structure:
• Adversarial Process: The Generator tries to improve to fool the Discriminator, while the Discriminator improves to correctly identify fake data.
Characteristics:
• Data Generation: GANs can create new, realistic samples (e.g., fake images, videos).
Use Cases:
• Data Augmentation: Generating additional data to train other models when real data is limited.
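A skeleton of the adversarial setup in PyTorch (sizes and losses are illustrative; a real GAN alternates separate optimizer steps for D and G):

```python
import torch
import torch.nn as nn

# Generator: noise -> fake sample; Discriminator: sample -> "is it real?" score
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 64), nn.Tanh())
D = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

noise = torch.randn(8, 16)    # a batch of random noise vectors
fake = G(noise)               # the Generator tries to make realistic data
p_real = D(fake)              # the Discriminator tries to catch the fakes

bce = nn.BCELoss()
g_loss = bce(p_real, torch.ones(8, 1))            # G wants D to answer "real"
d_loss = bce(p_real.detach(), torch.zeros(8, 1))  # D wants to answer "fake"
print(g_loss.item(), d_loss.item())
```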
Long Short-Term Memory Network (LSTM)
Small Description:
An advanced type of RNN designed to solve long-term dependency problems in sequence prediction tasks, using gates to control information flow.
Structure:
• LSTM Cells: Contain gates (Forget Gate, Input Gate, Output Gate) to control which data to remember and which to forget.
• Output Layer: Produces the final output after processing the sequence.
Characteristics:
• Gated Mechanism: Gates help decide which information is important and should be kept.
Use Cases:
• Sequence tasks such as speech recognition, text generation, and time-series forecasting.
(Figure: LSTM cell: the cell state c(t) and hidden state h(t) flow through the cell, controlled by the forget gate, the input/update gate, and the output gate.)
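A minimal LSTM sketch in PyTorch (sequence and layer sizes are made up); nn.LSTM returns both the hidden state h and the gated cell state c:

```python
import torch
import torch.nn as nn

# The cell state c is the long-term memory, managed by the forget and
# input gates; the hidden state h is what the cell outputs at each step.
lstm = nn.LSTM(input_size=4, hidden_size=8, batch_first=True)

x = torch.rand(1, 20, 4)            # 1 sequence, 20 time steps, 4 features
outputs, (h_n, c_n) = lstm(x)
print(h_n.shape, c_n.shape)         # torch.Size([1, 1, 8]) each
```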