Learning Algorithms in AI Explained
Learning Algorithms
• Bayesian Networks
• Hidden Markov Models
• Genetic Algorithms
• Neural Networks
Bayesian Networks: Basics
• Requires models of how the data behaves
– A set of hypotheses: {H}
• Keeps track of the likelihood of each model being accurate as data becomes available
– P(H)
• Predicts as a weighted average over the hypotheses
– P(E) = Σ over H of P(H) * P(E|H)
Bayesian Network Example
• What color hair will Paul Schaffer’s kids have if he marries a redhead?
– Hypotheses
• Ha(rr) rr x rr: 100% Redhead
• Hb(Rr) rr x Rr: 50% Redhead, 50% Not
• Hc(RR) rr x RR: 100% Not
• Initially clueless:
– So P(Ha) = P(Hb) = P(Hc) = 1/3
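Below is a minimal Python sketch of this kind of Bayesian bookkeeping for the example above. The per-hypothesis redhead probabilities and the uniform prior come from the slide; the observation sequence (two redheaded children) and the function names are made up purely for illustration.

```python
# Bayesian update for the redhead example: three hypotheses about the
# spouse's genotype, each assigning a probability to a redheaded child.
hypotheses = {
    "Ha (rr x rr)": 1.0,   # P(redhead child | Ha)
    "Hb (rr x Rr)": 0.5,   # P(redhead child | Hb)
    "Hc (rr x RR)": 0.0,   # P(redhead child | Hc)
}

# Initially clueless: equal prior probability for each hypothesis.
posterior = {h: 1.0 / len(hypotheses) for h in hypotheses}

def predict_redhead(posterior):
    """Weighted-average prediction: P(redhead) = sum over H of P(H) * P(redhead | H)."""
    return sum(posterior[h] * hypotheses[h] for h in hypotheses)

def update(posterior, child_is_redhead):
    """Bayes' rule: P(H | E) is proportional to P(E | H) * P(H)."""
    likelihoods = {
        h: (hypotheses[h] if child_is_redhead else 1.0 - hypotheses[h])
        for h in hypotheses
    }
    unnormalized = {h: likelihoods[h] * posterior[h] for h in hypotheses}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypothetical trace: the first two children turn out to be redheads.
for observation in [True, True]:
    print("P(next child is redhead) =", round(predict_redhead(posterior), 3))
    posterior = update(posterior, observation)
    print("updated posterior:", {h: round(p, 3) for h, p in posterior.items()})
```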
Bayesian Network: Trace
[Trace table: posterior probabilities P(Ha), P(Hb), P(Hc) updated as each child's hair color is observed]
The Algorithms
• Bayesian Networks
• Hidden Markov Models
• Genetic Algorithms
• Neural Networks
Hidden Markov Models (HMMs)
• A discrete learning algorithm
– The programmer must be able to place predictions into discrete categories
• HMMs also assume a model of the world
working behind the data
• Models are also extractable
• Common Uses
– Speech Recognition
– Secondary structure prediction
– Intron/Exon predictions
– Categorization of data
Hidden Markov Models: Take a Step Back
• 1st order Markov Models:
– Q: {States}
– P: {Transition probabilities}
– The sum of all transition probabilities out of a state = 1
[State diagram: states Q1–Q4 linked by transition probabilities P1, P2, P3, P4, with complementary edges 1-P1-P2, 1-P3, and 1-P4 so that each state's outgoing probabilities sum to 1]
1st order Markov Model Setup
[State diagram]
1st order Markov Model Trace
[State diagram trace]
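As a concrete illustration, here is a minimal Python sketch of a 1st order Markov model: a hypothetical two-state chain (the states and probabilities are made up, not from the slides) whose outgoing transition probabilities sum to 1, sampled step by step using only the current state.

```python
import random

# A tiny 1st order Markov model: states Q1 and Q2, with the transition
# probabilities out of each state summing to 1.
transitions = {
    "Q1": {"Q1": 0.7, "Q2": 0.3},
    "Q2": {"Q1": 0.4, "Q2": 0.6},
}

def step(state):
    """Pick the next state using only the current state (1st order)."""
    next_states = list(transitions[state])
    weights = [transitions[state][s] for s in next_states]
    return random.choices(next_states, weights=weights)[0]

def trace(start, n_steps):
    """Generate a trajectory of n_steps transitions from the start state."""
    states = [start]
    for _ in range(n_steps):
        states.append(step(states[-1]))
    return states

print(trace("Q1", 10))
```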
What else can Markov do?
• Higher Order Models
– Kth order
• Metropolis-Hastings
– Determining thermodynamic equilibrium
• Continuous-time Markov Models
– The time between transitions varies according to a continuous distribution
• Hidden Markov Models
– Discrete model learning
Hidden Markov Models (HMMs)
• A Markov model drives the world, but it is hidden from direct observation; its state must be inferred from a set of observables.
– Voice recognition
• Observable: Sound waves
• Hidden states: Words
– Intron/Exon prediction
• Observable: nucleotide sequence
• Hidden State: Exon, Intron, Non-coding
– Secondary structure prediction for protein
• Observable: Amino acid sequence
• Hidden State: Alpha helix, Beta Sheet, Unstructured
Hidden Markov Models: Example
• Secondary Structure Prediction
[Diagram — Observable states: the twenty amino acids (His, Asp, Arg, Phe, Ala, Cys, Ser, Gln, Glu, Lys, Gly, Leu, Met, Asn, Tyr, Thr, Ile, Trp, Pro, Val); Hidden states: Alpha Helix, Beta Sheet, Unstructured]
Hidden Markov Models: Smaller Example
• Exon/Intron Mapping
[Diagram — Hidden states: Exon, Intron, Intergenic, connected by transition probabilities such as P(Ex|Ig), P(It|Ig), P(In|Ex), P(Ex|It), and P(Ig|Ig); each hidden state emits the observable nucleotides A, T, G, C with emission probabilities such as P(A|Ex), P(A|Ig), and P(A|It)]
Hidden Markov Models: Smaller Example
• Exon/Intron Mapping
Hidden State Transition Probabilities (From → To)
      Ex    Ig    It
Ex    0.7   0.1   0.2

Observable State Probabilities (emission, given the hidden state)
      A     T     G     C
Ex    0.33  0.42  0.11  0.14

Starting Distribution
Ex    Ig    It
0.1   0.89  0.01
Hidden Markov Model
• How do we predict hidden states from an HMM?
• Brute force:
– Try every possible sequence of hidden states
– Which sequence has the greatest probability of generating the observed data?
• Viterbi algorithm:
– A dynamic programming approach to the same question
Viterbi Algorithm: Trace
• Example Sequence: ATAATGGCGAGTG

Hidden State Transition Probabilities (From → To)
      Ex    Ig    It
Ex    0.7   0.1   0.2
Ig    0.49  0.5   0.01
It    0.18  0.02  0.8

Observable State Probabilities (emission, given the hidden state)
      A     T     G     C
Ex    0.33  0.42  0.11  0.14
Ig    0.25  0.25  0.25  0.25
It    0.14  0.16  0.5   0.2

Starting Distribution
Ex    Ig    It
0.1   0.89  0.01

Position 1 (observable A):
Exon       = P(A|Ex) * Start(Ex) = 0.33 * 0.1  = 3.3*10^-2
Intergenic = P(A|Ig) * Start(Ig) = 0.25 * 0.89 = 2.2*10^-1
Intron     = P(A|It) * Start(It) = 0.14 * 0.01 = 1.4*10^-3

Position n (here n = 2, observable T):
Exon       = Max( P(Ex|Ex)*Pn-1(Ex), P(Ex|Ig)*Pn-1(Ig), P(Ex|It)*Pn-1(It) ) * P(T|Ex) = 4.6*10^-2
Intergenic = Max( P(Ig|Ex)*Pn-1(Ex), P(Ig|Ig)*Pn-1(Ig), P(Ig|It)*Pn-1(It) ) * P(T|Ig) = 2.8*10^-2
Intron     = Max( P(It|Ex)*Pn-1(Ex), P(It|Ig)*Pn-1(Ig), P(It|It)*Pn-1(It) ) * P(T|It) = 1.1*10^-3

Running trace (best-path probability ending in each hidden state):
      Ex         Ig         It
A     3.3*10^-2  2.2*10^-1  1.4*10^-3
T     4.6*10^-2  2.8*10^-2  1.1*10^-3

• The same recursion is repeated at each remaining position of the sequence (A, A, T, G, G, C, G, A, G, T, G); the most probable hidden-state path is then recovered by backtracking from the highest-scoring final state.
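Below is a minimal Python sketch of the Viterbi recursion for this exon/intergenic/intron model. The starting, transition, and emission tables are the ones shown in the trace above (using the transition values consistent with the computed entries), and the per-state scores for the first two positions reproduce the A and T rows of the trace; the backtracking step at the end is the standard way the best path is recovered.

```python
# Viterbi decoding for the exon / intergenic / intron HMM from the trace.
start = {"Ex": 0.1, "Ig": 0.89, "It": 0.01}

# Transition probabilities P(to | from).
trans = {
    "Ex": {"Ex": 0.7,  "Ig": 0.1,  "It": 0.2},
    "Ig": {"Ex": 0.49, "Ig": 0.5,  "It": 0.01},
    "It": {"Ex": 0.18, "Ig": 0.02, "It": 0.8},
}

# Emission probabilities P(observable | hidden state).
emit = {
    "Ex": {"A": 0.33, "T": 0.42, "G": 0.11, "C": 0.14},
    "Ig": {"A": 0.25, "T": 0.25, "G": 0.25, "C": 0.25},
    "It": {"A": 0.14, "T": 0.16, "G": 0.5,  "C": 0.2},
}

def viterbi(sequence):
    """Return the most probable hidden-state path and its probability."""
    states = list(start)
    # Probability of the best path ending in each state at position 0.
    best = [{s: start[s] * emit[s][sequence[0]] for s in states}]
    backpointer = [{}]
    for symbol in sequence[1:]:
        scores, pointers = {}, {}
        for s in states:
            # Best previous state to have transitioned from.
            prev = max(states, key=lambda p: best[-1][p] * trans[p][s])
            scores[s] = best[-1][prev] * trans[prev][s] * emit[s][symbol]
            pointers[s] = prev
        best.append(scores)
        backpointer.append(pointers)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for pointers in reversed(backpointer[1:]):
        path.append(pointers[path[-1]])
    return list(reversed(path)), best[-1][last]

path, prob = viterbi("ATAATGGCGAGTG")
print(prob)   # probability of the single best hidden-state path
print(path)   # the most probable hidden-state labelling of ATAATGGCGAGTG
```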
Hidden Markov Models
• How to Train an HMM
– The forward-backward algorithm
• Ugly probability theory math:
CENSORED
• Starts with an initial guess of parameters
• Refines the parameters by attempting to reduce the errors the model makes when fitted to the data
– Uses the normalized product of the “forward” probability of arriving at each state given the observations so far and the “backward” probability of generating the remaining observations from that state under the current parameters
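As a hedged illustration of the first ingredient of forward-backward training, here is a minimal sketch of the forward pass, which accumulates the probability of reaching each hidden state given the observations so far; the tiny two-state model at the bottom is made up purely to show the call, and the full Baum-Welch re-estimation (the censored math) is not reproduced here.

```python
def forward(sequence, start, trans, emit):
    """Forward pass: alpha[n][s] = P(first n+1 observations and hidden state s at position n).

    Combined with the symmetric backward pass and re-normalized, these
    alpha values are what forward-backward uses to re-estimate parameters.
    """
    states = list(start)
    alpha = [{s: start[s] * emit[s][sequence[0]] for s in states}]
    for symbol in sequence[1:]:
        prev = alpha[-1]
        alpha.append({
            s: sum(prev[p] * trans[p][s] for p in states) * emit[s][symbol]
            for s in states
        })
    return alpha   # sum(alpha[-1].values()) is P(whole observed sequence)

# Hypothetical two-state model purely for illustration.
start = {"S1": 0.6, "S2": 0.4}
trans = {"S1": {"S1": 0.7, "S2": 0.3}, "S2": {"S1": 0.4, "S2": 0.6}}
emit  = {"S1": {"x": 0.9, "y": 0.1}, "S2": {"x": 0.2, "y": 0.8}}
alpha = forward("xxy", start, trans, emit)
print(sum(alpha[-1].values()))   # P("xxy" | model)
```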
The Algorithms
• Bayesian Networks
• Hidden Markov Models
• Genetic Algorithms
• Neural Networks
Genetic Algorithms
• Individuals are strings of bits that represent candidate solutions
– Functions
– Structures
– Images
– Code
• Based on Darwinian evolution
– Individuals mate, mutate, and are selected based on a Fitness Function
Genetic Algorithms
• Encoding Rules
– “Gray” bit encoding
• Adjacent values differ by a single bit, keeping bit distance close to value distance
• Selection Rules
– Digital / Analog Threshold
– Linear Amplification Vs Weighted Amplification
• Mating Rules
– Mutation parameters
– Recombination parameters
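A minimal Python sketch of these pieces under made-up choices: a toy fitness function (count of set bits), fitness-weighted (roulette) selection, single-point recombination, and per-bit mutation. A real application would swap in its own encoding (e.g., Gray coding), fitness function, and selection scheme.

```python
import random

BITS = 16
POP_SIZE = 30
MUTATION_RATE = 0.01   # per-bit mutation probability
GENERATIONS = 50

def fitness(individual):
    """Toy fitness function: number of 1-bits (stand-in for a real objective)."""
    return sum(individual)

def select(population):
    """Fitness-weighted ("roulette") selection of one parent."""
    weights = [fitness(ind) + 1e-9 for ind in population]
    return random.choices(population, weights=weights)[0]

def mate(a, b):
    """Single-point recombination followed by per-bit mutation."""
    point = random.randrange(1, BITS)
    child = a[:point] + b[point:]
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in child]

population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mate(select(population), select(population))
                  for _ in range(POP_SIZE)]

print(max(fitness(ind) for ind in population))   # best fitness found
```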
Genetic Algorithms
• When are they useful?
– Movements in sequence space are funnel-shaped with respect to the fitness function
• Systems where evolution actually applies!
• Examples
– Medicinal chemistry
– Protein folding
– Amino acid substitutions
– Membrane trafficking modeling
– Ecological simulations
– Linear Programming
– Traveling salesman
The Algorithms
• Bayesian Networks
• Hidden Markov Models
• Genetic Algorithms
• Neural Networks
Neural Networks
• 1943: McCulloch and Pitts model how neurons process information
– The field immediately splits
• Studying the brain
– Neurology
• Studying artificial intelligence
– Neural Networks
Neural Networks: A Neuron, Node, or Unit
[Diagram: two inputs, a and b, feed node c through weighted connections Wa,c and Wb,c]
Neural Networks: Activation Functions
[Plots: the sigmoid (logistic) function and the hard threshold function, each mapping the input to an output between 0 and +1]

Threshold Functions can make Logic Gates with Neurons!
• Logical And (A ∩ B)
[Diagram: inputs A and B feed node c with weights Wa,c = 1 and Wb,c = 1 and bias W0,c = 1.5; the node outputs z based on Σ(W) - W0,c]
Truth table:
      B=1  B=0
A=1   1    0
A=0   0    0
If ( Σ(w) - W0,c > 0 ) Then FIRE, Else Don’t
And Gate: Trace
      A    B    Σ(w) - W0,c      Output
      Off  Off  0 - 1.5 = -1.5   Off
      On   Off  1 - 1.5 = -0.5   Off
      Off  On   1 - 1.5 = -0.5   Off
      On   On   2 - 1.5 = 0.5    On
Threshold Functions can make Logic Gates with Neurons!
• Logical Or (A ∪ B)
[Diagram: inputs A and B feed node c with weights Wa,c = 1 and Wb,c = 1 and bias W0,c = 0.5; the node outputs z based on Σ(W) - W0,c]
Truth table:
      B=1  B=0
A=1   1    1
A=0   1    0
If ( Σ(w) - W0,c > 0 ) Then FIRE, Else Don’t
Or Gate: Trace
      A    B    Σ(w) - W0,c      Output
      Off  Off  0 - 0.5 = -0.5   Off
      On   Off  1 - 0.5 = 0.5    On
      Off  On   1 - 0.5 = 0.5    On
      On   On   2 - 0.5 = 1.5    On
Threshold Functions can make Logic Gates with Neurons!
• Logical Not (!a)
[Diagram: input a feeds node c with weight Wa,c = -1 and bias W0,c = -0.5; the node outputs z based on Σ(W) - W0,c]
Truth table:
      a    z
      1    0
      0    1
If ( Σ(w) - W0,c > 0 ) Then FIRE, Else Don’t
Not Gate: Trace
      A    Σ(w) - W0,c           Output
      Off  0 - (-0.5) = 0.5      On
      On   -1 - (-0.5) = -0.5    Off
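The gates above can be written directly as threshold units. This minimal Python sketch uses exactly the weights and biases from the slides: Wa,c = Wb,c = 1 with bias 1.5 for AND, bias 0.5 for OR, and Wa,c = -1 with bias -0.5 for NOT; the helper names are just illustrative.

```python
def threshold_unit(inputs, weights, bias):
    """Fire (return 1) when the weighted input sum minus the bias exceeds 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total - bias > 0 else 0

def and_gate(a, b):
    return threshold_unit([a, b], [1, 1], 1.5)

def or_gate(a, b):
    return threshold_unit([a, b], [1, 1], 0.5)

def not_gate(a):
    return threshold_unit([a], [-1], -0.5)

# Reproduce the traces: every input combination for each gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", and_gate(a, b), "OR:", or_gate(a, b))
print("NOT 0:", not_gate(0), "NOT 1:", not_gate(1))
```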
Feed-Forward Vs. Recurrent Networks
• Feed-Forward
– No cyclic connections
– A function of its current inputs
– No internal state other than the weights of connections
– “Out of time”
• Recurrent
– Cyclic connections
– Dynamic behavior: stable, oscillatory, or chaotic
– Response depends on the current state
– “In time”
– Short-term memory!
Feed-Forward Networks
• “Knowledge” is represented by weight on edges
– Modeless!
• “Learning” consists of adjusting weights
• Customary Arrangements
– One Boolean output for each value
– Arranged in Layers
• Layer 1 = inputs
• Layer 2 to (n-1) = Hidden
• Layer N = outputs
– “Perceptron”: a 2-layer feed-forward network
Layers
[Diagram: an input layer feeding a hidden layer feeding the output layer]
Perceptron Learning
• Gradient Descent is used to reduce the error
CENSORED
• Essentially:
– New Weight = Old Weight + Adjustment
– Adjustment = α × error × input × d(activation function)
• α = Learning Rate
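A minimal sketch of this update rule in Python, assuming a single sigmoid (logistic) unit learning the OR function; the learning rate, epoch count, initial weights, and target function are illustrative choices rather than anything fixed by the slide.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Single unit with two input weights plus a bias weight.
weights = [random.uniform(-0.5, 0.5) for _ in range(3)]
alpha = 0.5                                                   # learning rate
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]   # OR targets

for _ in range(1000):
    for inputs, target in data:
        x = inputs + [1.0]                        # append the bias input
        activation = sum(w * xi for w, xi in zip(weights, x))
        output = sigmoid(activation)
        error = target - output
        gradient = output * (1.0 - output)        # d(sigmoid)/d(activation)
        # New weight = old weight + alpha * error * input * d(activation fn)
        weights = [w + alpha * error * xi * gradient
                   for w, xi in zip(weights, x)]

print([round(sigmoid(sum(w * xi for w, xi in zip(weights, inputs + [1.0]))), 2)
       for inputs, _ in data])   # should approach 0, 1, 1, 1
```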
Hidden Network Learning
• Back-Propagation
CENSORED
• Essentially:
– Start with Gradient Descent from the output
– Assign “blame” to the inputting neurons in proportion to their weights
– Adjust the weights at the previous level using Gradient Descent based on that “blame”
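A minimal back-propagation sketch for a small 2-input, 2-hidden-unit, 1-output sigmoid network learning XOR; the architecture, learning rate, and epoch count are illustrative assumptions. The hidden-layer “blame” is proportional to each unit's outgoing weight, as described above.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Weights for two hidden units and one output unit (last entry is the bias weight).
hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
output = [random.uniform(-1, 1) for _ in range(3)]
alpha = 0.5
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]   # XOR targets

for _ in range(20000):
    for inputs, target in data:
        x = inputs + [1.0]
        # Forward pass.
        h = [sigmoid(sum(w * xi for w, xi in zip(unit, x))) for unit in hidden]
        hx = h + [1.0]
        out = sigmoid(sum(w * hi for w, hi in zip(output, hx)))
        # Gradient descent at the output unit.
        delta_out = (target - out) * out * (1.0 - out)
        # "Blame" each hidden unit in proportion to its outgoing weight.
        delta_hidden = [output[i] * delta_out * h[i] * (1.0 - h[i])
                        for i in range(2)]
        output = [w + alpha * delta_out * hi for w, hi in zip(output, hx)]
        hidden = [[w + alpha * delta_hidden[i] * xi for w, xi in zip(hidden[i], x)]
                  for i in range(2)]

# Outputs typically approach 0, 1, 1, 0; with an unlucky random initialization
# training can stall in a local minimum and may need to be re-run.
for inputs, target in data:
    x = inputs + [1.0]
    h = [sigmoid(sum(w * xi for w, xi in zip(unit, x))) for unit in hidden] + [1.0]
    print(inputs, round(sigmoid(sum(w * hi for w, hi in zip(output, h))), 2))
```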
They don’t get it either:
Issues that aren’t well understood
• α (Learning Rate)
• Depth of network (number of layers)
• Size of hidden layers
– Overfitting
– Cross-validation
• Minimum connectivity
– Optimal Brain Damage Algorithm
• No extractable model!
How Are Neural Nets Different
From My Brain?
1. Neural nets are feed-forward
– Brains can be recurrent with feedback loops
2. Neural nets do not distinguish between + or –
connections
– In brains excitatory and inhibitory neurons have different
properties
• “Fraser’s” Rules: inhibitory neurons act over short distances
3. Neural nets exist “Out of time”
– Our brains clearly do exist “in time”
4. Neural nets learn VERY differently
– We have very little idea how our brains are learning
“In theory one can, of course, implement biologically realistic neural networks, but this is
a mammoth task. All kinds of details have to be gotten right, or you end up with a
network that completely decays to unconnectedness, or one that ramps up its
connections until it basically has a seizure.”
Frontiers in AI
• Applications of current algorithms
• New algorithms for determining
parameters from training data
– Forward-Backward
– Backpropagation
• Better classification of the mysteries of
neural networks
• Pathology modeling in neural networks
• Evolutionary modeling