
Introduction to Electronic Design Automation: Testing

Jie-Hong Roland Jiang 江介宏
Department of Electrical Engineering
National Taiwan University
Spring 2013

Slides are by courtesy of Prof. S.-Y. Huang and C.-M. Li

Testing

- Recap
  - Design verification: is what I specified really what I wanted? (property checking)
  - Implementation verification: is what I implemented really what I specified? (equivalence checking)
  - Manufacture verification: is what I manufactured really what I implemented? (testing, i.e., post-manufacture verification)
- Testing is quality control
  - Distinguish between good and bad chips

Design Flow

- IC fabrication flow: idea → architecture design (block diagram) → circuit & layout design (layout) → fabrication → wafer (hundreds of dies) → sawing & packaging (final chips) → final testing → good chips go to customers; bad chips are discarded
Manufacturing Defects

- Processing faults
  - Missing contact windows
  - Parasitic transistors
  - Oxide breakdown
- Material defects
  - Bulk defects (cracks, crystal imperfections)
  - Surface impurities
- Time-dependent failures
  - Dielectric breakdown
  - Electro-migration
- Packaging failures
  - Contact degradation
  - Seal leaks

Faults, Errors and Failures

- Fault
  - A physical defect within a circuit or a system
  - May or may not cause a system failure
- Error
  - Manifestation of a fault that results in incorrect circuit (system) outputs or states
  - Caused by faults
- Failure
  - Deviation of a circuit or system from its specified behavior; it fails to do what it is supposed to do
  - Caused by errors
- Faults cause errors; errors cause failures

Testing and Diagnosis

- Testing
  - Exercise a system and analyze the response to ensure that it behaves correctly after manufacturing
- Diagnosis
  - Locate the causes of misbehavior after incorrectness is detected

Scenario of Manufacturing Test

- (figure: test vectors are applied to the manufactured circuits; each circuit response is compared by a comparator against the correct responses, producing a pass/fail verdict)
Purpose of Testing

- Verify manufactured circuits
- Improve system reliability
- Reduce repair costs
  - Repair cost goes up by an order of magnitude with each step away from the fab line
  - (figure: log-scale chart of cost per fault in dollars, rising roughly tenfold at each stage from IC test, to board test, to system test, to warranty repair)

B. Davis, "The Economics of Automatic Testing," McGraw-Hill, 1982.

Testing and Quality

- The quality of shipped parts can be expressed as a function of the yield Y and the test (fault) coverage T
- (figure: fabrication produces good parts and rejects; yield is the fraction of good parts among all fabricated ASICs, and testing determines the quality of the shipped parts, measured in defective parts per million, DPM)

Fault Coverage

- Fault coverage T
  - Measure of the ability of a test set to detect a given set of faults that may occur on the design under test (DUT)

  T = (# detected faults) / (# all possible faults)
Defect Level

- The defect level is the fraction of the shipped parts that are defective

  DL = 1 - Y^(1 - T)   (Williams, IBM, 1980)

  - Y: yield
  - T: fault coverage
- High fault coverage → low defect level

Defect Level vs. Fault Coverage

- (figure: defect level falls from 1.0 toward 0 as fault coverage rises from 0 to 100%, with one curve per yield value Y = 0.01, 0.1, 0.25, 0.5, 0.75, 0.9; lower yield gives a higher defect level at any given coverage)

DPM vs. Yield and Coverage

  Yield   Fault coverage   DPM
  50%     90%              67,000
  75%     90%              28,000
  90%     90%              10,000
  95%     90%               5,000
  99%     90%               1,000
  90%     90%              10,000
  90%     95%               5,000
  90%     99%               1,000
  90%     99.9%               100

Why Testing Is Difficult?

- Test time explodes exponentially in exhaustive testing of VLSI
  - For a combinational circuit with 50 inputs, we need 2^50 = 1.126 x 10^15 test patterns
  - Assuming one test per 10^-7 s, it takes 1.126 x 10^8 s, about 3.57 years
- Test generation for sequential circuits is even more difficult, due to the lack of controllability and observability at flip-flops (latches)
- Functional testing may NOT be able to detect the physical faults
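The defect-level formula and the DPM table above are easy to check numerically. Below is a minimal Python sketch (not part of the original slides) of the Williams-Brown model DL = 1 - Y^(1 - T); it reproduces the table entries to within rounding.

from itertools import product

def defect_level(y, t):
    """Fraction of shipped parts that are defective: DL = 1 - Y**(1 - T)."""
    return 1.0 - y ** (1.0 - t)

def dpm(y, t):
    """Defective parts per million."""
    return 1e6 * defect_level(y, t)

for y, t in [(0.50, 0.90), (0.90, 0.90), (0.90, 0.999)]:
    print(f"Y={y:.0%}, T={t:.1%}: DPM ~ {dpm(y, t):,.0f}")
# Y=50%, T=90.0%: DPM ~ 67,000 (approximately, matching the table)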
The Infamous Design/Test Wall

- 30 years of experience proves that test after design does not work!
- (cartoon: the design engineer declares "Functionally correct! We're done!" and throws the chip over the wall; the test engineer asks "Oops! What does this chip do?!")

Outline

- Fault Modeling
- Fault Simulation
- Automatic Test Pattern Generation
- Design for Testability

Functional vs. Structural Testing

- I/O functional testing is inadequate for manufacturing
  - Need fault models
- Exhaustive testing is daunting
  - Need abstraction and smart algorithms
- Structural testing is more effective

Why Fault Model?

- A fault model identifies target faults
  - Models the faults that are most likely to occur
- A fault model limits the scope of test generation
  - Tests are created only for the modeled faults
- A fault model makes testing effective
  - Fault coverage can be computed for a specific test set to measure its effectiveness
- A fault model makes analysis possible
  - Associates specific defects with specific test patterns
Fault Modeling vs. Physical Defects

- Fault modeling
  - Models the effects of physical defects on the logic function and timing
- Physical defects
  - Silicon defects
  - Photolithographic defects
  - Mask contamination
  - Process variation
  - Defective oxides

Fault Modeling vs. Physical Defects (cont'd)

- Electrical effects
  - Shorts (bridging faults)
  - Opens
  - Transistor stuck-on/open
  - Resistive shorts/opens
  - Changes in threshold voltages
- Logical effects
  - Logical stuck-at-0/1
  - Slower transitions (delay faults)
  - AND-bridging, OR-bridging

Typical Fault Types

- Stuck-at faults
- Bridging faults
- Transistor stuck-on/open faults
- Delay faults
- IDDQ faults
- State transition faults (for FSMs)
- Memory faults
- PLA faults

Single Stuck-At Fault

- Assumptions:
  - Only one wire is faulty
  - The fault can be at an input or output of a gate
  - The faulty wire permanently sticks at 0 or 1
- (figure: applying the test vector (0, 1, 1) to a circuit with an internal line stuck-at-0 gives the response 1/0, i.e., ideal response 1 and faulty response 0)
Multiple Stuck-At Faults

- Several stuck-at faults occur at the same time
  - Common in high-density circuits
- For a circuit with k lines
  - There are 2k single stuck-at faults
  - There are 3^k - 1 multiple stuck-at faults
    - A line can be stuck-at-0, stuck-at-1, or fault-free; one out of the 3^k resulting circuits is fault-free

Why Single Stuck-At Fault Model?

- Complexity is greatly reduced
  - Many different physical defects may be modeled by the same logical single stuck-at fault
- The stuck-at fault model is technology independent
  - Can be applied to TTL, ECL, CMOS, BiCMOS, etc.
- Design-style independent
  - Gate array, standard cell, custom design
- Detection capability for un-modeled defects
  - Empirically, many un-modeled defects are also detected accidentally by tests derived under the single stuck-at fault model
- Covers a large percentage of multiple stuck-at faults

Why Logical Fault Modeling?

- Fault analysis is performed on logic rather than on the physical problem
  - Complexity is reduced
- Technology independence
  - The same fault model is applicable to many technologies
  - Testing and diagnosis methods remain valid despite changes in technology
- Wide applicability
  - The derived tests may be used for physical faults whose effect on circuit behavior is not completely understood or is too complex to be analyzed
- Popularity
  - The stuck-at fault is the most popular logical fault model

Definition of Fault Detection

- A test (vector) t detects a fault f iff z(t) ≠ zf(t), where z is the fault-free and zf the faulty output function
- Example
  - A circuit with inputs X1, X2, X3 computes Z1 = X1·X2 and Z2 = X2·X3; the fault f is the X2 input of the Z1 gate stuck-at-1, so Z1f = X1 while Z2f = X2·X3 is unaffected
  - Test (x1, x2, x3) = (1, 0, 0) detects f because z1(100) = 0 and z1f(100) = 1
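The detection condition z(t) ≠ zf(t) can be checked by brute force on small examples. Below is a small Python sketch (an illustration, not from the slides) for the example above, restricted to the two inputs that matter for Z1.

from itertools import product

def z1(x1, x2):           # fault-free response: Z1 = X1·X2
    return x1 & x2

def z1f(x1, x2):          # faulty response: X2 input stuck-at-1, so Z1f = X1
    return x1 & 1

tests = [(x1, x2) for x1, x2 in product((0, 1), repeat=2)
         if z1(x1, x2) != z1f(x1, x2)]
print(tests)              # [(1, 0)] -- only X1=1, X2=0 exposes the fault

This matches the slide: the detecting vector must set X1=1 and X2=0, with X3 a don't-care.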
Fault Detection Requirement

- A test t that detects a fault f
  - activates f (generates a fault effect) by creating different v and vf values at the site of the fault
  - propagates the error to a primary output z by making all the wires along at least one path between the fault site and z carry different v and vf values
- Sensitized wire
  - A wire whose value in response to the test changes in the presence of the fault f is said to be sensitized by the test in the faulty circuit
- Sensitized path
  - A path composed of sensitized wires is called a sensitized path

Fault Sensitization

- (figure: a circuit with gates G1-G4 and fault f = G2 output stuck-at-1; applying X = 1011 gives z(1011) = 0 and zf(1011) = 1, with the fault effect 0/1 propagating along a sensitized path through G2 and G4 to z)
- Input vector 1011 detects the fault f (G2 stuck-at-1)
- Notation v/vf: v is the signal value in the fault-free circuit, vf the value in the faulty circuit

Detectability

- A fault f is detectable
  - if there exists a test t that detects f
  - otherwise, f is an undetectable fault
- For an undetectable fault f
  - no test can simultaneously activate f and create a sensitized path to some primary output

Undetectable Fault

- (figure: the stuck-at-0 fault at the output of G1 is undetectable, and the related circuitry can be removed)
- Undetectable faults do not change the function of the circuit
- The related circuitry can be deleted to simplify the circuit
Test Set

- Complete detection test set
  - A set of tests that detects any detectable fault in a designated set of faults
- Quality of a test set
  - is measured by fault coverage
- Fault coverage
  - Fraction of the faults detected by a test set
  - Can be determined by fault simulation
  - >95% is typically required under the single stuck-at fault model
  - >99.9% is required in the ICs manufactured by IBM

Typical Test Generation Flow

- Start → select the next target fault → generate a test for it (to be discussed) → fault-simulate the test (to be discussed) → discard all faults detected by the test → if more faults remain, loop back; otherwise done

Fault Equivalence

- Distinguishing test
  - A test t distinguishes faults α and β if zα(t) ≠ zβ(t) for some PO function z
- Equivalent faults
  - Two faults α and β are said to be equivalent in a circuit iff the function under α is equal to the function under β for every input assignment (sequence) of the circuit
  - That is, no test can distinguish α and β, i.e., test-set(α) = test-set(β)
- Equivalent fault classes at a single gate (the equivalent faults have the same effect at the gate output):
  - AND gate: all s-a-0 faults are equivalent
  - OR gate: all s-a-1 faults are equivalent
  - NAND gate: all input s-a-0 faults and the output s-a-1 fault are equivalent
  - NOR gate: all input s-a-1 faults and the output s-a-0 fault are equivalent
  - Inverter: input s-a-1 and output s-a-0 are equivalent; input s-a-0 and output s-a-1 are equivalent
Equivalence Fault Collapsing

- After equivalence collapsing, only n + 2 single stuck-at faults, instead of 2(n + 1), need to be considered for an n-input AND (or OR) gate

Equivalent Fault Group

- In a combinational circuit
  - Many faults may form an equivalence group
  - These equivalent faults can be found in reverse topological order, from the POs to the PIs
- (figure: a small netlist annotated with s-a-0/s-a-1 faults; the three s-a-0 faults along one gate chain form a single equivalence group — the three faults shown are equivalent!)
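As a concrete illustration of the n + 2 count, here is a Python sketch (an assumption-level model, not the slides' tool) of equivalence collapsing at a single n-input AND gate: the n input s-a-0 faults and the output s-a-0 fault merge into one representative.

def collapsed_and_gate_faults(n):
    """Equivalence-collapse the 2(n+1) stuck-at faults of an n-input AND gate."""
    faults = [(f"in{i}", v) for i in range(n) for v in (0, 1)]
    faults += [("out", 0), ("out", 1)]          # 2(n+1) faults in total
    # Merge the equivalence class {in0/0, ..., in(n-1)/0, out/0}
    # into the single representative out/0.
    return [f for f in faults if f[1] == 1] + [("out", 0)]

print(len(collapsed_and_gate_faults(2)))        # 4 = n + 2 for n = 2

Dominance collapsing (next slide) would shrink this set further, since the output s-a-1 fault dominates each input s-a-1 fault.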

Fault Dominance

- Dominance relation
  - A fault β is said to dominate another fault α in an irredundant circuit iff every test (sequence) for α is also a test (sequence) for β, i.e., test-set(α) ⊆ test-set(β)
  - There is then no need to consider the dominating fault β for fault detection: detecting the harder fault α automatically detects β
- Dominance relations at a single gate (the input fault is harder to test, the output fault easier):
  - AND gate: output s-a-1 dominates any input s-a-1
  - NAND gate: output s-a-0 dominates any input s-a-1
  - OR gate: output s-a-0 dominates any input s-a-0
  - NOR gate: output s-a-1 dominates any input s-a-0
- Dominance fault collapsing
  - Reducing the set of faults to be analyzed based on the dominance relation
Stem vs. Branch Faults

- Example (figure): stem C fans out into branches A and B; the circuit computes z = A·D ⊕ B·E, so in the fault-free circuit z = CD ⊕ CE
  - Detect A s-a-1: z(t) ⊕ zf(t) = (CD ⊕ CE) ⊕ (D ⊕ CE) = C'D, i.e., (C=0, D=1)
  - Detect C s-a-1: z(t) ⊕ zf(t) = (CD ⊕ CE) ⊕ (D ⊕ E) = C'(D ⊕ E), i.e., (C=0, D=1, E=0) or (C=0, D=0, E=1)
  - Hence C s-a-1 does not dominate A s-a-1
- In general, there might be no equivalence or dominance relations between stem and branch faults

Analysis of a Single Gate

- For a two-input AND gate C = A·B:
  - Fault equivalence class: (A s-a-0, B s-a-0, C s-a-0)
  - Fault dominance relations: (C s-a-1 > A s-a-1) and (C s-a-1 > B s-a-1)
  - Faults that can be ignored: A s-a-0, B s-a-0, and C s-a-1

  Faulty output values (blank where equal to the fault-free value C):

  AB   C   A sa1   B sa1   C sa1   A sa0   B sa0   C sa0
  00   0                     1
  01   0     1               1
  10   0             1       1
  11   1                               0       0       0

Fault Collapsing

- Collapse faults by fault equivalence and dominance
  - For an n-input gate, only n + 1 faults need to be considered in test generation
- (figure: a small circuit with PIs a, b, c and gates d, e, annotated with the remaining collapsed stuck-at faults)

Dominance Graph

- Rule
  - When fault α dominates fault β, an arrow points from α to β
- Application
  - Find the transitive dominance relations among faults
- (figure: dominance graph over the faults a/0, a/1, d/0, d/1, e/0, e/1 of the example circuit)
Fault Collapsing Flow

- Start → sweep the netlist from the POs to the PIs to find the equivalent fault groups (equivalence analysis) → sweep the netlist to construct the dominance graph (dominance analysis) → discard the dominating faults → select a representative fault from each remaining equivalence group → generate the collapsed fault list → done

Prime Fault

- α is a prime fault if every fault that is dominated by α is also equivalent to α
- Representative set of prime faults (RSPF)
  - A set that consists of exactly one prime fault from each equivalence class of prime faults
  - A true minimal RSPF is difficult to find

Why Fault Collapsing?

- Saves memory and CPU time
- Eases test generation and fault simulation
- Exercise (figure): a small example circuit collapses 30 total faults into 12 prime faults

Checkpoint Theorem

- Checkpoints for test generation
  - A test set that detects every fault on the primary inputs and fanout branches is complete, i.e., this test set detects all other faults too
  - Therefore, primary inputs and fanout branches form a sufficient set of checkpoints in test generation
  - A stem is not a checkpoint!
- In fanout-free combinational circuits (i.e., every gate has only one fanout), the primary inputs are the checkpoints
Why Inputs + Branches Are Enough?

- Example (figure, checkpoints marked in blue)
  - Sweep the circuit from the PIs to the POs, examining every gate, e.g., in the order A → B → C → D → E
  - For each gate, the output faults are detected if every input fault is detected

Fault Collapsing + Checkpoint

- Example (figure: circuit with PIs a, b, c and internal nets d, e, f, g, h) — see the checkpoint sketch after this list
  - 10 checkpoint faults
  - Equivalences: a s-a-0 ⇔ d s-a-0, c s-a-0 ⇔ e s-a-0; dominances: b s-a-0 > d s-a-0, b s-a-1 > d s-a-1
  - 6 faults are enough

Outline

- Fault Modeling
- Fault Simulation
- Automatic Test Pattern Generation
- Design for Testability

Why Fault Simulation?

- To evaluate the quality of a test set
  - i.e., to compute its fault coverage
- As part of an ATPG program
  - A vector usually detects multiple faults
  - Fault simulation is used to compute the faults accidentally detected by a particular vector
- To construct a fault dictionary
  - For post-testing diagnosis
Conceptual Fault Simulation

- (figure: the same patterns (sequences of vectors) are applied at the primary inputs of the fault-free circuit and of faulty circuit #1 (A/0), #2 (B/1), ..., #n (D/0); the primary-output responses are compared to decide which faults are detected)
- Logic simulation is performed on both the good (fault-free) and the faulty circuits

Some Basics for Logic Simulation

- In fault simulation, the main concern is functional faults; gate delays are assumed to be zero unless delay faults are considered
- Logic values can be either {0, 1} (two-value simulation) or {0, 1, X} (three-value simulation)
- Two simulation mechanisms:
  - Compiled-code evaluation: the circuit is translated into a program, and all gates are executed for each pattern (may involve redundant computation)
  - Event-driven evaluation: simulating a vector is viewed as a sequence of value-change events propagating from the PIs to the POs; only the logic gates affected by the events are re-evaluated

Event-Driven Simulation

- Flow: start → initialize the events at the PIs in the event queue → pick an event and evaluate its effect → schedule the newly born events in the event queue, if any → repeat while more events remain in the queue → done
- (figure: value changes at the PIs A-D propagate as events through gates G1 and G2 to the outputs E and Z; only the gates affected by the events are re-evaluated)

Complexity of Fault Simulation

- Complexity ~ F·P·G ~ O(G^3), where G is the gate count, F the fault count, and P the pattern count (F and P each grow roughly with G)
- The complexity is higher than logic simulation by a factor of F, while it is usually much lower than ATPG
- The complexity can be greatly reduced using fault collapsing and other advanced techniques
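A compact event-driven two-value simulator fits in a few lines. The Python sketch below is an illustration under an assumed netlist format (each gate is (output, function, inputs), listed with its fanout computed on the fly); only gates touched by events are re-evaluated.

from collections import deque

def simulate(gates, values, changed_pis):
    """Propagate value-change events from the given PIs to the POs."""
    fanout = {}                                # net -> gates reading it
    for out, fn, ins in gates:
        for i in ins:
            fanout.setdefault(i, []).append((out, fn, ins))
    queue = deque(changed_pis)                 # initial events at the PIs
    while queue:
        net = queue.popleft()
        for out, fn, ins in fanout.get(net, []):
            new = fn(*(values[i] for i in ins))
            if new != values[out]:             # value change: a new event
                values[out] = new
                queue.append(out)
    return values

AND = lambda a, b: a & b
NOR = lambda a, b: 1 - (a | b)
gates = [("E", AND, ["A", "B"]), ("Z", NOR, ["E", "C"])]
vals = {"A": 1, "B": 1, "C": 0, "E": 1, "Z": 0}
vals["B"] = 0                                  # event at PI B
print(simulate(gates, vals, ["B"]))            # E falls to 0, Z rises to 1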
Characteristics of Fault Simulation

- Fault activity with respect to the fault-free circuit is often sparse, both in time and in space
- For example (figure): F1 (s-a-0) is not activated by the given pattern at all, while F2 (s-a-0) affects only the lower part of the circuit

Fault Simulation Techniques

- Parallel fault simulation
- Deductive fault simulation

Parallel Fault Simulation

- Simulate multiple circuits simultaneously
  - Exploit the inherent parallelism of computer-word operations to simulate faulty circuits in parallel with the fault-free circuit
  - The number of faulty circuits (faults) that can be processed simultaneously is limited by the word length, e.g., 32 circuits for a 32-bit computer
- Complications
  - An event, i.e., a value change on a single faulty or fault-free circuit, leads to the computation of an entire word
  - The fault-free logic simulation is repeated for each pass
- Example (figure): three faults, J s-a-0, B s-a-1, and F s-a-0, are packed into one word together with the fault-free circuit FF; each signal carries one bit per machine in the bit-space, and the faulty bits that differ from the FF bit at the PO mark the detected faults
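The bit-packing idea is easiest to see in code. The Python sketch below is an illustrative assumption (the gate types and signal names only loosely follow the figure): bit 0 carries the fault-free machine, bits 1-3 the three faulty machines.

MASK = 0b1111                        # 4 machines: fault-free (bit 0) + 3 faults

def inject(value, machine_bit, stuck_value):
    """Force one machine's copy of a signal to its stuck value."""
    return value | machine_bit if stuck_value else value & (MASK ^ machine_bit)

# Pattern (A, B, F) = (1, 0, 1), replicated across all machines.
A, B, F = MASK, 0b0000, MASK
B = inject(B, 0b0100, 1)             # machine 2: B stuck-at-1
F = inject(F, 0b1000, 0)             # machine 3: F stuck-at-0
E = A & B                            # AND gate (gate types are assumptions)
H = E | F                            # OR gate
J = inject(H, 0b0010, 0)             # PO J, with machine 1: J stuck-at-0
good = MASK if J & 1 else 0          # replicate the fault-free PO bit
print(bin((J ^ good) & (MASK ^ 1)))  # 0b1010: J/0 and F/0 detected, B/1 not

One word-wide gate evaluation simulates all four machines at once, which is exactly where the speedup over serial fault simulation comes from.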
Deductive Fault Simulation

- Simulate all faulty circuits in one pass
  - For each pattern, sweep the circuit from the PIs to the POs
  - During the sweep, a fault list is associated with each wire: the set of faults that would produce a fault effect on that wire
  - The union of the fault lists at the POs contains the faults detected by the simulated input vector
- The main operation is fault-list propagation
  - It depends on the gate types and the signal values
  - The list sizes may grow dynamically, leading to a potential memory-explosion problem

Illustration of Fault List Propagation

- Consider a two-input AND gate C = A·B with input fault lists L_A and L_B, where L̄_A denotes the set of all faults not in L_A:
  - Non-controlling case (A=1, B=1, C=1 fault-free): L_C = L_A ∪ L_B ∪ {C/0}
  - Controlling case (A=1, B=0, C=0 fault-free): L_C = (L̄_A ∩ L_B) ∪ {C/1}
  - Controlling case (A=0, B=0, C=0 fault-free): L_C = (L_A ∩ L_B) ∪ {C/1}
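The three cases translate directly into set operations. A Python sketch (an illustration; fault names like "B/1" and the explicit fault universe are assumptions of this sketch):

def and_fault_list(a, b, la, lb, universe, out="C"):
    """Deductive fault-list propagation through a 2-input AND gate C = A&B."""
    if a == 1 and b == 1:                        # no controlling input
        return la | lb | {f"{out}/0"}
    if a == 1 and b == 0:                        # B alone is controlling
        return ((universe - la) & lb) | {f"{out}/1"}
    if a == 0 and b == 1:                        # symmetric case
        return (la & (universe - lb)) | {f"{out}/1"}
    return (la & lb) | {f"{out}/1"}              # both controlling (A=B=0)

U = {"A/1", "B/1"}
print(and_fault_list(1, 0, set(), {"B/1"}, U))   # {'B/1', 'C/1'}

The controlling-case intersection captures the intuition: with A=1 and B=0, the output flips only for faults that flip B without also flipping A.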

Rule of Fault List Propagation

- (figure: the propagation rules for the basic gate types, derived case by case as for the AND gate above)

Deductive Fault Simulation — Example (1/4)

- Consider three faults, B/1, F/0, and J/0, under (A, B, F) = (1, 0, 1); B fans out into branches C and D, gate G is driven by A and C, gate E by D, gate H by E and F, and the PO J by G and H
- Fault lists at the PIs:
  - L_B = {B/1}, L_F = {F/0}, L_A = ∅, L_C = L_D = {B/1}
Deductive Fault Simulation — Example (2/4)

- Same three faults B/1, F/0, J/0 under (A, B, F) = (1, 0, 1)
- Fault lists at G and E:
  - L_G = (L̄_A ∩ L_C) ∪ {G/1} = {B/1, G/1}
  - L_E = L_D ∪ {E/0} = {B/1, E/0}

Deductive Fault Simulation — Example (3/4)

- Fault list at H:
  - L_H = L_E ∪ L_F ∪ {H/0} = {B/1, E/0, F/0, H/0}

Deductive Fault Simulation — Example (4/4)

- Final fault list at the PO J:
  - L_J = (L_H - L_G) ∪ {J/0} = {E/0, F/0, J/0}
  - The fault B/1, present on both inputs of J, cancels out and is not detected by this vector

Deductive Fault Simulation — Example (cont'd)

- The same three faults under the new vector (A, B, F) = (0, 0, 1); event-driven updates give:
  - L_B = {B/1}, L_F = {F/0}, L_A = ∅, L_C = L_D = L_E = {B/1}
  - L_G = {G/1}, L_H = {B/1, F/0}, L_J = {B/1, F/0, J/0}
Outline

- Fault Modeling
- Fault Simulation
- Automatic Test Pattern Generation (ATPG)
  - Functional approach: Boolean difference
  - Structural approach: D-algorithm, PODEM
- Design for Testability

Typical ATPG Flow

- 1st phase: random test pattern generation

Typical ATPG Flow (cont'd)

- 2nd phase: deterministic test pattern generation

Test Pattern Generation

- The test set T of a fault α with respect to some PO z can be computed by

  T_α(x) = z(x) ⊕ z_α(x)

  where z_α is the output function of the faulty circuit
- A test pattern can be fully specified or partially specified, depending on whether the values of all PIs are assigned
- Example

  abc   z   z_α
  000   0   0
  001   0   0
  010   0   0
  011   0   0
  100   0   0
  101   1   1
  110   1   0
  111   1   0

  Input vectors (1,1,0) and (1,1,-) are fully and partially specified test patterns of fault α, respectively.
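For small circuits, T_α(x) = z(x) ⊕ z_α(x) can be enumerated exhaustively. The Python sketch below is an assumption: the two functions are stand-ins chosen to reproduce the truth table above, since the slides do not name the circuit.

from itertools import product

def z(a, b, c):            # fault-free function matching the table: a & (b | c)
    return a & (b | c)

def z_alpha(a, b, c):      # faulty function matching the table: a & c & ~b
    return a & c & (1 - b)

tests = [v for v in product((0, 1), repeat=3) if z(*v) != z_alpha(*v)]
print(tests)               # [(1, 1, 0), (1, 1, 1)] -> the pattern (1, 1, -)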
Structural Test Generation — D-Algorithm

- Test generation from the circuit structure
- Two basic goals
  - (1) Fault activation (FA)
  - (2) Fault propagation (FP)
  - Both require line justification (LJ), i.e., finding input combinations that force certain signals to their desired values
- Notation
  - 1/0 is denoted D, meaning that the good value is 1 while the faulty value is 0
  - Similarly, 0/1 is denoted D'
  - Both D and D' are called fault effects (FE)
- Fault activation
  - Setting the faulty signal to either 0 or 1 is a line justification problem
- Fault propagation
  - 1. Select a path to a PO (decisions)
  - 2. Once the path is selected, a set of line justification problems must be solved
- Line justification
  - Involves decisions or implications
  - Incorrect decisions require backtracking
  - Example: for an AND gate c = a·b, justifying c = 1 implies a = 1 and b = 1 (implication); justifying c = 0 requires choosing a = 0 or b = 0 (decision)

Structural Test Generation — D-Algorithm: Fault Propagation

- Example (figure: circuit with gates G1-G6 and the corresponding decision tree over the D-frontier {G5, G6})
  - Fault activation: G1 = 0 → {a=1, b=1, c=1} → {G3=0}
  - Fault propagation: through G5 or G6
    - Decision through G5: G2=1 → {d=0, a=0} → inconsistency at a → backtrack!!
    - Decision through G6: G4=1 → e=0 → done!! The resulting test is (111x0)
- D-frontier: the gates whose output value is x while one or more inputs are D or D'; for example, initially the D-frontier is {G5, G6}

Structural Test Generation — D-Algorithm: Line Justification

- Example (figure: circuit and decision tree for justifying q=1 and r=1)
  - FA → set h to 0; FP → e=1, f=1 (so o=0); FP → q=1, r=1
  - To justify q=1 → l=1 or k=1
    - Decision l=1 → c=1, d=1 → m=0, n=0 → r=0 → inconsistency at r → backtrack!
    - Decision k=1 → a=1, b=1
  - To justify r=1 → m=1 or n=1 (c=0 or d=0) → done! (J-frontier is ∅)
- J-frontier: the set of gates whose output value is known (i.e., 0 or 1) but is not yet implied by its input values; here, initially the J-frontier is {q=1, r=1}
Test Generation

- A branch-and-bound search
  - Every decision point is a branching point
  - If a set of decisions leads to a conflict, a backtrack is taken to explore other decisions
  - A test is found when (1) the fault effect is propagated to a PO and (2) all internal lines are justified
  - If no test is found after all possible decisions are tried, the target fault is undetectable
  - Since the search is exhaustive, it will find a test if one exists
- For a combinational circuit, an undetectable fault is also a redundant fault
  - This can be used to simplify the circuit

Implication

- Implication
  - Computes the values that can be uniquely determined
  - Local implication: propagation of values from one line to its immediate successors or predecessors
  - Global implication: propagation involving a larger area of the circuit and re-convergent fanout
- Maximum implication principle
  - Perform as many implications as possible
  - It helps either to reduce the number of problems that need decisions or to reach an inconsistency sooner

Forward Implication

- (figure: before/after value pairs showing how known input values force a gate output — e.g., a 0 on an AND input implies output 0 — and how a D or D' input with the remaining inputs unknown adds the gate to the D-frontier, while an unresolved required output adds it to the J-frontier)

Backward Implication

- (figure: before/after value pairs showing how a required output value forces input values — e.g., a required 1 at an AND output implies all inputs 1 — and how cases with a remaining choice add the gate to the J-frontier)
D-Algorithm (1/4)

- Example with the five logic values {0, 1, x, D, D'}; the fault is activated with a=0, b=1, c=1, giving g = D
- Try to propagate the fault effect through G1: set d to 1 (giving i = D')
- Try to propagate the fault effect through G2: set j, k, l, m to 1 — conflict at k (k must carry D', not 1) → backtrack!

D-Algorithm (2/4)

- With the next D-frontier chosen, k carries D'
- Try to propagate the fault effect through G2: set j, l, m to 1 — conflict at m (m must carry D', not 1) → backtrack!

D-Algorithm (3/4)

- Try to propagate the fault effect through G2: set j and l to 1
- Fault propagation and line justification are now both complete → a test is found!

D-Algorithm (4/4)

- The complete decision/implication trace: activate the fault (a=0, h=1; b=1, c=1 by unique D-drive; g=D); propagate via i (decision d=1, implying d'=0 and i=D'); propagate via k (e=1, implying e'=0 and k=D'); propagate via n (decisions j=1, l=1); propagate via m (f=1, implying f'=0, m=D', and n=D)
- Decision orderings that force k or m to 1 while they must carry D' end in contradictions and are undone by backtracking
- This is a case of multiple path sensitization: the fault effect reaches the PO n through both k and m
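The five-valued algebra behind these steps is mechanical to implement. Below is a Python sketch (an illustration, not the course's code) encoding each composite value as a (good, faulty) pair; it shows, for instance, why an AND of D and D' kills the fault effect.

ENC = {"0": (0, 0), "1": (1, 1), "D": (1, 0), "D'": (0, 1), "x": None}
DEC = {v: k for k, v in ENC.items() if v is not None}

def d_and(a, b):
    """AND in the 5-valued D-calculus {0, 1, x, D, D'}."""
    if "0" in (a, b):                 # a 0 dominates even an unknown input
        return "0"
    if ENC[a] is None or ENC[b] is None:
        return "x"
    g = ENC[a][0] & ENC[b][0]         # good-circuit value
    f = ENC[a][1] & ENC[b][1]         # faulty-circuit value
    return DEC[(g, f)]

print(d_and("D", "1"))    # D   (the fault effect propagates)
print(d_and("D", "D'"))   # 0   (re-convergent effects cancel)
print(d_and("D", "x"))    # x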
Decision Tree on D-Frontier

- The decision tree
  - Node ↔ D-frontier
  - Branch ↔ decision taken
- A depth-first search (DFS) strategy is often used
- The decision points are at internal lines
- The worst-case number of backtracks is exponential in the number of decision points (e.g., at least 2^k for k decision nodes)

PODEM Algorithm

- PODEM: Path-Oriented DEcision Making
- Fault activation (FA) and fault propagation (FP) lead to sets of line justification (LJ) problems, which can be solved via value assignments
- In the D-algorithm
  - Test generation is done through indirect signal assignments for FA, FP, and LJ that eventually map into assignments at the PIs
- In PODEM
  - Test generation is done through a sequence of direct assignments at the PIs
  - Decision points are at the PIs, so the number of backtrackings may be smaller

PODEM Algorithm — Search Space

- Complete search space
  - A binary tree with 2^n leaf nodes, where n is the number of PIs
- Fast test generation
  - Needs to find a path leading to a SUCCESS terminal quickly
- (figure: binary decision tree over PIs a, b, c, d with failure (F) and success (S) leaves)

PODEM Algorithm — Objective and Backtrace

- PODEM
  - Also aims at establishing a sensitization path based on fault activation and propagation, like the D-algorithm
  - Instead of justifying the signal values required for sensitizing the selected path, objectives are set up to guide the decision process at the PIs
- Objective
  - A signal-value pair (w, vw)
- Backtrace
  - Maps a desired objective into a PI assignment that is likely to contribute to the achievement of the objective
  - A process that traverses the circuit back from the objective signal to the PIs
  - The result is a PI signal-value pair (x, vx)
  - No signal value is actually assigned during backtrace (toward the PI)!
PODEM Algorithm — Objective

- The Objective routine involves
  - selection of a D-frontier gate G
  - selection of an unspecified input of G

Objective() {
  /* The target fault is w s-a-v */
  /* Let variable obj be a signal-value pair */
  if (the value of w is x) obj = (w, v');     /* fault activation */
  else {
    select a gate (G) from the D-frontier;    /* fault propagation */
    select an input (j) of G with value x;
    c = controlling value of G;
    obj = (j, c');
  }
  return (obj);
}

PODEM Algorithm — Backtrace

- The Backtrace routine involves
  - finding an all-x path from the objective site to a PI, i.e., every signal on this path has value x

Backtrace(w, vw) {
  /* Maps an objective into a PI assignment */
  G = w;                        /* objective node */
  v = vw;                       /* objective value */
  while (G is a gate output) {  /* not reached a PI yet */
    inv = inversion of G;
    select an input (j) of G with value x;
    G = j;                      /* new objective node */
    v = v XOR inv;              /* new objective value */
  }
  return (G, v);                /* G is now a PI */
}

PODEM Algorithm — PI Assignment

- Example with PIs {a, b, c, d} and current assignment {a=0}:
  - Decision b=0 → objective fails; reverse decision b=1
  - Decision c=0 → objective fails; reverse decision c=1
  - Decision d=0 → SUCCESS
- Failure means the fault effect cannot be propagated to any PO under the current PI assignments

PODEM() /* using depth-first search */
begin
  if (error at a PO) return (SUCCESS);
  if (test not possible) return (FAILURE);
  (k, vk) = Objective();           /* choose a line to be justified */
  (j, vj) = Backtrace(k, vk);      /* choose the PI to be assigned */
  Imply(j, vj);                    /* make a decision */
  if (PODEM() == SUCCESS) return (SUCCESS);
  Imply(j, vj');                   /* reverse the decision */
  if (PODEM() == SUCCESS) return (SUCCESS);
  Imply(j, x);
  return (FAILURE);
end
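As a working illustration of this recursion, here is a self-contained Python sketch of PODEM on a tiny combinational circuit. It is an assumption-level model, not the course's reference code: three-valued simulation of the good and faulty circuits replaces the explicit five-valued bookkeeping, and the netlist format and demo circuit are made up for the example.

GATES = {"e": ("AND", ["a", "b"]),     # topological order: net -> (type, inputs)
         "f": ("NOT", ["c"]),
         "z": ("OR",  ["e", "f"])}
PIS, POS = ["a", "b", "c"], ["z"]
CTRL = {"AND": 0, "OR": 1}             # controlling values

def ev(typ, ins):                      # 3-valued gate evaluation (None = x)
    if typ == "NOT":
        return None if ins[0] is None else 1 - ins[0]
    c = CTRL[typ]
    if c in ins: return c
    return None if None in ins else 1 - c

def simulate(pi_vals, fault):
    v = dict(pi_vals)
    if fault and fault[0] in v: v[fault[0]] = fault[1]
    for net, (typ, ins) in GATES.items():
        v[net] = ev(typ, [v[i] for i in ins])
        if fault and net == fault[0]: v[net] = fault[1]
    return v

def backtrace(net, val, good):         # follow x-values back to a PI
    while net in GATES:
        typ, ins = GATES[net]
        if typ == "NOT": val = 1 - val
        net = next(i for i in ins if good.get(i) is None)
    return net, val

def podem(pis, fault):
    good, bad = simulate(pis, None), simulate(pis, fault)
    if any(good[o] is not None and bad[o] is not None and good[o] != bad[o]
           for o in POS):
        return dict(pis)               # fault effect reached a PO: SUCCESS
    site, stuck = fault
    if good[site] == stuck:
        return None                    # fault can no longer be activated
    dfront = [n for n, (t, ins) in GATES.items()
              if (good[n] is None or bad[n] is None)
              and any(good[i] is not None and bad[i] is not None
                      and good[i] != bad[i] for i in ins)]
    if good[site] is None:
        obj = (site, 1 - stuck)        # objective: fault activation
    elif dfront:
        typ, ins = GATES[dfront[0]]
        j = next((i for i in ins if good[i] is None), None)
        if j is None: return None
        obj = (j, 1 - CTRL.get(typ, 0))  # objective: fault propagation
    else:
        return None                    # D-frontier empty: backtrack
    pi, v = backtrace(*obj, good)
    for val in (v, 1 - v):             # decision, then its reversal
        pis[pi] = val
        t = podem(pis, fault)
        if t: return t
    pis[pi] = None                     # give up on this PI
    return None

print(podem({p: None for p in PIS}, ("e", 0)))  # {'a': 1, 'b': 1, 'c': 1}

For the fault e stuck-at-0, the sketch assigns a=1, b=1 to activate the fault (e = D) and c=1 to make the OR-gate side input non-controlling, matching the objective/backtrace flow in the pseudocode above.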
PODEM Algorithm (1/4)

- Example: select D-frontier G2 and set the objective to (k, 1)
  - Backtrace yields e = 0, which breaks the sensitization across G2 (j = 0) → backtrack!

PODEM Algorithm (2/4)

- Select D-frontier G3 and set the objective to (e, 1)
  - No backtrace is needed → success at G3

PODEM Algorithm (3/4)

- Select D-frontier G4 and set the objective to (f, 1)
  - No backtrace is needed → success at G4 and G2
  - D appears at one PO → a test is found!!

PODEM Algorithm (4/4)

  Objective   PI assignment   Implications          D-frontier   Comments
  a=0         a=0             h=1                   g
  b=1         b=1                                   g
  c=1         c=1             g=D                   i, k, m
  d=1         d=1             d'=0, i=D'            k, m, n
  k=1         e=0             e'=1, j=0, k=1, n=1   m            no solutions! → backtrack
              e=1             e'=0, j=1, k=D'       m, n         flip the PI assignment
  l=1         f=1             f'=0, l=1, m=D', n=D               D at a PO: test found

- Note: PI assignments need to be reversed during backtracking
PODEM Algorithm — Decision Tree

- Decision node: the PI selected through backtrace for value assignment
- Branch: a value assignment to the selected PI
- (figure: decision tree over the PI assignments a=0, b=1, c=1, d=1, e=0/1, f=1, with one failing branch and one successful branch)

Termination Conditions

- D-algorithm
  - Success:
    (1) the fault effect is at an output (the D-frontier may be non-empty), and
    (2) the J-frontier is empty
  - Failure:
    (1) the D-frontier is empty (all possible paths are false), or
    (2) the J-frontier cannot be emptied
- PODEM
  - Success:
    - The fault effect is seen at an output
  - Failure:
    - Every PI assignment leads to failure, in which the D-frontier is empty while the fault has been activated

PODEM Overview

- PODEM
  - Examines all possible input patterns implicitly but exhaustively (branch-and-bound) to find a test
  - Complete like the D-algorithm (i.e., it will find a test if one exists)
- Other key features
  - No J-frontier, since there are no values that require justification
  - No consistency check, as conflicts can never occur
  - No backward implication, because values are propagated only forward
  - Backtracking is implicit, done by simulation rather than by an explicit and time-consuming save/restore process
  - Experiments show that PODEM is generally faster than the D-algorithm

Outline

- Fault Modeling
- Fault Simulation
- Automatic Test Pattern Generation
- Design for Testability
Why DFT?

- Direct testing is way too difficult!
  - Large number of FFs
  - Embedded memory blocks
  - Embedded analog blocks

Design for Testability

- Definition
  - Design for testability (DFT) refers to those design techniques that make test generation and testing cost-effective
- DFT methods
  - Ad-hoc methods, full and partial scan, built-in self-test (BIST), boundary scan
- Cost of DFT
  - Pin count, area, performance, design time, test time, etc.

Important Factors

- Controllability
  - Measures the ease of controlling a line
- Observability
  - Measures the ease of observing a line at a PO
- DFT deals with ways of improving controllability and observability

Test Point Insertion

- Employ test points to enhance controllability and observability
  - CP: control points — primary inputs used to enhance controllability
  - OP: observation points — primary outputs used to enhance observability
- (figure: a 0-CP, a 1-CP, and an OP added to internal lines of a circuit)
Control Point Insertion

- Inserted circuit for controlling line w: a MUX selects between the original signal C1 and the control point CP, driven by CP_enable, to produce C2 (a behavioral sketch follows below)
  - Normal operation: set CP_enable = 0
  - Inject 0: set CP_enable = 1 and CP = 0
  - Inject 1: set CP_enable = 1 and CP = 1

Control Point Selection

- Goal
  - Improve the controllability of the fanout cone of the added point
- Common selections
  - Control, address, and data buses
  - Enable/hold inputs
  - Enable and read/write inputs to memories
  - Clock and preset/clear signals of flip-flops
  - Data-select inputs to multiplexers and demultiplexers
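The inserted MUX is just a 2-to-1 select; a minimal Python sketch of its behavior (names follow the slide, the function itself is illustrative):

def controlled_line(w, cp, cp_enable):
    """Value seen downstream of the control-point MUX on line w."""
    return cp if cp_enable else w

assert controlled_line(w=1, cp=0, cp_enable=0) == 1   # normal operation
assert controlled_line(w=1, cp=0, cp_enable=1) == 0   # inject 0
assert controlled_line(w=0, cp=1, cp_enable=1) == 1   # inject 1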

Observation Point Selection

- Goal
  - Improve the observability of the transitive fanins of the added point
- Common choices
  - Stem lines with many fanouts
  - Global feedback paths
  - Redundant signal lines
  - Outputs of logic devices having many inputs (MUX and XOR trees)
  - Outputs from state devices
  - Address, control, and data buses

Problems with Test Point Insertion

- Large number of I/O pins
  - Can be resolved by adding MUXes to reduce the number of I/O pins, or by adding shift registers to impose CP values
- (figure: shift register R1 imposes control values X' alongside the primary inputs X, and shift register R2 captures observation values Z' alongside the primary outputs Z)
What Is Scan?

- Objective
  - Provide controllability and observability at the internal state variables for testing
- Method
  - Add test-mode control signal(s) to the circuit
  - Connect the flip-flops to form shift registers in test mode
  - Make the inputs/outputs of the flip-flops in the shift register controllable and observable
- Types
  - Internal scan: full scan, partial scan, random access
  - Boundary scan

Scan Concept

- (figure: combinational logic with its flip-flops chained into a shift register; a mode switch selects normal or test operation, a scan-in pin feeds the first FF, and a scan-out pin observes the last)

Logic Design before Scan Insertion

- (figure: combinational logic between input pins and output pins, with state flip-flops q1, q2, q3 on a common clock and a g stuck-at-0 fault deep in the logic)
- Sequential ATPG is extremely difficult, due to the lack of controllability and observability at the flip-flops

Logic Design after Scan Insertion

- (figure: the same design with a MUX in front of each flip-flop's D input, a scan-input pin, a scan-enable signal, and a scan-output pin, chaining q1 → q2 → q3)
- The scan chain provides easy access to the flip-flops
- Pattern generation becomes much easier!!
Scan Insertion

- Example: a 3-stage counter
  - Without scan, it takes 8 clock cycles to set the flip-flops to (1, 1, 1) for detecting the target fault g stuck-at-0 (2^20 cycles for a 20-stage counter!); with scan, the desired state is simply shifted in — see the sketch after the overhead table below

Overhead of Scan Design

- Case study
  - #CMOS gates = 2000
  - Fraction of flip-flops = 0.478
  - Fraction of normal routing = 0.471

  Scan             Predicted   Actual area   Normalized operating
  implementation   overhead    overhead      frequency
  None             0           0             1.0
  Hierarchical     14.05%      16.93%        0.87
  Optimized        14.05%      11.9%         0.91
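A minimal Python sketch of why scan helps the counter example (an illustration, not a tool flow): in test mode the flip-flops form a shift register, so any state is reachable in one shift per flip-flop.

def shift_in(chain, pattern):
    """Shift a pattern into the scan chain, one bit per test clock."""
    for bit in pattern:
        chain = [bit] + chain[:-1]     # one clock: shift the chain by one FF
    return chain

print(shift_in([0, 0, 0], [1, 1, 1]))  # [1, 1, 1] after only 3 clocks

Three shift clocks replace the 8 functional clocks needed to count up to (1, 1, 1), and the gap grows exponentially with the counter length.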

Full Scan Problems

- Problems
  - Area overhead
  - Possible performance degradation
  - High test application time
  - Power dissipation
- Features of commercial tools
  - Scan-rule violation check (e.g., DFT rule check)
  - Scan insertion (converting a FF to its scan version)
  - ATPG (both combinational and sequential)
  - Scan-chain reordering after layout

Scan-Chain Reordering

- The scan-chain order is often decided at the gate level, without knowing the cell placement
- The scan chain consumes a lot of routing resources; this can be minimized by re-ordering the flip-flops in the chain after layout is done
- (figure: scan-in to scan-out routing over the layout of a cell-based design, before and after reordering to a better scan-chain order)
Partial Scan

- Basic idea
  - Select a subset of the flip-flops for scan
  - Lower overhead (area and speed)
  - Relaxed design rules
- Cycle-breaking technique
  - Cheng & Agrawal, IEEE Trans. on Computers, April 1990
  - Select scan flip-flops to simplify sequential ATPG
  - Overhead is about 25% lower than full scan
- Timing-driven partial scan
  - Jou & Cheng, ICCAD, Nov. 1991
  - Allows simultaneous optimization of area, timing, and testability

Full Scan vs. Partial Scan

  Scan design         Full scan          Partial scan
  Scan flip-flops     every flip-flop    NOT every flip-flop
  Scan time           longer             shorter
  Hardware overhead   more               less
  Fault coverage      ~100%              unpredictable
  Ease of use         easier             harder

Area Overhead vs. Test Effort

- (figure: as a design moves from no scan through partial scan to full scan, area overhead increases while test effort, i.e., test generation complexity, decreases)

Conclusions

- Testing
  - Is conducted after manufacturing
  - Must be considered during the design process
- Major fault models
  - Stuck-at, bridging, stuck-open, delay fault, ...
- Major tools needed
  - Design for testability (by scan-chain insertion or built-in self-test)
  - Fault simulation
  - ATPG
- Other applications in CAD
  - ATPG is a way of Boolean reasoning and is applicable to many logic-domain CAD problems
