002 Functions and Circuits
The Boolean Space B^n
• B = {0,1}
• B^2 = {0,1} × {0,1} = {00, 01, 10, 11}
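The space B^n is easy to enumerate directly; a minimal Python sketch (the function name is ours, not from the slides):

```python
from itertools import product

def boolean_space(n):
    """Enumerate all 2**n vertices of B^n as tuples of 0s and 1s."""
    return list(product((0, 1), repeat=n))

print(boolean_space(2))  # the four vertices of B^2
```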
Karnaugh maps and Boolean cubes:
[Figure: Karnaugh maps and Boolean cubes for B^0, B^1, B^2, B^3, B^4]
Boolean Functions
Boolean function: f(x): B^n → B
   B = {0,1}
   x = (x1, x2, ..., xn) ∈ B^n; xi ∈ B
Example (shown as a Karnaugh map over x1 and x2):
f = {((x1=0, x2=0), 0), ((x1=0, x2=1), 1),
     ((x1=1, x2=0), 1), ((x1=1, x2=1), 0)}
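The example function can be written out as an explicit mapping from B^2 to B; this is a direct transcription of the slide's point-value pairs (the function happens to be x1 XOR x2):

```python
# The slide's example function as an explicit mapping from B^2 to B.
f = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Sanity check: this is exactly x1 XOR x2.
for (x1, x2), value in f.items():
    assert value == x1 ^ x2
```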
Boolean Functions
- The onset of f is {x | f(x) = 1} = f⁻¹(1) = f¹
- The offset of f is {x | f(x) = 0} = f⁻¹(0) = f⁰
[Figure: onset cubes of f = x1 and of its complement in B^3 (axes x1, x2, x3)]
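The onset and offset definitions translate directly into code; a small sketch (function names are ours), using f = x1 over B^3 as in the figure:

```python
from itertools import product

def onset(f, n):
    """{x | f(x) = 1}, i.e. f^-1(1)"""
    return {x for x in product((0, 1), repeat=n) if f(*x) == 1}

def offset(f, n):
    """{x | f(x) = 0}, i.e. f^-1(0)"""
    return {x for x in product((0, 1), repeat=n) if f(*x) == 0}

# For f = x1 over B^3, the onset is the face x1 = 1 of the cube.
f = lambda x1, x2, x3: x1
```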
Set of Boolean Functions
• Truth Table or Function table:
x1 x2 x3 | f
 0  0  0 | 1
 0  0  1 | 0
 0  1  0 | 1
 0  1  1 | 0
 1  0  0 | 1
 1  0  1 | 0
 1  1  0 | 1
 1  1  1 | 0
[Figure: the corresponding cube in B^3 (axes x1, x2, x3)]
Boolean Operations -
AND, OR, COMPLEMENT
Cofactor and Quantification
Representation of Boolean Functions
• We need representations for Boolean functions for two reasons:
   – to represent and manipulate the actual circuit we are "synthesizing"
   – as a mechanism for efficient Boolean reasoning
Truth Table
• Truth table (function table):
  The truth table of a function f: B^n → B is a tabulation of its value at
  each of the 2^n vertices of B^n.
  Canonical means that if two functions are the same, then the
  canonical representations of each are isomorphic.
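The canonicity property is easy to demonstrate: tabulating a function over a fixed vertex order gives a representation that is identical for any two formulas of the same function. A sketch (names ours):

```python
from itertools import product

def truth_table(f, n):
    """Canonical form: the tuple of f's values over all 2**n vertices
    of B^n in a fixed enumeration order."""
    return tuple(f(*x) for x in product((0, 1), repeat=n))

# Two syntactically different formulas for the same function
# compare equal once tabulated:
f = lambda a, b: a ^ b
g = lambda a, b: (a & (1 - b)) | ((1 - a) & b)
```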
Boolean Formula
• A Boolean formula is defined as an expression with the following syntax:
Example:
f = (x1 · x2) + (x3) + (x4 · (¬x1))
Typically the "·" is omitted, and parentheses and "¬" are reduced by priority,
e.g.
f = x1x2 + x3 + x4¬x1
Cubes
• A cube is defined as the AND of a set of literal functions ("conjunction" of literals).
Example:
C = x1 · ¬x2 · x3
represents the following function
f = (x1=1)(x2=0)(x3=1)
[Figure: the cubes c = x1, f = x1¬x2, and f = x1¬x2x3 in B^3 (axes x1, x2, x3)]
Cubes
• If C ⊆ f, C a cube, then C is an implicant of f.
Example:
C = x¬y, C ⊆ B^3
k = 2 literals, n = 3 ⇒ |C| = 2^(n−k) = 2^(3−2) = 2
C = {100, 101}
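The size claim |C| = 2^(n−k) can be checked by enumerating the vertices a cube covers; a small sketch for the slide's example over variables (x, y, z), with names of our choosing:

```python
from itertools import product

def cube_vertices(literals, variables):
    """Vertices of B^n covered by a cube: the literals fix k variables,
    the remaining n - k variables range freely over {0, 1}."""
    points = set()
    for assignment in product((0, 1), repeat=len(variables)):
        point = dict(zip(variables, assignment))
        if all(point[v] == val for v, val in literals.items()):
            points.add(assignment)
    return points

# C = x * ~y over B^3: k = 2, n = 3, so |C| = 2**(3 - 2) = 2.
C = cube_vertices({"x": 1, "y": 0}, ("x", "y", "z"))
print(sorted(C))  # {100, 101} written as tuples
```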
List of Cubes
• Sum of products:
• A function can be represented by a sum of cubes (products):
      f = ab + ac + bc
  Since each cube is a product of literals, this is a "sum of products"
  (SOP) representation.
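A cube list evaluates naturally as OR-of-ANDs; a minimal sketch for the SOP above, taking all literals as positive exactly as written on the slide (data layout and names are ours):

```python
# An SOP as a list of cubes; each cube maps a variable to its required value.
sop = [{"a": 1, "b": 1}, {"a": 1, "c": 1}, {"b": 1, "c": 1}]

def eval_cube(cube, point):
    """A cube is 1 iff every one of its literals is satisfied."""
    return all(point[v] == val for v, val in cube.items())

def eval_sop(sop, point):
    """An SOP is 1 iff any of its cubes is 1."""
    return int(any(eval_cube(c, point) for c in sop))
```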
Binary Decision Diagram (BDD)
[Figure: a BDD with terminal nodes 0 and 1]
Boolean Circuits
• Used for two main purposes:
   – as a representation for a Boolean reasoning engine
   – as a target structure for logic implementation, which gets restructured
     in a series of logic synthesis steps until the result is acceptable
Definitions
Definition:
A Boolean circuit is a directed graph C(G, N) where G are the gates and
N ⊆ G × G is the set of directed edges (nets) connecting the gates.
Definitions
The fanin FI(g) of a gate g is the set of all predecessor vertices of g:
FI(g) = {g′ | (g′, g) ∈ N}
Example
[Figure: circuit with inputs I = {1, 2, 3}, internal gates 4–7, and outputs O = {8, 9}]
FI(6) = {2, 4}
FO(6) = {7, 9}
CONE(6) = {1, 2, 4, 6}
SUPPORT(6) = {1, 2}
Circuit Function
• Circuit functions are defined recursively:
      h_gi = xi                                            if gi ∈ I
      h_gi = f_gi(h_gj, ..., h_gk), gj, ..., gk ∈ FI(gi)   otherwise
If G is implemented using physical gates that have positive (bounded) delays for their evaluation, the computation of h_g depends in general on those delays.
Definition:
A circuit C is called combinational if for each input assignment of C the evaluation of h_g for all outputs is independent of the internal state of C.
Proposition:
A circuit C is combinational if it is acyclic.
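The acyclicity condition in the proposition can be checked with an ordinary cycle detection over the fanin relation; a sketch (representation and names are ours), using DFS coloring:

```python
def is_combinational(fanin):
    """A circuit is combinational if its gate graph is acyclic.
    fanin maps each gate to the list of its fanin gates; primary
    inputs need not appear as keys."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {g: WHITE for g in fanin}

    def dfs(g):
        color[g] = GRAY                      # on the current DFS path
        for p in fanin.get(g, []):
            if color.get(p, WHITE) == GRAY:
                return False                 # back edge -> cycle
            if color.get(p, WHITE) == WHITE and not dfs(p):
                return False
        color[g] = BLACK                     # fully explored
        return True

    return all(dfs(g) for g in fanin if color[g] == WHITE)
```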
Cyclic Circuits
Definition:
A circuit C is called cyclic if it contains at least one loop of the form:
((g, g1), (g1, g2), …, (g(n−1), gn), (gn, g)).
[Figure: a loop through gate g]
Cyclic Circuits
[Figure: the loop function h_loop obtained by cutting the cycle at gate g and introducing a cut variable v]
Cyclic Circuits
[Figure: h_loop expressed as a function of the inputs xi]
Circuit Representations
For general circuit manipulation (e.g. synthesis):
• Data structures allow very general mechanisms for insertion and deletion
  of vertices, pins (connections to vertices), and nets
  – general, but far too slow for Boolean reasoning
Circuit Representations
For efficient Boolean reasoning (e.g. a SAT engine):
Boolean Reasoning Engine
Engine application:
- traverse the problem data structure and build the
  Boolean problem using the interface
- call SAT to make a decision
• Fundamental trade-off:
  – canonical data structure
    • data structure uniquely represents the function
    • decision procedure is trivial (e.g., just pointer comparison)
    • example: Reduced Ordered Binary Decision Diagrams
    • problem: the size of the data structure is in general exponential
AND-INVERTER Circuits
• Base data structure uses a two-input AND function for the vertices and
  INVERTER attributes on the edges (an individual bit)
  – use De Morgan's law to convert OR operations etc.
• Hash table to identify and reuse structurally isomorphic circuits
[Figure: two structurally isomorphic subcircuits f and g shared via the hash table]
Data Representation
• Vertex:
– pointers (integer indices) to left and right child and fanout vertices
– collision chain pointer
– other data
• Edge:
– pointer or index into array
– one bit to represent inversion
Data Representation
[Figure: hash-table and vertex layout. Hash values index collision chains of
vertices; each vertex stores a left and a right child pointer (each carrying a
complement bit), a next pointer for the collision chain, and an array of
fanout pointers. A dedicated constant-One vertex is shared by the whole graph.]
Hash Table
Algorithm HASH_LOOKUP(Edge p1, Edge p2) {
    index = HASH_FUNCTION(p1, p2)
    p = hash_table[index]
    while (p != NULL) {
        if (p->left == p1 && p->right == p2) return p
        p = p->next
    }
    return NULL
}
Tricks:
- keep each collision chain sorted by the address (or index) of p
  - that cuts the expected search through the list in half
- allocate memory locations (or array indices) in topological order of the circuit
  - that results in better cache performance
Basic Construction Operations
Algorithm AND(Edge p1, Edge p2) {
    if (p1 == const1) return p2
    if (p2 == const1) return p1
    if (p1 == p2) return p1
    if (p1 == NOT(p2)) return const0
    if (p1 == const0 || p2 == const0) return const0
    // completion: normalize operand order, then reuse or create the vertex
    if (p2 < p1) SWAP(p1, p2)
    if ((p = HASH_LOOKUP(p1, p2)) != NULL) return p
    return CREATE_AND_VERTEX(p1, p2)   // also inserts into the hash table
}
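The AND construction with structural hashing can be sketched compactly in Python; this is our illustration, using a dict in place of the explicit collision-chain hash table, with edges as (vertex id, complement bit) pairs:

```python
class AIG:
    """Tiny AND-inverter graph: vertices are (left, right) pairs interned
    in a dict; an edge is a (vertex_id, inverted_bit) pair."""

    def __init__(self):
        self.nodes = [None]        # vertex 0 reserved for the constant
        self.table = {}            # (left_edge, right_edge) -> vertex id
        self.const0 = (0, False)
        self.const1 = (0, True)    # complemented edge to the constant

    def NOT(self, p):
        return (p[0], not p[1])    # just toggle the complement bit

    def new_var(self):
        self.nodes.append("var")
        return (len(self.nodes) - 1, False)

    def AND(self, p1, p2):
        # terminal cases, as in the slide's algorithm
        if p1 == self.const1: return p2
        if p2 == self.const1: return p1
        if p1 == p2: return p1
        if p1 == self.NOT(p2): return self.const0
        if self.const0 in (p1, p2): return self.const0
        # canonical operand order, then structural hashing
        if p1 > p2:
            p1, p2 = p2, p1
        if (p1, p2) not in self.table:
            self.nodes.append((p1, p2))
            self.table[(p1, p2)] = len(self.nodes) - 1
        return (self.table[(p1, p2)], False)
```

Because operands are ordered before the table lookup, AND(a, b) and AND(b, a) return the very same edge, which is what makes structurally isomorphic subcircuits merge.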
Basic Construction Operations
Algorithm NOT(Edge p) {
    return TOGGLE_COMPLEMENT_BIT(p)
}
Cofactor Operation
Algorithm POSITIVE_COFACTOR(Edge p, Edge v) {
    if (IS_VAR(p)) {
        if (p == v) {
            if (IS_INVERTED(v) == IS_INVERTED(p)) return const1
            else return const0
        }
        else return p
    }
    if ((c = GET_COFACTOR(p)) == NULL) {
        left = POSITIVE_COFACTOR(p->left, v)
        right = POSITIVE_COFACTOR(p->right, v)
        c = AND(left, right)
        SET_COFACTOR(p, c)
    }
    if (IS_INVERTED(p)) return NOT(c)
    else return c
}
Cofactor Operation
- a similar algorithm computes NEGATIVE_COFACTOR
- existential and universal quantification are built from AND, OR, and COFACTORS:
  ∃x f = POSITIVE_COFACTOR(f, x) + NEGATIVE_COFACTOR(f, x)
  ∀x f = POSITIVE_COFACTOR(f, x) · NEGATIVE_COFACTOR(f, x)
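The cofactor/quantification relationship is easy to exercise on plain Python functions over input tuples; a sketch with names of our choosing:

```python
def positive_cofactor(f, i):
    """f_xi: f with variable i (position in the input tuple) set to 1."""
    return lambda x: f(x[:i] + (1,) + x[i + 1:])

def negative_cofactor(f, i):
    """f_~xi: f with variable i set to 0."""
    return lambda x: f(x[:i] + (0,) + x[i + 1:])

def exists(f, i):
    """Existential quantification: OR of the two cofactors."""
    return lambda x: positive_cofactor(f, i)(x) | negative_cofactor(f, i)(x)

def forall(f, i):
    """Universal quantification: AND of the two cofactors."""
    return lambda x: positive_cofactor(f, i)(x) & negative_cofactor(f, i)(x)

f = lambda x: x[0] & x[1]          # f = x0 * x1
```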
SAT and Tautology
• Tautology:
  – Find an assignment to the inputs that evaluates a given vertex to "0"
    (if none exists, the vertex is a tautology).
• SAT:
  – Find an assignment to the inputs that evaluates a given vertex to "1".
  – Identical to Tautology on the inverted vertex.
General Davis-Putnam Procedure
• search for a consistent assignment to the entire cone of the requested vertex by
  systematically trying all combinations (may be partial!!!)
• keep a queue of vertices that remain to be justified
  – pick a decision vertex from the queue and case-split on its possible
    assignments
  – for each case
    • propagate as many implications as possible
      – generate more vertices to be justified
      – if a conflicting assignment is encountered
        » undo all implications and backtrack
    • recur to the next vertex from the queue

Algorithm SAT(Edge p) {
    queue = INIT_QUEUE()
    if (IMPLY(p)) return TRUE
    return JUSTIFY(queue)
}
General Davis-Putnam Procedure
Algorithm JUSTIFY(queue) {
    if (QUEUE_EMPTY(queue)) return TRUE
    mark = ASSIGNMENT_MARK()
    v = QUEUE_NEXT(queue)          // decision vertex
    if (IMPLY(NOT(v->left))) {
        if (JUSTIFY(queue)) return TRUE
    }                              // conflict
    UNDO_ASSIGNMENTS(mark)
    if (IMPLY(v->left)) {
        if (JUSTIFY(queue)) return TRUE
    }                              // conflict
    UNDO_ASSIGNMENTS(mark)
    return FALSE
}
Example
SAT(NOT(9))?
[Figure: three snapshots of the circuit (inputs 1, 2, 3; gates 4–9) showing the
justification queue and the assignments after each decision]
Note:
- vertex 7 is justified by the implication chain 8 → 5 → 7
Solution cube: 1 = x, 2 = 0, 3 = 0
Implication Procedure
• A fast implication procedure is key to an efficient SAT solver!!!
  – don't move into circuit parts that are not sensitized to the current SAT
    problem
  – detect conflicts as early as possible
• Table lookup implementation (27 cases):
  – No implications:
    [Figure: the input/output value combinations (0, 1, x) at an AND vertex
    that force no further assignments]
  – Implications:
    [Figure: combinations that force further assignments]
Implication Procedure
– Implications (continued):
  [Figure: more combinations that force further assignments]
– Conflicts:
  [Figure: combinations with contradictory values at an AND vertex]
– Case Split:
  [Figure: the one remaining combination, which requires a case split]
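The implication rules at an AND vertex can be sketched directly: given the three values in {0, 1, x}, fill in whatever is forced, or report a conflict. This is our illustrative fixed-point formulation, not the slide's actual 27-entry lookup table:

```python
X = None  # unknown

def imply_and(a, b, c):
    """Propagate values at an AND vertex with inputs a, b and output c,
    each in {0, 1, X}. Returns the completed (a, b, c), or None on a
    conflict. (A sketch of the 27-case lookup, not the real table.)"""
    while True:
        na, nb, nc = a, b, c
        if a == 1 and b == 1: nc = 1      # both inputs 1 -> output 1
        if a == 0 or b == 0: nc = 0       # controlling 0 -> output 0
        if c == 1: na, nb = 1, 1          # output 1 -> both inputs 1
        if c == 0 and a == 1: nb = 0      # output 0, one input 1 -> other 0
        if c == 0 and b == 1: na = 0
        for old, new in ((a, na), (b, nb), (c, nc)):
            if old is not X and old != new:
                return None               # conflicting assignment
        if (na, nb, nc) == (a, b, c):
            return (a, b, c)              # fixed point reached
        a, b, c = na, nb, nc
```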
Ordering of Case Splits
• various heuristics work well for different particular problem classes
• a depth-first heuristic is often good because it generates conflicts quickly
• mixture of depth-first and breadth-first scheduling
• other heuristics:
  – pick the vertex with the largest fanout
  – count the polarities of the fanout separately and pick the vertex with
    the highest count in either one
  – run a full implication phase on all outstanding case splits and count
    the number of implications one would get
    • some cases may already generate conflicts; the other case is then
      immediately implied
  – pick vertices that are involved in a small cut of the circuit
[Figure: a circuit partitioned at a "small cut", with the output compared to 0]
Learning
• Learning is the process of adding "shortcuts" to the circuit structure that
  avoid case splits
  – static learning:
    • global implications are learned
  – dynamic learning:
    • learned implications only hold in the current part of the search tree
• Learned implications are stored as additional network structure
• Back to the example:
  – The first case for vertex 9 led to a conflict
  – If we were to try the same assignment again (e.g. for the next SAT call), we
    would get the same conflict ⇒ merge vertex 7 with the Zero vertex
[Figure: vertex 7 merged with the Zero vertex; if rehashing is invoked,
vertex 9 is simplified and merged with vertex 8]
Static Learning
• Implications that can be learned structurally from the circuit
– Example:
((x · y) = 0) ∧ ((x · ¬y) = 0) ⇒ (x = 0)
– Add learned structure as circuit
[Figure: the example circuit extended with the learned structure (attached to the
Zero vertex); the search now needs fewer case splits]
Solution cube: 1 = x, 2 = x, 3 = 0
Static Learning
• Socrates algorithm: based on the contra-positive:
  (x ⇒ y) ⇔ (¬y ⇒ ¬x)

foreach vertex v {
    mark = ASSIGNMENT_MARK()
    IMPLY(v)
    LEARN_IMPLICATIONS(v)
    UNDO_ASSIGNMENTS(mark)
    IMPLY(NOT(v))
    LEARN_IMPLICATIONS(NOT(v))
    UNDO_ASSIGNMENTS(mark)
}

[Figure: example: ((x = 0) ⇒ (y = 1)) ⇔ ((y = 0) ⇒ (x = 1)), realized as
learned structure attached to the Zero vertex]
• Problem: the learned implications are far too many
  – solution: restrict learning to non-trivial implications
  – mask redundant implications
Recursive Learning
• Compute the set of all implications for both cases on level i
  – static implications (y = 0/1)
  – equivalence relations (y = z)
• The intersection of both sets can be learned for level i−1
  (((x = 1) ⇒ (y = 1)) ∧ ((x = 0) ⇒ (y = 1))) ⇒ (y = 1)
[Figure: both cases x = 1 and x = 0 imply the same value at y, which is then
learned one level up]
• Apply learning recursively until all case splits are exhausted
  – recursive learning is complete, but very expensive in practice for levels > 2, 3
  – restricting the learning level to a fixed number makes it incomplete
Recursive Learning
Algorithm RECURSIVE_LEARN(int level) {
    if (v = PICK_SPLITTING_VERTEX()) {
        mark = ASSIGNMENT_MARK()
        IMPLY(v)
        IMPL1 = RECURSIVE_LEARN(level + 1)
        UNDO_ASSIGNMENTS(mark)
        IMPLY(NOT(v))
        IMPL0 = RECURSIVE_LEARN(level + 1)
        UNDO_ASSIGNMENTS(mark)
        return IMPL1 ∩ IMPL0
    }
    else {  // completely justified
        return IMPLICATIONS
    }
}
Dynamic Learning
Learn implications in a sub-tree of the search:
• cannot simply add permanent structure because it is not globally valid
  – add and remove learned structure (expensive)
  – add the branching condition to the learned implication
    • of no use unless we prune the condition (conflict learning)
  – use the implication and assignment mechanism to assign and undo
    assignments
    • e.g. dynamic recursive learning with fixed recursion level
Dynamic Learning
• Efficient implementation of dynamic recursive learning with level 1:
  – consider both sub-cases in parallel
  – use 27-valued logic in the IMPLY routine:
    {level0-value × level1-choice1 × level1-choice2}
    = {0, 1, x} × {0, 1, x} × {0, 1, x}
  – automatically set learned values for level 0 if both level-1 choices
    agree
[Figure: 27-valued propagation at an AND vertex with example tuples
{1,1,1}, {x,1,0}, and {x,x,1}]
Conflict-based Learning
• Idea: learn the situation under which a particular conflict
  occurred and assert it to 0
• IMPLY will use this "shortcut" to detect similar conflicts earlier
Definition:
An implication graph is a directed graph I(G′, D), where G′ ⊆ G are the gates
of C with assigned values v_g ≠ x, and D ⊆ G × G are the edges, where each
edge (gi, gj) ∈ D reflects an implication for which an assignment of gate
gi led to the assignment of gate gj.
• There is a strict implication order in the graph from the roots to the leaves
  – We can completely cut the graph at any point; identical value
    assignments to the cut vertices result in identical implications toward the
    leaves
[Figure: a sequence of cuts C1, C2, …, C(n−1), Cn through the implication graph
(C1: the decision vertices)]
Conflict-based Learning
• If an implication leads to a conflict, any cut assignment in the implication
graph between the decision vertices and the conflict will result in the same
conflict!
Non-chronological Backtracking
• If we learn only cuts on decision vertices, only the decision vertices that
  are in the support of the conflict are needed
[Figure: two symmetric decision trees over vertices 1–6; the conflict depends
only on a subset of the decision vertices]
• The conflict is fully symmetric with respect to the unrelated decision vertices!!
  – Learning the conflict would prevent checking the symmetric parts again
BUT: it is too expensive to learn all conflicts
Non-chronological Backtracking
• We can still avoid exploring symmetric parts of the decision tree by
tracking the supporting decision vertices for all conflicts.
If no conflict of the first choice on a decision vertex depends on that vertex,
the other choice(s) will result in symmetric conflicts and their evaluation can
be skipped!!
[Figure: decision tree over vertices 1–4 with conflict support sets {2,0} and {2,3}]
D-Algorithm
• In addition to controllability, we need to check observability of a
possible signal change at a vertex:
Five-Valued Implication Rules
B = ¬A:              C = A & B:
 A  | B               A\B | 0   1   X   D   ~D
 0  | 1               0   | 0   0   0   0   0
 1  | 0               1   | 0   1   X   D   ~D
 X  | X               X   | 0   X   X   X   X
 D  | ~D              D   | 0   D   X   D   0
 ~D | D               ~D  | 0   ~D  X   0   ~D
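Both tables fall out of encoding each five-valued symbol as a (good circuit, faulty circuit) value pair, with D = 1/0 and ~D = 0/1; a sketch (encoding and names are ours) that reproduces the rules above:

```python
# Each symbol encoded as (value in good circuit, value in faulty circuit);
# X stays symbolic.
ENC = {"0": (0, 0), "1": (1, 1), "D": (1, 0), "~D": (0, 1)}
DEC = {v: k for k, v in ENC.items()}

def NOT5(a):
    if a == "X":
        return "X"
    g, f = ENC[a]
    return DEC[(1 - g, 1 - f)]   # complement both circuits

def AND5(a, b):
    if a == "0" or b == "0":
        return "0"               # a controlling 0 dominates even an X
    if a == "X" or b == "X":
        return "X"
    (g1, f1), (g2, f2) = ENC[a], ENC[b]
    return DEC[(g1 & g2, f1 & f2)]  # AND componentwise in both circuits
```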