
Chapter 2

Combinational logic

© Copyright 2004, Gaetano Borriello and Randy H. Katz
1
Overview: Combinational logic
◼ Basic logic
❑ Boolean algebra, proofs by re-writing, proofs by perfect induction
❑ logic functions, truth tables, and switches
❑ NOT, AND, OR, NAND, NOR, XOR, . . ., minimal set
◼ Logic realization
❑ two-level logic and canonical forms
❑ incompletely specified functions
◼ Simplification
❑ uniting theorem
❑ grouping of terms in Boolean functions
◼ Alternate representations of Boolean functions
❑ cubes
❑ Karnaugh maps

2
Possible logic functions of two variables

◼ There are 16 possible functions of 2 input variables:


❑ in general, there are 2**(2**n) functions of n inputs
[figure: a two-input logic block with inputs X, Y and output F]

X Y | F0 F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15
0 0 |  0  0  0  0  0  0  0  0  1  1  1   1   1   1   1   1
0 1 |  0  0  0  0  1  1  1  1  0  0  0   0   1   1   1   1
1 0 |  0  0  1  1  0  0  1  1  0  0  1   1   0   0   1   1
1 1 |  0  1  0  1  0  1  0  1  0  1  0   1   0   1   0   1

F0 = 0            F1 = X and Y      F3 = X            F5 = Y
F6 = X xor Y      F7 = X or Y       F8 = X nor Y = not (X or Y)
F9 = (X = Y)      F10 = not Y       F12 = not X
F14 = X nand Y = not (X and Y)      F15 = 1
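The count 2**(2**n) is easy to check by enumeration: a function of n inputs is fully
described by its output column of 2**n bits. The following is a small Python sketch,
not part of the original slides; the helper name truth_table is ours.

from itertools import product

def truth_table(f, n):
    """Return the output column of f over all 2**n input combinations."""
    return tuple(f(*bits) for bits in product((0, 1), repeat=n))

# Enumerate every possible output column for n = 2 inputs.
n = 2
columns = set(product((0, 1), repeat=2 ** n))
print(len(columns))                           # 16, i.e. 2**(2**2)

print(truth_table(lambda x, y: x & y, n))     # F1 (X and Y) -> (0, 0, 0, 1)
print(truth_table(lambda x, y: x ^ y, n))     # F6 (X xor Y) -> (0, 1, 1, 0)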

3
Cost of different logic functions

◼ Different functions are easier or harder to implement


❑ each has a cost associated with the number of switches needed
❑ 0 (F0) and 1 (F15): require 0 switches, directly connect output to
low/high
❑ X (F3) and Y (F5): require 0 switches, output is one of inputs
❑ X’ (F12) and Y’ (F10): require 2 switches for "inverter" or NOT-gate
❑ X nor Y (F8) and X nand Y (F14): require 4 switches
❑ X or Y (F7) and X and Y (F1): require 6 switches
❑ X = Y (F9) and X xor Y (F6): require 16 switches

❑ thus, because NOT, NOR, and NAND are the cheapest, they are the functions
we implement most often in practice

4
Minimal set of functions

◼ Can we implement all logic functions from NOT, NOR, and NAND?
❑ For example, implementing X and Y
is the same as implementing not (X nand Y)
◼ In fact, we can do it with only NOR or only NAND
❑ NOT is just a NAND or a NOR with both inputs tied together

X Y | X nor Y          X Y | X nand Y
0 0 |    1             0 0 |    1
1 1 |    0             1 1 |    0

❑ NAND and NOR are "duals",
that is, it's easy to implement one using the other
X nand Y ≡ not ( (not X) nor (not Y) )
X nor Y ≡ not ( (not X) nand (not Y) )
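A quick way to convince yourself of this universality is to build the other gates from
NAND alone and check them exhaustively. The code below is an illustrative Python sketch,
not from the slides; the helper names (nand, not_, and_, or_, nor_) are ours.

def nand(x, y):
    """2-input NAND over {0, 1}."""
    return 1 - (x & y)

# Every other basic gate built only from NAND (NOT = NAND with inputs tied together).
def not_(x):    return nand(x, x)
def and_(x, y): return not_(nand(x, y))
def or_(x, y):  return nand(not_(x), not_(y))
def nor_(x, y): return not_(or_(x, y))

# Perfect-induction check over all input combinations.
for x in (0, 1):
    for y in (0, 1):
        assert and_(x, y) == (x & y)
        assert or_(x, y) == (x | y)
        assert nor_(x, y) == 1 - (x | y)
print("AND, OR, and NOR all realized from NAND alone")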

5
An algebraic structure

◼ An algebraic structure consists of


❑ a set of elements B
❑ binary operations { + , • }
❑ and a unary operation { ’ }
❑ such that the following axioms hold:

1. the set B contains at least two elements: a, b
2. closure:           a + b is in B                      a • b is in B
3. commutativity:     a + b = b + a                      a • b = b • a
4. associativity:     a + (b + c) = (a + b) + c          a • (b • c) = (a • b) • c
5. identity:          a + 0 = a                          a • 1 = a     (0 and 1 are the identity elements)
6. distributivity:    a + (b • c) = (a + b) • (a + c)    a • (b + c) = (a • b) + (a • c)
7. complementarity:   a + a’ = 1                         a • a’ = 0

6
Boolean algebra

◼ Boolean algebra
❑ B = {0, 1}
❑ variables
❑ + is logical OR, • is logical AND
❑ ’ is logical NOT
◼ All algebraic axioms hold

7
Logic functions and Boolean algebra

◼ Any logic function that can be expressed as a truth table


can be written as an expression in Boolean algebra using
the operators: ’, +, and •
X Y | X • Y          X Y | X’   X’ • Y
0 0 |   0            0 0 | 1      0
0 1 |   0            0 1 | 1      1
1 0 |   0            1 0 | 0      0
1 1 |   1            1 1 | 0      0

X Y | X’ Y’ | X • Y   X’ • Y’ | ( X • Y ) + ( X’ • Y’ )
0 0 | 1  1  |   0        1    |            1
0 1 | 1  0  |   0        0    |            0
1 0 | 0  1  |   0        0    |            0
1 1 | 0  0  |   1        0    |            1

( X • Y ) + ( X’ • Y’ ) ≡ (X = Y)
X, Y are Boolean algebra variables
the expression is true when the variables X and Y have the same value, and false otherwise
8
Axioms and theorems of Boolean algebra
◼ identity
1. X + 0 = X 1D. X • 1 = X
◼ null
2. X + 1 = 1 2D. X • 0 = 0
◼ idempotency:
3. X + X = X 3D. X • X = X
   (Idempotency: one may derive the same consequences from many instances of a
   hypothesis as from just one)
◼ involution:
4. (X’)’ = X
   (Involution: a function that is its own inverse, so that f(f(x)) = x)
◼ complementarity:
5. X + X’ = 1 5D. X • X’ = 0
◼ commutativity:
6. X + Y = Y + X 6D. X • Y = Y • X
◼ associativity:
7. (X + Y) + Z = X + (Y + Z) 7D. (X • Y) • Z = X • (Y • Z)

Note that suffix “D” means the dual of the original expression.
Dual is the other symmetric part of a pair, which will be discussed later.
(at this moment, use two rules: AND<-> OR, 0<->1)
9
Axioms and theorems of Boolean algebra (cont’d)

◼ distributivity:
8. X • (Y + Z) = (X • Y) + (X • Z) 8D. X + (Y • Z) = (X + Y) • (X + Z)
◼ uniting:
9. X • Y + X • Y’ = X 9D. (X + Y) • (X + Y’) = X
◼ absorption:
10. X + X • Y = X 10D. X • (X + Y) = X
11. (X + Y’) • Y = X • Y 11D. (X • Y’) + Y = X + Y
◼ factoring:
12. (X + Y) • (X’ + Z) = 12D. X • Y + X’ • Z =
X • Z + X’ • Y (X + Z) • (X’ + Y)
◼ consensus:
13. (X • Y) + (Y • Z) + (X’ • Z) = 13D. (X + Y) • (Y + Z) • (X’ + Z) =
X • Y + X’ • Z (X + Y) • (X’ + Z)

Proof of theorem 12:
(X + Y)(X’ + Z) = XX’ + XZ + X’Y + YZ
                = XZ + X’Y + YZ(X + X’)
                = XZ(1 + Y) + X’Y(1 + Z)
                = XZ + X’Y
10
Axioms and theorems of Boolean algebra (cont’d)

◼ de Morgan’s:
14. (X + Y + ...)’ = X’ • Y’ • ... 14D. (X • Y • ...)’ = X’ + Y’ + ...
◼ generalized de Morgan’s:
15. f’(X1,X2,...,Xn,0,1,+,•) = f(X1’,X2’,...,Xn’,1,0,•,+)

◼ establishes relationship between • and +

11
Axioms and theorems of Boolean algebra (cont’d)

◼ Duality
❑ a dual of a Boolean expression is derived by replacing
• by +, + by •, 0 by 1, and 1 by 0, and leaving variables unchanged
❑ any theorem that can be proven is thus also proven for its dual!
❑ a meta-theorem (a theorem about theorems)
◼ duality:
16. X + Y + ... ⇔ X • Y • ...
◼ generalized duality:
17. f (X1,X2,...,Xn,0,1,+,•) ⇔ f(X1,X2,...,Xn,1,0,•,+)
◼ Different from de Morgan’s Law
❑ this is a statement about theorems
❑ this is not a way to manipulate (re-write) expressions

12
Proving theorems (rewriting)
◼ Using the axioms of Boolean algebra:
❑ e.g., prove the theorem: X • Y + X • Y’ = X (uniting)

distributivity (8) X • Y + X • Y’ = X • (Y + Y’)


complementarity (5) X • (Y + Y’) = X • (1)
identity (1D) X • (1) = X✓

❑ e.g., prove the theorem: X+X•Y = X (absorption)

identity (1D) X + X•Y = X •1 + X•Y


distributivity (8) X •1 + X•Y = X • (1 + Y)
null (2) X • (1 + Y) = X • (1)
identity (1D) X • (1) = X ✓

13
Proving theorems (perfect induction)

◼ Using perfect induction (complete truth table):


❑ e.g., de Morgan’s:

❑ (X + Y)’ = X’ • Y’ (NOR is equivalent to AND with inputs complemented)

      X Y | X’ Y’ | (X + Y)’   X’ • Y’
      0 0 | 1  1  |    1          1
      0 1 | 1  0  |    0          0
      1 0 | 0  1  |    0          0
      1 1 | 0  0  |    0          0

❑ (X • Y)’ = X’ + Y’ (NAND is equivalent to OR with inputs complemented)

      X Y | X’ Y’ | (X • Y)’   X’ + Y’
      0 0 | 1  1  |    1          1
      0 1 | 1  0  |    1          1
      1 0 | 0  1  |    1          1
      1 1 | 0  0  |    0          0
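Perfect induction is easy to mechanize: evaluate both sides over the complete truth
table. A possible Python sketch (ours, not from the slides; the helper name
perfect_induction is an assumption):

from itertools import product

def perfect_induction(lhs, rhs, n):
    """Check lhs == rhs for every combination of n Boolean inputs."""
    return all(lhs(*v) == rhs(*v) for v in product((0, 1), repeat=n))

# de Morgan's theorems checked over the complete truth table
print(perfect_induction(lambda x, y: 1 - (x | y),
                        lambda x, y: (1 - x) & (1 - y), 2))   # True: (X + Y)' = X' • Y'
print(perfect_induction(lambda x, y: 1 - (x & y),
                        lambda x, y: (1 - x) | (1 - y), 2))   # True: (X • Y)' = X' + Y'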

14
A simple example: 1-bit binary adder
◼ Inputs: A, B, Carry-in (Cin)
◼ Outputs: Sum (S), Carry-out (Cout)

[figure: single-bit adder blocks chained into a ripple adder; each block has inputs
A, B, Cin and outputs S, Cout]

A B Cin | Cout S
0 0  0  |  0   0
0 0  1  |  0   1
0 1  0  |  0   1
0 1  1  |  1   0
1 0  0  |  0   1
1 0  1  |  1   0
1 1  0  |  1   0
1 1  1  |  1   1

S    = A’ B’ Cin + A’ B Cin’ + A B’ Cin’ + A B Cin
Cout = A’ B Cin + A B’ Cin + A B Cin’ + A B Cin
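The two sum-of-products expressions can be checked against ordinary binary addition.
An illustrative Python sketch (the function name full_adder is ours, not from the slides):

from itertools import product

def full_adder(a, b, cin):
    """Sum-of-products expressions for the 1-bit adder outputs (S, Cout)."""
    na, nb, nc = 1 - a, 1 - b, 1 - cin
    s    = (na & nb & cin) | (na & b & nc) | (a & nb & nc) | (a & b & cin)
    cout = (na & b & cin) | (a & nb & cin) | (a & b & nc) | (a & b & cin)
    return s, cout

# The expressions reproduce the arithmetic truth table: A + B + Cin = 2*Cout + S
for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    assert a + b + cin == 2 * cout + s
print("full-adder expressions match binary addition")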

15
Apply the theorems to simplify expressions

◼ The theorems of Boolean algebra can simplify Boolean


expressions
❑ e.g., full adder’s carry-out function (same rules apply to any function)
Cout = A’ B Cin + A B’ Cin + A B Cin’ + A B Cin
     = A’ B Cin + A B’ Cin + A B Cin’ + A B Cin + A B Cin
     = A’ B Cin + A B Cin + A B’ Cin + A B Cin’ + A B Cin
     = (A’ + A) B Cin + A B’ Cin + A B Cin’ + A B Cin
     = (1) B Cin + A B’ Cin + A B Cin’ + A B Cin
     = B Cin + A B’ Cin + A B Cin’ + A B Cin + A B Cin
     = B Cin + A B’ Cin + A B Cin + A B Cin’ + A B Cin
     = B Cin + A (B’ + B) Cin + A B Cin’ + A B Cin
     = B Cin + A (1) Cin + A B Cin’ + A B Cin
     = B Cin + A Cin + A B (Cin’ + Cin)
     = B Cin + A Cin + A B (1)
     = B Cin + A Cin + A B

(adding extra copies of the term A B Cin, allowed by idempotency, creates new
factoring opportunities)
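The simplification can again be confirmed by perfect induction; a small Python check
(ours, not from the slides):

from itertools import product

canonical  = lambda a, b, c: ((1-a) & b & c) | (a & (1-b) & c) | (a & b & (1-c)) | (a & b & c)
simplified = lambda a, b, c: (b & c) | (a & c) | (a & b)

# Both forms agree on every input combination.
assert all(canonical(*v) == simplified(*v) for v in product((0, 1), repeat=3))
print("Cout = B Cin + A Cin + A B")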
16
From Boolean expressions to logic gates

◼ NOT     X’, ~X              X | Y
                              0 | 1
                              1 | 0

◼ AND     X • Y, XY           X Y | Z
                              0 0 | 0
                              0 1 | 0
                              1 0 | 0
                              1 1 | 1

◼ OR      X + Y               X Y | Z
                              0 0 | 0
                              0 1 | 1
                              1 0 | 1
                              1 1 | 1

17
From Boolean expressions to logic gates (cont’d)
◼ NAND    (X • Y)’            X Y | Z
                              0 0 | 1
                              0 1 | 1
                              1 0 | 1
                              1 1 | 0

◼ NOR     (X + Y)’            X Y | Z
                              0 0 | 1
                              0 1 | 0
                              1 0 | 0
                              1 1 | 0

◼ XOR     X ⊕ Y               X Y | Z        X xor Y = X Y’ + X’ Y
                              0 0 | 0        X or Y but not both
                              0 1 | 1        ("inequality", "difference")
                              1 0 | 1
                              1 1 | 0

◼ XNOR    X = Y               X Y | Z        X xnor Y = X Y + X’ Y’
                              0 0 | 1        X and Y are the same
                              0 1 | 0        ("equality", "coincidence")
                              1 0 | 0
                              1 1 | 1

The bubble at the tip of a gate symbol indicates an inverter.
XNOR is the negation of XOR
18
From Boolean expressions to logic gates (cont’d)

◼ More than one way to map expressions to gates

❑ e.g., Z = A’ • B’ • (C + D) = (A’ • (B’ • (C + D)))


[figure: two gate networks for Z: a cascade of 2-input gates with intermediate
signals T1 and T2, and an alternative that makes use of a 3-input gate]

19
Waveform view of logic functions
◼ Just a sideways truth table
❑ but note how edges don’t line up exactly
❑ it takes time for a gate to switch its output!
[figure: waveforms for X, Y, and several gate outputs plotted against time;
a change in Y takes time to "propagate" through the gates]


There IS a difference; it takes time for a signal to pass through each gate.
A waveform describes how the signal at each point changes over time.
Suppose X and Y change at precise times.
Depending on the gate type, the propagation delay can differ slightly; e.g. an XOR
gate is complicated, which incurs a longer delay than other, simpler gates.
20
Choosing different realizations of a function
A B C | Z
0 0 0 | 0          Z  = A’B’C + A’BC + AB’C + ABC’
0 0 1 | 1
0 1 0 | 0          Z1 = ABC’ + A’C + B’C       two-level realization
0 1 1 | 1                                      (we don’t count NOT gates)
1 0 0 | 0          Z2 = ABC’ + (AB)’C          multi-level realization
1 0 1 | 1                                      (gates with fewer inputs)
1 1 0 | 1          Z3 = AB ⊕ C                 XOR gate (easier to draw
1 1 1 | 0                                      but costlier to build)

(p.53)

Z1 = Z2 = Z3 = Z
Let’s consider Z1 first: 3 AND gates and 1 OR gate. We also need to check the number of
wires or inputs. In Z3, XOR is called a complex gate, which requires several NAND or NOR
gates, so Z3 is likely to have the worst delay.
21
XOR implementations
◼ Three levels of logic inside an XOR gate
◼ X ⊕ Y = X’Y + XY’

[figure: gate-level schematics of equivalent XOR implementations built from
simpler gates]
22
Which realization is best?

◼ Reduce number of inputs


❑ literal: input variable (complemented or not)
• can approximate cost of logic gate as 2 transistors per literal
❑ fewer literals mean fewer transistors
• smaller circuits
❑ fewer inputs implies faster gates
• gates are smaller and thus also faster
❑ fan-ins (# of gate inputs) are limited in some technologies
◼ Reduce number of gates
❑ fewer gates (and the packages they come in) means smaller circuits
• directly influences manufacturing costs

23
Which is the best realization? (cont’d)

◼ Reduce number of levels of gates


❑ fewer levels of gates imply reduced signal propagation delays
❑ minimum delay configuration typically requires more gates
• wider, less deep circuits
◼ How do we explore tradeoffs between increased circuit
delay and size?
❑ automated tools to generate different solutions
❑ logic minimization: reduce number of gates and complexity
❑ logic optimization: reduction while trading off against delay

Depending on the criteria (e.g. minimize delay, minimize the # of gates),


the CAD tools may yield different solutions.

24
Are all realizations equivalent?
◼ Under the same input stimuli, the three alternative
implementations have almost the same waveform behavior
❑ delays are different
❑ glitches (hazards) may arise – these could be bad, it depends
❑ variations due to differences in number of gate levels and structure
◼ The three implementations are functionally equivalent

Different implementations of the same function are equivalent from a steady-state
viewpoint, but their transient behavior may differ slightly.
Typically, a transient behavior takes place right after some input transition.
25
Choosing different realizations of a function
A B C | Z          input stimulus: A = 1, B = 0 → 1, C = 1 → 0
0 0 0 | 0
0 0 1 | 1
0 1 0 | 0
0 1 1 | 1
1 0 0 | 0
1 0 1 | 1
1 1 0 | 1          two-level realization
1 1 1 | 0          (we don’t count NOT gates)

[figure: the two-level, multi-level (gates with fewer inputs), and XOR-gate
realizations driven by the same input change; assuming the same delay for all
gates, the multi-level circuit’s internal signals settle at different times
(e.g. 1→1→0→1, 1→0→0→0) and its output shows a transient glitch]

Let’s see Z2. First, the input variables change: B goes from 0 to 1 while C goes
from 1 to 0, and these changes propagate through the gates. The delays accumulate
as a signal goes through more gates.
26
Implementing Boolean functions

◼ Technology independent
❑ canonical forms
❑ two-level forms
❑ multi-level forms
◼ Technology choices
❑ packages of a few gates
❑ regular logic
❑ two-level programmable logic
❑ multi-level programmable logic

A Boolean function can be written as many different expressions.

27
Canonical forms

◼ Truth table is the unique signature of a Boolean function


◼ The same truth table can have many gate realizations
◼ Canonical forms
❑ standard forms for a Boolean expression
❑ provides a unique algebraic signature

28
Sum-of-products (S-o-P) canonical forms
◼ Also known as (aka) disjunctive normal form
◼ Also known as minterm expansion
F (true for input rows 001, 011, 101, 110, 111):
F = A’B’C + A’BC + AB’C + ABC’ + ABC

A B C | F F’
0 0 0 | 0 1
0 0 1 | 1 0
0 1 0 | 0 1
0 1 1 | 1 0
1 0 0 | 0 1
1 0 1 | 1 0
1 1 0 | 1 0
1 1 1 | 1 0

F’ = A’B’C’ + A’BC’ + AB’C’

Just check all the cases in which F becomes true; each case forms a product of the
input variables. Finally, ORing these products yields the final expression.
This is also called a minterm expansion; here, a minterm is a product of all the input
literals. Each variable should appear once in each minterm, either asserted or complemented.
29
Sum-of-products canonical form (cont’d)
◼ Product term (or minterm)
❑ ANDed product of literals – input combination for which output is true
❑ each variable appears exactly once, true or inverted (but not both)
      A B C | minterm
  0   0 0 0 | A’B’C’   m0        F in canonical form:
  1   0 0 1 | A’B’C    m1        F(A, B, C) = Σm(1,3,5,6,7)
  2   0 1 0 | A’BC’    m2                   = m1 + m3 + m5 + m6 + m7
  3   0 1 1 | A’BC     m3                   = A’B’C + A’BC + AB’C + ABC’ + ABC
  4   1 0 0 | AB’C’    m4
  5   1 0 1 | AB’C     m5        canonical form ≠ minimal form
  6   1 1 0 | ABC’     m6        F(A, B, C) = A’B’C + A’BC + AB’C + ABC + ABC’
  7   1 1 1 | ABC      m7                   = (A’B’ + A’B + AB’ + AB)C + ABC’
                                            = ((A’ + A)(B’ + B))C + ABC’
  short-hand notation for                   = C + ABC’
  minterms of 3 variables                   = ABC’ + C
                                            = AB + C
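Generating the minterm expansion mechanically just means listing the input rows where
the function is 1. A Python sketch of this idea (the helper name minterm_expansion is
ours, not from the slides):

from itertools import product

def minterm_expansion(f, names):
    """Return (index, literal-string) pairs for the canonical S-o-P of f."""
    n = len(names)
    terms = []
    for index, bits in enumerate(product((0, 1), repeat=n)):
        if f(*bits):
            literals = [v if b else v + "'" for v, b in zip(names, bits)]
            terms.append((index, "".join(literals)))
    return terms

F = lambda a, b, c: (a & b) | c            # the minimized form F = AB + C
print(minterm_expansion(F, "ABC"))
# [(1, "A'B'C"), (3, "A'BC"), (5, "AB'C"), (6, "ABC'"), (7, "ABC")]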
Each product is called a minterm and is denoted by a small m together with the decimal
value of the binary input combination.
Note that there is no reduction or minimization in canonical forms; each variable must
appear once in each product.
30
Product-of-sums (P-o-S) canonical form

◼ Also known as conjunctive normal form


◼ Also known as maxterm expansion
F (false for input rows 000, 010, 100):
F = (A + B + C) (A + B’ + C) (A’ + B + C)

A B C | F F’
0 0 0 | 0 1
0 0 1 | 1 0
0 1 0 | 0 1
0 1 1 | 1 0
1 0 0 | 0 1
1 0 1 | 1 0
1 1 0 | 1 0
1 1 1 | 1 0

F’ = (A + B + C’) (A + B’ + C’) (A’ + B + C’) (A’ + B’ + C) (A’ + B’ + C’)

The other canonical form is P-o-S. This one focuses on when F will be 0.
P-o-S is like the dual of S-o-P. First of all, we check all the cases that make F false or 0
The variables for each case or term are first complemented and then connected by the OR
operation. This ORed term is called a maxterm. Eventually, these terms are connected by AND.
What does the final expression mean?
31
Product-of-sums canonical form (cont’d)

◼ Sum term (or maxterm)


❑ ORed sum of literals – input combination for which output is false
❑ each variable appears exactly once, true or inverted (but not both)
A B C | maxterm
0 0 0 | A+B+C      M0        F in canonical form:
0 0 1 | A+B+C’     M1        F(A, B, C) = ΠM(0,2,4)
0 1 0 | A+B’+C     M2                   = M0 • M2 • M4
0 1 1 | A+B’+C’    M3                   = (A + B + C) (A + B’ + C) (A’ + B + C)
1 0 0 | A’+B+C     M4
1 0 1 | A’+B+C’    M5        canonical form ≠ minimal form
1 1 0 | A’+B’+C    M6        F(A, B, C) = (A + B + C) (A + B’ + C) (A’ + B + C)
1 1 1 | A’+B’+C’   M7                   = (A + B + C) (A + B’ + C)
                                          (A + B + C) (A’ + B + C)
short-hand notation for                 = (A + C) (B + C)
maxterms of 3 variables

Each maxterm is denoted by the capital M and the decimal value of input variables.
32
S-o-P, P-o-S, and de Morgan’s theorem
◼ Sum-of-products
❑ F’ = A’B’C’ + A’BC’ + AB’C’
◼ Product-of-sums for F??
◼ Apply de Morgan’s
❑ (F’)’ = (A’B’C’ + A’BC’ + AB’C’)’
❑ F = (A + B + C) (A + B’ + C) (A’ + B + C)

◼ Product-of-sums
❑ F’ = (A + B + C’) (A + B’ + C’) (A’ + B + C’) (A’ + B’ + C) (A’ + B’ + C’)
◼ Sum-of-products for F?
◼ Apply de Morgan’s
❑ (F’)’ = ( (A + B + C’)(A + B’ + C’)(A’ + B + C’)(A’ + B’ + C)(A’ + B’ + C’) )’
❑ F = A’B’C + A’BC + AB’C + ABC’ + ABC

33
Four alternative two-level implementations
of F = AB + C
[figure: four gate-level implementations of F, all driven by inputs A, B, C]
F1: canonical sum-of-products
F2: minimized sum-of-products
F3: canonical product-of-sums
F4: minimized product-of-sums

34
Waveforms for the four alternatives
◼ Waveforms are essentially identical
❑ except for timing hazards (glitches)
❑ delays almost identical (modeled as a delay per level, not type of
gate or number of inputs to gate)

Even though F1, F2, F3 and F4 are equivalent in steady-state behaviors, their
transient behaviors may be different
35
Mapping between canonical forms
◼ Minterm to maxterm conversion
❑ use maxterms whose indices do not appear in minterm expansion
❑ e.g., F(A,B,C) = Σm(1,3,5,6,7) = ΠM(0,2,4)
◼ Maxterm to minterm conversion
❑ use minterms whose indices do not appear in maxterm expansion
❑ e.g., F(A,B,C) = ΠM(0,2,4) = Σm(1,3,5,6,7)
◼ Minterm expansion of F to minterm expansion of F’
❑ use minterms whose indices do not appear
❑ e.g., F(A,B,C) = Σm(1,3,5,6,7)     F’(A,B,C) = Σm(0,2,4)
◼ Maxterm expansion of F to maxterm expansion of F’
❑ use maxterms whose indices do not appear
❑ e.g., F(A,B,C) = ΠM(0,2,4)     F’(A,B,C) = ΠM(1,3,5,6,7)
36
Incompletely specified functions
◼ Example: binary coded decimal (BCD) increment by 1
❑ BCD digits encode the decimal digits 0 – 9 in the bit patterns 0000 – 1001

On-set: the set of input cases whose output is 1

A B C D | W X Y Z
0 0 0 0 | 0 0 0 1
0 0 0 1 | 0 0 1 0          off-set of W
0 0 1 0 | 0 0 1 1
0 0 1 1 | 0 1 0 0          on-set of W
0 1 0 0 | 0 1 0 1
0 1 0 1 | 0 1 1 0          don’t care (DC) set of W
0 1 1 0 | 0 1 1 1
0 1 1 1 | 1 0 0 0
1 0 0 0 | 1 0 0 1
1 0 0 1 | 0 0 0 0
1 0 1 0 | X X X X
1 0 1 1 | X X X X          these input patterns should
1 1 0 0 | X X X X          never be encountered in practice
1 1 0 1 | X X X X          – "don’t care" about associated
1 1 1 0 | X X X X          output values, can be exploited
1 1 1 1 | X X X X          in minimization

BCD coding uses only ten values from 0 to 9. With 4 input lines, we have 6
don’t care cases of input values.
For these don’t care values, the function can have any arbitrary output values
37
Notation for incompletely specified functions

◼ Don’t cares and canonical forms


❑ so far, we focus on either on-set or off-set
❑ There can be don’t-care-set
❑ need two of the three sets (on-set, off-set, dc-set)
◼ Canonical representations of the BCD increment by 1
function:
❑ Minterm expansion?
❑ Z = m0 + m2 + m4 + m6 + m8 + d10 + d11 + d12 + d13 + d14 + d15
❑ Z = Σ [ m(0,2,4,6,8) + d(10,11,12,13,14,15) ]
❑ Maxterm expansion?
❑ Z = M1 • M3 • M5 • M7 • M9 • D10 • D11 • D12 • D13 • D14 • D15
❑ Z = Π [ M(1,3,5,7,9) • D(10,11,12,13,14,15) ]

38
Simplification of two-level combinational logic

◼ Finding a minimal sum of products or product of sums


realization
❑ exploit don’t care information in the process
◼ Algebraic simplification
❑ not an algorithmic/systematic procedure
❑ how do you know when the minimum realization has been found?
◼ Computer-aided design (CAD) tools
❑ precise solutions require very long computation times, especially for
functions with many inputs (> 10)
❑ heuristic methods employed – "educated guesses" to reduce amount of
computation and yield good if not best solutions
◼ Hand methods still relevant
❑ to understand automatic tools and their strengths and weaknesses
❑ ability to check results (on small examples)
39
Two minimization techniques
◼ Boolean cubes
◼ Karnaugh-maps (K-maps)

◼ Both of them are based on the uniting theorem

40
The uniting theorem

◼ Key tool to simplification: A (B’ + B) = A


◼ Essence of simplification of two-level logic
❑ find two element subsets of the ON-set where only one variable
changes its value – this single varying variable can be eliminated
and a single product term used to represent both elements

F = A’B’ + AB’ = (A’ + A)B’ = B’

A B | F
0 0 | 1          B has the same value in both on-set rows
0 1 | 0          – B remains (in complemented form)
1 0 | 1
1 1 | 0          A has a different value in the two rows
                 – A is eliminated
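The search for candidate pairs can be automated: look for ON-set rows that differ in
exactly one variable and drop that variable. A Python sketch of the idea (names are
ours, not from the slides):

from itertools import combinations, product

def unite(f, names):
    """Pairs of ON-set rows differing in one variable, with the united product term."""
    n = len(names)
    on_set = [bits for bits in product((0, 1), repeat=n) if f(*bits)]
    merged = []
    for a, b in combinations(on_set, 2):
        diff = [i for i in range(n) if a[i] != b[i]]
        if len(diff) == 1:                       # the uniting theorem applies
            term = "".join(v if bit else v + "'"
                           for i, (v, bit) in enumerate(zip(names, a))
                           if i != diff[0])      # eliminate the varying variable
            merged.append((a, b, term))
    return merged

F = lambda a, b: 1 - b                           # F = B' from the table above
print(unite(F, "AB"))                            # [((0, 0), (1, 0), "B'")]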

41
Boolean cubes

◼ Visual technique for identifying when the uniting theorem


can be applied
◼ n input variables = n-dimensional "cube"
[figure: Boolean cubes of increasing dimension]
1-cube (X):          line with nodes 0 – 1
2-cube (X, Y):       square with corners 00, 01, 11, 10
3-cube (X, Y, Z):    cube with corners 000 … 111
4-cube (W, X, Y, Z): hypercube with corners 0000 … 1111

42
Mapping truth tables onto Boolean cubes

◼ Uniting theorem combines two "faces" of a cube


into a larger "face"
◼ Example:
two faces of size 0 (nodes) combine into a face of size 1 (a line)

A B | F
0 0 | 1          [figure: 2-cube with corners 00, 01, 11, 10;
0 1 | 0           the solid nodes 00 and 10 form an edge]
1 0 | 1
1 1 | 0

ON-set = solid nodes          A varies within the face, B does not
OFF-set = empty nodes         this face represents the literal B’
DC-set = ×’d nodes

fill in the nodes that correspond to the elements of the ON-set.


If there are two adjacent solid nodes, we can use the uniting theorem.
43
Three variable example

◼ Binary full-adder carry-out logic

A B Cin | Cout
0 0  0  |  0        [figure: 3-cube for Cout; the ON-set {011, 101, 110, 111}
0 0  1  |  0         is covered by the three 1-subcubes (edges)
0 1  0  |  0         (A’+A)BCin, A(B+B’)Cin, and AB(Cin’+Cin)]
0 1  1  |  1
1 0  0  |  0
1 0  1  |  1
1 1  0  |  1
1 1  1  |  1

Cout = BCin + AB + ACin

the on-set is completely covered by the combination (OR) of the subcubes of lower
dimensionality - note that "111" is covered three times

44
Higher dimensional cubes

◼ Sub-cubes of higher dimension than 2

F(A,B,C) = Σm(4,5,6,7)

the on-set forms a square, i.e., a cube of dimension 2,
and represents an expression in one variable, i.e., 3 dimensions – 2 dimensions

[figure: 3-cube with the face {100, 101, 110, 111} highlighted]

A is asserted (true) and unchanged; B and C vary
this subcube represents the literal A

Output function is Σm(4,5,6,7) in S-o-P form.
In this case, the on-set nodes form a square.
Here, we use the uniting theorem on a larger scale: A(BC + BC’ + B’C + B’C’) = A

45
m-dimensional cubes in a n-dimensional Boolean
space

◼ In a 3-cube (three variables):


❑ a 0-cube, i.e., a single node, yields a term in 3 literals
❑ a 1-cube, i.e., a line of two nodes, yields a term in 2 literals
❑ a 2-cube, i.e., a plane of four nodes, yields a term in 1 literal
❑ a 3-cube, i.e., a cube of eight nodes, yields a constant term "1"
◼ In general,
❑ In an n-cube, an m-subcube (m < n) yields a term
with n – m literals

46
Karnaugh maps
◼ Flat map of Boolean cube
❑ wrap–around at edges
❑ hard to draw and visualize for more than 4 dimensions
❑ virtually impossible for more than 6 dimensions
◼ Alternative to truth-tables to help visualize adjacencies
❑ guide to applying the uniting theorem
❑ on-set elements with only one variable changing value are
adjacent unlike the situation in a linear truth-table
A B | F           K-map cell numbering          K-map of F
0 0 | 1                    A                           A
0 1 | 0                 0     2                     1     1
1 0 | 1           B                           B
1 1 | 0                 1     3                     0     0
Another technique is using a Karnaugh map, which is kind of a flat
version of the Boolean cube technique.
47
Karnaugh maps (cont’d)
◼ Numbering scheme based on Gray–code
❑ e.g., 00, 01, 11, 10
❑ only a single bit changes in code for adjacent map cells
3-variable map (variables A, B across the columns, C down the rows):

      AB
  C   00  01  11  10
  0    0   2   6   4
  1    1   3   7   5

4-variable map (variables A, B across the columns, C, D down the rows):

      AB
  CD  00  01  11  10
  00   0   4  12   8
  01   1   5  13   9
  11   3   7  15  11
  10   2   6  14  10

  13 = 1101 = ABC’D
This slide shows Karnaugh maps of 3 and 4 inputs. The thick line segment represents
the region (in the perpendicular direction) where each variable is always TRUE.
The complement of that region indicates the inverted variable.
* Gray code: two successive numbers differ in only one bit, and the sequence is cyclic
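The Gray-code ordering used for the K-map rows and columns can be generated with the
standard i XOR (i >> 1) construction. A small Python sketch (ours, not from the slides):

def gray_code(n):
    """Gray-code sequence of n-bit labels: adjacent labels differ in one bit."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

labels = [format(g, "02b") for g in gray_code(2)]
print(labels)          # ['00', '01', '11', '10'], the K-map row/column order

# Adjacent labels, including the wrap-around, differ in exactly one bit.
codes = gray_code(2)
assert all(bin(codes[i] ^ codes[(i + 1) % len(codes)]).count("1") == 1
           for i in range(len(codes)))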
48
Adjacencies in Karnaugh maps

◼ Wrap from first to last column


◼ Wrap top row to bottom row
        A
  000  010  110  100
C                                 [figure: the same adjacencies shown on the
  001  011  111  101               3-cube with corners 000 … 111]
        B

Let’s focus on cell 000; there are three adjacent cells.


Note that the number of adjacent cells is the same as the number of
input variables since it is equal to the number of bits.

49
Karnaugh map examples

◼ F = B’
              A
           1     1
      B
           0     0

◼ Cout = AB + ACin + BCin
              A
         0   0   1   0
   Cin
         0   1   1   1
              B

◼ f(A,B,C) = Σm(0,4,5,7) = AC + B’C’ + AB’
              A
         1   0   0   1
     C
         0   0   1   1
              B

  The on-set included in the red oval (the term AB’) is already covered by the two
  other adjacencies, AC and B’C’.

50
More Karnaugh map examples

              A
         0   0   1   1
     C                        G(A,B,C) = A
         0   0   1   1
              B

              A
         1   0   0   1
     C                        F(A,B,C) = Σm(0,4,5,7) = AC + B’C’
         0   0   1   1
              B

              A
         0   1   1   0        F’: simply replace 1’s with 0’s and vice versa
     C
         1   1   0   0        F’(A,B,C) = Σm(1,2,3,6) = BC’ + A’C
              B

51
Karnaugh map: 4-variable example

◼ F(A,B,C,D) = Σm(0,2,3,5,6,7,8,10,11,14,15)

F = C + A’BD + B’D’

           A
      1   0   0   1
      0   1   0   0
  D
      1   1   1   1
  C
      1   1   1   1
           B

[figure: the same cover shown on the 4-cube with corners 0000 … 1111]

find the smallest number of the largest possible subcubes to cover the ON-set
(fewer terms with fewer inputs per term)

52
Karnaugh maps: don’t cares (DCs)

◼ f(A,B,C,D) = Σm(1,3,5,7,9) + d(6,12,13)

❑ without don’t cares
  • f = A’D + B’C’D

           A
      0   0   X   0
      1   1   X   1
  D
      1   1   0   0
  C
      0   X   0   0
           B

Now let’s see how we can utilize don’t care (DC) terms in the Karnaugh map technique.
If we don’t use DC terms, the logic function f is A’D +B’C’D

53
Karnaugh maps: don’t cares (cont’d)

◼ f(A,B,C,D) = Σm(1,3,5,7,9) + d(6,12,13)


❑ f = A'D + B'C'D without don't cares
❑ f = A'D + C'D with don't cares

           A
      0   0   X   0        by using a don’t care as a "1",
      1   1   X   1        a 2-cube can be formed rather than
  D                        a 1-cube to cover this node
      1   1   0   0
  C
      0   X   0   0        don’t cares can be treated as 1s or 0s,
           B               depending on which is more advantageous

By interpreting DCs as 1s opportunistically,


we can utilize the uniting theorem at greater scale.
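One way to check a cover that uses don’t cares: every on-set minterm must be covered,
no off-set minterm may be covered, and don’t-care minterms may fall either way.
A Python sketch for the example above (ours, not from the slides):

from itertools import product

on_set = {1, 3, 5, 7, 9}
dc_set = {6, 12, 13}

def f_min(a, b, c, d):
    """Minimized cover read off the K-map: f = A'D + C'D."""
    return ((1 - a) & d) | ((1 - c) & d)

for i, (a, b, c, d) in enumerate(product((0, 1), repeat=4)):
    if i in on_set:
        assert f_min(a, b, c, d) == 1          # every required minterm is covered
    elif i not in dc_set:
        assert f_min(a, b, c, d) == 0          # no off-set minterm is covered
print("f = A'D + C'D covers the on-set and only borrows from the don't-care set")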
54
Combinational logic summary

◼ Logic functions, truth tables, and switches


❑ NOT, AND, OR, NAND, NOR, XOR, . . ., minimal set
◼ Axioms and theorems of Boolean algebra
❑ proofs by re-writing and perfect induction
◼ Gate logic
❑ networks of Boolean functions and their time behavior
◼ Canonical forms
❑ two-level and incompletely specified functions
◼ Simplification
❑ a start at understanding two-level simplification
◼ Later
❑ automation of simplification
❑ multi-level logic
❑ time behavior
❑ hardware description languages
❑ design case studies

55
