INTRODUCTION TO DIGITAL ELECTRONICS

Number Systems

Decimal System
The Decimal system is what you use every day when you count/ Its name is derived from the Latin
word Decem, which means ten. This makes sense since the system uses ten digits: 0, 1, 2, 3, 4, 5,
6, 7, 8 and 9. These digits are what we call the symbols of the decimal system.

Since we have ten symbols, we can count from 0 to 9. Note that 0, even though it often means
'nothing', is a symbol that counts! After all, you need a numeric way to say 'nothing'. When you
want to count past what your simple symbols will allow, you combine multiple digits. The table
below shows this concept, which is demonstrated by adding one for every step:
0 1 2 3 4 5 6 7 8 9
10 11 12 13 14 15 16 17 18 19
20 21 22 23 24 25 26 27 28 29
The table has 10 numbers across, which is the same number of symbols as the decimal system. As
you look at row 2, you notice that we added symbol 1 to the 0, making 10. In row 3, the one is
replaced by a 2, giving 20. The further you go down the table, the higher the numbers get.

Binary Number System


Binary is a number system used by digital devices like computers, CD players, etc. Binary is base 2,
unlike our everyday counting system, decimal, which is base 10 (denary). In other words, binary has
only 2 different numerals (0 and 1), unlike decimal, which has 10 numerals
(0, 1, 2, 3, 4, 5, 6, 7, 8 and 9). Here is an example of a binary number: 10011100
As you can see, it is simply a string of zeroes and ones. There are 8 numerals in all, which makes
this an 8-bit binary number; bit is short for Binary Digit, and each numeral is classed as a bit.

The bit on the far right (in this case a zero) is known as the least significant bit (LSB), and the bit
on the far left (in this case a 1) is known as the most significant bit (MSB).

When writing binary numbers you will need to signify that the number is binary (base 2). For
example, take the value 101: as written, it would be hard to work out whether it is a binary or a
decimal (denary) value. To get around this problem it is common to denote the base to which the
number belongs by writing the base value as a subscript after the number. For example:

101₂ is a binary number and 101₁₀ is a decimal (denary) value.

Octal Number System


Although this was once a popular number base, especially in the Digital Equipment Corporation
PDP-8 and other old computer systems, it is rarely used today. The Octal system is based on the
binary system with a 3-bit boundary. The Octal Number System:

Uses base 8
Includes only the digits 0 through 7 (any other digit would make the number an invalid octal
number)

The weighted values for each position are as follows:

8^5    8^4   8^3  8^2  8^1  8^0
32768  4096  512  64   8    1

Hexadecimal Number System


Binary is an effective number system for computers because it is easy to implement with digital
electronics. It is inefficient for humans to use binary, however, because it requires so many digits
to represent a number. The number 76, for example, takes only two digits to write in decimal, yet
takes seven digits to write in binary (1001100). To overcome this limitation, the hexadecimal
number system was developed. Hexadecimal is more compact than binary but is still based on the
digital nature of computers.

Hexadecimal works in the same way as binary and decimal, but it uses sixteen digits instead of
two or ten. Since the western alphabet contains only ten digits, hexadecimal uses the letters A-F
to represent the digits ten through fifteen. Here are the digits used in hexadecimal and their
equivalents in binary and decimal:
Hex 0 1 2 3 4 5 6 7 8 9 A B C D E F
Decimal 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Binary 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111

1 Binary to Decimal & Decimal to Binary


1.1 Binary to Decimal Conversion

To convert a binary number to decimal, multiply each bit by the weight of its position (a power of
two, increasing from 1 at the LSB) and add the results; for example, 101100₂ = 32 + 8 + 4 = 44₁₀.

1.2 Decimal to Binary Conversion


To convert a decimal number to binary, first subtract the largest possible power of two, and keep
subtracting the next largest possible power from the remainder, marking a 1 in each column
where this is possible and a 0 where it is not.

Example 1 - (Convert Decimal 44 to Binary)

Example 2 - (Convert Decimal 15 to Binary)

Example 3 - (Convert Decimal 62 to Binary)

Decimal Values and Binary Equivalents chart:

DECIMAL BINARY
1 1
2 10
3 11
4 100
5 101
6 110
7 111
8 1000
9 1001
10 1010
16 10000
32 100000
64 1000000
100 1100100
256 100000000
512 1000000000
1000 1111101000
1024 10000000000
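The subtract-the-largest-power procedure described above can be sketched in Python (an illustrative sketch; the function name is ours, not from the text). The three worked examples come out as follows:

```python
def dec_to_bin(n):
    """Convert a non-negative integer to a binary string by repeatedly
    subtracting the largest remaining power of two, writing a 1 where a
    power fits and a 0 where it does not."""
    if n == 0:
        return "0"
    power = 1
    while power * 2 <= n:          # largest power of two not exceeding n
        power *= 2
    bits = []
    while power >= 1:
        if n >= power:             # mark a 1 where the power fits...
            bits.append("1")
            n -= power
        else:                      # ...and a 0 where it does not
            bits.append("0")
        power //= 2
    return "".join(bits)

# The three worked examples above:
print(dec_to_bin(44))   # 101100
print(dec_to_bin(15))   # 1111
print(dec_to_bin(62))   # 111110
```

Python's built-in format(n, "b") gives the same result and can be used to check the chart above.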

1.3 Binary to Octal & Octal to Binary


The following table shows octal numbers and their binary equivalents:

Octal Binary

0 000

1 001

2 010

3 011

4 100

5 101

6 110

7 111
1.4 Binary to Octal Conversion
It is easy to convert from an integer binary number to octal. This is accomplished by:

1. Break the binary number into 3-bit sections, from the LSB to the MSB.
2. Convert each 3-bit section to its octal equivalent.

For example, the binary value 1010111110110010 will be written:

001 010 111 110 110 010

1 2 7 6 6 2
1.5 Octal to Binary Conversion
It is also easy to convert from an integer octal number to binary. This is accomplished by:

1. Convert each octal digit to its 3-bit binary equivalent.
2. Combine the 3-bit sections by removing the spaces.

For example, the octal value 127662 will be written:

1 2 7 6 6 2

001 010 111 110 110 010


This yields the binary number 001010111110110010 or 00 1010 1111 1011 0010 in more readable
format.
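Both directions of the 3-bit grouping can be sketched in Python (illustrative helper names, assuming unsigned integers given as digit strings):

```python
def bin_to_octal(b):
    """Pad to a multiple of 3 bits on the MSB side, then read off each
    3-bit group as an octal digit."""
    b = b.zfill((len(b) + 2) // 3 * 3)
    groups = [b[i:i + 3] for i in range(0, len(b), 3)]
    return "".join(str(int(g, 2)) for g in groups)

def octal_to_bin(o):
    """Replace each octal digit with its 3-bit binary equivalent."""
    return "".join(format(int(d), "03b") for d in o)

print(bin_to_octal("1010111110110010"))  # 127662
print(octal_to_bin("127662"))            # 001010111110110010
```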

1.6 Binary to HEX & HEX to Binary

Using this relationship, you can easily convert binary numbers to hex. Starting at the radix point
and moving either right or left, break the number into groups of four. The grouping of binary into
four bit groups is called binary-coded hexadecimal (BCH).

Convert 111010011₂ to hex:

0001 1101 0011 → 1D3₁₆

Add 0s to the left of the MSD of the whole portion of the number and to the right of the LSD of
the fractional part to form groups of four.

Convert .111₂ to hex:

.1110 → .E₁₆

In this case, if a 0 had not been added, the conversion would have been .7₁₆, which is incorrect.
Convert the following binary numbers to hex:

Q1. 10₂
Q2. 1011₂
Q3. 101111₂
Q4. 0011₂
Q5. 110011₂
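The four-bit (BCH) grouping can be sketched in Python; the helper below handles a whole part and an optional fractional part, padding each side as described above (the helper name is ours):

```python
def bin_to_hex(b):
    """Group bits in fours outward from the radix point, padding the
    whole part on the left and the fractional part on the right."""
    whole, _, frac = b.partition(".")
    whole = whole.zfill((len(whole) + 3) // 4 * 4)
    frac = frac.ljust((len(frac) + 3) // 4 * 4, "0")

    def hex_groups(bits):
        return "".join(format(int(bits[i:i + 4], 2), "X")
                       for i in range(0, len(bits), 4))

    out = hex_groups(whole)
    if frac:
        out += "." + hex_groups(frac)
    return out

print(bin_to_hex("111010011"))  # 1D3
print(bin_to_hex(".111"))       # .E  (without the padding 0 it would wrongly read .7)
```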

Binary Coded Decimal

Binary-coded decimal (BCD) is an encoding for decimal numbers in which each digit is
represented by its own binary sequence. Its main virtue is that it allows easy conversion to
decimal digits for printing or display and faster decimal calculations. Its drawbacks are the
increased complexity of circuits needed to implement mathematical operations and a relatively
inefficient encoding—it occupies more space than a pure binary representation. In BCD, a digit is
usually represented by four bits which, in general, represent the values/digits/characters 0-9.
Other bit combinations are sometimes used for sign or other indications.
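A minimal BCD encode/decode sketch in Python (helper names are ours; this uses the common 8421 encoding of each digit):

```python
def to_bcd(n):
    """Encode each decimal digit as its own 4-bit group."""
    return " ".join(format(int(d), "04b") for d in str(n))

def from_bcd(bcd):
    """Decode space-separated 4-bit groups back to a decimal number."""
    return int("".join(str(int(g, 2)) for g in bcd.split()))

print(to_bcd(127))                 # 0001 0010 0111
print(from_bcd("0001 0010 0111"))  # 127
```

Note that three BCD digits occupy 12 bits, whereas pure binary 127 (1111111) needs only 7; this is the space inefficiency mentioned above.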

Excess-three Code:
A number code in which the decimal digit n is represented by the four-bit binary equivalent of n +
3.

Excess-3 binary coded decimal (XS-3), also called biased representation or Excess-N, is a
numeral system that uses a pre-specified number N as a biasing value. It is a way to represent
values with a balanced number of positive and negative numbers. In XS-3, numbers are
represented as decimal digits, and each digit is represented by four bits as the BCD value plus 3
(the "excess" amount):

• The smallest binary number represents the smallest value. (i.e. 0 - Excess Value)
• The greatest binary number represents the largest value. (i.e. 2N - Excess Value - 1)

Decimal  Binary    Decimal  Binary    Decimal  Binary    Decimal  Binary
-3       0000      1        0100      5        1000      9        1100
-2       0001      2        0101      6        1001      10       1101
-1       0010      3        0110      7        1010      11       1110
 0       0011      4        0111      8        1011      12       1111

To encode a number such as 127, then, one simply encodes each of the decimal digits as above,
giving (0100, 0101, 1010).

The primary advantage of XS-3 coding over BCD coding is that a decimal number can be nines'
complemented (for subtraction) as easily as a binary number can be ones' complemented; just
invert all bits.

Adding Excess-3 works on a different algorithm than BCD coding or regular binary numbers.
When you add two XS-3 numbers together, the result is not an XS-3 number. For instance, when
you add 1 and 0 in XS-3 the answer seems to be 4 instead of 1. In order to correct this problem,
when you are finished adding each digit, you have to subtract 3 (binary 11) if the digit is less than
decimal 10 and add three if the number is greater than or equal to decimal 10.
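A short Python sketch of XS-3 encoding, and of the nines'-complement-by-inversion property noted above (helper names are ours):

```python
def to_xs3(n):
    """Encode each decimal digit as BCD plus 3 (the 'excess')."""
    return " ".join(format(int(d) + 3, "04b") for d in str(n))

def nines_complement_xs3(code):
    """In XS-3 the nines' complement is just a bitwise inversion."""
    flip = {"0": "1", "1": "0"}
    return " ".join("".join(flip[b] for b in g) for g in code.split())

print(to_xs3(127))                        # 0100 0101 1010
print(nines_complement_xs3(to_xs3(127)))  # 1011 1010 0101  -- the XS-3 code for 872
```

Inverting every bit of the XS-3 code for 127 yields the XS-3 code for 872, which is indeed the nines' complement of 127.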

Seven Segment Display Code:

Binary numbers are necessary, but very hard to read or interpret. A seven-segment (LED) display
is used to present binary information as decimal digits.

A seven-segment display may have 7, 8, or 9 leads on the chip. Usually leads 8 and 9 are decimal
points. The figure below is a typical component and pin layout for a seven segment display.

The light emitting diodes in a seven-segment display are arranged in the figure below.

The image below is your typical seven-segment display, with each of the segments labeled with the
letters A through G. To display digits on these displays you turn on some of the LEDs. For
example, when you illuminate segments B and C, your eye perceives it as looking
like the number "1." Light up A, B, and C and you will see what looks like a "7."
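The segment patterns for the ten digits can be captured in a small lookup table. The mapping below follows the common A-G labelling described above; individual displays may vary slightly (for example, whether the 6 or 9 includes its "tail" segment):

```python
# Segment patterns for digits 0-9 (common convention):
# A = top, B = upper right, C = lower right, D = bottom,
# E = lower left, F = upper left, G = middle.
SEGMENTS = {
    0: "ABCDEF",
    1: "BC",
    2: "ABDEG",
    3: "ABCDG",
    4: "BCFG",
    5: "ACDFG",
    6: "ACDEFG",
    7: "ABC",
    8: "ABCDEFG",
    9: "ABCDFG",
}

print(SEGMENTS[1])  # BC   -- segments B and C look like a "1"
print(SEGMENTS[7])  # ABC  -- A, B and C look like a "7"
```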

Boolean Postulates
Introduction
The most obvious way to simplify Boolean expressions is to manipulate them in the same way as
normal algebraic expressions are manipulated. With regards to logic relations in digital forms, a
set of rules for symbolic manipulation is needed in order to solve for the unknowns.
A set of rules formulated by the English mathematician George Boole describes certain
propositions whose outcome would be either true or false. With regard to digital logic, these
rules are used to describe circuits whose state can be either 1 (true) or 0 (false). In order to fully
understand this, the relation between the AND gate, OR gate and NOT gate operations should be
appreciated. A number of rules can be derived from these relations, as Table 1 demonstrates.

• P1: X = 0 or X = 1
• P2: 0 . 0 = 0
• P3: 1 + 1 = 1
• P4: 0 + 0 = 0
• P5: 1 . 1 = 1
• P6: 1 . 0 = 0 . 1 = 0
• P7: 1 + 0 = 0 + 1 = 1

Table 1: Boolean Postulates

Laws of Boolean Algebra


Table 2 shows the basic Boolean laws. Note that every law has two expressions, (a) and (b). This
is known as duality. These are obtained by changing every AND(.) to OR(+),
every OR(+) to AND(.) and all 1's to 0's and vice-versa.
It has become conventional to drop the . (AND symbol) i.e. A.B is written as AB.

Commutative Law
(a) A + B = B + A
(b) AB=BA

Associate Law
(a) (A + B) + C = A + (B + C)
(b) (A B) C = A (B C)

Distributive Law
(a) A (B + C) = A B + A C
(b) A + (B C) = (A + B) (A + C)

Identity Law
(a) A + A = A
(b) AA=A


Redundance Law

(a) A + A B = A
(b) A (A + B) = A

(a) 0 + A = A
(b) 0A=0

(a) 1 + A = 1
(b) 1A=A


De Morgan's Theorem

De Morgan's theorems are rules in formal logic relating pairs of dual logical operators in a
systematic manner expressed in terms of negation. The relationship so induced is called De
Morgan duality.

not (P and Q) = (not P) or (not Q)
not (P or Q) = (not P) and (not Q)

De Morgan's laws are based on the equivalent truth-values of each pair of statements.
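Because each statement takes only two truth values, both laws can be checked exhaustively; a brief Python check:

```python
from itertools import product

# Exhaustively verify both De Morgan laws over all truth-value pairs.
for p, q in product([False, True], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))
    assert (not (p or q)) == ((not p) and (not q))

print("De Morgan's laws hold for all truth values")
```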


Minimize Logic Expressions Algebraically

Using the laws given above, complicated expressions can be simplified.

Basic Digital Logic


There are three major functions in Digital Electronics. These functions are used to make more
complicated circuits, so an understanding of how these building blocks work is key to
understanding how circuits work.

The "AND" function requires that multiple inputs are all true for the output to be true. For
example, if you turn your car's ignition key, and step on the gas, then your car will start. Simply
turning the key or stepping on the gas isn't enough, both must be done to get the correct output.
Likewise, all the inputs into an AND gate must be true for the output to be true.

The "OR" function requires any input to be true for the output to be true. For example, you can
enter your home through either the back door or front door. Once you enter either one, you are
inside your home. Likewise, at least one of the inputs into an OR gate must be true for the output
to be true. If more than one input is true, the output is still true, since the minimum requirement is
one.

The "INVERTER" function (also known as the "NOT") simply changes the condition. If it was
true it becomes false, and if it was false it becomes true. For example, it is never day and night at
the same time. If it is day, it is not night. Likewise, an INVERTER gate will logically change the
input. For the output to be true, the input must be false.

In digital electronics, a false condition is 0 volts (called VSS), while a true condition is the applied
voltage (called VCC or VDD). Since the applied voltage can range from under 3 volts to 5 volts,
the true condition is normally simply called a logical 1, and the false condition is called a logical
0.

Using this information, it is possible to create what is called a "truth table." A truth table lists each
possible input combination, and the resulting output for each combination. While the AND and
the OR functions can each have two or more inputs, the truth table given here will assume two
inputs.

  AND           OR          INV
#1 #2 O      #1 #2 O      I O
-------      -------      ---
 0  0 0       0  0 0      0 1
 0  1 0       0  1 1      1 0
 1  0 0       1  0 1
 1  1 1       1  1 1

To read this table, read across. For example, look at the third line down. If input #1 is a logical 1
while input #2 is a logical 0, the output of an AND gate is a logical 0. On the other hand, the
same inputs into an OR gate will generate a logical 1 output. Remember that for an AND gate all
inputs must be true (input #1 AND input #2) to get a true output. However, for an OR gate only
one must be true (input #1 OR input #2) to get a true output.
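The same table can be generated programmatically; a small Python sketch using the bitwise operators as gates:

```python
# Model the three basic gates on single bits (0 = false, 1 = true).
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def INV(a):
    return 1 - a

print("#1 #2  AND  OR")
for a in (0, 1):
    for b in (0, 1):
        print(f" {a}  {b}    {AND(a, b)}   {OR(a, b)}")

print("I  INV")
for i in (0, 1):
    print(f"{i}    {INV(i)}")
```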

Basic Boolean Algebra Manipulation


Boolean Algebra equations can be manipulated by following a few basic rules.

Manipulation Rules
A + B = B + A
A * B = B * A
(A + B) + C = A + (B + C)
(A * B) * C = A * (B * C)
A * (B + C) = (A * B) + (A * C)
A + (B * C) = (A + B) * (A + C)

Equivalence Rules
¯(¯A) = A   (double negative)
A + A = A
A * A = A
A * ¯A = 0
A + ¯A = 1

Rules with Logical Constants


0 + A = A
1 + A = 1
0 * A = 0
1 * A = A
Many of these resemble the familiar rules of ordinary algebra. At any rate, this permits a circuit
designer to create a circuit as it comes to mind, then manipulate the formula to generate an
equivalent circuit that does the same thing but requires less space.

This can be illustrated using the 5th manipulation rule.

Using the rule, generating an equivalent circuit that does the exact same thing, but be less
complicated, can be done with reasonable ease.

In the case of CMOS, the right hand side of the formula can also be manipulated, just always
remember to invert. The manipulation occurs under the invert bar.

D = ¯((A * B) + (A * C))

is the same as...

D = ¯(A * (B + C))

The manipulation is done the exact same way. Once there is a simplified formula, using the rules
with logical constants permits the placement of values directly into the formula to see what the
answer is. For example, using the above non-inverted formula, suppose C is a logical 1.

D = A * (B + C)
D = A * (B + 1)
D = A * (1) [1 + A = 1]
D = A [1 * A = A]

If C is known to be a logical 1, anything ORed with a logical 1 is always a logical 1. Since the
minimum requirement is one input, once a single input is true (in this case C), the other inputs
don't alter the result. On the other hand, the AND gate requires all inputs. With B + C true, the
only other requirement is A. As the formula gave, D will be whatever A is.
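The same constant-propagation result can be confirmed by brute force in Python: with C fixed at 1, D always equals A.

```python
# With C fixed at a logical 1, D = A * (B + C) collapses to D = A
# for every combination of A and B.
C = 1
for A in (0, 1):
    for B in (0, 1):
        D = A & (B | C)
        assert D == A

print("verified: D = A * (B + 1) = A")
```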

Many Boolean Algebra problems can be solved using more than one formula, just like most
Algebra problems. For example...

D = ¯(A * (B + C))                 [given formula]
D = ¯A + ¯(B + C)                  [De Morgan: ¯(A * B) = ¯A + ¯B]
D = ¯A + (¯B * ¯C)                 [De Morgan: ¯(A + B) = ¯A * ¯B]
D = (¯A + ¯B) * (¯A + ¯C)          [A + (B * C) = (A + B) * (A + C)]
D = ¯(A * B) * ¯(A * C)            [De Morgan: ¯(A * B) = ¯A + ¯B]
D = ¯((A * B) + (A * C))           [De Morgan: ¯(A + B) = ¯A * ¯B]

Manipulation rule number 5 could have been used to go from the first step to the last one in a
single move. However, using DeMorgan's Theorem, the problem turns into something that
manipulation rule number 6 can then be used on instead. DeMorgan's Theorem changes the logic
of the formula.

The Karnaugh Map

The Karnaugh map provides a simple and straightforward graphic method of minimising Boolean
expressions. It groups together expressions with common factors, thus eliminating unwanted
variables. With the Karnaugh map, Boolean expressions having up to four and even six variables
can be simplified.

The Karnaugh map is a rectangular map of the value of the expression for all possible input
values; it comprises a box for every line in the truth table. But unlike a truth table, in which the
input values typically follow a standard binary sequence (00, 01, 10, 11), the Karnaugh map's
input values must be ordered such that the values for adjacent columns vary by only a single bit,
for example 00, 01, 11, and 10. This ordering is known as a Gray code, and it is a key factor in
the way in which Karnaugh maps work.
The figure illustrates the concept of a Karnaugh map for 2 inputs.

Example of Karnaugh map for 2 input AND gate

Example of a Karnaugh map for a 2-input AND gate:

A B | Y
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1

Only the A = 1, B = 1 cell of the map contains a 1, so Y = A . B.

Lets take an example of 4 inputs (A, B, C and D) as in Figure 2.

Figure 2: The basic Karnaugh map

Two things are noteworthy about this map. First, we've arranged the 16 possible values of the four
inputs as a 4x4 array, with two bits encoding each row or column.

The second and key feature is the way we number the rows and columns. They aren't in binary
sequence, as you might think. As you can see, they have the sequence 00, 01, 11, 10. Some of
you may recognize this as a Gray code.

Why this particular sequence? Because the codes associated with any two adjacent rows or
columns represent a change in only one variable. In a true binary counting code, sometimes
several digits can change in a single step; for example, the next step after 01111 is 10000.
Five output signals must change values simultaneously. In digital circuits, this can cause glitches
if one gate delay is a bit faster or slower than another. The Gray code avoids the problem. It's
commonly used in optical encoders.
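The binary-reflected Gray code used for K-map row and column ordering can be generated with the standard g = i XOR (i >> 1) construction (a sketch; the text itself does not give this formula):

```python
def gray_code(nbits):
    """Return the n-bit binary-reflected Gray code sequence; adjacent
    entries differ in exactly one bit (g = i XOR (i >> 1))."""
    return [format(i ^ (i >> 1), f"0{nbits}b") for i in range(2 ** nbits)]

print(gray_code(2))  # ['00', '01', '11', '10'] -- the K-map ordering
```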

Suppose the output value for two adjacent cells is the same. Since only one input variable is
changed between the two cells, this tells us that the output doesn't depend on that input. It's a
"don't care" for that output.

Figure 3: Looking for patterns

Look at Figure 3. Group X is true for inputs ABCD = 0100 and 1100. That means that it doesn't
depend on A, and we can write:

X = B . ¯C . ¯D (20)

Similarly, Group Y doesn't depend on B or C. Its value is:

(21)

Note that the groupings don't necessarily have to be in contiguous rows or columns. In Group Z,
the group wraps around the edges of the map.

If we can group cells by twos, we eliminate one input. By fours, two inputs, and so on. If the cell
associated with a given output is isolated, it depends on all four inputs, and no minimization is
possible.

The Karnaugh map gives us a wonderful, graphical picture that lets us group the cells in a near-
optimal fashion. In doing so, we minimize the representation. Neat, eh?

Figure 4: Equation 2, mapped

Now let's see how Equation 2 plots onto the Karnaugh map, as shown in Figure 4. To make it
easier to see, I'll assign a different lowercase letter to each term of the equation:

(22)

In this case, the individual groups don't matter much. All that counts is the largest group we can
identify, which is of course the entire array of 16 cells. The output X is true for all values of the
inputs, so all four are don't-care variables.

As you can see, using a Karnaugh map lets us see the results and, usually, the optimal grouping at
a glance. It might seem that drawing a bunch of graphs is a tedious way to minimize logic, but
after a little practice, you get where you can draw these simple arrays fast and see the best
groupings quickly. You have to admit, it's a better approach than slogging through Equations 2
through 19.

NAND and NOR implementation
Function F can be implemented using NAND gates only, i.e.:

F = ¯A . B + ¯A . ¯C . D + B . C . D + ¯A . C . ¯D + ¯B . C . ¯D

Complement twice: F = ¯(¯F)

F = ¯( ¯(¯A . B) . ¯(¯A . ¯C . D) . ¯(B . C . D) . ¯(¯A . C . ¯D) . ¯(¯B . C . ¯D) )

F now uses NAND gates only.

Similarly, F can be implemented using NOR gates only

F = ¯A . B + ¯A . ¯C . D + B . C . D + ¯A . C . ¯D + ¯B . C . ¯D

F = (A + ¯B) . (A + C + ¯D) . (¯B + ¯C + ¯D) . (¯A + ¯C + D) . (B + ¯C + D)

F = ¯( ¯(A + ¯B) + ¯(A + C + ¯D) + ¯(¯B + ¯C + ¯D) + ¯(¯A + ¯C + D) + ¯(B + ¯C + D) )

Now a circuit diagram using NOR gates only can be drawn

K-Map with Don’t care states
A K-Map is constructed from a truth table, where all the combinations are given. Assuming
sum-of-minterms selection, a ‘1’ is inserted in the K-Map whenever a certain combination results in
obtaining a ‘1’.

In certain circumstances, a few combinations may never happen, or if they do, one may not be
concerned about their occurrence and subsequent result. These combinations are called DON’T
CARE states and they are represented on a K-Map by an ‘X’.

E.g. BCD to Excess-3:

BCD     Excess-3
0000    0011
0001    0100
0010    0101
0011    0110
0100    0111
...     ...

The don’t care states are the six input combinations 1010, 1011, 1100, 1101, 1110 and 1111,
which are not allowed in BCD (they do not represent a digit).

Simple Adders
In order to design a circuit capable of binary addition one would have to look at all of the logical
combinations. You might do that by looking at the following four sums:

0 0 1 1
+0 +1 +0 +1
0 1 1 10

That looks fine until you get to 1 + 1. In that case, you have a carry bit to worry about. If you
don't care about carrying (because this is, after all, a 1-bit addition problem), then you can see
that you can solve this problem with an XOR gate. But if you do care, then you might rewrite
your equations to always include 2 bits of output, like this:

0 0 1 1
+0 +1 +0 +1
00 01 01 10

From these equations you can form the logic table:

1-bit Adder with Carry-Out
A B Q CO
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1

By looking at this table you can see that you can implement Q with an XOR gate and CO (carry-
out) with an AND gate.
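That observation gives a two-gate half adder, sketched here in Python with ^ as XOR and & as AND (the function name is ours):

```python
def half_adder(a, b):
    """1-bit half adder: sum Q is the XOR of the inputs, and the
    carry-out CO is their AND."""
    return a ^ b, a & b   # (Q, CO)

for a in (0, 1):
    for b in (0, 1):
        q, co = half_adder(a, b)
        print(a, b, q, co)
# 1 + 1 gives Q = 0 with CO = 1, matching the table above.
```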

What if you want to add two 8-bit bytes together? This becomes slightly harder. The easiest
solution is to modularize the problem into reusable components and then replicate components. In
this case, we need to create only one component: a full binary adder.

The difference between a full adder and the previous adder we looked at is that a full adder accepts
an A and a B input plus a carry-in (CI) input. Once we have a full adder, then we can string eight
of them together to create a byte-wide adder and cascade the carry bit from one adder to the next.

In the next section, we'll look at how a full adder is implemented into a circuit.

Full Adders
The logic table for a full adder is slightly more complicated than the tables we have used before,
because now we have 3 input bits. It looks like this:

One-bit Full Adder with Carry-In and Carry-Out


CI A B Q CO
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1

There are many different ways that you might implement this table. If you look at the Q bit, you
can see that the top 4 bits are behaving like an XOR gate with respect to A and B, while the
bottom 4 bits are behaving like an XNOR gate with respect to A and B. Similarly, the top 4 bits
of CO are behaving like an AND gate with respect to A and B, and the bottom 4 bits behave like
an OR gate. Taking those facts, the following circuit implements a full adder:

This definitely is not the most efficient way to implement a full adder, but it is extremely easy to
understand and trace through the logic using this method.
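Behaviourally, the table boils down to the usual two-half-adder formulation, sketched below in Python (a sketch of the truth table, not of the particular gate arrangement in the figure):

```python
def full_adder(ci, a, b):
    """1-bit full adder: Q is the XOR of all three inputs; CO is true
    when at least two of the inputs are true."""
    q = ci ^ a ^ b
    co = (a & b) | (ci & (a ^ b))
    return q, co

# Reproduce the truth table from the text:
for ci in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            print(ci, a, b, *full_adder(ci, a, b))
```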

Exercise: Implement the above logic using fewer gates


Now we have a piece of functionality called a "full adder." What a computer engineer then does is
"black-box" it so that he or she can stop worrying about the details of the component. A black
box for a full adder would look like this:

With that black box, it is now easy to draw a 4-bit full adder:

In this diagram the carry-out from each bit feeds directly into the carry-in of the next bit over. A 0
is hard-wired into the initial carry-in bit. If you input two 4-bit numbers on the A and B lines, you
will get the 4-bit sum out on the Q lines, plus 1 additional bit for the final carry-out. You can see
that this chain can extend as far as you like, through 8, 16 or 32 bits if desired.

The 4-bit adder we just created is called a ripple-carry adder. It gets that name because the carry
bits "ripple" from one adder to the next. This implementation has the advantage of simplicity but
the disadvantage of speed problems. In a real circuit, gates take time to switch states (the time is
on the order of nanoseconds, but in high-speed computers nanoseconds matter). So 32-bit or 64-
bit ripple-carry adders might take 100 to 200 nanoseconds to settle into their final sum because of
carry ripple. For this reason, engineers have created more advanced adders called carry-
lookahead adders. The number of gates required to implement carry-lookahead is large, but the
settling time for the adder is much better.
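A ripple-carry adder is then just the full adder applied bit by bit, LSB first, with each carry-out fed into the next carry-in. A behavioural sketch (assuming equal-length, MSB-first bit lists; names are ours):

```python
def ripple_add(a_bits, b_bits):
    """Add two equal-length bit lists (MSB first) by cascading the
    carry from each 1-bit full adder into the next."""
    def full_adder(ci, a, b):
        return ci ^ a ^ b, (a & b) | (ci & (a ^ b))

    carry, out = 0, []               # the initial carry-in is hard-wired to 0
    for a, b in zip(reversed(a_bits), reversed(b_bits)):  # LSB first
        q, carry = full_adder(carry, a, b)
        out.append(q)
    return list(reversed(out)), carry  # sum bits (MSB first) and final carry-out

print(ripple_add([0, 1, 1, 0], [0, 0, 1, 1]))  # 6 + 3 -> ([1, 0, 0, 1], 0)
```

The chain extends to 8, 16 or 32 bits simply by passing longer lists.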

NAND Gate Implementation of Half Adder

Half adder:
A Combinational Circuit that performs the addition of two bits is called a Half adder.

Full adder:
A Combinational Circuit that performs the addition of three bits (two significant bits and a
previous carry) is a full adder.

NAND Implementation:

S = ¯A . B + A . ¯B = ¯( ¯(¯A . B) . ¯(A . ¯B) )

C = A . B

(circuit: S is formed from the NAND terms ¯(¯A . B) and ¯(A . ¯B) combined by a further NAND
stage; C is formed from a NAND of A and B followed by an inverter)

Half adder

NAND Gate Implementation of Full Adder

Full Adder

x y z | C s
0 0 0 | 0 0
0 0 1 | 0 1
0 1 0 | 0 1
0 1 1 | 1 0
1 0 0 | 0 1
1 0 1 | 1 0
1 1 0 | 1 0
1 1 1 | 1 1

S = ¯x . ¯y . z + ¯x . y . ¯z + x . ¯y . ¯z + x . y . z

C = ¯x . y . z + x . ¯y . z + x . y . ¯z + x . y . z

K-map for S:

xy\z   0   1
00         1
01     1
11         1
10     1

S = no simplification possible.

K-map for C:

xy\z   0   1
00
01         1
11     1   1
10         1

C = x . y + x . z + y . z

(circuit: NAND-only gate-level implementation of S and C)

NAND Implementation of Full Adder (FA)

(figures: minterm outputs D0-D7 feeding an 8-to-3 encoder whose outputs A, B, C combine with
F; a MUX; and a programmable array structure with n inputs, K product (‘AND’) terms, m sum
(‘OR’) terms and m outputs)
Terminologies used to characterize Integrated Circuits
Fan in: the number of inputs of a digital gate.

Fan out: the number of gate inputs connected to the output of a gate (the amount of loading).
Sometimes other types of loads (wires, pads, etc.) are expressed as a fan out equivalent.

Example: For the following gate the Fan in is 4 and the Fan out is 5.

Input & Output Currents:

IIL is the input current to a gate when the input has a logic value of 0 (low input current)

IIH is the input current to a gate when the input has a logic value of 1 (high input current)

IOL is the output current of a gate when the output has a logic value of 0 (low output current)

IOH is the output current of a gate when the output has a logic value of 1 (high output current)

Noise Margins: These circuit parameters specify the circuit’s ability to withstand noise.

The low noise margin (NML or NM0) specifies by how much an input voltage representing
logic 0 can change before an error occurs due to it being interpreted as 1.

The high noise margin (NMH or NM1) specifies by how much an input voltage representing
logic 1 can change before an error occurs due to it being interpreted as 0. A typical voltage
characteristic (output voltage versus input voltage) of a digital gate is shown below.

Propagation Delay: The figure below shows the input and output waveforms of a digital gate
where all the delay parameters are defined on the figure.

Power Dissipation: In general, the amount of power dissipated by a logic gate has two
components, a static one and a dynamic one.
a. Static Power (PSt) is due to DC current flow between the two supplies (VDD and ground):
PSt = IDC x VDD.
b. Dynamic Power is due to the charging and discharging of capacitances at the outputs of
the gates. These capacitances are made up of wiring capacitances, capacitances of the output
transistors of the gate itself, and input capacitances of the gates connected to the output of the
gate (the Fan out). The average value of the dynamic power is PD = f x C x VDD², where f is the
switching frequency in hertz and C is the output (load) capacitance. Total power PT = PSt + PD.
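A quick numeric sketch of these formulas, with assumed (purely illustrative) values for the frequency, load capacitance, supply voltage and DC leakage current:

```python
# Illustrative (assumed) values: a gate switching at f = 100 MHz into a
# C = 50 fF load from a VDD = 3.3 V supply, with 2 uA of DC leakage.
f, C, VDD, I_DC = 100e6, 50e-15, 3.3, 2e-6

P_dynamic = f * C * VDD ** 2   # P_D = f x C x VDD^2
P_static = I_DC * VDD          # P_St = I_DC x VDD
P_total = P_static + P_dynamic # P_T = P_St + P_D

print(f"P_D  = {P_dynamic * 1e6:.2f} uW")
print(f"P_St = {P_static * 1e6:.2f} uW")
print(f"P_T  = {P_total * 1e6:.2f} uW")
```

Note the quadratic dependence on VDD: halving the supply voltage cuts the dynamic power by a factor of four.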

Integrated Circuits (Chips)

Integrated Circuits are usually called ICs or chips. They are complex circuits which have been
etched onto tiny chips of semiconductor (silicon). The chip is packaged in a plastic holder with
pins spaced on a 0.1" (2.54mm) grid which will fit the holes on stripboard and breadboards. Very
fine wires inside the package link the chip to the pins.

Pin numbers

The pins are numbered anti-clockwise around the IC (chip) starting near the notch or dot. The
diagram shows the numbering for 8-pin and 14-pin ICs, but the principle is the same for all sizes.

Chip holders (DIL sockets)

ICs (chips) are easily damaged by heat when soldering and their short pins cannot be protected
with a heat sink. Instead we use a chip holder, strictly called a DIL socket (DIL = Dual In-Line),
which can be safely soldered onto the circuit board. The chip is pushed into the holder when all
soldering is complete.

Chip holders are only needed when soldering so they are not used on breadboards.
Commercially produced circuit boards often have chips soldered directly to the board without a
chip holder; usually this is done by a machine which is able to work very quickly. Please don't
attempt to do this yourself, because you are likely to destroy the chip, and it will be difficult to
remove without damage by de-soldering.

Nowadays, most logic or digital systems are available in the market as digital IC building blocks
in various logic families. In fact, it is comparatively more convenient and cheaper to build logic
circuits and systems using ICs, which are more reliable than gates built from discrete components.
In this chapter we shall be discussing the ICs based on packing density, IC series and their
handling procedures.

Categories of Integrated Circuits Based on Packing Density

SSI (Small scale integration) means integration levels typically below 12 equivalent gates per
IC package.

MSI (Medium scale integration) means integration typically between 12 and 100 equivalent
gates per IC package.

LSI (Large scale integration) implies integration typically above 100 equivalent gates per IC
package.

VLSI (Very large scale integration) means integration levels with an extremely high number of
gates. For example, a RAM may have more than 4000 gates in a single chip.
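These categories can be expressed as a small classifier. The SSI/MSI/LSI thresholds come from the definitions above; the text gives no exact LSI/VLSI boundary, so the 4000-gate cut-off below is an assumption based on the RAM example:

```python
def integration_scale(gate_count):
    """Classify an IC by equivalent gates per package.

    SSI/MSI/LSI thresholds follow the definitions in the text;
    the 4000-gate LSI/VLSI boundary is an illustrative assumption.
    """
    if gate_count < 12:
        return "SSI"
    elif gate_count <= 100:
        return "MSI"
    elif gate_count <= 4000:
        return "LSI"
    else:
        return "VLSI"

# Example: a 7400 quad NAND (4 gates) is SSI; a large RAM is VLSI
print(integration_scale(4), integration_scale(5000))
```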

Logic IC Series

Commonly used Logic IC Families are

a. Standard TTL (Type 74/54)
b. CMOS (Type 4000B)
c. Low-power Schottky TTL (Type 74LS/54LS)
d. Schottky TTL (Type 74S/54S)
e. ECL (Type 10,000)

Packages in Digital ICs

Digital ICs come in four major packaged forms, shown in Fig. 1.3.1.

Dual-in-Line Package (DIP): Most TTL and MOS devices in SSI, MSI and LSI are packaged in
14-, 16-, 24- or 40-pin DIPs.

Mini Dual-in-Line Package (Mini DIP): Mini DIPs are usually 8-pin packages.

Flat Pack: Flat packages are commonly used in applications where light weight is an essential
requirement; many military and space applications use flat packs. The number of pins on a flat
pack varies from device to device.

TO-5, TO-8 Metal Can: The number of pins on a TO-5 or TO-8 can varies from 2 to 12.

All the above styles of packaging have different systems of numbering pins. To find out how
the pins of a particular package are numbered, consult the manufacturer's data sheet on
package type and pin numbers.

Fig. 1.3.1 Typical packaging systems in digital integrated circuits.

Identification of Integrated Circuits

Usually digital integrated circuits come in a dual-in-line (DIP) package. Sometimes the device
in a DIP package may be an analog component, such as an operational amplifier or tapped
resistors, and therefore it is essential to understand how to identify a particular IC. In a
schematic diagram, ICs are represented in one of two ways.

Fig. 1.3.2 Representation schemes for digital ICs

a. IC is represented by a rectangle (Fig. 1.3.2) with pin numbers shown along with each pin.
The identification number of the IC is listed on the schematic.

b. Representation of the IC in terms of its simple logic elements. For example, IC 74LS08 is a
quad 2-input AND gate, and when it is represented in a schematic it is listed as 74LS08.

An IC can be identified from the information given on the IC itself. The numbering system,
though it has been standardized, has some variations from manufacturer to manufacturer.
Usually an IC has the following markings on its surface (Fig. 1.3.3).

The core number identifies the logic family and its functions. In 74LS51, the first two numbers
indicate that the IC is a member of the 7400 series IC family. The last letters give the function of
the IC. Letters inserted in the centre of the core number show the logic sub-family. Since TTL is
the most common series, the 74/54 core numbers are the ones most often encountered.

Transistor-Transistor Logic (TTL) Technology


Transistor–transistor logic (TTL) is a class of digital circuits built from bipolar junction
transistors (BJT), and resistors. It is called transistor–transistor logic because both the logic
gating function (e.g., AND) and the amplifying function are performed by transistors (contrast
this with RTL and DTL).

TTL is notable for being a widespread integrated circuit (IC) family used in many applications
such as computers, industrial controls, test equipment and instrumentation, consumer electronics,
synthesizers, etc. The designation TTL is sometimes used to mean TTL-compatible logic levels,
even when not associated directly with TTL integrated circuits, for example as a label on the
inputs and outputs of electronic instruments.

TTL contrasts with the preceding resistor–transistor logic (RTL) and diode–transistor logic (DTL)
generations by using transistors not only to amplify the output, but also to isolate the inputs. The
p-n junction of a diode has considerable capacitance, so changing the logic level of an input
connected to a diode, as in DTL, requires considerable time and energy.

TTL is particularly well suited to integrated circuits because the inputs of a gate may all be
integrated into a single base region to form a multiple-emitter transistor. Such a highly
customized part might increase the cost of a circuit where each transistor is in a separate package.
However, by combining several small on-chip components into one larger device, it reduces the
cost of implementation on an IC.

As with all bipolar logic, a small amount of current must be drawn from a TTL input to ensure
proper logic levels. The total current drawn must be within the capacities of the preceding stage,
which limits the number of nodes that can be connected (the fanout).

All standardized common TTL circuits operate with a 5-volt power supply. A TTL input signal is
defined as "low" when between 0 V and 0.8 V with respect to the ground terminal, and "high"
when between 2.2 V and 5 V (precise logic levels vary slightly between sub-types).
Standardization of the TTL levels was so ubiquitous that complex circuit boards often contained
TTL chips made by many manufacturers, selected for availability and cost and not just
compatibility. Within usefully broad limits, logic gates could be treated as ideal Boolean devices
without concern for electrical limitations.
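The quoted input thresholds can be captured in a small helper. The voltage bands are the ones given above (0–0.8 V low, 2.2–5 V high); the function name and the "indeterminate" label for the forbidden region are illustrative:

```python
def ttl_input_level(volts):
    """Classify a TTL input voltage using the thresholds quoted in the text.

    Voltages between 0.8 V and 2.2 V fall in the forbidden region, where
    the logic level seen by the gate is not guaranteed.
    """
    if 0.0 <= volts <= 0.8:
        return "low"
    elif 2.2 <= volts <= 5.0:
        return "high"
    else:
        return "indeterminate"

# Example: 0.4 V reads as low, 3.3 V as high, 1.5 V is undefined
for v in (0.4, 3.3, 1.5):
    print(v, ttl_input_level(v))
```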
Comparison with other logic families
TTL devices consume substantially more power than an equivalent CMOS device at rest, but
power consumption does not increase with clock speed as rapidly as for CMOS devices.
Compared to contemporary ECL circuits, TTL uses less power and has easier design rules, but is
substantially slower. Designers could combine ECL and TTL devices in the same system to
achieve best overall performance and economy, but level-shifting devices were required between
the two logic families. TTL was less sensitive to damage from electrostatic discharge than early
CMOS devices.

Due to the output structure of TTL devices, the output impedance is asymmetrical between the
high and low state, making them unsuitable for driving transmission lines. This is usually solved
by buffering the outputs with special line driver devices where signals need to be sent through
cables. ECL, by virtue of its symmetric low-impedance output structure, does not have this
drawback.

Applications of TTL
Before the advent of VLSI devices, TTL integrated circuits were a standard method of
construction for the processors of minicomputers and mainframes, such as the DEC VAX and
Data General Eclipse, and for equipment such as machine tool numerical controls, printers, and
video display terminals. As microprocessors became more functional, TTL devices became
important for "glue logic" applications, such as fast bus drivers on a motherboard, which tie
together the function blocks realized in VLSI elements.

Diode–transistor logic
Diode–Transistor Logic (DTL) is a class of digital circuits built from bipolar junction transistors
(BJTs), diodes and resistors; it is the direct ancestor of transistor–transistor logic. It is called
diode–transistor logic because the logic gating function (e.g., AND) is performed by a diode
network and the amplifying function is performed by a transistor.

Operation
With the simplified circuit shown in the picture the voltage at the base will be near 0.7 volts even
when one input is held at ground level, which results in unstable or invalid operation. Two diodes
in series with R3 are commonly used to lower the base voltage and prevent any base current
when one or more inputs are at low logic level. The IBM 1401 used DTL circuits almost identical
to this simplified circuit, but solved the base bias level problem mentioned above by alternating
NPN and PNP based gates operating on different power supply voltages instead of adding extra
diodes.

Speed disadvantage
A major advantage over the earlier resistor–transistor logic is the increased fan-in. However, the
propagation delay is still relatively large. When the transistor goes into saturation from all inputs
being high, charge is stored in the base region. When it comes out of saturation (one input goes
low) this charge has to be removed and will dominate the propagation time. One way to speed it
up is to connect a resistor to a negative voltage at the base of the transistor which aids the
removal of the minority carriers from the base.

The above problem is solved in TTL by replacing the diodes of the DTL circuit with a multiple-
emitter transistor, which also slightly reduces the required area per gate in an integrated circuit
implementation.

Emitter-coupled logic
In electronics, emitter-coupled logic, or ECL, is a logic family in which current is steered
through bipolar transistors to implement logic functions. ECL is sometimes called current-mode
logic or current-switch emitter-follower (CSEF) logic.

The chief characteristic of ECL is that the transistors are never in the saturation region and can
thus change states at very high speed. Its major disadvantage is that the circuit continuously
draws current, which means it requires a lot of power.

History
ECL was invented in August 1956 at IBM by Hannon S. Yourke. Originally called current
steering logic, it was used in the Stretch, IBM 7090, and IBM 7094 computers.

While ECL circuits in the mid-1960s through the 1990s consisted of a differential amplifier input
stage to perform logic, followed by an emitter follower to drive outputs and shift the output
voltages so they will be compatible with the inputs, Yourke's current switch, also known as ECL,
consisted only of differential amplifiers. To provide compatible input and output levels, two
complementary versions were used, an NPN version and a PNP version. The NPN output could
drive PNP inputs, and vice-versa. "The disadvantages are that more different power supply
voltages are needed, and both pnp and npn transistors are required." Motorola introduced their
first digital monolithic integrated circuit line, MECL I, in 1962.

Explanation
TTL and related families use transistors as digital switches where transistors are either cut off or
saturated, depending on the state of the circuit. ECL gates use differential amplifier
configurations at the input stage. A bias configuration supplies a constant voltage at the midrange
of the low and high logic levels to the differential amplifier, so that the appropriate logical
function of the input voltages will control the amplifier and the base of the output transistor (this
output transistor is used in common collector configuration). The propagation time for this
arrangement can be less than a nanosecond, making it for many years the fastest logic family.

Characteristics
Other noteworthy characteristics of the ECL family include the fact that its large current
requirement is approximately constant and does not depend significantly on the state of the
circuit. This means that ECL circuits generate relatively little power-supply noise, unlike many
other logic types, which typically draw far more current when switching than when quiescent,
so that power noise can become problematic. In an ALU, where a lot of switching occurs, ECL
can draw a lower mean current than CMOS.

Usage
The drawbacks associated with ECL have meant that it has been used mainly when high
performance is a vital requirement. Other families (particularly advanced CMOS variants) have
replaced ECL in many applications, even mainframe computers. However, some experts predict
increasing use of ECL in the future, particularly in conjunction with more widespread adoption of
advanced semiconductors such as gallium arsenide, which has always been seen as the
semiconductor of the future, but cannot be produced as cheaply or cleanly as silicon.

Older high-end mainframe computers, such as the Enterprise System/9000 members of IBM's
ESA/390 computer family, used ECL; current IBM mainframes use CMOS.

The equivalent of emitter-coupled logic made out of FETs is called source-coupled FET logic
(SCFL).

Introducing Bistables (Flip-Flops)

A flip-flop is a sequential circuit which is capable of retaining one bit of information, such as
'0' or '1'.

Basic Flip-Flop circuit


(a) Using NOR gates:

[Circuit: two cross-coupled NOR gates. R (Reset) drives the gate whose output is Q; S (Set)
drives the gate whose output is ¯Q.]

Truth Table

S R Q ¯Q
1 0 1 0 (set)
0 0 1 0 (hold)
0 1 0 1 (reset)
0 0 0 1 (hold)
1 1 0 0 (forbidden: both outputs low)

(b) Using NAND gates:

[Circuit: two cross-coupled NAND gates. S drives the gate whose output is Q; R drives the gate
whose output is ¯Q. The inputs are active-low, so a 0 on an input activates it.]

S R Q ¯Q
1 0 0 1 (reset)
1 1 0 1 (hold)
0 1 1 0 (set)
1 1 1 0 (hold)
0 0 1 1 (forbidden: both outputs high)

Let's consider a general truth table, where:

Qt = the present state (at time t)
Qt-1 = the previous state (at the instant just before t)

S R Qt-1 Qt ¯Qt
0 0 0 0 1
0 0 1 1 0
0 1 0 0 1
0 1 1 0 1
1 0 0 1 0
1 0 1 1 0
1 1 0 x x
1 1 1 x x

x = indeterminate
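The same next-state rule can be sketched as a function. This is a simplified model that ignores gate delays; the hold case (S = R = 0) returns the previous state, and the forbidden case returns None:

```python
def sr_next_state(s, r, q_prev):
    """Next state Qt of an S-R flip-flop given S, R and the previous state Qt-1."""
    if s == 1 and r == 1:
        return None        # indeterminate (forbidden input combination)
    if s == 1:
        return 1           # set: make Q = 1
    if r == 1:
        return 0           # reset: make Q = 0
    return q_prev          # S = R = 0: hold the previous state
```

Walking this function through each input combination reproduces the rows of the truth table above.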

SET: make Q = 1
RESET: make Q = 0

Block representation of a Flip-Flop

[Block symbol: inputs S and R on the left; outputs Q and ¯Q on the right.]

Clocked S-R Flip-Flop

To convert the basic flip-flop from asynchronous to synchronous, a clock pulse must be
incorporated: S and R are each gated (ANDed) with the clock before they reach the basic latch,
so the flip-flop can only change state while the clock pulse is present.

[Circuit: the Clock, S and R inputs feed two AND gates, whose outputs drive the basic S-R
latch to produce Q and ¯Q.]

The J-K Flip-Flop

A very common form of flip-flop is the J-K flip-flop. It is unclear, historically, where the name
"J-K" came from, but it is generally represented in a black box like this:

In this diagram, P stands for "Preset," C stands for "Clear" and Clk stands for "Clock." The logic
table looks like this:

P C Clk    J K | Q Q'
1 1 1-to-0 1 0 | 1 0
1 1 1-to-0 0 1 | 0 1
1 1 1-to-0 1 1 | Toggles
1 0 X      X X | 0 1
0 1 X      X X | 1 0

Here is what the table is saying: first, Preset and Clear override J, K and Clk completely. So if
Preset goes to 0, then Q goes to 1; and if Clear goes to 0, then Q goes to 0, no matter what J, K
and Clk are doing. However, if both Preset and Clear are 1, then J, K and Clk can operate. The
1-to-0 notation means that J and K take effect only when the clock changes from 1 to 0. At the
low-going edge of the clock (the transition from 1 to 0), the values of J and K are stored: if they
are opposites, Q takes the value of J. However, if both J and K happen to be 1 at the low-going
edge, then Q simply toggles. That is, Q changes from its current state to the opposite state.
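This behaviour can be sketched as a simple software model. The class below is an illustrative simulation of a negative-edge-triggered J-K flip-flop with active-low Preset and Clear, not a description of any particular IC:

```python
class JKFlipFlop:
    """Negative-edge-triggered J-K flip-flop with active-low Preset and Clear."""

    def __init__(self):
        self.q = 0
        self._prev_clk = 1

    def step(self, j, k, clk, preset=1, clear=1):
        """Apply one set of input values; returns the resulting Q."""
        if preset == 0:                            # active-low Preset: force Q = 1
            self.q = 1
        elif clear == 0:                           # active-low Clear: force Q = 0
            self.q = 0
        elif self._prev_clk == 1 and clk == 0:     # 1-to-0 (low-going) clock edge
            if j == 1 and k == 1:
                self.q ^= 1                        # J = K = 1: toggle
            elif j == 1:
                self.q = 1                         # set
            elif k == 1:
                self.q = 0                         # reset
            # J = K = 0: hold the current state
        self._prev_clk = clk
        return self.q

# Toggle mode (J = K = 1): Q flips on every falling clock edge
ff = JKFlipFlop()
for clk in (1, 0, 1, 0):
    ff.step(1, 1, clk)
# after two falling edges Q has toggled twice, back to 0
```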
The concept of "edge triggering" is very useful. The fact that a J-K flip-flop only "latches" the
J-K inputs on a transition from 1 to 0 makes it much more useful as a memory device. J-K
flip-flops are also extremely useful in counters (which are used extensively when creating a
digital clock).
Here is an example of a 4-bit counter using J-K flip-flops:

The outputs for this circuit are A, B, C and D, and they represent a 4-bit binary number. Into the
clock input of the left-most flip-flop comes a signal changing from 1 to 0 and back to 1
repeatedly (an oscillating signal). The counter will count the low-going edges it sees in this
signal. That is, every time the incoming signal changes from 1 to 0, the 4-bit number represented
by A, B, C and D will increment by 1. So the count will go from 0 to 15 and then cycle back to 0.
You can add as many bits as you like to this counter and count anything you like. For example, if
you put a magnetic switch on a door, the counter will count the number of times the door is
opened and closed. If you put an optical sensor on a road, the counter could count the number of
cars that drive by.
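The counting behaviour can be modelled in a few lines. This sketch treats each stage as a J-K flip-flop wired to toggle (J = K = 1), with each stage clocked by the falling edge of the previous stage's output; it is an idealised model that ignores propagation delay:

```python
def ripple_count(num_edges, bits=4):
    """Count low-going clock edges with a chain of toggling J-K flip-flop stages.

    stages[0] is output A (least significant bit) ... stages[bits-1] is the MSB.
    """
    stages = [0] * bits
    for _ in range(num_edges):
        for i in range(bits):
            stages[i] ^= 1       # this stage toggles on its incoming falling edge
            if stages[i] == 1:   # its output rose (0 to 1): no falling edge,
                break            # so the ripple stops here
            # its output fell (1 to 0): that falling edge clocks the next stage
    return sum(bit << i for i, bit in enumerate(stages))

# Example: after 5 falling edges the 4-bit count reads 5;
# after 16 edges the counter wraps back to 0
print(ripple_count(5), ripple_count(16))
```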
Another use of a J-K flip-flop is to create an edge-triggered latch, as shown here:

In this arrangement, the value on D is "latched" when the clock edge goes from low to high.
Latches are extremely important in the design of things like central processing units (CPUs) and
peripherals in computers.
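One common way to build such a latch from a J-K flip-flop (assumed here, since the diagram is not reproduced) is to wire J = D and K = NOT D, so the hold and toggle cases can never occur and Q always follows D on the clock edge. A one-edge sketch:

```python
def d_from_jk(d, q_prev):
    """One clock edge of a D flip-flop built from a J-K stage: J = D, K = NOT D."""
    j, k = d, 1 - d
    if j == 1 and k == 1:
        return 1 - q_prev     # toggle: unreachable, since K is D's complement
    if j == 1:
        return 1              # set
    if k == 1:
        return 0              # reset
    return q_prev             # hold: also unreachable with this wiring
```

Because J and K are always opposites, the output after each edge is simply the value of D, which is exactly the "latch D on the clock edge" behaviour described above.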
