COA Module - 4

The document discusses arithmetic operations in digital computers including addition, subtraction, multiplication, and division. It describes the logic circuits used to perform single-bit addition and subtraction and how these can be combined into ripple-carry adders to perform multi-bit arithmetic. It also discusses faster techniques like carry-lookahead addition and algorithms for signed and Booth multiplication. Finally, it outlines the circuit arrangement for binary division using a restoring division approach.


PRESIDENCY UNIVERSITY, BENGALURU

School of Engineering

Computer Organization and Architecture


CSE205

IV Semester
Module 4

Arithmetic Unit
Introduction
• A basic operation in all digital computers is the addition
or subtraction of two numbers.
• In this chapter we discuss the logic circuits used
to implement arithmetic operations.
• The time needed to perform an addition operation
affects the processor’s performance.
• Multiply and divide operations, which require more
complex circuitry than either addition or subtraction
operations, also affect the performance.
• In this chapter we also discuss some of the techniques
used in modern computers to perform arithmetic
operations at high speed.
• Compared with arithmetic operations, logic operations
are simple to implement using combinational circuits.
Addition And Subtraction Of Two
Numbers
Consider the addition of two numbers X and Y with n bits each.
Figure 6.1 shows the logic truth table for adding equally
weighted bits xi and yi of the two numbers X and Y.
The figure also shows the logic expressions for these
functions, along with an example of addition of the 4-bit
unsigned numbers 7 and 6.
The logic expressions for the sum (si) and the carry-out
function (ci+1) are:
si = xi ⊕ yi ⊕ ci
ci+1 = xiyi + xici + yici

FULL ADDER:
The circuit which performs the addition of three bits
is a Full Adder.
It consists of three inputs and two outputs.
INPUTS:
xi, yi and ci are the three inputs of the full adder.
OUTPUTS:
si and ci+1 are the two outputs of the full adder.
Block diagram of full adder is shown in the figure.
A cascaded connection of n full adder blocks can be
used to add two n-bit numbers. Since the carries must
propagate or ripple through this cascade, the
configuration is called an n-bit ripple-carry adder.
A cascaded connection of k n-bit adders can be used
to add two kn-bit numbers.
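
As an illustration beyond the slides, the following is a minimal behavioural sketch in Python of a full-adder stage and an n-bit ripple-carry adder built from it (the function names are our own):

    def full_adder(x, y, c):
        # One stage: s = x XOR y XOR c, carry-out = xy + xc + yc
        s = x ^ y ^ c
        c_out = (x & y) | (x & c) | (y & c)
        return s, c_out

    def ripple_carry_add(x, y, n, c0=0):
        # Cascade n full-adder stages; the carry ripples from stage 0 to n-1
        result, carry = 0, c0
        for i in range(n):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result, carry              # n-bit sum and carry-out cn

    print(ripple_carry_add(7, 6, 4))      # -> (13, 0): the 4-bit example 7 + 6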
Computing the add time

Consider the 0th stage of the ripple-carry adder:
• s0 is available after 1 gate delay.
• c1 is available after 2 gate delays.

[Figure: a single full-adder (FA) stage with inputs x0, y0, c0 and outputs s0, c1, together with its gate-level sum circuit (3-input XOR) and carry circuit.]
Computing the add time (contd..)

Cascade of 4 Full Adders, or a 4-bit adder

[Figure: four full-adder stages in cascade; carry c0 enters stage 0, c4 leaves stage 3, and the sum outputs are s3 s2 s1 s0.]

• s0 available after 1 gate delay, c1 available after 2 gate delays.


• s1 available after 3 gate delays, c2 available after 4 gate delays.
• s2 available after 5 gate delays, c3 available after 6 gate delays.
• s3 available after 7 gate delays, c4 available after 8 gate delays.
For an n-bit adder, sn-1 is available after 2n-1 gate delays
and cn is available after 2n gate delays.
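
These delay figures can be tabulated for any n with a small helper (our own sketch, assuming one gate delay per gate as above):

    def ripple_delays(n):
        # s_i settles after 2i + 1 gate delays, c_{i+1} after 2i + 2
        return [(2 * i + 1, 2 * i + 2) for i in range(n)]

    print(ripple_delays(4))   # -> [(1, 2), (3, 4), (5, 6), (7, 8)]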
Fast Addition (Carry-Lookahead Addition)
Recall the equations:
si = xi ⊕ yi ⊕ ci
ci+1 = xiyi + xici + yici
The second equation can be written as:
ci+1 = xiyi + (xi + yi)ci
We can write:
ci+1 = Gi + Pici
where Gi = xiyi and Pi = xi + yi
• Gi is called the generate function and Pi the propagate function.
• Gi and Pi are computed only from xi and yi and not from ci; thus they can
be computed in one gate delay after X and Y are applied to the
inputs of an n-bit adder.
• The expressions Gi and Pi are called the generate and
propagate functions for stage i.
• Each bit stage contains
1) an AND gate to form Gi,
2) an OR gate to form Pi, and
3) a three-input XOR gate to form Si.
• A simpler circuit can be designed to generate Gi, Pi and Si by
realizing the propagate function as Pi = xi ⊕ yi instead. This choice
differs from Pi = xi + yi only when xi = yi = 1, but in that case Gi = 1,
so it does not matter whether Pi is 0 or 1. Then, using a cascade of
two 2-input XOR gates to realize the 3-input XOR function for Si, the
basic cell B can be used in each bit stage, as shown in the figure.
• For example, the carries in a four-stage carry-lookahead
adder are given as follows.

C1= G0+P0C0

C2=G1+P1C1
= G1+ P1(G0+P0C0)
= G1+P1G0+P1P0C0

C3= G2+P2C2
= G2+P2(G1+P1G0+P1P0C0)
= G2+P2G1+P2P1G0+P2P1P0C0

C4= G3+P3C3
= G3+P3(G2+P2G1+P2P1G0+P2P1P0C0)
= G3+P3G2+P3P2G1+P3P2P1G0+P3P2P1P0C0

Pi and Gi:
All Pi and Gi are available after one gate delay.

Ci+1:
All carries are available after three gate delays.

Sum:
After a further XOR gate delay, all sum bits are
available. So after four gate delays all sums are
available.

• The complete 4-bit adder is shown in Figure 6.4b.
• An adder implemented in this form is called a carry-lookahead adder.
• The delay through the adder is 3 gate delays for all carry
bits and 4 gate delays for all sum bits.
• In comparison, a 4-bit ripple-carry adder requires 7
gate delays before all sum bits are available and 8 gate delays
before the final carry is available.
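
The scheme can be illustrated with a short behavioural sketch in Python (our own; in hardware the expanded carry expressions above are evaluated in parallel, whereas the sketch simply checks the same Gi/Pi recurrence):

    def carry_lookahead_add(x, y, n, c0=0):
        xb = [(x >> i) & 1 for i in range(n)]
        yb = [(y >> i) & 1 for i in range(n)]
        G = [xb[i] & yb[i] for i in range(n)]          # Gi = xi yi
        P = [xb[i] | yb[i] for i in range(n)]          # Pi = xi + yi
        c = [c0]
        for i in range(n):                             # ci+1 = Gi + Pi ci
            c.append(G[i] | (P[i] & c[i]))
        s = [xb[i] ^ yb[i] ^ c[i] for i in range(n)]   # si = xi XOR yi XOR ci
        return sum(bit << i for i, bit in enumerate(s)), c[n]

    print(carry_lookahead_add(7, 6, 4))   # -> (13, 0)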

Multiplication
Signed-operand Multiplication

• Considering 2’s-complement signed operands, what happens
to (-13) × (+11) if we follow the same method as for unsigned
multiplication?
          1 0 0 1 1   (-13)  multiplicand
          0 1 0 1 1   (+11)  multiplier

1 1 1 1 1 1 0 0 1 1
1 1 1 1 1 0 0 1 1
0 0 0 0 0 0 0 0
1 1 1 0 0 1 1
0 0 0 0 0 0

1 1 0 1 1 1 0 0 0 1   (-143)

Each partial product is sign-extended to the left up to the full
2n-bit product width (the sign-extension bits were shown in blue
in the original figure).

Sign extension of the negative multiplicand.
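
The same computation can be checked with a short Python sketch (our own illustration; it accumulates each shifted partial product of the signed multiplicand modulo 2^(2n), which models the sign extension shown above):

    def to_signed(bits, n):
        # Interpret an n-bit pattern as a two's-complement value
        return bits - (1 << n) if bits & (1 << (n - 1)) else bits

    def multiply_sign_extended(m, q, n):
        # Multiply an n-bit two's-complement multiplicand m by a positive
        # multiplier q, sign-extending each partial product to 2n bits
        mask = (1 << (2 * n)) - 1
        total = 0
        for i in range(n):
            if (q >> i) & 1:
                total = (total + (to_signed(m, n) << i)) & mask
        return total

    # (-13) x (+11) with n = 5  ->  1101110001, i.e. -143 in 10 bits
    print(format(multiply_sign_extended(0b10011, 0b01011, 5), '010b'))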


Booth Algorithm

• The Booth algorithm generates the 2n-bit product and
treats both positive and negative 2’s-complement n-bit
operands uniformly.
• In general, in the Booth scheme, as the multiplier is scanned
from right to left, -1 times the shifted multiplicand is selected
when moving from 0 to 1, and +1 times the shifted multiplicand
is selected when moving from 1 to 0.
Example:
Recode the multiplier 101100 for Booth’s algorithm.

Multiplier (with the implied 0 appended on the right): 1 0 1 1 0 0 0
Recoded multiplier: -1 +1 0 -1 0 0
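
This recoding rule can be sketched in Python as follows (our own helper; the multiplier is given MSB first, and a 0 is implied to the right of its least significant bit):

    def booth_recode(multiplier_bits):
        # digit_i = b_{i-1} - b_i for each adjacent pair, scanning with an
        # implied 0 appended to the right of the least significant bit
        bits = [int(b) for b in multiplier_bits] + [0]
        return [bits[i + 1] - bits[i] for i in range(len(bits) - 1)]

    print(booth_recode("101100"))   # -> [-1, 1, 0, -1, 0, 0]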
Integer Division

• Figure 6.20 shows examples of manual decimal division and
binary division of the same values.

Circuit Arrangement

[Figure: registers A (an … a0) and Q (qn-1 … q0) form a shift-left pair,
with Q holding the dividend and later the quotient; register M (0 mn-1 … m0)
holds the divisor; an (n+1)-bit adder performs add/subtract on A and M under
a control sequencer, which also sets the quotient bit q0.]

Figure 6.21. Circuit arrangement for binary division.


Restoring Division

• Figure 6.21 shows a logic circuit arrangement that implements
restoring division.
• An n-bit positive divisor is loaded into register M.
• An n-bit positive dividend is loaded into register Q at the start of
the operation.
• Register A is set to 0.
• After the division is complete:
the n-bit quotient is in register Q, and
the remainder is in register A.
• The extra bit position at the left end of both A and M accommodates
the sign bit during subtractions.

Restoring Division
The following algorithm performs the restoring division:

Do the following ‘n’ times:


1. Shift A and Q left one binary position.
2. Subtract M from A, and place the answer back in A.
3. If the sign of A is 1, set q0 to 0 and add M back to
A (that is, restore A); otherwise, set q0 to 1.

Figure 6.22 shows a 4-bit example as it would be processed by
the circuit in Figure 6.21.
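
The three steps can be sketched behaviourally in Python (our own illustration of the register transfers, not the hardware itself):

    def restoring_divide(dividend, divisor, n):
        # A holds the partial remainder, Q the dividend and then the quotient,
        # M the divisor; all values here are small non-negative integers
        A, Q, M = 0, dividend, divisor
        for _ in range(n):
            # 1. Shift A and Q left one bit; the MSB of Q moves into A
            A = (A << 1) | ((Q >> (n - 1)) & 1)
            Q = (Q << 1) & ((1 << n) - 1)
            # 2. Subtract M from A
            A -= M
            # 3. If A is negative, restore it and set q0 = 0; else set q0 = 1
            if A < 0:
                A += M
            else:
                Q |= 1
        return Q, A   # quotient, remainder

    print(restoring_divide(0b1000, 0b11, 4))   # -> (2, 2), as in Figure 6.22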

Examples

Initially          A = 00000   Q = 1000   M = 00011

First cycle:   Shift                    A = 00001   Q = 000_
               Subtract M               A = 11110   (negative)
               Restore A, set q0 = 0    A = 00001   Q = 0000

Second cycle:  Shift                    A = 00010   Q = 000_
               Subtract M               A = 11111   (negative)
               Restore A, set q0 = 0    A = 00010   Q = 0000

Third cycle:   Shift                    A = 00100   Q = 000_
               Subtract M               A = 00001   (non-negative)
               Set q0 = 1               A = 00001   Q = 0001

Fourth cycle:  Shift                    A = 00010   Q = 001_
               Subtract M               A = 11111   (negative)
               Restore A, set q0 = 0    A = 00010   Q = 0010

Remainder = 00010 (2)        Quotient = 0010 (2)

Figure 6.22: A restoring-division example (1000 ÷ 11, that is, 8 ÷ 3).


Floating point numbers

• Until now, we have discussed numbers without any decimal
point, i.e. fixed-point numbers.
• The decimal point is always assumed to be to the right of the
least significant digit.
e.g. 4.0, 12.0, 24.0 (fixed-point numbers)

Floating-point numbers:
Numbers in which the position of the decimal point is variable
are called floating-point numbers.
e.g. 0.25, 12.5, 323.865

Fixed point representation:
• It has limitations.
• Very large numbers cannot be represented, nor can very small
fractions.
e.g. 1) 976,000,000,000,000.000
2) 0.0000000000000976
Floating-point representation:
• The number 976,000,000,000,000.000 can be represented as
9.76 × 10^14.
• Similarly, the fraction 0.0000000000000976 can be represented
as 9.76 × 10^-14.
• What we have done is move the decimal point to a convenient
location and use the exponent of 10 to indicate the position of
the decimal point. When the decimal point is placed to the right
of the first (non-zero) significant digit, the number is said to be normalized.
• This allows a range of very large and very small numbers to be
represented with only a few digits.

• The same approach can be taken with binary numbers.
e.g. +111101.1000110
Let us see how this number can be represented in floating-point
format.
+1.111011000110 × 2^5 (normalized form)
A floating-point representation has three fields:
1. sign
2. significant digits (mantissa)
3. exponent
In the above example:
sign = 0
mantissa = 111011000110
exponent = 5

IEEE Standards For Floating- Point Numbers

• The standards for representing floating-point numbers in 32 bits
and 64 bits have been developed by the Institute of Electrical
and Electronics Engineers (IEEE).

Single Precision:
• The 32-bit standard representation of floating point numbers is
called a Single-Precision representation.
Sign:
• The sign of the number is given in the first bit.
• For positive numbers the sign bit is 0 and for negative numbers it
is 1.
Exponent:
• The exponent field contains the representation of the exponent
(to the base 2) of the scale factor.
• Instead of the signed exponent, E, the value actually stored in
the exponent field is an unsigned integer E΄ = E+127.
• This is called Excess-127 format.
• Thus E΄ is in the range 0 ≤ E΄≤ 255.
• The end values of this range 0 and 255 are used to
represent special values.
• Therefore the range of E΄ for normal numbers is 1 ≤ E΄≤
254.
• This means the actual exponent, E is in the range -126≤ E ≤
127.
Mantissa:
• The string of significant bits is commonly called the mantissa.
• The last 23 bits in single precision represent the mantissa.
• Since the most significant bit of the mantissa of a normalized
number is always 1, this bit is not explicitly represented; it is
assumed to be to the immediate left of the binary point.
• Hence the 23 bits stored in the mantissa field actually represent
the fractional part of the mantissa, i.e. the bits to the right of
the binary point.

The single-precision representation is shown below:

[Figure: 32-bit single-precision format — 1-bit sign S, 8-bit excess-127 exponent E΄, 23-bit mantissa M.]
Double Precision:
• The 64-bit standard representation of floating point numbers
is called a double-Precision representation.
• The double precision format has increased exponent and
mantissa ranges.
Sign:
• The sign of the number is given in the first bit.
• For positive numbers the sign bit is 0 and for negative
numbers it is 1.
Exponent:
• The exponent field contains the representation of the exponent
(to the base 2) of the scale factor.
• Instead of the signed exponent, E, the value actually stored
in the exponent field is an unsigned integer E΄ = E+1023.
• This is called Excess-1023 format.

• Thus E΄ is in the range 0 ≤ E΄ ≤ 2047.
• The end values of this range 0 and 2047 are used to indicate
special values.
• Therefore the range of E΄ for normal numbers is 1 ≤ E΄≤
2046.
• Thus, the actual exponent E is in the range -1022 ≤ E≤ 1023.

Mantissa:
• The last 52 bits in double precision represent the mantissa.

(field 1) sign → 1 bit
(field 2) exponent → 11 bits
(field 3) mantissa → 52 bits

Example:
Represent 1259.125 in single precision and double precision
formats.
The number 1259.125 has two parts:
Integer part (1259)
Fractional part (0.125)
Step 1: convert the decimal number to binary format
Integer Part:
convert the integer part (1259) into binary format
1259 = 10011101011
Fractional part:
convert the fractional part to binary format
0.125 × 2 = 0.25 → 0
0.25 × 2 = 0.5 → 0
0.5 × 2 = 1.0 → 1
hence
0.125 = 0.001
therefore
1259.125 = 10011101011 + 0.001
= 10011101011.001

Step 2: Normalize the number

10011101011.001 = 1.0011101011001 × 2^10

Single Precision Representation:


Sign:
The sign of the given number is positive,
hence the sign bit is equal to 0.

Exponent:
The actual exponent is E = 10.
Hence the exponent in excess-127 format is E’ = E + 127:

E’ = 10 + 127 = 137

The binary representation of 137 is 10001001


E’ = 10001001
Mantissa:
the mantissa is equal to 0011101011001
hence M = 0011101011001……….0
hence the single precision representation of 1259.125 is given as

1259.125 = 0 10001001 0011101011001……..0


S E’ M
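
This encoding can be cross-checked with Python's struct module (a verification sketch, not part of the original slides):

    import struct

    def single_precision_fields(x):
        # Pack x as an IEEE-754 single-precision value and split the 32 bits
        bits = struct.unpack('>I', struct.pack('>f', x))[0]
        sign = bits >> 31
        e_prime = (bits >> 23) & 0xFF         # biased exponent E' = E + 127
        mantissa = bits & ((1 << 23) - 1)     # 23-bit fraction field
        return sign, format(e_prime, '08b'), format(mantissa, '023b')

    print(single_precision_fields(1259.125))
    # -> (0, '10001001', '00111010110010000000000')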

Double Precision:

In double-precision representation, 1 bit is used to indicate the sign,
11 bits to represent the exponent, and 52 bits to represent the mantissa.

For the given number:
sign = 0
exponent (E) = 10
Hence the exponent in excess-1023 format (E’) is given as

E’ = E + 1023
= 10 + 1023
= 1033
The binary representation of 1033 is
E’ = 10000001001

Hence the double-precision representation of 1259.125 is given as
follows

1259.125 = 0 10000001001 0011101011001…………..0


S E’ M
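
The double-precision fields can be cross-checked in the same way (again a verification sketch using Python's struct module):

    import struct

    def double_precision_fields(x):
        # Pack x as an IEEE-754 double-precision value and split the 64 bits
        bits = struct.unpack('>Q', struct.pack('>d', x))[0]
        sign = bits >> 63
        e_prime = (bits >> 52) & 0x7FF        # biased exponent E' = E + 1023
        mantissa = bits & ((1 << 52) - 1)     # 52-bit fraction field
        return sign, format(e_prime, '011b'), format(mantissa, '052b')

    sign, e_prime, mantissa = double_precision_fields(1259.125)
    print(sign, e_prime, mantissa)   # E' = 10000001001, M = 0011101011001 then 39 zeros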

THE END

