COA Lecture 16: Direct Mapped Cache

The document describes cache memory and how it maps to main memory. It discusses three mapping techniques: direct mapping, associative mapping, and set associative mapping. Direct mapping maps each main memory block to exactly one cache line. Associative mapping allows each memory block to map to any cache line but requires complex comparison logic. Set associative mapping strikes a balance between the two. Examples illustrate direct mapping, including how to determine the cache address corresponding to a main memory address. The cache memory format and examples of cache hits and misses are also discussed.


Figure 4.4 Cache/Main-Memory Structure: (a) the cache, organized as numbered lines, each holding a tag and a block of words; (b) main memory, organized as fixed-length blocks of words.


Mapping Function
Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines.
Three techniques can be used:

Direct mapping: the simplest technique. It maps each block of main memory into only one possible cache line.

Associative mapping: permits each main memory block to be loaded into any line of the cache. The cache control logic interprets a memory address simply as a Tag and a Word field. To determine whether a block is in the cache, the cache control logic must simultaneously examine every line's tag for a match.

Set associative mapping: a compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages.
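As a rough illustration of how the lookup logic differs (not taken from the slides; the cache size and types below are assumptions), a direct-mapped lookup compares the tag of exactly one line, while a fully associative lookup must compare the tag of every line:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_LINES 8   /* assumed toy cache size, not taken from the slides */

typedef struct {
    bool     valid;
    uint32_t tag;
} line_t;

/* Direct mapped: the block number fixes the line, so only one tag is compared. */
bool lookup_direct(const line_t cache[], uint32_t block) {
    uint32_t line = block % NUM_LINES;   /* which line the block must be in   */
    uint32_t tag  = block / NUM_LINES;   /* what its tag must look like there */
    return cache[line].valid && cache[line].tag == tag;
}

/* Fully associative: the block may sit in any line, so every tag is examined. */
bool lookup_associative(const line_t cache[], uint32_t block) {
    for (uint32_t i = 0; i < NUM_LINES; i++)
        if (cache[i].valid && cache[i].tag == block)   /* tag = full block number */
            return true;
    return false;
}

int main(void) {
    line_t direct_cache[NUM_LINES] = {{false, 0}};
    direct_cache[22 % NUM_LINES] = (line_t){ true, 22 / NUM_LINES };  /* install block 22 */
    printf("direct lookup of block 22: %s\n",
           lookup_direct(direct_cache, 22) ? "hit" : "miss");
    return 0;
}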
Figure 4.8 Mapping From Main Memory to Cache: Direct and Associative. (a) Direct mapping: the first blocks of main memory (equal in number to the cache lines) each map to a fixed cache line. (b) Associative mapping: any one block of main memory can be loaded into any line of the cache.
Direct mapped cache (10 cache locations, 40 main memory locations)

Cache memory:   0  1  2  3  4  5  6  7  8  9

Main memory:
 0  1  2  3  4  5  6  7  8  9
10 11 12 13 14 15 16 17 18 19
20 21 22 23 24 25 26 27 28 29
30 31 32 33 34 35 36 37 38 39


Direct Mapped cache
At any time, cache location 0 can hold the contents of only one of the memory locations 0, 10, 20, or 30.
The CPU issues a main memory address, and the cache memory address must be derived from it. But how?
Direct Mapped cache
For memory address 23, find the cache address.

Cache address = (main memory address) MOD (number of cache lines)
Cache address = 23 MOD 10
23 / 10 gives quotient 2 and remainder 3
The remainder gives the cache address, so address 23 maps to cache line 3; the quotient becomes the tag.
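A minimal sketch of this calculation (not part of the original slides), assuming a 10-line cache and the word address 23 from the example; the remainder of the division gives the cache line and the quotient becomes the tag:

#include <stdio.h>

int main(void) {
    unsigned address = 23;   /* main memory address from the example */
    unsigned lines   = 10;   /* number of cache lines assumed above  */

    unsigned cache_line = address % lines;   /* remainder -> cache address (3) */
    unsigned tag        = address / lines;   /* quotient  -> tag (2)           */

    printf("address %u -> cache line %u, tag %u\n", address, cache_line, tag);
    return 0;
}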
Direct mapped cache (8 cache locations, 40 main memory locations)

Cache memory:   0  1  2  3  4  5  6  7

Main memory:
 0  1  2  3  4  5  6  7
 8  9 10 11 12 13 14 15
16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31
32 33 34 35 36 37 38 39


Direct Mapped cache
For memory address 21, find the cache address.

Cache address = (main memory address) MOD (number of cache lines)
Cache address = 21 MOD 8
21 / 8 gives quotient 2 and remainder 5
The remainder gives the cache address, so address 21 maps to cache line 5; the quotient becomes the tag.
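Running the earlier sketch with lines = 8 and address = 21 prints cache line 5 and tag 2, matching the result above.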
Cache memory format

VALID | TAG | DATA

Valid bit: always 1 bit wide. It is 0 initially and becomes 1 once the data is present in the cache.
Tag: its size depends on the address and data sizes. It is required to locate the data in the cache.
Data: usually stored in bytes.
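One way to picture this format in code (an illustrative sketch, not from the slides; the field widths assume one 32-bit word of data per line) is a struct holding the valid bit, the tag, and the data bytes:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One cache line: VALID | TAG | DATA (sizes here are assumptions for illustration). */
typedef struct {
    bool     valid;    /* 0 initially, set to 1 once data is brought into the cache */
    uint32_t tag;      /* locates which memory block currently occupies this line   */
    uint8_t  data[4];  /* the data itself, stored as bytes (one 32-bit word here)   */
} cache_line_t;

int main(void) {
    /* e.g. after loading memory location 23 into cache line 3 (tag 2, see above) */
    cache_line_t line = { .valid = true, .tag = 2, .data = {0} };
    printf("valid=%d tag=%u\n", line.valid, (unsigned)line.tag);
    return 0;
}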
Problem 1
Express main memory location 32 in cache memory format.

32 MOD 10 = 2 with quotient 3, so location 32 maps to cache line 2 with the entry:
VALID = 1, TAG = 3, DATA = (contents of 32)
Direct mapped cache (as in the 10-line cache / 40-location main memory diagram above), now with cache line 2 holding the entry 1 | 3 | (32).


Problem 2
Let the CPU generate the memory addresses 22, 23, 17, 22, 7, 17, 22. What will the cache contents be after each of these accesses? The cache has 10 lines.
Problem 2 (solution)

Memory address | Cache line | Hit/Miss | VALID  | TAG | DATA
22             | 2          | MISS     | 0 -> 1 | 2   | (22)
23             | 3          | MISS     | 0 -> 1 | 2   | (23)
17             | 7          | MISS     | 0 -> 1 | 1   | (17)
22             | 2          | HIT      | 1      | 2   | (22)
 7             | 7          | MISS     | 1      | 0   | (7)
17             | 7          | MISS     | 1      | 1   | (17)
22             | 2          | HIT      | 1      | 2   | (22)

Addresses 17 and 7 both map to cache line 7, so they keep evicting each other and miss on every access.
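The trace above can be reproduced with a short simulation (a sketch under the same assumptions: a 10-line direct mapped cache whose valid bits start at 0). It prints MISS, MISS, MISS, HIT, MISS, MISS, HIT, matching the table.

#include <stdbool.h>
#include <stdio.h>

#define LINES 10   /* cache size assumed in Problem 2 */

typedef struct { bool valid; unsigned tag; } line_t;

int main(void) {
    line_t cache[LINES] = {{false, 0}};              /* all valid bits start at 0   */
    unsigned addrs[] = {22, 23, 17, 22, 7, 17, 22};  /* reference stream, Problem 2 */
    int n = (int)(sizeof addrs / sizeof addrs[0]);

    for (int i = 0; i < n; i++) {
        unsigned addr = addrs[i];
        unsigned line = addr % LINES;   /* cache line (remainder) */
        unsigned tag  = addr / LINES;   /* tag (quotient)         */
        bool hit = cache[line].valid && cache[line].tag == tag;

        printf("%2u -> line %u: %s\n", addr, line, hit ? "HIT" : "MISS");
        if (!hit) {                     /* on a miss, fetch the block and update the line */
            cache[line].valid = true;
            cache[line].tag   = tag;
        }
    }
    return 0;
}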


Problem 3
A 32-bit CPU (data size: 32 bits) has 32 address bits. Assume memory is byte addressable. For a direct mapped cache holding 64 KB of data, how many total bits are required per cache line?

Data bits: 32
Valid bit: 1
Tag: ? bits
Problem 3 (continued)

Cache line format: VALID | TAG | DATA (bytes B4 B3 B2 B1)

Data per block: 4 bytes (one 32-bit word)
Number of blocks: 64 KB / 4 B = 16K blocks = 2^14 blocks
Problem 3 (continued)

Cache line format: VALID | TAG | DATA (bytes B4 B3 B2 B1)

32-bit address breakdown:
Bits 32-17: TAG (16 bits)
Bits 16-3: block index, selecting one of the 2^14 blocks (14 bits)
Bits 2-1: byte offset, selecting a byte within the 4-byte block (2 bits)

Total number of cache bits = 1 (valid) + 16 (tag) + 32 (data) = 49 bits/line
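The bit counts above can be checked mechanically; the sketch below (not from the slides) assumes the Problem 3 parameters, a 32-bit byte address, 64 KB of data, and one 4-byte word per block, and derives the offset, index, and tag widths:

#include <stdio.h>

/* Integer log base 2, valid for the power-of-two sizes used here. */
static unsigned ilog2(unsigned long long x) {
    unsigned n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

int main(void) {
    unsigned addr_bits = 32;                        /* byte-addressable, 32 address bits */
    unsigned long long cache_bytes = 64ULL * 1024;  /* 64 KB of data                     */
    unsigned block_bytes = 4;                       /* one 32-bit word per block         */

    unsigned offset_bits = ilog2(block_bytes);                /* byte offset:  2 bits */
    unsigned index_bits  = ilog2(cache_bytes / block_bytes);  /* 2^14 blocks: 14 bits */
    unsigned tag_bits    = addr_bits - index_bits - offset_bits;           /* 16 bits */
    unsigned line_bits   = 1 + tag_bits + 8 * block_bytes;    /* valid + tag + data   */

    printf("offset=%u index=%u tag=%u -> %u bits per line\n",
           offset_bits, index_bits, tag_bits, line_bits);
    return 0;
}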


Problem 4
Repeat Problem 3 with one change: each block now holds 4 words of 32-bit data (16 bytes per block).

Number of blocks: 64 KB / (4 * 4 B) = 4K blocks = 2^12 blocks
Address breakdown: 2-bit byte offset, 2-bit word offset within the block, 12-bit index to select one of the 2^12 blocks, and a 16-bit TAG

Total number of cache bits = 1 + 16 + 4 * 32 = 145 bits/line
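Running the Problem 3 sketch with block_bytes = 16 (four 32-bit words) gives a 4-bit offset (2 byte-offset bits plus 2 word-select bits), a 12-bit index, a 16-bit tag, and 1 + 16 + 128 = 145 bits per line, matching the result above.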


Problem 5
Consider a cache of 64 blocks with a block size of 16 bytes. What is the cache block number for byte address 1600?

Block number = byte address / block size = 1600 / 16 = 100
Cache block number = 100 MOD 64 = 36 (quotient 1, remainder 36)
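A quick check of this arithmetic (a sketch assuming the stated 64-block cache with 16-byte blocks):

#include <stdio.h>

int main(void) {
    unsigned byte_addr    = 1600;
    unsigned block_bytes  = 16;   /* block size in bytes       */
    unsigned cache_blocks = 64;   /* number of blocks in cache */

    unsigned block_number = byte_addr / block_bytes;       /* 1600 / 16 = 100 */
    unsigned cache_block  = block_number % cache_blocks;   /* 100 mod 64 = 36 */

    printf("byte address %u -> memory block %u -> cache block %u\n",
           byte_addr, block_number, cache_block);
    return 0;
}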
CPU, Cache, DRAM
The memory hierarchy: CPU above the cache, the cache above DRAM.

Write-through cache: writes from the CPU go to both the cache and DRAM.
Copy back on swap out: a modified block, marked by a dirty bit, is written back to DRAM only when it is replaced.
Miss penalty: the time taken to fetch a block from DRAM on a cache miss.

Thank you for listening.
