CH 06
Memory
6.1 Introduction
6.2 Types of Memory
6.3 The Memory Hierarchy
6.4 Cache Memory
Example.
The size of each field into which a memory address is divided depends on the size of the cache.
• Suppose our memory consists of 2^14 words, the cache has 16 = 2^4 blocks, and each block holds 8 words.
  – Thus memory is divided into 2^14 / 2^3 = 2^11 blocks.
• For our field sizes, we know we need 4 bits for the block, 3 bits for the word, and the tag is what's left over: 14 - 4 - 3 = 7 bits.
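As a quick check of this arithmetic, here is a minimal sketch (assuming the direct-mapped organization implied by the separate block field; the variable names are illustrative, not from the slides):

```python
import math

# Illustrative values from the example above (assumed direct-mapped cache).
memory_words    = 2**14   # total addressable words
cache_blocks    = 2**4    # 16 blocks in the cache
words_per_block = 2**3    # 8 words per block

address_bits = int(math.log2(memory_words))           # 14-bit addresses
word_bits    = int(math.log2(words_per_block))        # 3 bits pick the word within a block
block_bits   = int(math.log2(cache_blocks))           # 4 bits pick the cache block
tag_bits     = address_bits - block_bits - word_bits  # whatever is left: 14 - 4 - 3 = 7

print(tag_bits, block_bits, word_bits)  # 7 4 3
```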
Cont….
a. 2^32 / 2^5 = 2^27
b. 32-bit addresses with 17 bits in the tag field, 10 in the block field, and 5 in the word field
c. 000063FA (hex) = 00000000000000000 | 1100011111 | 11010 (tag | block | word), which implies Block 799
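A small sketch that reproduces the split in part (c) with plain bit operations; the field widths come from part (b), and the variable names are illustrative:

```python
# Illustrative decomposition of the address in part (c), using the field widths from part (b).
address = 0x000063FA
word_bits, block_bits = 5, 10

word  = address & ((1 << word_bits) - 1)                  # lowest 5 bits
block = (address >> word_bits) & ((1 << block_bits) - 1)  # next 10 bits
tag   = address >> (word_bits + block_bits)               # remaining 17 bits

print(tag, block, word)  # 0 799 26
```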
Replacement Policies
• With fully associative and set-associative caches, a replacement policy is invoked when it becomes necessary to evict a block from cache.
• The algorithm for determining which block to evict is called the replacement policy.
• There are several popular replacement policies.
• Optimal algorithm -- We would like to keep values in cache that will be needed again soon, and throw out blocks that won't be needed again, or that won't be needed for some time.
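The optimal policy requires knowing the future, so real caches approximate it; least recently used (LRU) is one common approximation. A minimal sketch of LRU for a single cache set, assuming a set-associative organization (the class and its interface are illustrative, not from the slides):

```python
from collections import OrderedDict

class LRUSet:
    """One cache set managed with a least-recently-used replacement policy (illustrative)."""

    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.blocks = OrderedDict()  # tag -> data, ordered from least to most recently used

    def access(self, tag):
        if tag in self.blocks:
            self.blocks.move_to_end(tag)         # hit: mark block as most recently used
            return "hit"
        if len(self.blocks) >= self.num_ways:
            self.blocks.popitem(last=False)      # miss with a full set: evict the LRU block
        self.blocks[tag] = "block data"          # bring the requested block in
        return "miss"

# Example: in a 2-way set, the third distinct tag evicts the least recently used block.
s = LRUSet(2)
print([s.access(t) for t in [1, 2, 1, 3, 2]])  # ['miss', 'miss', 'hit', 'miss', 'miss']
```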
6.5 Virtual Memory
• Cache memory enhances performance by providing faster
memory access speed.
• Virtual memory enhances performance by providing greater
memory capacity, without the expense of adding main memory.
• Instead, a portion of a disk drive serves as an extension of main
memory.
• If a system uses paging, virtual memory partitions main memory into individually managed page frames that are written (or paged) to disk when they are not immediately needed.
• The easiest way to think about virtual memory is to conceptualize it as an imaginary memory location in which all addressing issues are handled by the operating system.
Cont….
Virtual memory allows a program to run when only specific pieces of it are present in memory. The parts not currently being used are stored in the page file on disk.
Virtual address—The logical or program address that the process
uses. Whenever the CPU generates an address, it is always in terms of
virtual address space.
Physical address—The real address in physical memory.
Mapping—The mechanism by which virtual addresses are translated
into physical ones.
Programs create virtual addresses that are mapped to physical addresses by the memory management unit (which is a hardware device).
Page frames—The equal-size chunks or blocks into which main memory (physical memory) is divided.
Cont….
• If the valid bit is zero in the page table entry for the logical
address, this means that the page is not in memory and must
be fetched from disk.
This is a page fault.
If necessary, a page is evicted from memory and is
replaced by the page retrieved from disk, and the valid
bit is set to 1.
• If the valid bit is 1, the virtual page number is replaced by
the physical frame number.
• The data is then accessed by adding the offset to the
physical frame number.
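A minimal sketch of this translation step, assuming 1 KB pages and a simple list as the page table (the names, page table contents, and page-fault handling are illustrative only):

```python
PAGE_SIZE = 1024  # assumed page size; gives a 10-bit offset

# Each entry is (valid bit, physical frame number); page 2 is not resident (illustrative).
page_table = [(1, 5), (1, 0), (0, None), (1, 3)]

def translate(virtual_address):
    page   = virtual_address // PAGE_SIZE
    offset = virtual_address %  PAGE_SIZE
    valid, frame = page_table[page]
    if valid == 0:
        # Page fault: the OS would fetch the page from disk, evicting another
        # page if necessary, set the valid bit to 1, and retry the access.
        raise RuntimeError(f"page fault on virtual page {page}")
    # Valid: replace the virtual page number with the frame number, keep the offset.
    return frame * PAGE_SIZE + offset

print(translate(1030))  # page 1 -> frame 0, offset 6 -> physical address 6
```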
Cont….
N.B. In this case, virtual address 4100 (decimal) generates a page fault; page 4 = 100 (binary) is not valid in the page table.
Cont….
• Since page tables are read constantly, it makes sense to keep them
in a special cache called a translation look-aside buffer (TLB).
• TLBs are a special associative cache that stores the mapping of
virtual pages to physical pages.
• They are special caches used to keep track of recently used address translations.
• We can speed up the page table lookup by storing the most recent
page lookup values in a TLB.
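A sketch of how a TLB check can precede the full page table lookup; a dictionary stands in for the associative hardware, and the names and table contents are assumed for illustration:

```python
PAGE_SIZE  = 1024                 # assumed page size
page_table = {0: 5, 1: 0, 3: 3}   # assumed resident pages: virtual page -> frame
tlb        = {}                   # caches recent virtual page -> frame translations

def lookup_frame(page):
    if page in tlb:                # TLB hit: no page table access needed
        return tlb[page]
    frame = page_table[page]       # TLB miss: fall back to the (slower) page table
    tlb[page] = frame              # remember the translation for next time
    return frame

virtual_address = 3 * PAGE_SIZE + 40
frame = lookup_frame(virtual_address // PAGE_SIZE)
print(frame * PAGE_SIZE + virtual_address % PAGE_SIZE)  # physical address 3112
```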
6.6 Real-World Example
• The Pentium architecture supports both paging and segmentation, and they can be used in various combinations, including unsegmented unpaged, segmented unpaged, and unsegmented paged.
• The processor supports two levels of cache (L1 and L2), both
having a block size of 32 bytes.
• The L1 cache is next to the processor, and the L2 cache sits
between the processor and memory.
• The L1 cache is in two parts: the Pentium (like many other machines) separates the L1 cache into an instruction cache (I-cache), which holds instructions, and a data cache (D-cache).
The next slide shows this organization schematically.