Lecture 3 (Memory Hierarchy and Caches)
2
Project Proposal Deadline
• Deadline is on October 2nd
3
Memory (Programmer’s View)
4
Virtual vs. Physical Memory
• Programmer sees virtual memory
– Can assume the memory is “infinite”
• Reality: Physical memory size is much smaller than what
the programmer assumes
• The system (system software + hardware, cooperatively)
maps virtual memory addresses to physical memory
– The system automatically manages the physical memory
space transparently to the programmer
+ Programmer does not need to know the physical size of memory nor manage it → A
small physical memory can appear as a huge one to the programmer → Life is
easier for the programmer
-- More complex system software and architecture
5
(Physical) Memory System
• You need a larger level of storage to manage a small
amount of physical memory automatically
→ Physical memory has a backing store: disk
[Figure: a modern multi-core memory system. Each core's pipeline needs an instruction supply and a data supply for instruction execution. Cores 0–3 have private L2 caches 0–3, share an L3 cache, and reach DRAM memory through the DRAM controller and DRAM interface.]
9
Ideal Memory
• Zero access time (latency)
• Infinite capacity
• Zero cost
• Infinite bandwidth (to support multiple accesses
in parallel)
10
The Problem
• Ideal memory’s requirements oppose each other
• Bigger is slower
– Bigger → Takes longer to determine the location
• Faster is more expensive
– Memory technology: SRAM vs. DRAM vs. Disk vs.
Tape
• Higher bandwidth is more expensive
– Need more banks, more ports, higher frequency, or
faster technology
11
Memory Technology: DRAM
• Dynamic random access memory
• Capacitor charge state indicates stored value
– Whether the capacitor is charged or discharged
indicates storage of 1 or 0
– 1 capacitor
– 1 access transistor
[Cell schematic: the access transistor, gated by the row enable line, connects the capacitor to _bitline]
• Capacitor leaks through the RC path
– DRAM cell loses charge over time
– DRAM cell needs to be refreshed
12
Memory Technology: SRAM
• Static random access memory
• Two cross coupled inverters store a single bit
– Feedback path enables the stored value to persist in the “cell”
– 4 transistors for storage
– 2 transistors for access
[Cell schematic: the two access transistors, gated by the row select line, connect the cross-coupled inverters to bitline and _bitline]
13
Memory Bank Organization and Operation
• Read access sequence:
1. Decode row address & drive word-lines
2. Selected bits drive bit-lines (entire row is read out)
3. Amplify row data
4. Decode column address & select subset of row (send to output)
5. Precharge bit-lines (for next access)
14
SRAM (Static Random Access Memory)
• Read sequence:
1. Address decode
2. Drive row select
3. Selected bit-cells drive the bitlines
[Cell schematic: row select gates the access transistors, which connect the cell to bitline and _bitline]
DRAM (Dynamic Random Access Memory)
• Read sequence:
1~3. Same as SRAM
4. A “flip-flopping” sense amp amplifies and regenerates the bitline; the data bit is mux’ed out
5. Precharge all bitlines (for the next access)
[Array organization: n row-address bits (RAS) select one of 2^n rows; m column-address bits (CAS) select one of 2^m columns through the sense amps and mux (n ≈ m to minimize overall latency). A DRAM die comprises multiple such arrays.]
• Destructive reads
• Charge loss over time
• Refresh: a DRAM controller must periodically read each row within the allowed refresh time (10s of ms) so that charge is restored
16
DRAM vs. SRAM
• DRAM
– Slower access (capacitor)
– Higher density (1T 1C cell)
– Lower cost
– Requires refresh (power, performance, circuitry)
– Manufacturing requires putting capacitor and logic together
• SRAM
– Faster access (no capacitor)
– Lower density (6T cell)
– Higher cost
– No need for refresh
– Manufacturing compatible with logic process (no capacitor)
17
The Problem
• Bigger is slower
– SRAM, 512 Bytes, sub-nanosec
– SRAM, KByte~MByte, ~nanosec
– DRAM, Gigabyte, ~50 nanosec
– Hard Disk, Terabyte, ~10 millisec
• Faster is more expensive (dollars and chip area)
– SRAM, < $10 per Megabyte
– DRAM, < $1 per Megabyte
– Hard Disk, < $1 per Gigabyte
– These sample values (circa ~2011) scale with time
• Other technologies have their place as well
– Flash memory, PC-RAM, MRAM, RRAM (not mature yet)
18
Why Memory Hierarchy?
• We want both fast and large
[Figure: back up everything here, in a big but slow memory]
20
Memory Hierarchy
• Fundamental tradeoff
– Fast memory: small
– Large memory: slow
• Idea: Memory hierarchy
[Figure: CPU with register file (RF) → Cache → Main Memory (DRAM) → Hard Disk]
21
Locality
• One’s recent past is a very good predictor of one’s near future.
23
Caching Basics: Exploit Temporal Locality
• Idea: Store recently accessed data in automatically
managed fast memory (called cache)
• Anticipation: the data will be accessed again soon
24
Caching Basics: Exploit Spatial Locality
• Idea: Store addresses adjacent to the recently accessed
one in automatically managed fast memory
– Logically divide memory into equal size blocks
– Fetch to cache the accessed block in its entirety
• Anticipation: nearby data will be accessed soon
25
The Bookshelf Analogy
• Book in your hand
• Desk
• Bookshelf
• Boxes at home
• Boxes in storage
26
Caching in a Pipelined Design
• The cache needs to be tightly integrated into the pipeline
– Ideally, access in 1-cycle so that dependent operations do not
stall
• High frequency pipeline → Cannot make the cache large
– But, we want a large cache AND a pipelined design
• Idea: Cache hierarchy
[Figure: CPU with register file (RF) → Level 1 Cache → Level 2 Cache → Main Memory (DRAM)]
27
A Note on Manual vs. Automatic Management
• Manual: Programmer manages data movement across
levels
-- too painful for programmers on substantial programs
– “core” vs “drum” memory in the 50’s
– still done in some embedded processors (on-chip scratch pad
SRAM in lieu of a cache) and GPUs (called “shared memory”)
28
A Modern Memory Hierarchy
• Register File: 32 words, sub-nsec — manual/compiler management (register spilling)
• L1 cache: ~32 KB, ~nsec
• L2 cache: 512 KB ~ 1 MB, many nsec
• L3 cache, .....
– The cache levels are managed automatically by hardware (HW cache management); together these levels form the memory abstraction the program sees
31
Intel Pentium 4 Example
• 90nm P4, 3.6 GHz
• L1 D-cache
– C1 = 16K
– t1 = 4 cyc int / 9 cyc fp
• L2 D-cache
– C2 = 1024 KB
– t2 = 18 cyc int / 18 cyc fp
• Main memory
– t3 = ~50 ns or 180 cyc
• Resulting average access times (m1, m2 = L1/L2 miss rates; T1, T2 = average access times seen at L1/L2, in cycles):
– if m1 = 0.1, m2 = 0.1: T1 = 7.6, T2 = 36
– if m1 = 0.01, m2 = 0.01: T1 = 4.2, T2 = 19.8
– if m1 = 0.05, m2 = 0.01: T1 = 5.00, T2 = 19.8
– if m1 = 0.01, m2 = 0.50: T1 = 5.08, T2 = 108
• Notice
– best case latency is not 1
– worst case access latencies reach 500+ cycles
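The T1 and T2 values above follow a simple two-level average-access-time recursion; this is my reconstruction of the arithmetic implied by the slide’s numbers:

  T2 = t2 + m2 * t3        (average access time seen at L2)
  T1 = t1 + m1 * T2        (average access time seen at L1)

  e.g., m1 = 0.1, m2 = 0.1:  T2 = 18 + 0.1 * 180 = 36 cycles,  T1 = 4 + 0.1 * 36 = 7.6 cycles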
Cache Basics and Operation
Cache
• Generically, any structure that “memoizes” frequently
used results to avoid repeating the long-latency
operations required to reproduce the results from
scratch, e.g. a web cache
[Figure: an address presented to the cache returns Hit/miss? and Data]
• Cache hit rate = (# hits) / (# hits + # misses) = (# hits) / (# accesses)
• Average memory access time (AMAT)
= ( hit-rate * hit-latency ) + ( miss-rate * miss-latency )
• Aside: Can reducing AMAT reduce performance?
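As a concrete illustration, a minimal C sketch of the AMAT formula above; the function and parameter names are mine, not from the slide:

/* Average memory access time (AMAT) as defined above; latencies in cycles. */
double amat(double hit_rate, double hit_latency, double miss_latency)
{
    double miss_rate = 1.0 - hit_rate;        /* hit rate + miss rate = 1 */
    return hit_rate * hit_latency + miss_rate * miss_latency;
}

/* Example: 0.9 hit rate, 4-cycle hit, 180-cycle miss -> 0.9*4 + 0.1*180 = 21.6 cycles. */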
36
A Basic Hardware Cache Design
• We will start with a basic hardware cache design
37
Blocks and Addressing the Cache
◼ Memory is logically divided into fixed-size blocks
◼ Each block maps to a location in the cache, determined by
the index bits in the address
❑ used to index into the tag and data stores
[Address fields: tag (2 bits) | index (3 bits) | byte in block (3 bits)]
1) index into the tag and data stores with index bits in address
2) check valid bit in tag store
3) compare tag bits in address with the stored tag in tag store
38
Direct-Mapped Cache: Placement and Access
• Assume byte-addressable memory: 256 bytes, 8-
byte blocks → 32 blocks
• Assume cache: 64 bytes, 8 blocks
– Direct-mapped: A block can go to only one location
[Figure: tag store (valid bit V, tag) and data store, indexed by the index bits; the stored tag is compared (=?) with the address tag to produce Hit?, and the byte-in-block bits drive a MUX that selects the requested byte as Data]
– Addresses with same index contend for the same
location: Cause conflict misses
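A minimal C sketch of this direct-mapped lookup, using the 2 / 3 / 3-bit tag / index / byte-in-block split from the previous slide; all names and types are illustrative, not a real hardware design:

#include <stdbool.h>
#include <stdint.h>

#define NUM_BLOCKS 8              /* 64-byte cache, 8-byte blocks */
#define BLOCK_SIZE 8

struct cache_line { bool valid; uint8_t tag; uint8_t data[BLOCK_SIZE]; };
static struct cache_line cache[NUM_BLOCKS];

/* Returns true on a hit and copies the requested byte into *out. */
bool dm_lookup(uint8_t addr, uint8_t *out)
{
    uint8_t offset = addr & 0x7;          /* bits 2:0 - byte in block */
    uint8_t index  = (addr >> 3) & 0x7;   /* bits 5:3 - index         */
    uint8_t tag    = addr >> 6;           /* bits 7:6 - tag           */

    struct cache_line *line = &cache[index];
    if (line->valid && line->tag == tag) {    /* valid entry, tags match */
        *out = line->data[offset];
        return true;                          /* hit */
    }
    return false;   /* miss: the block must be fetched and filled into cache[index] */
}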
39
Direct-Mapped Caches
• Direct-mapped cache: Two blocks in memory that
map to the same index in the cache cannot be
present in the cache at the same time
– One index → one entry
[Figures: set-associative cache organization — each set holds multiple (V, tag) entries in the tag store; all tags in the indexed set are compared (=?) in parallel, hit logic produces Hit?, and MUXes select the matching way from the data store and the byte within the block]
• Higher associativity (e.g., 4-way):
+ Likelihood of conflict misses even lower
-- More tag comparators and wider data mux; larger tags
42
Full Associativity
• Fully associative cache
– A block can be placed in any cache location
[Figure: fully associative cache — the tag store holds a (valid, tag) entry per block; the address tag is compared (=?) against every stored tag in parallel, hit logic produces Hit?, and MUXes select the matching block from the data store and the byte within the block]
43
Associativity (and Tradeoffs)
• Degree of associativity: How many blocks can map to
the same index (or set)?
• Higher associativity
++ Higher hit rate
-- Slower cache access time (hit latency and data access latency)
-- More expensive hardware (more comparators)
[Plot: hit rate vs. associativity — hit rate improves with associativity, with diminishing returns]
44
Issues in Set-Associative Caches
• Think of each block in a set having a “priority”
– Indicating how important it is to keep the block in the cache
• Key issue: How do you determine/adjust block priorities?
• There are three key decisions in a set:
– Insertion, promotion, eviction (replacement)
• Insertion: What happens to priorities on a cache fill?
– Where to insert the incoming block, whether or not to insert the block
• Promotion: What happens to priorities on a cache hit?
– Whether and how to change block priority
• Eviction/replacement: What happens to priorities on a
cache miss?
– Which block to evict and how to adjust priorities
45
Eviction/Replacement Policy
• Which block in the set to replace on a cache miss?
– Any invalid block first
– If all are valid, consult the replacement policy
• Random
• FIFO
• Least recently used (how to implement?)
• Not most recently used
• Least frequently used?
• Least costly to re-fetch?
– Why would memory accesses have different cost?
• Hybrid replacement policies
• Optimal replacement policy?
46
Implementing LRU
• Idea: Evict the least recently accessed block
• Problem: Need to keep track of access ordering of blocks
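One way to track that ordering is a per-set access counter (“timestamp”) per way; a minimal C sketch, purely illustrative of the bookkeeping cost:

#include <stdint.h>

#define WAYS 4

struct lru_set {
    uint64_t last_used[WAYS];   /* per-way "timestamp" of the most recent access */
    uint64_t clock;             /* per-set access counter                        */
};

/* On every access that hits way w, record the current time. */
void lru_touch(struct lru_set *s, int w) { s->last_used[w] = ++s->clock; }

/* On a miss, evict the way with the oldest timestamp. */
int lru_victim(const struct lru_set *s)
{
    int victim = 0;
    for (int w = 1; w < WAYS; w++)
        if (s->last_used[w] < s->last_used[victim])
            victim = w;
    return victim;
}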
47
Approximations of LRU
• Most modern processors do not implement “true LRU”
(also called “perfect LRU”) in highly-associative caches
• Why?
– True LRU is complex
– LRU is an approximation to predict locality anyway (i.e., not
the best possible cache management policy)
• Examples:
– Not MRU (not most recently used)
– Hierarchical LRU: divide the N-way set into M “groups”, track
the MRU group and the MRU way in each group
– Victim-NextVictim Replacement: Only keep track of the victim
and the next victim
48
Hierarchical LRU (not MRU)
• Divide a set into multiple groups
• Keep track of only the MRU group
• Keep track of only the MRU block in each group
49
Cache Replacement Policy: LRU or Random
• LRU vs. Random: Which one is better?
– Example: 4-way cache, cyclic references to A, B, C, D, E
• 0% hit rate with LRU policy
• Set thrashing: When the “program working set” in a set is
larger than set associativity
– Random replacement policy is better when thrashing occurs
• In practice:
– Depends on workload
– Average hit rate of LRU and Random are similar
• Best of both Worlds: Hybrid of LRU and Random
– How to choose between the two? Set sampling
• See Qureshi et al., “A Case for MLP-Aware Cache Replacement,” ISCA 2006.
50
What Is the Optimal?
• Belady’s OPT
– Replace the block that is going to be referenced furthest in
the future by the program
– Belady, “A study of replacement algorithms for a virtual-
storage computer,” IBM Systems Journal, 1966.
– How do we implement this? Simulate?
51
What’s In A Tag Store Entry?
• Valid bit
• Tag
• Replacement policy bits
• Dirty bit?
– Write back vs. write through caches
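Putting those fields together, a C-struct sketch of one tag-store entry; the field widths are assumptions for illustration:

#include <stdint.h>

/* One tag-store entry for a write-back cache; widths depend on cache geometry. */
struct tag_entry {
    uint32_t tag   : 20;   /* address tag                                   */
    uint32_t valid : 1;    /* entry holds valid data                        */
    uint32_t dirty : 1;    /* block modified since fill (write-back caches) */
    uint32_t repl  : 2;    /* replacement policy bits (e.g., LRU ordering)  */
};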
52
Handling Writes (I)
◼ When do we write the modified data in a cache to the next
level?
• Write through: At the time the write happens
• Write back: When the block is evicted
– Write-back
+ Can combine multiple writes to the same block before eviction
– Potentially saves bandwidth between cache levels + saves energy
-- Need a bit in the tag store indicating the block is “dirty/modified”
– Write-through
+ Simpler
+ All levels are up to date. Consistency: Simpler cache coherence because
no need to check close-to-processor caches’ tag stores for presence
-- More bandwidth intensive; no combining of writes
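A minimal C sketch contrasting the two policies on a write hit; next_level_write() is a hypothetical helper standing in for whatever updates the next level:

#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE 64

struct cache_line { bool valid, dirty; uint64_t tag; uint8_t data[BLOCK_SIZE]; };

/* Hypothetical helper that writes data to the next level of the hierarchy. */
void next_level_write(uint64_t addr, const uint8_t *data, int len);

/* Write hit, WRITE-BACK policy: update the cached copy and set the dirty bit;
   the next level sees the data only when this block is evicted. */
void write_hit_writeback(struct cache_line *line, int offset, uint8_t byte)
{
    line->data[offset] = byte;
    line->dirty = true;
}

/* Write hit, WRITE-THROUGH policy: update the cached copy and immediately
   propagate the write to the next level (no dirty bit is needed). */
void write_hit_writethrough(struct cache_line *line, uint64_t addr, int offset, uint8_t byte)
{
    line->data[offset] = byte;
    next_level_write(addr, &byte, 1);
}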
53
Handling Writes (II)
• Do we allocate a cache block on a write miss?
– Allocate on write miss: Yes
– No-allocate on write miss: No
• No-allocate
+ Conserves cache space if locality of writes is low (potentially
better cache hit rate)
54
Handling Writes (III)
• What if the processor writes to an entire block
over a small amount of time?
55
Cache Performance
Cache Parameters vs. Miss/Hit Rate
• Cache size
• Block size
• Associativity
• Replacement policy
• Insertion/Placement policy
57
Cache Size
• Cache size: total data (not including tag) capacity
– bigger can exploit temporal locality better
– not ALWAYS better
• Too large a cache adversely affects hit and miss latency
– smaller is faster => bigger is slower
– access time may degrade critical path
• Too small a cache
– doesn’t exploit temporal locality well
– useful data replaced often
• Working set: the whole set of data the executing application references
– Within a time interval
[Plot: hit rate vs. cache size — hit rate climbs until the cache size covers the “working set” size, then levels off]
58
Block Size
• Block size is the data that is associated with an address tag
– not necessarily the unit of transfer between hierarchies
• Sub-blocking: A block divided into multiple pieces (each with V bit)
– Can improve “write” performance
59
Large Blocks: Critical-Word and Subblocking
• Large cache blocks can take a long time to fill into
the cache
– fill cache line critical word first
– restart cache access before complete fill
• Large cache blocks can waste bus bandwidth
– divide a block into subblocks
– associate separate valid bits for each subblock
– When is this useful?
[Figure: a sub-blocked cache line — one tag plus per-subblock valid (v) and dirty (d) bits]
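A C-struct sketch of such a sub-blocked line (sizes are illustrative):

#include <stdint.h>

#define SUBBLOCKS       4
#define SUBBLOCK_BYTES 16    /* a 64-byte block split into four 16-byte subblocks */

struct subblocked_line {
    uint64_t tag;                      /* one tag for the whole block */
    uint8_t  valid[SUBBLOCKS];         /* per-subblock valid (v) bit  */
    uint8_t  dirty[SUBBLOCKS];         /* per-subblock dirty (d) bit  */
    uint8_t  data[SUBBLOCKS][SUBBLOCK_BYTES];
};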
60
Associativity
• How many blocks can be present in the same index (i.e.,
set)?
• Larger associativity
– lower miss rate (reduced conflicts)
– higher hit latency and area cost (plus diminishing returns)
• Smaller associativity
– lower cost
– lower hit latency
• Especially important for L1 caches
61
Classification of Cache Misses
• Compulsory miss
– first reference to an address (block) always results in a miss
– subsequent references should hit unless the cache block is
displaced for the reasons below
• Capacity miss
– cache is too small to hold everything needed
– defined as the misses that would occur even in a fully-
associative cache (with optimal replacement) of the same
capacity
• Conflict miss
– defined as any miss that is neither a compulsory nor a
capacity miss
62
How to Reduce Each Miss Type
• Compulsory
– Caching cannot help
– Prefetching can
• Conflict
– More associativity
– Other ways to get more associativity without making the
cache associative
• Victim cache
• Better, randomized indexing
• Software hints?
• Capacity
– Utilize cache space better: keep blocks that will be referenced
– Software management: divide working set such that each
“phase” fits in cache
63
How to Improve Cache Performance
• Three fundamental goals
65
Cheap Ways of Reducing Conflict Misses
• Instead of building highly-associative caches:
• Victim Caches
• Hashed/randomized Index Functions
• Pseudo Associativity
• Skewed Associative Caches
• …
66
Victim Cache: Reducing Conflict Misses
• Idea: a small buffer next to the cache holds recently evicted (victim) blocks and is checked on a miss before going to the next level
[Figure: Direct-Mapped Cache ↔ Victim cache → Next Level Cache]
67
Hashing and Pseudo-Associativity
• Hashing: Use better “randomizing” index functions
+ can reduce conflict misses
• by distributing the accessed memory blocks more evenly to sets
• Example of conflicting accesses: strided access pattern where stride
value equals number of sets in cache
-- More complex to implement: can lengthen critical path
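A minimal C sketch of one simple randomizing index function — XOR-folding higher block-address bits into the index; real designs use more carefully chosen hash functions:

#include <stdint.h>

#define INDEX_BITS 7                        /* 128 sets */
#define INDEX_MASK ((1u << INDEX_BITS) - 1)

/* Conventional indexing: low-order block-address bits pick the set. */
uint32_t index_modulo(uint64_t block_addr)
{
    return (uint32_t)(block_addr & INDEX_MASK);
}

/* Hashed indexing: XOR a higher bit-field of the block address into the index,
   so strided patterns (stride = number of sets) spread over many sets. */
uint32_t index_hashed(uint64_t block_addr)
{
    return (uint32_t)((block_addr ^ (block_addr >> INDEX_BITS)) & INDEX_MASK);
}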
68
Skewed Associative Caches
• Idea: Reduce conflict misses by using different
index functions for each cache way
69
Skewed Associative Caches (I)
• Basic 2-way associative cache structure
[Figure: Way 0 and Way 1 use the same index function; the two tags in the indexed set are compared (=?) in parallel]
70
Skewed Associative Caches (II)
• Skewed associative caches
– Each bank has a different index function
[Figure: each way uses a different index function (e.g., f0 for Way 0); addresses that map to the same set under one function are redistributed to different sets under the other, instead of all landing in the same set]
71
Skewed Associative Caches (III)
• Idea: Reduce conflict misses by using different index
functions for each cache way
72
Software Approaches for Higher Hit Rate
• Restructuring data access patterns
• Restructuring data layout
• Loop interchange
• Data structure separation/merging
• Blocking
• …
73
Restructuring Data Access Patterns (I)
• Idea: Restructure data layout or data access patterns
• Example: If column-major
– x[i+1,j] follows x[i,j] in memory
– x[i,j+1] is far away from x[i,j]
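C arrays are row-major (the mirror image of the column-major example above: x[i][j+1] is adjacent, x[i+1][j] is far away), so a minimal C sketch of the corresponding access-pattern fix (loop interchange) looks like this:

#define N 1024
static double x[N][N];

/* Poor locality in C (row-major): the inner loop strides by N elements. */
double sum_column_order(void)
{
    double sum = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += x[i][j];
    return sum;
}

/* After loop interchange: the inner loop walks consecutive memory. */
double sum_row_order(void)
{
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += x[i][j];
    return sum;
}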
74
Restructuring Data Access Patterns (II)
• Blocking
– Divide loops operating on arrays into computation
chunks so that each chunk can hold its data in the cache
– Avoids cache conflicts between different chunks of
computation
– Essentially: Divide the working set so that each piece fits
in the cache
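A minimal C sketch of blocking, here tiling a transpose so that each chunk of work fits in the cache; N and B are illustrative, and N is assumed to be a multiple of B:

#define N 1024
#define B 64                  /* tile edge; a BxB tile of each array should fit in cache */
static double a[N][N], b[N][N];

/* Blocked (tiled) transpose: each (ii, jj) tile of a and b is reused while it
   is resident in the cache, instead of streaming across whole rows/columns. */
void transpose_blocked(void)
{
    for (int ii = 0; ii < N; ii += B)
        for (int jj = 0; jj < N; jj += B)
            for (int i = ii; i < ii + B; i++)
                for (int j = jj; j < jj + B; j++)
                    b[j][i] = a[i][j];
}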
76
Restructuring Data Layout (II)
struct Node { • Idea: separate frequently-
struct Node* next; used fields of a data
int key; structure and pack them
struct Node-data* node-data;
} into a separate data
structure
struct Node-data {
char [256] name;
char [256] school; • Who should do this?
} – Programmer
– Compiler
while (node) {
if (node→key == input-key) { • Profiling vs. dynamic
// access node→node-data – Hardware?
} – Who can determine what
node = node→next; is frequently used?
}
77
Improving Basic Cache Performance
• Reducing miss rate
– More associativity
– Alternatives/enhancements to associativity
• Victim caches, hashing, pseudo-associativity, skewed associativity
– Better replacement/insertion policies
– Software approaches
• Reducing miss latency/cost
– Multi-level caches
– Critical word first
– Subblocking/sectoring
– Better replacement/insertion policies
– Non-blocking caches (multiple cache misses in parallel)
– Multiple accesses per cycle
– Software approaches
78
Miss Latency/Cost
• What is miss latency or miss cost affected by?
– Where does the miss get serviced from?
• Local vs. remote memory
• What level of cache in the hierarchy?
• Row hit versus row miss in DRAM
• Queueing delays in the memory controller and the interconnect
• …
– How much does the miss stall the processor?
• Is it overlapped with other latencies?
• Is the data immediately needed?
•…
79
Memory Level Parallelism (MLP)
• Memory-level parallelism: some cache misses are serviced in isolation, others in parallel
[Figure (timeline): miss A is serviced in isolation; misses B and C overlap in time — parallel misses]
• Example access stream: P4 P3 P2 P1 P1 P2 P3 P4 S1 S2 S3
– Belady’s OPT replacement: Hit/Miss = H H H M H H H H M M M → Misses = 4, Stalls = 4 (the misses are isolated)
– MLP-Aware replacement: Hit/Miss = H M M M H M M M H H H → Misses = 6, Stalls = 2 (the misses overlap), saving stall cycles overall
MLP-Aware Cache Replacement
• How do we incorporate MLP into replacement
decisions?
• Qureshi et al., “A Case for MLP-Aware Cache
Replacement,” ISCA 2006.
84
Paper Review #1: Summary
• Some reviews spent two paragraphs discussing the key insights of the paper without covering its strengths.
• Some reviews made many claims but did not use any citations to support those claims.
85
Paper Review #1: Grades Distribution
[Bar chart: grade distribution for Paper Review #1 — grades of 8, 9, and 10 (out of 10) on the x-axis, review counts from 0 to 7 on the y-axis]
86
Review #3: Cache Compression
87
CSC 2224: Parallel Computer
Architecture and Programming
Memory Hierarchy & Caches