Lecture 3 (Memory Hierarchy and Caches)

CSC 2224: Parallel Computer

Architecture and Programming


Memory Hierarchy & Caches

Prof. Gennady Pekhimenko


University of Toronto
Fall 2018
The content of this lecture is adapted from the lectures of
Onur Mutlu @ CMU and ETH
Reviews: Cache Compression
• Review:
– Pekhimenko et al., “Base-Delta-Immediate
Compression: Practical Data Compression for On-
Chip Caches,” PACT 2012

2
Project Proposal Deadline
• Deadline is on October 2nd

• Send emails with your proposals (PDFs) to


[email protected]

3
Memory (Programmer’s View)

4
Virtual vs. Physical Memory
• Programmer sees virtual memory
– Can assume the memory is “infinite”
• Reality: Physical memory size is much smaller than what
the programmer assumes
• The system (system software + hardware, cooperatively)
maps virtual memory addresses to physical memory
– The system automatically manages the physical memory
space transparently to the programmer
+ Programmer does not need to know the physical size of memory nor manage it → A
small physical memory can appear as a huge one to the programmer → Life is
easier for the programmer
-- More complex system software and architecture

A classic example of the programmer/(micro)architect tradeoff

5
(Physical) Memory System
• You need a larger level of storage to manage a small
amount of physical memory automatically
→ Physical memory has a backing store: disk

• We will first start with the physical memory system

• For now, ignore the virtual → physical indirection

• We will get back to it when the needs of virtual memory start complicating the design of physical memory…
6
Idealism

Instruction Supply (ideal):
- Zero latency access
- Infinite capacity
- Zero cost
- Perfect control flow

Pipeline, i.e., instruction execution (ideal):
- No pipeline stalls
- Perfect data flow (reg/memory dependencies)
- Zero-cycle interconnect (operand communication)
- Enough functional units
- Zero latency compute

Data Supply (ideal):
- Zero latency access
- Infinite capacity
- Infinite bandwidth
- Zero cost

7
The Memory Hierarchy

Memory in a Modern System
[Die photo: four cores (CORE 0–3), each with a private L2 cache (L2 CACHE 0–3), a shared L3 cache, the DRAM memory controller and DRAM interface on chip, and off-chip DRAM banks.]

9
Ideal Memory
• Zero access time (latency)
• Infinite capacity
• Zero cost
• Infinite bandwidth (to support multiple accesses
in parallel)

10
The Problem
• Ideal memory’s requirements oppose each other
• Bigger is slower
– Bigger → Takes longer to determine the location
• Faster is more expensive
– Memory technology: SRAM vs. DRAM vs. Disk vs.
Tape
• Higher bandwidth is more expensive
– Need more banks, more ports, higher frequency, or
faster technology
11
Memory Technology: DRAM
• Dynamic random access memory
• Capacitor charge state indicates stored value
– Whether the capacitor is charged or discharged
indicates storage of 1 or 0
– 1 capacitor
– 1 access transistor
[Cell diagram: the access transistor, gated by the row-enable line, connects the storage capacitor to the bitline.]
• Capacitor leaks through the RC path
– DRAM cell loses charge over time
– DRAM cell needs to be refreshed
12
Memory Technology: SRAM
• Static random access memory
• Two cross coupled inverters store a single bit
– Feedback path enables the stored value to persist in the “cell”
– 4 transistors for storage
– 2 transistors for access

[Cell diagram: 6T SRAM cell; the two access transistors, gated by the row-select line, connect the cross-coupled inverters to the bitline and its complement (_bitline).]

13
Memory Bank Organization and Operation
• Read access sequence:
1. Decode row address
& drive word-lines

2. Selected bits drive


bit-lines
• Entire row read

3. Amplify row data

4. Decode column
address & select subset
of row
• Send to output

5. Precharge bit-lines
• For next access

14
SRAM (Static Random Access Memory)
Read sequence:
1. address decode
2. drive row select
3. selected bit-cells drive the bitlines (entire row is read together)
4. differential sensing and column select (data is ready)
5. precharge all bitlines (for next read or write)

[Array organization: a 2^n-row x 2^m-column bit-cell array; the n+m address bits split into an n-bit row address and an m-bit column address, and the selected row's 2^m bitline pairs feed the sense amps and column mux that deliver 1 bit out (n ≈ m minimizes overall latency).]

Access latency is dominated by steps 2 and 3; cycling time is dominated by steps 2, 3 and 5:
- step 2 is proportional to 2^m
- steps 3 and 5 are proportional to 2^n

15
DRAM (Dynamic Random Access Memory)
Bits are stored as charge on a node capacitance (non-restorative):
- the bit cell loses charge when read
- the bit cell loses charge over time

Read sequence:
1–3. same as SRAM
4. a "flip-flopping" sense amp amplifies and regenerates the bitline; the data bit is mux'ed out
5. precharge all bitlines

[Array organization: as for SRAM, a 2^n-row x 2^m-column bit-cell array with sense amps and a column mux delivering 1 bit out (n ≈ m minimizes overall latency); the row address arrives with RAS and the column address with CAS, and a DRAM die comprises multiple such arrays.]

Reads are destructive and charge leaks over time, so refresh is required: a DRAM controller must periodically read each row within the allowed refresh time (tens of ms) so that the charge is restored.

16
DRAM vs. SRAM
• DRAM
– Slower access (capacitor)
– Higher density (1T 1C cell)
– Lower cost
– Requires refresh (power, performance, circuitry)
– Manufacturing requires putting capacitor and logic together

• SRAM
– Faster access (no capacitor)
– Lower density (6T cell)
– Higher cost
– No need for refresh
– Manufacturing compatible with logic process (no capacitor)

17
The Problem
• Bigger is slower
– SRAM, 512 Bytes, sub-nanosec
– SRAM, KByte~MByte, ~nanosec
– DRAM, Gigabyte, ~50 nanosec
– Hard Disk, Terabyte, ~10 millisec
• Faster is more expensive (dollars and chip area)
– SRAM, < $10 per Megabyte
– DRAM, < $1 per Megabyte
– Hard Disk, < $1 per Gigabyte
– These sample values (circa ~2011) scale with time
• Other technologies have their place as well
– Flash memory, PC-RAM, MRAM, RRAM (not mature yet)

18
Why Memory Hierarchy?
• We want both fast and large

• But we cannot achieve both with a single level of memory

• Idea: Have multiple levels of storage (progressively bigger and slower as the levels are farther from the processor) and ensure most of the data the processor needs is kept in the fast(er) level(s)
19
The Memory Hierarchy

move what you use here:  small, fast, faster per byte
backup everything here:  big but slow, cheaper per byte

With good locality of reference, memory appears as fast as the small, fast level and as large as the big, slow level.

20
Memory Hierarchy
• Fundamental tradeoff
– Fast memory: small
– Large memory: slow
• Idea: Memory hierarchy

[Diagram: CPU (with register file, RF) → Cache → Main Memory (DRAM) → Hard Disk]

• Latency, cost, size, bandwidth

21
Locality
• One’s recent past is a very good predictor of his/her
near future.

• Temporal Locality: If you just did something, it is


very likely that you will do the same thing again
soon
– since you are here today, there is a good chance you will
be here again and again regularly

• Spatial Locality: If you did something, it is very likely


you will do something similar/related (in space)
– every time I find you in this room, you are probably
sitting close to the same people
22
Memory Locality
• A “typical” program has a lot of locality in memory
references
– typical programs are composed of “loops”

• Temporal: A program tends to reference the same


memory location many times and all within a small
window of time

• Spatial: A program tends to reference a cluster of memory


locations at a time
– most notable examples:
1. instruction memory references
2. array/data structure references

23
Caching Basics: Exploit Temporal Locality
• Idea: Store recently accessed data in automatically
managed fast memory (called cache)
• Anticipation: the data will be accessed again soon

• Temporal locality principle


– Recently accessed data will be again accessed in the near
future
– This is what Maurice Wilkes had in mind:
• Wilkes, “Slave Memories and Dynamic Storage Allocation,” IEEE
Trans. On Electronic Computers, 1965.
• “The use is discussed of a fast core memory of, say 32000 words
as a slave to a slower core memory of, say, one million words in
such a way that in practical cases the effective access time is
nearer that of the fast memory than that of the slow memory.”

24
Caching Basics: Exploit Spatial Locality
• Idea: Store addresses adjacent to the recently accessed
one in automatically managed fast memory
– Logically divide memory into equal size blocks
– Fetch to cache the accessed block in its entirety
• Anticipation: nearby data will be accessed soon

• Spatial locality principle


– Nearby data in memory will be accessed in the near future
• E.g., sequential instruction access, array traversal
– This is what IBM 360/85 implemented
• 16 Kbyte cache with 64 byte blocks
• Liptay, “Structural aspects of the System/360 Model 85 II: the cache,”
IBM Systems Journal, 1968.

25
The Bookshelf Analogy
• Book in your hand
• Desk
• Bookshelf
• Boxes at home
• Boxes in storage

• Recently-used books tend to stay on desk


– Comp Arch books, books for classes you are currently taking
– Until the desk gets full
• Adjacent books in the shelf needed around the same time
– If I have organized/categorized my books well in the shelf

26
Caching in a Pipelined Design
• The cache needs to be tightly integrated into the pipeline
– Ideally, access in 1-cycle so that dependent operations do not
stall
• High frequency pipeline → Cannot make the cache large
– But, we want a large cache AND a pipelined design
• Idea: Cache hierarchy

[Diagram: CPU (RF) → Level 1 Cache → Level 2 Cache → Main Memory (DRAM)]

27
A Note on Manual vs. Automatic Management
• Manual: Programmer manages data movement across
levels
-- too painful for programmers on substantial programs
– “core” vs “drum” memory in the 50’s
– still done in some embedded processors (on-chip scratch pad
SRAM in lieu of a cache) and GPUs (called “shared memory”)

• Automatic: Hardware manages data movement across


levels, transparently to the programmer
++ programmer’s life is easier
– the average programmer doesn’t need to know about it
• You don’t need to know how big the cache is and how it works to write a
“correct” program! (What if you want a “fast” program?)

28
A Modern Memory Hierarchy

Register File: 32 words, sub-nsec          (manual/compiler register spilling)
L1 cache: ~32 KB, ~nsec
L2 cache: 512 KB ~ 1 MB, many nsec         (automatic HW cache management)
L3 cache, .....
Main memory (DRAM): GB, ~100 nsec
Swap Disk: 100 GB, ~10 msec                (automatic demand paging)

Everything below the register file forms the "memory" abstraction seen by the program.

29
Hierarchical Latency Analysis
• A given memory hierarchy level i has a technology-intrinsic access time of ti; the perceived access time Ti is longer than ti
• Except for the outer-most hierarchy, when looking for a given
address there is
– a chance (hit-rate hi) you “hit” and access time is ti
– a chance (miss-rate mi) you “miss” and access time ti +Ti+1
– hi + mi = 1
• Thus
Ti = hi·ti + mi·(ti + Ti+1)
Ti = ti + mi ·Ti+1

hi and mi are defined to be the hit-rate and miss-rate of just the references that missed at Li-1
30
Hierarchy Design Considerations
• Recursive latency equation
Ti = ti + mi ·Ti+1
• The goal: achieve desired T1 within allowed cost
• Ti ≈ ti is desirable
• Keep mi low
– increasing capacity Ci lowers mi, but beware of increasing ti
– lower mi by smarter management (replacement::anticipate what you
don’t need, prefetching::anticipate what you will need)
• Keep Ti+1 low
– faster lower hierarchies, but beware of increasing cost
– introduce intermediate hierarchies as a compromise

31
Intel Pentium 4 Example
• 90nm P4, 3.6 GHz
• L1 D-cache
– C1 = 16K
– t1 = 4 cyc int / 9 cyc fp
• L2 D-cache
– C2 = 1024 KB
– t2 = 18 cyc int / 18 cyc fp
• Main memory
– t3 = ~50 ns or 180 cyc
• Resulting latencies (in cycles, using Ti = ti + mi·Ti+1):
– if m1 = 0.1,  m2 = 0.1:  T1 = 7.6,  T2 = 36
– if m1 = 0.01, m2 = 0.01: T1 = 4.2,  T2 = 19.8
– if m1 = 0.05, m2 = 0.01: T1 = 5.00, T2 = 19.8
– if m1 = 0.01, m2 = 0.50: T1 = 5.08, T2 = 108
• Notice
– best case latency is not 1
– worst case access latencies are into 500+ cycles
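To make the recursion concrete, here is a minimal C sketch that evaluates Ti = ti + mi·Ti+1 and reproduces the first row of numbers above (the cycle counts and miss rates are the slide's; the function and variable names are mine):

#include <stdio.h>

/* Perceived access time of level i: Ti = ti + mi * Ti+1.
   t[]: intrinsic access times (cycles); m[]: local miss rates.
   The outermost level is assumed to always hit. */
double perceived_latency(const double *t, const double *m, int level, int levels)
{
    if (level == levels - 1)
        return t[level];
    return t[level] + m[level] * perceived_latency(t, m, level + 1, levels);
}

int main(void)
{
    double t[] = { 4.0, 18.0, 180.0 };   /* t1 (L1, int), t2 (L2), t3 (memory) in cycles */
    double m[] = { 0.1, 0.1, 0.0 };      /* m1 = m2 = 0.1 */
    printf("T2 = %.1f\n", perceived_latency(t, m, 1, 3));   /* prints 36.0 */
    printf("T1 = %.1f\n", perceived_latency(t, m, 0, 3));   /* prints 7.6  */
    return 0;
}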
Cache Basics and Operation
Cache
• Generically, any structure that “memoizes” frequently
used results to avoid repeating the long-latency
operations required to reproduce the results from
scratch, e.g. a web cache

• Most commonly in the on-die context: an


automatically-managed memory hierarchy based on
SRAM
– memoize in SRAM the most frequently accessed DRAM
memory locations to avoid repeatedly paying for the DRAM
access latency
34
Caching Basics
◼ Block (line): Unit of storage in the cache
❑Memory is logically divided into cache blocks that map to
locations in the cache
◼ On a reference:
❑HIT: If in cache, use cached data instead of accessing memory
❑MISS: If not in cache, bring block into cache
◼ Maybe have to kick something else out to do it

◼ Some important cache design decisions


❑Placement: where and how to place/find a block in cache?
❑Replacement: what data to remove to make room in cache?
❑Granularity of management: large or small blocks? Subblocks?
❑Write policy: what do we do about writes?
❑Instructions/data: do we treat them separately?
35
Cache Abstraction and Metrics

[Diagram: the address goes to the Tag Store (is the address in the cache? + bookkeeping) and the Data Store (stores the memory blocks); the tag store produces Hit/miss?, the data store produces the Data.]

• Cache hit rate = (# hits) / (# hits + # misses) = (# hits) / (# accesses)
• Average memory access time (AMAT)
= ( hit-rate * hit-latency ) + ( miss-rate * miss-latency )
• Aside: Can reducing AMAT reduce performance?

36
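As a quick worked example of the AMAT formula (the numbers are illustrative, not from the slides): with a 95% hit rate, a 1-cycle hit latency and a 20-cycle miss latency, AMAT = 0.95·1 + 0.05·20 = 1.95 cycles. In C:

double amat(double hit_rate, double hit_latency, double miss_latency)
{
    /* AMAT = hit-rate * hit-latency + miss-rate * miss-latency, as defined above */
    return hit_rate * hit_latency + (1.0 - hit_rate) * miss_latency;
}
/* amat(0.95, 1.0, 20.0) evaluates to 1.95 */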
A Basic Hardware Cache Design
• We will start with a basic hardware cache design

• Then, we will examine a multitude of ideas to


make it better

37
Blocks and Addressing the Cache
◼ Memory is logically divided into fixed-size blocks
◼ Each block maps to a location in the cache, determined by the index bits in the address
❑ used to index into the tag and data stores

Example 8-bit address split: tag (2 bits) | index (3 bits) | byte in block (3 bits)

◼ Cache access:
1) index into the tag and data stores with the index bits of the address
2) check the valid bit in the tag store
3) compare the tag bits of the address with the stored tag in the tag store

◼ If a block is in the cache (cache hit), the stored tag should be valid and match the tag of the block

38
Direct-Mapped Cache: Placement and Access
• Assume byte-addressable memory: 256 bytes, 8-byte blocks → 32 blocks
• Assume cache: 64 bytes, 8 blocks
– Direct-mapped: A block can go to only one location
– Address split: tag (2 bits) | index (3 bits) | byte in block (3 bits)

[Diagram: the index selects one entry in the tag store (valid bit + tag) and one block in the data store; the stored tag is compared (=?) with the address tag to produce Hit?, and the byte-in-block bits drive a MUX that selects the requested byte of the Data.]

– Addresses with the same index contend for the same location: cause conflict misses

39
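A minimal C sketch of the lookup on this slide (8-bit addresses, 8-byte blocks, 8 blocks, so a 2-bit tag; the structure and function names are mine):

#include <stdbool.h>
#include <stdint.h>

#define BLOCK_BITS 3                      /* 8-byte blocks: 3 byte-in-block bits */
#define INDEX_BITS 3                      /* 8 blocks: 3 index bits */
#define NUM_SETS   (1 << INDEX_BITS)

struct line { bool valid; uint8_t tag; uint8_t data[1 << BLOCK_BITS]; };
static struct line cache[NUM_SETS];       /* direct-mapped: one line per index */

/* Returns true on a hit; on a miss, the block would be fetched and the
   tag/valid fields of cache[index] updated (possibly evicting the old block). */
bool lookup(uint8_t addr)
{
    uint8_t index = (addr >> BLOCK_BITS) & (NUM_SETS - 1);
    uint8_t tag   = addr >> (BLOCK_BITS + INDEX_BITS);    /* remaining 2 bits */
    /* the low BLOCK_BITS bits of addr select the byte within cache[index].data */
    return cache[index].valid && cache[index].tag == tag;
}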
Direct-Mapped Caches
• Direct-mapped cache: Two blocks in memory that
map to the same index in the cache cannot be
present in the cache at the same time
– One index → one entry

• Can lead to 0% hit rate if more than one block


accessed in an interleaved manner map to the same
index
– Assume addresses A and B have the same index bits but
different tag bits
– A, B, A, B, A, B, A, B, … → conflict in the cache index
– All accesses are conflict misses
40
Set Associativity
• Addresses 0 and 8 always conflict in a direct-mapped cache
• Instead of having one column of 8 blocks, have 2 columns of 4 blocks

[Diagram: a 2-way tag store and data store; each set holds two (V, tag) entries, both are compared (=?) with the address tag, the hit logic drives a MUX that selects the matching way of the data store, and the byte-in-block bits select the byte. Address split: tag (3 bits) | index (2 bits) | byte in block (3 bits).]

Key idea: Associative memory within the set
+ Accommodates conflicts better (fewer conflict misses)
-- More complex, slower access, larger tag store

41
Higher Associativity
• 4-way

[Diagram: the tag store holds four tags per set, all compared (=?) in parallel; the hit logic selects among the four ways of the data store via a MUX, then the byte-in-block MUX selects the byte.]

+ Likelihood of conflict misses even lower
-- More tag comparators and a wider data mux; larger tags

42
Full Associativity
• Fully associative cache
– A block can be placed in any cache location

[Diagram: all eight tags in the tag store are compared (=?) with the address tag in parallel; the hit logic selects the matching entry of the data store via a MUX, then the byte-in-block MUX selects the byte.]

43
Associativity (and Tradeoffs)
• Degree of associativity: How many blocks can map to the same index (or set)?
• Higher associativity
++ Higher hit rate
-- Slower cache access time (hit latency and data access latency)
-- More expensive hardware (more comparators)

• Diminishing returns from higher associativity
[Plot: hit rate vs. associativity, flattening out as associativity grows.]

44
Issues in Set-Associative Caches
• Think of each block in a set having a “priority”
– Indicating how important it is to keep the block in the cache
• Key issue: How do you determine/adjust block priorities?
• There are three key decisions in a set:
– Insertion, promotion, eviction (replacement)
• Insertion: What happens to priorities on a cache fill?
– Where to insert the incoming block, whether or not to insert the block
• Promotion: What happens to priorities on a cache hit?
– Whether and how to change block priority
• Eviction/replacement: What happens to priorities on a
cache miss?
– Which block to evict and how to adjust priorities

45
Eviction/Replacement Policy
• Which block in the set to replace on a cache miss?
– Any invalid block first
– If all are valid, consult the replacement policy
• Random
• FIFO
• Least recently used (how to implement?)
• Not most recently used
• Least frequently used?
• Least costly to re-fetch?
– Why would memory accesses have different cost?
• Hybrid replacement policies
• Optimal replacement policy?

46
Implementing LRU
• Idea: Evict the least recently accessed block
• Problem: Need to keep track of access ordering of blocks

• Question: 2-way set associative cache:


– What do you need to implement LRU perfectly?

• Question: 4-way set associative cache:


– What do you need to implement LRU perfectly?
– How many different orderings possible for the 4 blocks in the
set?
– How many bits needed to encode the LRU order of a block?
– What is the logic needed to determine the LRU victim?

47
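As a partial answer sketch for the 2-way case: a single LRU bit per set is enough, updated on every access (a minimal C sketch; the names are mine). For 4 ways there are 4! = 24 possible orderings per set, so perfect LRU needs at least 5 bits per set, or, for example, a 2-bit rank per block.

#include <stdbool.h>
#include <stdint.h>

/* One 2-way set: the single 'lru' field names the least recently used way. */
struct set2 {
    uint32_t tag[2];
    bool     valid[2];
    int      lru;                     /* 0 or 1 */
};

void touch(struct set2 *s, int way)   /* call on every hit and on every fill */
{
    s->lru = 1 - way;                 /* the other way becomes the LRU candidate */
}

int pick_victim(const struct set2 *s) /* which way to replace on a miss */
{
    if (!s->valid[0]) return 0;       /* invalid blocks are replaced first */
    if (!s->valid[1]) return 1;
    return s->lru;
}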
Approximations of LRU
• Most modern processors do not implement “true LRU”
(also called “perfect LRU”) in highly-associative caches

• Why?
– True LRU is complex
– LRU is an approximation to predict locality anyway (i.e., not
the best possible cache management policy)

• Examples:
– Not MRU (not most recently used)
– Hierarchical LRU: divide the N-way set into M “groups”, track
the MRU group and the MRU way in each group
– Victim-NextVictim Replacement: Only keep track of the victim
and the next victim

48
Hierarchical LRU (not MRU)
• Divide a set into multiple groups
• Keep track of only the MRU group
• Keep track of only the MRU block in each group

• On replacement, select victim as:


– A not-MRU block in one of the not-MRU groups
(randomly pick one of such blocks/groups)

49
Cache Replacement Policy: LRU or Random
• LRU vs. Random: Which one is better?
– Example: 4-way cache, cyclic references to A, B, C, D, E
• 0% hit rate with LRU policy
• Set thrashing: When the “program working set” in a set is
larger than set associativity
– Random replacement policy is better when thrashing occurs
• In practice:
– Depends on workload
– Average hit rate of LRU and Random are similar
• Best of both Worlds: Hybrid of LRU and Random
– How to choose between the two? Set sampling
• See Qureshi et al., “A Case for MLP-Aware Cache Replacement,“ ISCA
2006.

50
What Is the Optimal?
• Belady’s OPT
– Replace the block that is going to be referenced furthest in
the future by the program
– Belady, “A study of replacement algorithms for a virtual-
storage computer,” IBM Systems Journal, 1966.
– How do we implement this? Simulate?

• Is this optimal for minimizing miss rate?


• Is this optimal for minimizing execution time?
– No. Cache miss latency/cost varies from block to block!
– Two reasons: Remote vs. local caches and miss overlapping
– Qureshi et al. “A Case for MLP-Aware Cache
Replacement,“ ISCA 2006.

51
What’s In A Tag Store Entry?
• Valid bit
• Tag
• Replacement policy bits

• Dirty bit?
– Write back vs. write through caches

52
Handling Writes (I)
◼ When do we write the modified data in a cache to the next
level?
• Write through: At the time the write happens
• Write back: When the block is evicted

– Write-back
+ Can combine multiple writes to the same block before eviction
– Potentially saves bandwidth between cache levels + saves energy
-- Need a bit in the tag store indicating the block is “dirty/modified”

– Write-through
+ Simpler
+ All levels are up to date. Consistency: Simpler cache coherence because
no need to check close-to-processor caches’ tag stores for presence
-- More bandwidth intensive; no combining of writes
53
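A minimal sketch of how the "dirty/modified" bit distinguishes the two policies (the structure and helper names are mine, and write_next_level() stands in for whatever interface the next level provides):

#include <stdbool.h>
#include <stdint.h>

#define WRITE_BACK 1   /* set to 0 for a write-through cache */

struct cline { bool valid, dirty; uint32_t tag; uint8_t data[64]; };

void write_next_level(uint32_t addr, const uint8_t *block);   /* assumed helper */

void handle_write_hit(struct cline *line, uint32_t addr)
{
    /* ... update line->data with the store data ... */
#if WRITE_BACK
    line->dirty = true;                     /* combine writes; push only on eviction */
#else
    write_next_level(addr, line->data);     /* write-through: next level always up to date */
#endif
}

void handle_eviction(struct cline *line, uint32_t addr)
{
    if (WRITE_BACK && line->dirty)
        write_next_level(addr, line->data); /* flush the combined writes */
    line->valid = false;
    line->dirty = false;
}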
Handling Writes (II)
• Do we allocate a cache block on a write miss?
– Allocate on write miss: Yes
– No-allocate on write miss: No

• Allocate on write miss


+ Can combine writes instead of writing each of them
individually to next level
+ Simpler because write misses can be treated the same way as
read misses
-- Requires (?) transfer of the whole cache block

• No-allocate
+ Conserves cache space if locality of writes is low (potentially
better cache hit rate)

54
Handling Writes (III)
• What if the processor writes to an entire block
over a small amount of time?

• Is there any need to bring the block into the


cache from memory in the first place?

• Ditto for a portion of the block, i.e., subblock


– E.g., 4 bytes out of 64 bytes

55
Cache Performance
Cache Parameters vs. Miss/Hit Rate
• Cache size
• Block size
• Associativity

• Replacement policy
• Insertion/Placement policy

57
Cache Size
• Cache size: total data (not including tag) capacity
– bigger can exploit temporal locality better
– not ALWAYS better
• Too large a cache adversely affects hit and miss latency
– smaller is faster => bigger is slower
– access time may degrade critical path
• Too small a cache
– doesn't exploit temporal locality well
– useful data replaced often
• Working set: the whole set of data the executing application references
– within a time interval
[Plot: hit rate vs. cache size, rising steeply until the cache covers the "working set" size, then flattening.]

58
Block Size
• Block size is the data that is associated with an address tag
– not necessarily the unit of transfer between hierarchies
• Sub-blocking: A block divided into multiple pieces (each with its own valid bit)
– Can improve "write" performance
• Too small blocks
– don't exploit spatial locality well
– have larger tag overhead
• Too large blocks
– too few total # of blocks → less temporal locality exploitation
– waste of cache space and bandwidth/energy if spatial locality is not high
[Plot: hit rate vs. block size, peaking at an intermediate block size.]

59
Large Blocks: Critical-Word and Subblocking
• Large cache blocks can take a long time to fill into
the cache
– fill cache line critical word first
– restart cache access before complete fill
• Large cache blocks can waste bus bandwidth
– divide a block into subblocks
– associate separate valid bits for each subblock
– When is this useful?
[Subblocked line layout: one tag for the whole block, plus valid (v) and dirty (d) bits per subblock.]

60
Associativity
• How many blocks can be present in the same index (i.e., set)?
• Larger associativity
– lower miss rate (reduced conflicts)
– higher hit latency and area cost (plus diminishing returns)
• Smaller associativity
– lower cost
– lower hit latency
• Especially important for L1 caches
• Is power-of-2 associativity required?
[Plot: hit rate vs. associativity, with diminishing returns.]

61
Classification of Cache Misses
• Compulsory miss
– first reference to an address (block) always results in a miss
– subsequent references should hit unless the cache block is
displaced for the reasons below
• Capacity miss
– cache is too small to hold everything needed
– defined as the misses that would occur even in a fully-
associative cache (with optimal replacement) of the same
capacity
• Conflict miss
– defined as any miss that is neither a compulsory nor a
capacity miss
62
How to Reduce Each Miss Type
• Compulsory
– Caching cannot help
– Prefetching can
• Conflict
– More associativity
– Other ways to get more associativity without making the
cache associative
• Victim cache
• Better, randomized indexing
• Software hints?
• Capacity
– Utilize cache space better: keep blocks that will be referenced
– Software management: divide working set such that each
“phase” fits in cache

63
How to Improve Cache Performance
• Three fundamental goals

• Reducing miss rate


– Caveat: reducing miss rate can reduce performance if more
costly-to-refetch blocks are evicted

• Reducing miss latency or miss cost

• Reducing hit latency or hit cost

• The above three together affect performance


64
Improving Basic Cache Performance
• Reducing miss rate
– More associativity
– Alternatives/enhancements to associativity
• Victim caches, hashing, pseudo-associativity, skewed associativity
– Better replacement/insertion policies
– Software approaches
• Reducing miss latency/cost
– Multi-level caches
– Critical word first
– Subblocking/sectoring
– Better replacement/insertion policies
– Non-blocking caches (multiple cache misses in parallel)
– Multiple accesses per cycle
– Software approaches

65
Cheap Ways of Reducing Conflict Misses
• Instead of building highly-associative caches:
• Victim Caches
• Hashed/randomized Index Functions
• Pseudo Associativity
• Skewed Associative Caches
• …

66
Victim Cache: Reducing Conflict Misses

[Diagram: Direct-Mapped Cache ↔ small fully-associative Victim Cache ↔ Next-Level Cache]

• Jouppi, "Improving Direct-Mapped Cache Performance by the Addition of a Small Fully-Associative Cache and Prefetch Buffers," ISCA 1990.

• Idea: Use a small fully-associative buffer (victim cache) to store recently evicted blocks
+ Can avoid ping-ponging of cache blocks mapped to the same set (if two cache blocks that are continuously accessed in nearby time conflict with each other)
-- Increases miss latency if accessed serially with L2; adds complexity

67
Hashing and Pseudo-Associativity
• Hashing: Use better “randomizing” index functions
+ can reduce conflict misses
• by distributing the accessed memory blocks more evenly to sets
• Example of conflicting accesses: strided access pattern where stride
value equals number of sets in cache
-- More complex to implement: can lengthen critical path

• Pseudo-associativity (Poor Man’s associative cache)


– Serial lookup: On a miss, use a different index function and
access cache again
– Given a direct-mapped array with K cache blocks
• Implement K/N sets
• Given address Addr, sequentially look up: {0,Addr[lg(K/N)-1: 0]},
{1,Addr[lg(K/N)-1: 0]}, … , {N-1,Addr[lg(K/N)-1: 0]}
+ Less complex than N-way; -- Longer cache hit/miss latency

68
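A minimal C sketch of the serial lookup above for N alternative locations, i.e. probing indices {0, Addr[lg(K/N)-1:0]}, {1, Addr[lg(K/N)-1:0]}, ... in turn (all names are mine):

#include <stdbool.h>
#include <stdint.h>

#define K    1024                /* blocks in the direct-mapped array */
#define N    2                   /* number of alternative index functions */
#define SETS (K / N)             /* K/N sets, as on the slide */

struct pline { bool valid; uint32_t tag; };
static struct pline array[K];

/* i-th index function: prepend i to the low lg(K/N) bits of the block address,
   i.e. {i, Addr[lg(K/N)-1:0]}. */
static unsigned index_fn(unsigned i, uint32_t block_addr)
{
    return i * SETS + (block_addr % SETS);
}

/* Serial lookup: try each candidate location in turn; a hit found on a later
   probe costs extra cycles, which is why the hit/miss latency is longer. */
bool pseudo_assoc_lookup(uint32_t block_addr)
{
    for (unsigned i = 0; i < N; i++) {
        unsigned idx = index_fn(i, block_addr);
        if (array[idx].valid && array[idx].tag == block_addr / SETS)
            return true;
    }
    return false;                /* miss in all candidate locations */
}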
Skewed Associative Caches
• Idea: Reduce conflict misses by using different
index functions for each cache way

• Seznec, “A Case for Two-Way Skewed-Associative


Caches,” ISCA 1993.

69
Skewed Associative Caches (I)
• Basic 2-way associative cache structure

[Diagram: Way 0 and Way 1 use the same index function; the index bits of the address (tag | index | byte in block) select the same set in both ways, and both stored tags are compared (=?) with the address tag.]

70
Skewed Associative Caches (II)
• Skewed associative caches
– Each bank has a different index function

[Diagram: one way applies a hash function f0 to the index bits while the other uses the plain index, so addresses that map to the same set in one way are redistributed to different sets in the other.]

71
Skewed Associative Caches (III)
• Idea: Reduce conflict misses by using different index
functions for each cache way

• Benefit: indices are more randomized (memory


blocks are better distributed across sets)
– Less likely two blocks have same index (esp. with strided
access)
• Reduced conflict misses

• Cost: additional latency of hash function

72
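For illustration only, a sketch of two per-way index functions; the XOR-based hash below is a common choice but is not the exact function from Seznec's paper:

#include <stdint.h>

#define SET_BITS 6
#define SETS     (1 << SET_BITS)

/* Way 0: conventional index = low bits of the block address. */
static unsigned index_way0(uint32_t block_addr)
{
    return block_addr & (SETS - 1);
}

/* Way 1: a different (skewed) index, here the low bits XORed with the next
   SET_BITS bits, so blocks that conflict in way 0 usually spread out in way 1. */
static unsigned index_way1(uint32_t block_addr)
{
    return (block_addr ^ (block_addr >> SET_BITS)) & (SETS - 1);
}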
Software Approaches for Higher Hit Rate
• Restructuring data access patterns
• Restructuring data layout

• Loop interchange
• Data structure separation/merging
• Blocking
• …

73
Restructuring Data Access Patterns (I)
• Idea: Restructure data layout or data access patterns
• Example: If column-major
– x[i+1,j] follows x[i,j] in memory
– x[i,j+1] is far away from x[i,j]

Poor code:
for i = 1, rows
  for j = 1, columns
    sum = sum + x[i,j]

Better code:
for j = 1, columns
  for i = 1, rows
    sum = sum + x[i,j]

• This is called loop interchange
• Other optimizations can also increase hit rate
– Loop fusion, array merging, …
• What if multiple arrays? Unknown array size at compile time?

74
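The example above assumes column-major storage (as in Fortran). C arrays are row-major, so in C the cache-friendly version is the one with j in the inner loop; a minimal sketch (the array and function names are mine):

#define ROWS 1024
#define COLS 1024
static double x[ROWS][COLS];

/* C stores arrays row-major: x[i][j+1] is adjacent to x[i][j], so walking j in
   the inner loop gives unit-stride accesses (the opposite of the column-major
   example on the slide, where the roles of i and j are swapped). */
double sum_all(void)
{
    double sum = 0.0;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            sum += x[i][j];
    return sum;
}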
Restructuring Data Access Patterns (II)
• Blocking
– Divide loops operating on arrays into computation chunks so that each chunk can hold its data in the cache
– Avoids cache conflicts between different chunks of computation
– Essentially: Divide the working set so that each piece fits in the cache

• But, there are still self-conflicts in a block
1. there can be conflicts among different arrays
2. array sizes may be unknown at compile/programming time

75
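A minimal C sketch of blocking, here for a matrix transpose whose naive version would write one array with a large stride across the whole matrix; B is an illustrative tile size that would be chosen so the tiles being worked on fit in the cache:

#define N 2048
#define B 64                       /* tile size: must divide N in this sketch */
static double a[N][N], t[N][N];

/* Blocked transpose: process the matrices one B x B tile at a time, so the
   strided writes to 't' stay within a small tile instead of sweeping the
   whole array and evicting useful data. */
void transpose_blocked(void)
{
    for (int ii = 0; ii < N; ii += B)
        for (int jj = 0; jj < N; jj += B)
            for (int i = ii; i < ii + B; i++)
                for (int j = jj; j < jj + B; j++)
                    t[j][i] = a[i][j];
}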
Restructuring Data Layout (I)

struct Node {
    struct Node *next;
    int key;
    char name[256];
    char school[256];
};

while (node) {
    if (node->key == input_key) {
        /* access other fields of node */
    }
    node = node->next;
}

• Pointer-based traversal (e.g., of a linked list)
• Assume a huge linked list (1B nodes) and unique keys
• Why does the code above have a poor cache hit rate?
– The "other fields" occupy most of the cache line even though they are rarely accessed!

76
Restructuring Data Layout (II)

struct Node {
    struct Node *next;
    int key;
    struct Node_data *node_data;
};

struct Node_data {
    char name[256];
    char school[256];
};

while (node) {
    if (node->key == input_key) {
        /* access node->node_data */
    }
    node = node->next;
}

• Idea: separate the frequently-used fields of a data structure and pack them into a separate data structure
• Who should do this?
– Programmer
– Compiler
• Profiling vs. dynamic
– Hardware?
– Who can determine what is frequently used?

77
Improving Basic Cache Performance
• Reducing miss rate
– More associativity
– Alternatives/enhancements to associativity
• Victim caches, hashing, pseudo-associativity, skewed associativity
– Better replacement/insertion policies
– Software approaches
• Reducing miss latency/cost
– Multi-level caches
– Critical word first
– Subblocking/sectoring
– Better replacement/insertion policies
– Non-blocking caches (multiple cache misses in parallel)
– Multiple accesses per cycle
– Software approaches

78
Miss Latency/Cost
• What is miss latency or miss cost affected by?
– Where does the miss get serviced from?
•Local vs. remote memory
•What level of cache in the hierarchy?
•Row hit versus row miss in DRAM
•Queueing delays in the memory controller and the
interconnect
•…
– How much does the miss stall the processor?
• Is it overlapped with other latencies?
• Is the data immediately needed?
•…

79
Memory Level Parallelism (MLP)

[Timeline: one isolated miss (to block A) and two parallel, overlapping misses (to blocks B and C).]

❑ Memory Level Parallelism (MLP) means generating and servicing multiple memory accesses in parallel [Glew'98]
❑ Several techniques to improve MLP (e.g., out-of-order execution)
❑ MLP varies. Some misses are isolated and some parallel
How does this affect cache replacement?
Traditional Cache Replacement Policies
❑ Traditional cache replacement policies try to reduce miss
count

❑ Implicit assumption: Reducing miss count reduces


memory-related stall time

❑ Misses with varying cost/MLP breaks this assumption!

❑ Eliminating an isolated miss helps performance more than


eliminating a parallel miss
❑ Eliminating a higher-latency miss could help performance
more than eliminating a lower-latency miss
81
An Example

Access stream: P4 P3 P2 P1 P1 P2 P3 P4 S1 S2 S3

Misses to blocks P1, P2, P3, P4 can be parallel
Misses to blocks S1, S2, and S3 are isolated

Two replacement algorithms:
1. Minimizes miss count (Belady's OPT)
2. Reduces isolated misses (MLP-Aware)

For a fully associative cache containing 4 blocks
Fewest Misses = Best Performance

[Animation of the 4-block fully-associative cache contents omitted; the resulting behavior for the access stream P4 P3 P2 P1 P1 P2 P3 P4 S1 S2 S3 is summarized below.]

Belady's OPT replacement:
Hit/Miss: H H H M H H H H M M M
Misses = 4, Stalls = 4

MLP-Aware replacement:
Hit/Miss: H M M M H M M M H H H
Misses = 6, Stalls = 2 (cycles saved)

With MLP-aware replacement the misses to P1–P4 are serviced in parallel, so even though there are more misses, the processor stalls for less time.
MLP-Aware Cache Replacement
• How do we incorporate MLP into replacement
decisions?
• Qureshi et al., “A Case for MLP-Aware Cache
Replacement,” ISCA 2006.

84
Paper Review #1: Summary
• Some reviews spent two paragraphs on the key insights of the paper without covering its strengths.

• Some blamed the model for being over-simplified without making any concrete suggestions on how to improve it. The authors already explain that including the power consumption of the memory subsystems only reduces the amount of speedup (because it eats into the allocated power budgets).

• Some made many claims without using any citations to support them.

85
Paper Review #1: Grades Distribution

[Bar chart: distribution of grades (out of 10); the grades awarded were 8, 9, and 10.]

86
Review #3: Cache Compression

• Pekhimenko et al., “Base-Delta-Immediate


Compression: Practical Data Compression for On-Chip
Caches,” PACT 2012

87
CSC 2224: Parallel Computer
Architecture and Programming
Memory Hierarchy & Caches

Prof. Gennady Pekhimenko


University of Toronto
Fall 2018
The content of this lecture is adapted from the lectures of
Onur Mutlu @ CMU and ETH
