02b_Cache

The document discusses the importance of memory hierarchy and cache design in modern multi-core processors, highlighting the significant performance gap between CPU speed and memory access times. It covers various types of memory technologies, cache organization, and optimization strategies to improve cache performance, such as reducing miss rates and penalties. Additionally, it explains the concepts of data locality, cache placement, identification, and replacement policies, emphasizing the need for efficient memory management in computing systems.


CS6290

Caches
Introduction
Memory Hierarchy



Introduction
Memory Performance Gap



Introduction
Memory Hierarchy Design
• Memory hierarchy design becomes more
crucial with recent multi-core processors:
– Aggregate peak bandwidth grows with # cores:
• Intel Core i7 can generate two references per core per
clock
• Four cores and 3.2 GHz clock
– 25.6 billion 64-bit data references/second +
– 12.8 billion 128-bit instruction references
– = 409.6 GB/s!
• DRAM bandwidth is only 6% of this (25 GB/s)
• Requires:
– Multi-port, pipelined caches
– Two levels of cache per core
– Shared third-level cache on chip
Outline
• Memory Technologies
– DRAM, SRAM, and the Memory Wall
• Locality
• Cache Organization
– Architecture, Replacement, Write Policies
• Cache Performance
• Reduce miss penalty
– Multi-Level Cache
– Critical Word First
– Write Buffer
– Victim Cache
• Reduce miss rate
– Larger Blocks
– Higher Associativity
• Reduce hit time
– Access with Virtual Address
– Pipelined Cache
• Hiding Miss Latency
– Non-Blocking Caches
– Cache Prefetch
• Extended Reading
– Trace Cache
Types of Memory
• Static RAM (SRAM)
– 6 or 8 transistors per bit
– Two inverters (4 transistors) + transistors for
reading/writing
– Optimized for speed (first) and density (second)
– Fast (sub-nanosecond latencies for small SRAM)
– Speed roughly proportional to its area
– Mixes well with standard processor logic
• Dynamic RAM (DRAM)
– 1 transistor + 1 capacitor per bit
– Optimized for density (in terms of cost per bit)
– Slow (>40ns internal access, ~100ns pin-to-pin)
– Different fabrication steps (does not mix well with logic)
• Nonvolatile storage: Magnetic disk, Flash RAM
Types of Storage
Cost - what can $200 buy today (2009)?
• SRAM: 16 MB
• DRAM: 4,000 MB (4 GB) – 250x cheaper than SRAM
• Flash: 64,000 MB (64 GB) – 16x cheaper than DRAM
• Disk: 2,000,000 MB (2 TB) – 32x vs. Flash (512x vs. DRAM)
Latency
• SRAM: <1 to 2 ns (on chip)
• DRAM: ~50 ns – 100x or more slower than SRAM
• Flash: 75,000 ns (75 microseconds) – 1500x vs. DRAM
• Disk: 10,000,000 ns (10 ms) – 133x vs. Flash (200,000x vs. DRAM)
Bandwidth
• SRAM: 300 GB/sec (e.g., 12-port 8-byte register file @ 3 GHz)
• DRAM: ~25 GB/s
• Flash: 0.25 GB/s (250 MB/s), 100x less than DRAM
• Disk: 0.1 GB/s (100 MB/s), 250x less than DRAM, sequential access only
Main Memory Background
• Performance of Main Memory:
– Latency: Cache Miss Penalty
• Access Time: time between request and word arrival
• Cycle Time: minimum time between requests
– Bandwidth: I/O & Large Block Miss Penalty (L2)
• Main Memory is DRAM: Dynamic Random Access Memory
– Dynamic since needs to be refreshed periodically (every 8 ms, 1%
time)
– Addresses divided into 2 halves (Memory as a 2D matrix):
• RAS or Row Access Strobe
• CAS or Column Access Strobe
• Cache uses SRAM: Static Random Access Memory
– No refresh (but needs 6 transistors/bit vs. 1 transistor/bit for DRAM)
Size: DRAM/SRAM 4-8
Cost/Cycle time: SRAM/DRAM 8-16
DRAM Logical Organization (4 Mbit = 2^22 bits)
[Figure: a 2,048 x 2,048 memory array; 11 address bits (A0…A10) select a row, a column decoder and sense amps & I/O drive the D/Q pins, and each storage cell sits on a word line]
• The square root of the number of memory bits is in each RAS and CAS address
The Quest for DRAM
Performance
1. Fast Page mode
– Add timing signals that allow repeated accesses to row
buffer without another row access time
– Such a buffer comes naturally, since each array
buffers 1024 to 2048 bits for each access
2. Synchronous DRAM (SDRAM)
– Add a clock signal to DRAM interface, so that the
repeated transfers would not suffer the time overhead
of synchronizing with the DRAM controller
3. Double Data Rate (DDR SDRAM)
– Transfer data on both the rising edge and falling edge
of the DRAM clock signal -> doubling the peak data
rate
– DDR2 lowers power by dropping the voltage from 2.5
to 1.8 volts + offers higher clock rates: up to 400 MHz
– DDR3 drops to 1.5 volts + higher clock rates: up to
800 MHz
• Improved Bandwidth, not Latency
DRAM name based on Peak Chip Transfers / Sec
DIMM name based on Peak DIMM MBytes / Sec

Standard | Clock (MHz) | M transfers/s | DRAM Name | MBytes/s/DIMM | DIMM Name
DDR      | 133         | 266           | DDR266    | 2128          | PC2100
DDR      | 150         | 300           | DDR300    | 2400          | PC2400
DDR      | 200         | 400           | DDR400    | 3200          | PC3200
DDR2     | 266         | 533           | DDR2-533  | 4264          | PC4300
DDR2     | 333         | 667           | DDR2-667  | 5336          | PC5300
DDR2     | 400         | 800           | DDR2-800  | 6400          | PC6400
DDR3     | 533         | 1066          | DDR3-1066 | 8528          | PC8500
DDR3     | 666         | 1333          | DDR3-1333 | 10664         | PC10700
DDR3     | 800         | 1600          | DDR3-1600 | 12800         | PC12800

(Transfers/s = 2 x clock rate; MBytes/s/DIMM = 8 x M transfers/s. Fastest for sale 4/06: $125/GB.)
Memory Technology
Memory Optimizations
• DDR:
– DDR2
• Lower power (2.5 V -> 1.8 V)
• Higher clock rates (266 MHz, 333 MHz, 400 MHz)
– DDR3
• 1.5 V
• 800 MHz
– DDR4
• 1-1.2 V
• 1600 MHz

• GDDR5 is graphics memory based on


DDR3



Memory Technology
Memory Optimizations
• Graphics memory:
– Achieve 2-5 X bandwidth per DRAM vs.
DDR3
• Wider interfaces (32 vs. 16 bit)
• Higher clock rate
– Possible because they are attached via soldering instead
of socketed DIMM modules

• Reducing power in SDRAMs:


– Lower voltage
– Low power mode (ignores clock, continues
to refresh)



Memory Latency is Long
• 60-100ns not totally uncommon
• Quick back-of-the-envelope
calculation:
– 2 GHz CPU -> 0.5 ns/cycle
– 100 ns memory -> 200-cycle memory latency!

• Solution: Caches
Why More on Memory Hierarchy?
[Figure: processor vs. memory performance, 1980-2010, log scale. Processor performance has grown much faster than memory performance, so the processor-memory performance gap keeps growing.]
Storage Hierarchy and Locality

[Figure: the storage hierarchy, from largest/slowest to smallest/fastest: Disk; Main Memory (DRAM, with its row buffer); SRAM caches: L3, L2, and the L1 Instruction/Data Caches with ITLB/DTLB; Register File and Bypass Network. Capacity decreases and speed increases toward the processor.]
Locality and Caches
• Data Locality
– Temporal: if data item needed now,
it is likely to be needed again in near
future
– Spatial: if data item needed now,
nearby data likely to be needed in near
future
• Exploiting Locality: Caches
– Keep recently used data
in fast memory close to the processor
– Also bring nearby data there
Cache in Processors
Cache Basics
• Fast (but small) memory close to processor
• Key: optimize the average memory access latency
• When data is referenced
– If in cache, use the cached copy instead of memory
– If not in cache, bring into cache
(actually, bring the entire block of data, too)
– Maybe have to kick something else out to do it!
• Important decisions
– Placement: where in the cache can a block go?
– Identification: how do we find a block in cache?
– Replacement: what to kick out to make room in
cache?
– Write policy: What do we do about stores?
Cache Basics
• Cache consists of block-sized lines
– Line size typically power of two
– Typically 16 to 128 bytes in size
• Example
– Suppose block size is 128 bytes
• Lowest seven bits determine offset within
block
– Read data at address A=0x7fffa3f4
– Address belongs to the block with base
address 0x7fffa380
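
A minimal sketch of the offset and block-base arithmetic in this example (the address and block size are the slide's; the program itself is our illustration):

#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 128u  /* 2^7 bytes, so the lowest 7 bits are the offset */

int main(void) {
    uint64_t a = 0x7fffa3f4;
    uint64_t offset = a & (BLOCK_SIZE - 1);              /* offset within block: 0x74 */
    uint64_t base   = a & ~(uint64_t)(BLOCK_SIZE - 1);   /* block base: 0x7fffa380 */
    printf("offset=0x%llx base=0x%llx\n",
           (unsigned long long)offset, (unsigned long long)base);
    return 0;
}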
Cache Placement
• Placement
– Which memory blocks are allowed
into which cache lines
• Placement Policies
– Direct mapped (block can go to only one
line)
– Fully Associative (block can go to any line)
– Set-associative (block can go to one of N
lines)
• E.g., if N=4, the cache is 4-way set associative
• Other two policies are extremes of this
(E.g., if N=1 we get a direct-mapped cache)
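
A minimal sketch of how a block address picks a set under each policy (the function and parameter names are ours, not from the slides):

/* block_addr = byte address / block size */
unsigned set_index(unsigned long block_addr, unsigned num_lines, unsigned assoc) {
    unsigned num_sets = num_lines / assoc; /* assoc=1: direct mapped; assoc=num_lines: fully associative */
    return block_addr % num_sets;          /* fully associative: num_sets=1, so always set 0 */
}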
Cache Identification
• When address referenced, need to
– Find whether its data is in the cache
– If it is, find where in the cache
– This is called a cache lookup
• Each cache line must have
– A valid bit (1 if line has data, 0 if line
empty)
• We also say the cache line is valid or invalid
– A tag to identify which block is in the line
(if line is valid)
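
A sketch of the lookup just described, for a 4-way set-associative cache (the structure and names are our own, not from the slides):

#include <stdbool.h>
#include <stdint.h>

#define WAYS 4

struct line { bool valid; uint64_t tag; /* data omitted */ };

/* Returns the matching way within the selected set, or -1 on a miss. */
int lookup(struct line set[WAYS], uint64_t tag) {
    for (int w = 0; w < WAYS; w++)
        if (set[w].valid && set[w].tag == tag)
            return w;   /* hit: valid line with a matching tag */
    return -1;          /* miss */
}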
Cache Organization
Cache Replacement
• Need a free line to insert new block
– Which block should we kick out?
• Several strategies
– Random (randomly selected line)
– FIFO (line that has been in cache the
longest)
– LRU (least recently used line)
– LRU Approximations
– NMRU
– LFU
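
One common way to implement exact LRU is to record a last-use timestamp per line and evict the oldest; a minimal sketch (layout and names are ours):

#include <stdint.h>

#define WAYS 4

struct line { int valid; uint64_t tag; uint64_t last_used; };

/* Pick a victim way: prefer a free (invalid) line, otherwise the least recently used one. */
int choose_victim(struct line set[WAYS]) {
    int victim = 0;
    for (int w = 0; w < WAYS; w++) {
        if (!set[w].valid) return w;                   /* free line available */
        if (set[w].last_used < set[victim].last_used)
            victim = w;                                /* older last access */
    }
    return victim;
}

Real hardware usually approximates this (e.g., pseudo-LRU bits) rather than keeping full timestamps.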
Write Policy
• Do we allocate cache lines on a write?
– Write-allocate
• A write miss brings block into cache
– No-write-allocate
• A write miss leaves cache as it was
• Do we update memory on writes?
– Write-through
• Memory immediately updated on each write
– Write-back
• Memory updated when line replaced
Write-Back Caches
• Need a Dirty bit for each line
– A dirty line has more recent data than
memory
• Line starts as clean (not dirty)
• Line becomes dirty on first write to it
– Memory not updated yet, cache has the
only up-to-date copy of data for a dirty line
• Replacing a dirty line
– Must write data back to memory (write-
back)
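
A sketch of the store path in a write-back, write-allocate cache, tying the policies above together (the structure is ours and the memory traffic is abstracted as callback parameters, so nothing here is a specific implementation):

#include <stdint.h>

struct line { int valid, dirty; uint64_t tag; uint8_t data[128]; };

/* Single line shown for brevity; set selection and victim choice are as in the earlier sketches. */
void store(struct line *l, uint64_t tag, unsigned offset, uint8_t value,
           void (*write_back)(struct line *),
           void (*fetch)(struct line *, uint64_t)) {
    if (!l->valid || l->tag != tag) {   /* write miss: allocate this line */
        if (l->valid && l->dirty)
            write_back(l);              /* dirty victim must update memory first */
        fetch(l, tag);                  /* bring the missing block into the cache */
        l->valid = 1;
        l->tag = tag;
    }
    l->data[offset] = value;            /* update the cached copy only */
    l->dirty = 1;                       /* memory is stale until this line is evicted */
}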
Cache Performance
• Miss rate
– Fraction of memory accesses that miss in
cache
– Hit rate = 1 – miss rate
• Average memory access time
AMAT = hit time + miss rate x miss penalty
• Memory stall
CPUtime = CycleTime x (CyclesExec + CyclesMemoryStall)
CyclesMemoryStall = CacheMisses x (MissLatencyTotal – MissLatencyOverlapped)
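
For example (illustrative numbers, not from the slides): with a 1-cycle hit time, a 5% miss rate, and a 100-cycle miss penalty, AMAT = 1 + 0.05 x 100 = 6 cycles.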


Improving Cache Performance
• AMAT = hit time + miss rate * miss
penalty
– Reduce miss penalty
– Reduce miss rate
– Reduce hit time

• CyclesMemoryStall = CacheMisses x
(MissLatencyTotal – MissLatencyOverlapped)
– Increase overlapped miss latency
Reducing Cache Miss Penalty
(1)
• Multilevel caches
– Very Fast, small Level 1 (L1) cache
– Fast, not so small Level 2 (L2) cache
– May also have slower, large L3 cache, etc.
• Why does this help?
– Miss in L1 cache can hit in L2 cache, etc.
AMAT = HitTimeL1 + MissRateL1 x MissPenaltyL1
MissPenaltyL1 = HitTimeL2 + MissRateL2 x MissPenaltyL2
MissPenaltyL2 = HitTimeL3 + MissRateL3 x MissPenaltyL3
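
For example (illustrative numbers, not from the slides): with HitTimeL1 = 1 cycle, MissRateL1 = 5%, HitTimeL2 = 10 cycles, MissRateL2 = 20%, and MissPenaltyL2 = 100 cycles, MissPenaltyL1 = 10 + 0.2 x 100 = 30 cycles, so AMAT = 1 + 0.05 x 30 = 2.5 cycles.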
Reducing Cache Miss Penalty
(2)
• Early Restart & Critical Word First
– Block transfer takes time (bus too narrow)
– Give data to loads before the entire block arrives
• Early restart
– When needed word arrives, let processor use it
– Then continue block transfer to fill cache line
• Critical Word First
– Transfer loaded word first, then the rest of block
(with wrap-around to get the entire block)
– Use with early restart to let processor go ASAP
Reducing Cache Miss Penalty
(3)
• Increase Load Miss Priority
– Loads can have dependent instructions
– If a load misses and a store needs to go
to memory, let the load miss go first
– Need a write buffer to remember stores
• Merging Write Buffer
– If multiple write misses to the same
block, combine them in the write buffer
– Use one block write instead of many small
writes
Reducing Cache Miss Penalty
(4)
• Victim Caches
– Recently kicked-out blocks kept in
small cache
– If we miss on those blocks, can get
them fast
– Why does it work: conflict misses
• Misses that we have in our N-way set-
assoc cache, but would not have if the
cache was fully associative
– Example: direct-mapped L1 cache and
a 16-line fully associative victim cache
• Victim cache prevents thrashing when
several “popular” blocks want to go to the
same entry
Kinds of Cache Misses
• The “3 Cs”
– Compulsory: have to have these
• Miss the first time each block is accessed
– Capacity: due to limited cache capacity
• Would not have them if cache size was
infinite
– Conflict: due to limited associativity
• Would not have them if cache was fully
associative
Reducing Cache Miss Rate (1)
• Larger blocks
– Helps if there is more spatial locality
Reducing Cache Miss Rate (2)
• Larger caches
– Fewer capacity misses, but longer hit
latency!
• Higher Associativity
– Fewer conflict misses, but longer hit latency
• Way Prediction
– Speeds up set-associative caches
– Predict which of N ways has our data,
fast access as direct-mapped cache
– If mispredicted, access again as set-assoc
cache
Reducing Cache Miss Rate (3)
• Pseudo Associative Caches
– Similar to way prediction
– Start with direct mapped cache
– If miss on “primary” entry, try another
entry
• Compiler optimizations
– Loop interchange
– Blocking
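
A sketch of loop interchange for a row-major C array (our own illustration): swapping the loops makes the inner loop walk consecutive addresses, so more accesses hit the same cache block.

#define N 1024
static double a[N][N];

void zero_bad(void) {              /* before: column-wise walk -> strided accesses, poor spatial locality */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            a[i][j] = 0.0;
}

void zero_good(void) {             /* after loop interchange: sequential accesses within each row */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 0.0;
}

Blocking (tiling) applies the same idea to reuse: it restructures loops so a small tile of data is reused while it still fits in the cache.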
Reducing Hit Time (1)
• Small & Simple Caches are faster
Reducing Hit Time (2)
• Avoid address translation on cache hits
• Software uses virtual addresses,
memory accessed using physical
addresses
– Details of this later (virtual memory)
• HW must translate virtual to physical
– Normally the first thing we do
– Caches accessed using physical address
– Wait for translation before cache lookup
• Idea: index cache using virtual address
Reducing Hit Time (3)
• Pipelined Caches
– Improves bandwidth, but not latency
– Essential for L1 caches at high frequency
• Even small caches have 2-3 cycle latency at N
GHz
– Also used in many L2 caches
• Trace Caches
– For instruction caches
Hiding Miss Latency
• Idea: overlap miss latency with useful work
– Also called “latency hiding”
• Non-blocking caches
– A blocking cache services one access at a time
• While miss serviced, other accesses blocked (wait)
– Non-blocking caches remove this limitation
• While miss serviced, can process other requests
• Prefetching
– Predict what will be needed and get it ahead of
time
Non-Blocking Caches
• Hit Under Miss
– Allow cache hits while one miss in progress
– But another miss has to wait
• Miss Under Miss, Hit Under Multiple Misses
– Allow hits and misses when other misses in progress
– Memory system must allow multiple pending requests
Non-Blocking Cache
Prefetching
• Predict future misses and get data into
cache
– If access does happen, we have a hit now
(or a partial miss, if data is on the way)
– If access does not happen, cache pollution
(replaced other data with junk we don’t need)
• To avoid pollution, prefetch buffers
– Pollution a big problem for small caches
– Have a small separate buffer for prefetches
• When we do access it, put data in cache
• If we don’t access it, cache not polluted
Simple Sequential Prefetch
• On a cache miss, fetch two sequential memory
blocks
– Exploits spatial locality in both instructions & data
– Exploits high bandwidth for sequential accesses
– Called “Adjacent Cache Line Prefetch” or “Spatial Prefetch”
by Intel
• Extend to fetching N sequential memory blocks
– Pick N large enough to hide the memory latency
• Stream prefetching is a continuous version of
prefetching
– Stream buffer can fit N cache lines
– On a miss, start fetching N sequential cache lines
– On a stream buffer hit: Move cache line to cache, start
fetching line (N+1)
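
A sketch of the stream-buffer policy above (structure, depth, and names are ours; the actual memory requests are omitted):

#include <stdint.h>

#define DEPTH 4   /* N: sequential lines the stream buffer holds */

struct stream_buf {
    uint64_t blocks[DEPTH];   /* block addresses currently held or in flight */
    int      head;            /* oldest entry */
    uint64_t next;            /* next sequential block to request */
};

/* On a cache miss at block b: restart the stream with blocks b+1 .. b+DEPTH. */
void stream_restart(struct stream_buf *s, uint64_t b) {
    for (int i = 0; i < DEPTH; i++)
        s->blocks[i] = b + 1 + i;   /* lines to fetch from memory */
    s->head = 0;
    s->next = b + 1 + DEPTH;
}

/* On a stream-buffer hit: the line moves to the cache and line N+1 is requested. */
void stream_hit(struct stream_buf *s) {
    s->blocks[s->head] = s->next++;   /* replace the consumed entry */
    s->head = (s->head + 1) % DEPTH;
}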
Strided Prefetch
• Idea: detect and prefetch strided accesses
– for (i=0; i<N; i++) A[i*1024]++;
• Stride detected using a PC-based table
– For each PC, remember the stride
– Stride detection
• Remember the last address used for this PC
• Compare to currently used address for this PC
– Track confidence using a two-bit saturating
counter
• Increment when stride correct, decrement when
incorrect
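
A sketch of a PC-indexed stride table with a two-bit saturating confidence counter, as described above (the table size and field names are ours):

#include <stdint.h>

#define TABLE_SIZE 256

struct stride_entry {
    uint64_t last_addr;   /* last address seen for this PC */
    int64_t  stride;      /* last observed stride */
    uint8_t  conf;        /* 2-bit saturating confidence counter (0..3) */
};

static struct stride_entry table[TABLE_SIZE];

/* Called on each load; returns a prefetch address, or 0 when confidence is low. */
uint64_t stride_predict(uint64_t pc, uint64_t addr) {
    struct stride_entry *e = &table[pc % TABLE_SIZE];
    int64_t stride = (int64_t)(addr - e->last_addr);
    if (stride == e->stride) {
        if (e->conf < 3) e->conf++;   /* stride correct: increment, saturate at 3 */
    } else {
        if (e->conf > 0) e->conf--;   /* stride incorrect: decrement, saturate at 0 */
        e->stride = stride;           /* learn the new stride */
    }
    e->last_addr = addr;
    return (e->conf >= 2) ? (uint64_t)(addr + e->stride) : 0;  /* prefetch only when confident */
}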
Sandy Bridge Prefetching (Intel Core i7-2600K)
• “Intel 64 and IA-32 Architectures
Optimization Reference Manual, Jan
2011”, pg 2-24
Software Prefetching
• Two flavors: register prefetch and cache
prefetch
• Each flavor can be faulting or non-faulting
– If address bad, does it create exceptions?
• Faulting register prefetch is binding
– It is a normal load, address must be OK, uses register
• Non-faulting cache prefetch is non-binding
– If address bad, becomes a NOP
– Does not affect register state
– Has more overhead (load still there),
ISA change (prefetch instruction),
complicates cache (prefetches and loads different)
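
As an illustration of a non-binding cache prefetch, GCC and Clang expose __builtin_prefetch, which compiles to the target's prefetch instruction; the loop and look-ahead distance below are illustrative, not from the slides:

#define DIST 16   /* look-ahead distance in elements */

void scale(double *a, int n, double k) {
    for (int i = 0; i < n; i++) {
        if (i + DIST < n)
            __builtin_prefetch(&a[i + DIST], 1, 1); /* prefetch for write, low temporal locality */
        a[i] *= k;   /* work on the current element while later data streams in */
    }
}

Because the prefetch is non-binding, a useless or out-of-range hint only costs the instruction itself; it never faults and never changes register state.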
