
Cache Mapping Techniques

Last Updated : 01 Nov, 2025

In modern computer systems, the speed difference between the processor and main memory (RAM) can significantly affect system performance. To bridge this gap, computers use a small, high-speed memory known as cache memory. But since cache is limited in size, the system needs a smart way to decide where to place data from main memory — and that’s where cache mapping comes in.

  • Cache mapping is the technique used to determine where a particular block of main memory will be stored in the cache.
  • It defines the rule by which a new data block from main memory is placed into a cache line, and where the processor should later look for it.
Figure: Cache-RAM Mapping

Key Terminologies in Cache Mapping

Before diving into mapping techniques, let’s understand some important terms:

  1. Main Memory Blocks: The main memory is divided into equal-sized sections called blocks.
  2. Cache Lines (or Cache Blocks): The cache memory is also divided into equal partitions called cache lines.
  3. Block Size: The number of bytes or words stored in one block or line.
  4. Tag Bits: A small portion of the address used to identify which block of main memory is stored in a particular cache line.
  5. Number of Cache Lines: Determined by the ratio of Cache Size ÷ Block Size.
  6. Number of Cache Sets: Determined by Number of Cache Lines ÷ Associativity (used in set-associative mapping).
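The relationships above can be sketched in a few lines of code. The cache size, block size, and associativity below are illustrative assumptions, not values from the article:

```python
# Sketch: deriving cache parameters from size, block size, and associativity.
# The concrete sizes are assumptions chosen for illustration.
CACHE_SIZE = 16 * 1024   # 16 KB cache (assumed)
BLOCK_SIZE = 64          # 64-byte blocks / cache lines (assumed)
ASSOCIATIVITY = 4        # 4-way set associative (assumed)

num_lines = CACHE_SIZE // BLOCK_SIZE     # Number of Cache Lines = Cache Size / Block Size
num_sets = num_lines // ASSOCIATIVITY    # Number of Cache Sets = Lines / Associativity

print(num_lines)  # 256
print(num_sets)   # 64
```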

Types of Cache Mapping

There are three main cache mapping techniques used in computer systems:

1. Direct Mapping

In Direct mapping, each block of main memory maps to exactly one specific cache line. The main memory address is divided into three parts:

  1. Tag Bits: Identify which block of memory is stored.
  2. Line Number: Indicates which cache line it belongs to.
  3. Byte Offset: Specifies the exact byte within the block.
Figure: Direct Mapping

The formula for finding the cache line is:

Cache Line Number = (Block Number) MOD (Number of Cache Lines)
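The address split and the MOD formula can be sketched as follows; the geometry (64-byte blocks, 256 lines) is an assumption for illustration:

```python
# Sketch: splitting an address for a direct-mapped cache (assumed geometry).
BLOCK_SIZE = 64    # bytes per block -> low 6 bits are the byte offset
NUM_LINES = 256    # cache lines     -> next 8 bits are the line number

def direct_map(address: int):
    offset = address % BLOCK_SIZE          # byte offset within the block
    block_number = address // BLOCK_SIZE
    line = block_number % NUM_LINES        # (Block Number) MOD (Number of Cache Lines)
    tag = block_number // NUM_LINES        # remaining high-order bits
    return tag, line, offset

tag, line, offset = direct_map(0x12345)
print(tag, line, offset)  # 4 141 5
```

Because each block has exactly one legal line, a lookup only needs to compare the stored tag of that single line against the address's tag bits.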

2. Fully Associative Mapping

In fully associative mapping, a memory block can be stored in any line of the cache. The address is divided into:

  1. Tag Bits: Identify the memory block.
  2. Byte Offset: Specifies the byte within that block.
Figure: Fully Associative Mapping

There is no line number here because placement is flexible — any block can go into any cache line.
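A fully associative lookup must therefore compare the address's tag against every line. The sketch below does this with a simple membership test; real hardware performs all the tag comparisons in parallel:

```python
# Sketch: fully associative lookup -- the tag is checked against every line.
BLOCK_SIZE = 64  # assumed block size

def lookup(cache_tags, address):
    tag = address // BLOCK_SIZE    # the whole block number serves as the tag
    return tag in cache_tags       # hit if any cache line holds this tag

cache_tags = {100, 200, 1165}      # tags currently resident (illustrative)
print(lookup(cache_tags, 1165 * 64))  # True: block 1165 is cached
print(lookup(cache_tags, 999 * 64))   # False: miss
```

The flexibility eliminates conflict misses, but the parallel comparators make the hardware expensive, which is why the comparison table below rates its cost as high.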

3. Set-Associative Mapping

Set-associative mapping combines the benefits of both direct and fully associative mapping.

  • The cache is divided into a number of sets, each containing a few lines (e.g., 2-way, 4-way set associative).
  • A memory block maps to exactly one set, but can be placed in any line within that set.
Figure: 2-way Set Associative Mapping

The address is divided into:

  1. Tag Bits: Identify which memory block is stored.
  2. Set Number: Determines which cache set it belongs to.
  3. Byte Offset: Specifies the byte position.

The mapping formula is:

Set Number = (Block Number) MOD (Number of Sets)
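The set-index calculation mirrors the direct-mapped case, except the MOD is taken over the number of sets rather than the number of lines. The geometry below (64-byte blocks, 128 sets) is an assumption for illustration:

```python
# Sketch: tag and set number for a set-associative cache (assumed geometry).
BLOCK_SIZE = 64   # bytes per block
NUM_SETS = 128    # e.g. 256 lines / 2-way associativity

def set_map(address: int):
    block_number = address // BLOCK_SIZE
    set_number = block_number % NUM_SETS   # (Block Number) MOD (Number of Sets)
    tag = block_number // NUM_SETS         # remaining high-order bits
    return tag, set_number

print(set_map(0x12345))  # (9, 13): block 1165 maps to set 13 with tag 9
```

Within the chosen set, the block may occupy any of the set's lines, so a lookup compares the tag against only that set's lines rather than the whole cache.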

Need for Cache Mapping

Cache mapping is essential for two main reasons:

  1. Locate Data Efficiently: It helps the processor quickly determine whether the required data is in the cache (cache hit) or must be fetched from main memory (cache miss).
  2. Manage Data Placement: When a cache miss occurs, mapping tells the system where in the cache to place the new memory block.

Essentially, it’s like assigning a “home address” in the cache for every block of main memory.
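Both roles — locating data and placing it — can be seen in a tiny direct-mapped cache simulation. The geometry (4 lines, 16-byte blocks) is an assumption chosen so that conflict misses appear quickly:

```python
# Sketch: a tiny direct-mapped cache showing hit/miss decisions (assumed geometry).
BLOCK_SIZE = 16
NUM_LINES = 4

lines = [None] * NUM_LINES   # each line remembers the tag it currently holds

def access(address: int) -> str:
    block = address // BLOCK_SIZE
    line, tag = block % NUM_LINES, block // NUM_LINES
    if lines[line] == tag:
        return "hit"             # required data already in its mapped line
    lines[line] = tag            # miss: fetch block and place it in its line
    return "miss"

results = [access(a) for a in (0, 4, 64, 0)]
print(results)  # ['miss', 'hit', 'miss', 'miss']
```

The final access to address 0 misses even though it was cached earlier: address 64 maps to the same line and evicted it — a conflict miss, which the comparison table below lists as the main weakness of direct mapping.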

Comparison Table

Here is a simple comparison among cache mapping types:

| Feature | Direct Mapping | Fully Associative Mapping | Set-Associative Mapping |
|---|---|---|---|
| Placement Rule | Fixed location | Any location | Limited (within a set) |
| Hardware Cost | Low | High | Moderate |
| Access Time | Fast | Slow | Moderate |
| Conflict Misses | High | None | Low |
| Flexibility | Low | High | Medium |

Real-Life Analogy

Imagine a parking lot:

  • Direct Mapping: Each car has a fixed parking spot number.
  • Fully Associative: Any car can park in any spot.
  • Set-Associative: Cars are assigned to a specific section, but can park in any space within that section.

This analogy helps visualize how memory blocks find their “parking spots” in the cache.
