
Memory Organisation in Computer Architecture


Memory organization is essential for efficient data processing and storage. The memory hierarchy
ensures quick access to data by the CPU, while larger, slower storage devices hold data for the long
term. Effective memory management ensures the system operates efficiently, providing programs with the
memory they need and preventing unnecessary delays in processing.

[Figure: Memory Hierarchy]

Types of Memory in a Computer System

Auxiliary Memory (Non-Volatile)

Devices that provide secondary or backup storage are called auxiliary memory. For example, magnetic
disks and tapes are commonly used auxiliary devices. Auxiliary memory is not directly accessible to the
CPU; it is accessed through the Input/Output channels.

Hard Disk Drive (HDD): A permanent storage device that holds large amounts of data even when
the computer is turned off. It is slower than RAM but offers much more capacity.

Solid-State Drive (SSD): A faster alternative to HDDs with no moving parts. SSDs provide faster
read/write speeds compared to HDDs.

Optical Discs and USB Flash Drives: Optical discs and USB flash drives are other forms of
secondary memory used for storage, though they are less common in modern high-speed systems.

Main Memory (Volatile)

The memory unit that communicates directly with the CPU and cache memory is called main memory. It is
fast memory used to store data during computer operations. Main memory is made up
of RAM and ROM, with RAM making up the majority.

RAM (Random Access Memory)

DRAM: Dynamic RAM is made of capacitors and transistors and must be refreshed periodically. It is slower and cheaper than SRAM.
SRAM: Static RAM retains data without refreshing as long as power is supplied. It is faster and more expensive than DRAM.

ROM (Read Only Memory)

Read Only Memory is non-volatile and acts as permanent storage for information. It also stores
the bootstrap loader program, used to load and start the operating system when the computer is turned
on. PROM (Programmable ROM), EPROM (Erasable PROM) and EEPROM (Electrically Erasable
PROM) are some commonly used ROMs.

Cache Memory

Cache memory is used to store the program data that is currently being executed in the CPU.
Whenever the CPU needs to access memory, it first checks the cache memory. If the data is not found in
cache memory, the CPU moves on to the main memory.

Registers

These are small, ultra-fast memory locations within the CPU used to hold data that is being processed.
Registers are crucial for executing instructions efficiently.

Tertiary and Offline Memory

Tertiary memory refers to storage devices used for backups and archives, like magnetic tapes.
Offline memory is storage that is not directly attached to the computer (e.g., external hard
drives, optical discs), but whose data can be retrieved when connected.

Other Types of Memory Based on Storage Time

Volatile Memory: This loses its data when power is switched off.

Non-Volatile Memory: This is permanent storage and does not lose any data when power is
switched off.

Memory Organization

Program Load: When a program is executed, it is loaded from secondary storage (HDD/SSD) into
main memory (RAM). It may also be loaded partially into cache memory to speed up execution.
Accessing Data: The CPU accesses data through registers and cache for quick computations. If
the data is not in the cache, it will fetch it from RAM. If it’s not in RAM either, it will fetch it from
secondary storage.
Swapping and Virtual Memory: If the system runs out of physical RAM, parts of the program
(pages) may be swapped out to secondary storage. This process, known as paging, is managed by
the operating system’s memory manager.
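To make this flow concrete, here is a minimal Python sketch of a read request falling through the hierarchy; the dictionaries, names, and the promotion step are illustrative assumptions, not real OS code:

# Illustrative only: each level is modeled as a dict from address to value.
cache, ram, disk = {}, {}, {0x10: "data"}

def read(address):
    # Check the faster levels first; fall through to slower ones on a miss.
    for name, level in (("cache", cache), ("RAM", ram), ("disk", disk)):
        if address in level:
            value = level[address]
            cache[address] = ram[address] = value   # promote for next time
            return name, value
    raise KeyError("address not mapped anywhere")

print(read(0x10))   # served from disk, then promoted
print(read(0x10))   # now a cache hit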
Random Access Memory (RAM)

Random Access Memory (RAM) is a type of computer memory that stores data temporarily. When you turn off your computer, the data in RAM
disappears, unlike the data on your hard drive, which stays saved. RAM helps your computer run programs and process information faster. This is
similar to how the brain’s memory helps us remember things. In this article, we’ll talk more about RAM and its different types.

What is Computer Memory?


Computer memory is essential for storing data and instructions. It is divided into cells, each with a unique address. Memory makes computers
function like a human brain, which has different types of memory (short-term, long-term, etc.). Similarly, computers have different types of memory:

Cache Memory: High-speed memory that speeds up the CPU. It’s fast but expensive.
Primary Memory (Main Memory): Includes RAM (volatile) and ROM (non-volatile). It stores current data.
Secondary Memory: Non-volatile memory used for permanent storage (e.g., hard drives, SSDs).

[Figure: Types of Memory]

What is RAM (Random Access Memory)?


RAM is one of the parts of main memory and is also famously known as Read/Write Memory. It is present on the motherboard, and
the computer's data is temporarily stored in it. As the name says, RAM can help in both reading and writing. RAM is a volatile memory,
which means its contents are present only as long as the computer is in the ON state; as soon as the computer turns OFF, the memory
is erased.

[Figure: Random Access Memory]

To better understand RAM, imagine a classroom blackboard: students can both read from it and write on it, and after the
class is over, the data written on it can be erased so that new data can be entered.

Evolution of RAM Technology

ROM | RAM
Non-volatile memory used for permanent storage | Volatile memory used for temporary storage
Generally slower than RAM | High-speed access
Primarily read-only | Read and write operations
Capacity is typically measured in megabytes | Capacity is typically measured in gigabytes
Data access is more restrictive | Data accessibility is easy
Cheaper than RAM | High cost as compared to ROM
Used for the permanent storage of data | Used for temporary storage of data

RAM vs Virtual Memory

Feature | RAM | Virtual Memory
Definition | Physical memory used for temporary data storage. | Uses a storage drive to supplement physical RAM.
Speed | Fast, providing quick access to data. | Slower, as it relies on the hard drive or SSD.
Function | Stores data currently being processed by the CPU. | Extends memory capacity when RAM is full.
Capacity | Limited by the amount of physical RAM installed. | Can use available storage space, larger than RAM.
Data Loss | Data is lost when the system is turned off. | Data is lost when the system is turned off.

Features of RAM
RAM is volatile, meaning that the data is erased when the device is turned off.
It is referred to as the primary memory of the computer, as it directly supports the CPU during operation.
RAM is relatively expensive because it allows for fast, direct access to data.
As the fastest type of memory, RAM serves as internal memory within the computer, enabling quick data retrieval.
The overall speed of the computer is greatly influenced by the amount of RAM. With less RAM, the computer takes longer to load and may slow
down significantly.

How Much RAM Do You Need?


The system's RAM requirements depend on what the user is doing. For editing videos, for instance, a machine should have at least 16 GB of RAM,
though more is preferable. According to Adobe, a machine also needs at least 3 GB of RAM to run Photoshop CC on a Mac for photo processing.
Meanwhile, even 8 GB of RAM can cause a slowdown if the user is running many apps at once.

Types of RAM
RAM is further divided into two types: SRAM (Static Random Access Memory) and DRAM (Dynamic Random Access Memory). Let's learn about both
of these types in more detail.

1. SRAM (Static Random Access memory)

SRAM is used for Cache memory, it can hold the data as long as the power availability is there. It is refreshed simultaneously to store the present
information. It is made with CMOS technology. It contains 4 to 6 transistors and it also uses clocks. It does not require a periodic refresh cycle due to
the presence of transistors. Although SRAM is faster, it requires more power and is more expensive. Since SRAM requires more power, more heat is
lost here as well, another drawback of SRAM is that it can not store more bits per chip, for instance, for the same amount of memory stored
in DRAM, SRAM would require one more chip.

Function of SRAM

The function of SRAM is that it provides a direct interface with the Central Processing Unit at higher speeds.

Characteristics of SRAM
SRAM is used as the Cache memory inside the computer.
SRAM is known to be the fastest among all memories.
SRAM is costlier.
SRAM has a lower density (number of memory cells per unit area).
The power consumption of SRAM is low, but when it is operated at higher frequencies, its power consumption is comparable to that of
DRAM.

2. DRAM (Dynamic Random Access memory)

DRAM is used for the Main memory, it has a different construction than SRAM, it uses one transistor and one capacitor (also known as a conductor),
which is needed to get recharged in milliseconds due to the presence of the capacitor. Dynamic RAM was the first sold memory integrated circuit.
DRAM is the second most compact technology in production (the First is Flash Memory). DRAM has one transistor and one capacitor in 1 memory
bit. Although DRAM is slower, it can store more bits per chip, for instance, for the same amount of memory stored in SRAM, DRAM requires one less
chip. DRAM requires less power and hence, less heat is produced.

Function of DRAM

DRAM holds the program code and data that the computer's processor needs to function. It is used as the main memory in our PCs (Personal Computers).

Characteristics of DRAM

DRAM is used as the Main Memory inside the computer.


DRAM is known to be a fast memory, but not as fast as SRAM.
DRAM is cheaper as compared to SRAM.
DRAM has a higher density (number of memory cells per unit area).
The power consumption of DRAM is higher.

Types of DRAM

SDRAM: Synchronous DRAM synchronizes its operation with the system clock, speeding up data transfers between the main memory and
the microprocessor.
DDR SDRAM: (Double Data Rate) It has the features of SDRAM but transfers data on both clock edges, roughly doubling the speed.
ECC DRAM: (Error Correcting Code) This RAM can detect corrupted data and can often correct it.
RDRAM: It stands for Rambus DRAM. It was popular in the late 1990s and early 2000s and was developed by a company named Rambus
Inc. At that time it competed with SDRAM. Its latency was higher at the beginning, but it was more stable than SDRAM; consoles like the Nintendo 64
and Sony PlayStation 2 used it.
DDR2, DDR3, and DDR4: These are successor versions of DDR SDRAM with upgrades in performance.

Difference Between SRAM and DRAM

Feature | SRAM (Static RAM) | DRAM (Dynamic RAM)
Full Form | Static Random Access Memory | Dynamic Random Access Memory
Power Consumption | Requires more power | Requires less power
Cost | More expensive | Less expensive
Speed | Faster due to no need for refreshing | Slower because it needs to be refreshed
Usage | Used in cache memory for quick access | Used in main memory for large data storage

For more information, you can refer to our dedicated article on Difference between SRAM and DRAM.

Advantages of RAM
Speed: RAM is faster than other types of storage like ROM, hard drives or SSDs, allowing for quick access to data and smooth performance of
applications.
Multitasking: More RAM allows a computer to handle multiple applications simultaneously without slowing down.
Flexibility: RAM can be easily upgraded, enhancing a computer’s performance and extending its usability.
Volatile Storage: RAM automatically clears its data when the computer is turned off, reducing the risk of unwanted data accumulation.
Disadvantages of RAM
Volatility: Data stored in RAM is lost when the computer is turned off, which means important data must be saved to permanent storage.
Cost: RAM can be more expensive per gigabyte compared to other storage options like hard drives or SSDs.
Limited Storage: RAM has a limited capacity, so it cannot store large amounts of data permanently.
Power Consumption: RAM requires continuous power to retain data, contributing to the overall power consumption of the device.
Physical Space: Increasing RAM requires physical space in the computer, which may be limited in smaller devices like laptops and tablets.
Read Only Memory (ROM)

Memory plays a crucial role in how devices operate, and one of the most important types is Read-Only Memory (ROM). Unlike
RAM (Random Access Memory), which loses its data when the power is turned off, ROM is designed to store essential information
permanently.

Here, we’ll explore what ROM is, how it works, its various types, and why it remains an essential component in modern technology.
Whether you’re a tech enthusiast or just curious about how your devices operate, understanding ROM is key to grasping the
fundamentals of computing.

What is Read-Only Memory (ROM)?


ROM stands for Read-Only Memory. It is a non-volatile memory used to store the programs and data needed to operate the system. As its
name suggests, we can only read the stored programs and data.

Information stored in ROM is permanent.


Information and programs are stored on ROM in binary format (0s and 1s).
It is used in the start-up process of the computer.

Evolution of ROM Technology


The development of ROM has seen key advancements over the years:

Year | Type | Key Advancement | Use Cases
1956 | Mask ROM (MROM) | Hardwired during manufacturing | Early calculators, embedded systems
1956 | PROM | One-time programmable by users | Custom firmware
1971 | EPROM | Erasable with UV light, reprogrammable | Legacy computer BIOS
1983 | EEPROM | Electrically erasable, reusable | Microcontrollers, car key fobs
1984 | Flash Memory | Block-level erasure, high speed | USB drives, SSDs, smartphones

Block Diagram of ROM


The main purpose of the ROM block diagram is to represent how ROM (Read-Only Memory) works within a computer
system. It helps illustrate the flow of data and how the system accesses the stored information.

In a Read-Only Memory (ROM) system, there are k input lines and n output lines. The input address from which we wish to
retrieve the ROM content is taken using the k input lines. Since each of the k input lines can have a value of 0 or 1, there are a
total of 2^k addresses that can be referred to by these input lines, and each of these addresses contains n bits of information that is
output from the ROM.

A ROM of this type is designated as a 2^k x n ROM.


[Figure: Block Diagram of ROM]
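Functionally, such a ROM behaves like a fixed lookup table: k address bits select one of 2^k stored n-bit words. A minimal Python sketch, with invented contents:

K, N = 3, 4                                # 3 address lines, 4 output lines
rom = [0b0000, 0b1010, 0b0111, 0b1111,
       0b0001, 0b1100, 0b0011, 0b1001]     # 2**K words, each N bits wide

def rom_read(address):
    assert 0 <= address < 2**K             # address must fit in the k input lines
    return rom[address]                    # the n-bit word appears on the outputs

print(format(rom_read(0b101), f"0{N}b"))   # -> 1100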

Internal Structure of ROM


The internal structure of ROM has two basic components:

1. Decoder
2. OR Gates

[Figure: Internal Structure of ROM]

A decoder is a circuit that converts an encoded form, such as binary-coded decimal (BCD), into a decimal form; the output is
the decimal equivalent of the binary input. In a ROM, the decoder outputs feed the OR gates that produce the ROM outputs. Let's
use a 64 x 4 ROM as an example. This read-only memory has 64 words, each 4 bits long. As a result, there are four output
lines.

Since there are six input lines and 64 words in this ROM, the six input lines can specify 64 addresses (or minterms), each
selecting one of the 64 words available on the output lines. Each address entered has a unique selected word.

Working of ROM
ROM is non-volatile, so it retains its contents without power. It is made up of two primary components: the decoder and the OR logic
gates. In ROM, the decoder receives binary input and produces a decimal (one-of-many) output, which
serves as the input to ROM's OR gates. ROM chips have a grid of columns and rows that may be switched on and off. If a row and
column are connected by a diode, the value is 1; when the value is 0, the lines are not connected.

Each element in the grid represents one storage element on the memory chip. The diodes allow current to flow in only one direction,
with a specific threshold known as the forward breakover voltage, which determines the voltage required before the diode conducts.
Silicon-based circuitry typically has a forward breakover voltage of 0.6 V. To read a cell, the ROM chip sends a charge that exceeds
the forward breakover voltage down the column while the selected row is grounded. If a diode is present in the cell, the
charge is conducted and the cell is "on" with a value of 1; if not, the cell reads as 0.
Types of Read-Only Memory (ROM)

ROM Type | Erasure Method | Reprogrammable | Use Cases/Examples
Mask ROM (MROM) | Hardwired during manufacturing | No | Early embedded systems, firmware
PROM | One-time programming | No | Custom firmware for specific applications
EPROM | UV light | Yes (with UV) | Firmware updates, legacy computer systems
EEPROM | Electrical signals | Yes | Microcontrollers, BIOS, small firmware updates
Flash Memory | Block-level electrical erasure | Yes | USB drives, SSDs, memory cards, smartphones
PLD-ROM | Configurable logic | Yes | FPGA, CPLD, custom hardware logic

Let's discuss the main types of ROM in detail, one by one:

1. MROM (Masked read-only memory): ROM is as old as semiconductor technology itself. MROM was the very first
ROM; it consisted of a grid of word lines and bit lines joined together by transistor switches. In this type of ROM, the data is
physically encoded in the circuit and can only be programmed during fabrication. It was not very expensive.
2. PROM (Programmable read-only memory): PROM is a form of digital memory in which each bit is locked
by a fuse or anti-fuse. The data stored in it is permanent and cannot be changed or erased. It is used in low-level
programs such as firmware or microcode.
3. EPROM (Erasable programmable read-only memory): EPROM, also called EROM, is a type of PROM that can be
reprogrammed. The data stored in EPROM can be erased with ultraviolet light and then reprogrammed, though the number of
reprogramming cycles is limited. Before the era of EEPROM and flash memory, EPROM was used in microcontrollers.
4. EEPROM (Electrically erasable programmable read-only memory): As its name suggests, it can be programmed and
erased electrically. The data and program of this ROM can be erased and reprogrammed about ten thousand times. The duration
of erasing and programming of the EEPROM is about 4 ms to 10 ms. It is used in microcontrollers and remote keyless systems.

Advantages of ROM
Non-Volatile – Retains data without power.
Security – Prevents unauthorized changes.
Reliable – Data remains intact over time.
Cost-Effective – Cheap for large-scale production.
Fast Access – Quick retrieval of stored data.

Disadvantages of ROM
Limited Modifiability – Data can’t be easily changed.
Low Storage Capacity – Not suitable for large data storage.
Slow Write Speeds – Writing data is slow.
Physical Wear – Can wear out after many write cycles (EEPROM/EPROM).
High Initial Cost – Expensive to manufacture in small quantities (Mask ROM).

Difference Between RAM and ROM


Here are some key differences between RAM and ROM.

RAM | ROM
RAM stands for Random Access Memory. | ROM stands for Read Only Memory.
You can modify, edit, or erase data in RAM. | Data in ROM cannot be modified or erased; you can only read it.
RAM is a volatile memory that stores data as long as the power supply is on. | ROM is a non-volatile memory that retains data even after the power is turned off.
The speed of RAM is higher than the speed of ROM. | ROM is slower than RAM.
RAM is costly as compared to ROM. | ROM is cheap as compared to RAM.
A RAM chip can store only a few gigabytes (GB) of data. | A ROM chip can store multiple megabytes (MB) of data.
The CPU can easily access data stored in RAM. | The CPU cannot easily access data stored in ROM.
RAM is used for the temporary storage of data currently being processed by the CPU. | ROM is used to store firmware, BIOS, and other data that needs to be retained.
Secondary Memory

Secondary memory, also known as secondary storage, refers to the storage devices
and systems used to store data persistently, even when the computer is powered off.
Unlike primary memory (RAM), which is fast and temporary, secondary
memory is slower but offers much larger storage capacities.

Some examples of secondary memory include hard disk drives (HDDs), solid-state
drives (SSDs), optical disks (CDs/DVDs), and external storage devices like
USB drives.

[Figure: Secondary Memory Devices]

These devices are essential for long-term data storage and retrieval, providing a means to
store operating systems, applications, and personal files, ensuring that data remains intact
even after the system is turned off.

Use of Secondary Memory


Secondary memory is used for different purposes, but the main purposes of using
secondary memory are:

Permanent storage: Primary memory stores data only while the power supply is on and
loses it when the power is off, so we need secondary memory to store data permanently,
even when the power supply is off.
Large Storage: Secondary memory provides large storage space, so we can store
large data like videos, images, audio, and files permanently.
Portable: Some secondary devices are removable, so we can easily store or transfer
data from one computer or device to another.

Types of Secondary Memory


There are two types of secondary memory:

1. Fixed Devices
2. Removable Devices

[Figure: Types of Secondary Memory]

1. Fixed Devices

Fixed devices in secondary memory are storage devices that are permanently installed in a
system and cannot be easily removed, like internal hard drives or solid-state drives
(SSDs). They store data that is always accessible by the system.

Some examples of fixed devices are:

Hard Disk Drive (HDD)

A Hard Disk Drive (HDD) is a traditional storage device that stores data on spinning
magnetic disks. It's commonly used because it offers large storage space at a low cost.
However, it’s slower than newer storage technologies.

Solid-State Drive (SSD)


A Solid-State Drive (SSD) is a faster and newer type of storage. It uses flash
memory instead of spinning disks, so it has no moving parts. This makes it more reliable
and much faster for reading and writing data compared to an HDD.

External Hard Drives (If used as a fixed device in some cases)

Although it's technically external, if an external hard drive is kept connected to a
device permanently, it may be considered fixed.

Network Attached Storage (NAS)

Network Attached Storage (NAS) is a storage device that is connected to a
network, allowing multiple users to access and share data. It's commonly used in homes
and businesses for backing up files and sharing data across different devices.

2. Removable Devices

Removable devices in secondary memory are storage devices that can be easily
disconnected and used on different systems, like USB drives or external hard drives. They
allow for easy data transfer and backup.

Some examples of removable devices are:

Optical Discs (CD, DVD, Blu-ray)

CD (Compact Disc): Holds up to 700 MB of data, often used for music, software, or
small files.
DVD (Digital Versatile Disc): Can store more data than a CD, typically 4.7 GB or
more, and is commonly used for videos or larger data files.
Blu-ray Disc: Designed for high-definition video, Blu-ray discs can hold from 25 GB
(single layer) to 50 GB (dual layer), making them great for movies and large files.

USB Flash Drives

USB Flash Drives are small, portable devices that use flash memory to store data.
They’re commonly used to transfer files between computers or as backup storage. They
are durable, easy to carry, and come in various sizes, ranging from a few gigabytes to
several terabytes.

Magnetic Tapes

Magnetic Tapes are an older form of storage where data is stored on long, thin tapes.
While they are not commonly used for personal computers anymore, they are still used
for large-scale data storage and archiving because they offer a lot of space at a low cost.

Flash Memory Cards (SD Cards, MicroSD Cards)

Flash Memory Cards like SD cards and MicroSD cards are tiny, portable
storage devices used in cameras, smartphones, and other gadgets. They are ideal for
storing photos, videos, and other media files.

External Hard Drives (If used as a removable device)

While external hard drives can be considered fixed if connected permanently, they are
usually removable and used for backup or transferring large files.

Cloud Storage

Cloud Storage is not a physical device but a service that allows you to store your data
online, on servers that you can access over the internet. Popular services like Google
Drive, Dropbox, and OneDrive allow users to store, share, and access their data
from anywhere with an internet connection.

Applications of Secondary Memory


Data Storage & Archiving: Secondary memory stores large volumes of data, such
as documents, photos, videos, and other files, for long-term retention and easy access
when needed.
Backup & Recovery: It helps protect data by creating backups, ensuring that
important information can be recovered in case of system failures or data loss.
Software & OS Storage: Secondary memory holds operating systems and software
applications, enabling quick access and smooth execution on computers and devices.
Media & Content Storage: It is used for storing large media files, including music,
movies, and games, making it easier to organize and access entertainment content.
Database Management: Secondary memory stores extensive databases, critical for
businesses, research, and education, supporting data retrieval and management.
Virtual Memory: It enhances system performance by swapping data between the
primary memory (RAM) and secondary memory, allowing the system to handle more
tasks simultaneously.
Cloud Storage: Cloud storage offers remote, online storage solutions, enabling users
to access files from any device and collaborate easily across locations.
File Sharing: Through Network Attached Storage (NAS) or cloud services, secondary memory
facilitates seamless file sharing and access over networks, improving collaboration.
Gaming: Secondary memory stores video games, downloadable content, and save
files, especially in high-performance external drives and SSDs for quick loading and
gameplay.
Business & Research: It provides secure storage for critical business documents,
research data, and collaborative project files, supporting daily operations and innovation.

Advantages of Secondary Memory


1. Large storage capacity: Secondary memory devices typically have a much larger
storage capacity than primary memory, allowing users to store large amounts of data and
programs.
2. Non-volatile storage: Data stored on secondary memory devices is typically
non-volatile, meaning it can be retained even when the computer is turned off.
3. Portability: Many secondary memory devices are portable, making it easy to transfer
data between computers or devices.
4. Cost-effective: Secondary memory devices are generally more cost-effective than
primary memory.

Disadvantages of Secondary Memory


1. Slower access times: Accessing data from secondary memory devices typically
takes longer than accessing data from primary memory.
2. Mechanical failures: Some types of secondary memory devices, such as hard disk
drives, are prone to mechanical failures that can result in data loss.
3. Limited lifespan: Secondary memory devices have a limited lifespan and can only
withstand a certain number of read and write cycles before they fail.
4. Data corruption: Data stored on secondary memory devices can become corrupted
due to factors such as electromagnetic interference, viruses, or physical damage.
Difference between Magnetic Disk and Optical Disk

Magnetic and optical disks are storage devices that provide a way to store data for
a long duration. Both are categorized as secondary storage devices. In this article, we
discuss the difference between magnetic disks and optical disks in detail.

What is a Magnetic Disk?


A magnetic disk is a storage device that uses a magnetization process to read, write,
rewrite, and access data. The magnetic disk is made of a set of circular platters
covered with a magnetic coating, and it stores data in the form of tracks, spots, and sectors.
Hard disks, zip disks, and floppy disks are common examples of magnetic disks. Using the
simplest scheme, constant angular velocity, the number of bits stored on each track does
not change.

Features of Magnetic Disk

Magnetic disks can store a huge amount of data.


Magnetic disks are transportable and budget-friendly.
Magnetic disks are reliable storage devices.

What is an Optical Disk?


An optical disk is any computer disk that uses optical storage techniques and technology to
read and write data. It is a storage device for optical (light) energy. It is a computer storage
disk that stores data digitally and uses laser beams to read and write data. It uses optical
technology in which laser light is centered on the spinning disks.

Features of Optical Disk

Optical disks rely on a red or blue laser to record and read data.
Most optical disks these days are flat, circular, and 12-14 cm in diameter.

Difference Between Magnetic Disk and Optical Disk


MAGNETIC DISK | OPTICAL DISK
Media type used is multiple fixed disks. | Media type used is a single removable disk.
Intermediate signal-to-noise ratio. | Excellent signal-to-noise ratio.
Sample rate is low. | Sample rate is high.
Implemented where data is randomly accessed. | Implemented in streaming files.
Only one disk can be accessed at a time. | Mass replication is possible.
Tracks in the magnetic disk are generally circular. | In the optical disk, the tracks are constructed spirally.
The data in the magnetic disk is randomly accessed. | In the optical disk, the data is sequentially accessed.
The copying of data takes more time in a magnetic disk compared to an optical disk. | The storing and accessing of data take place at a much faster rate using laser beams than in a magnetic disk.
The storage capacity is high, i.e. up to several gigabytes or terabytes. | The storage capacity is comparatively low, i.e. up to 27 GB in the case of Blu-ray.
Magnetic disks are a crucial part of computers. | Optical disks are an optional component in computers.
Magnetic disks are mainly used to hold data, instructions, and software applications. | Optical disks are portable and generally used to store music, videos, and movies.
Examples include: Hard Disk, Floppy Disk, Magnetic Tape, and more. | Examples include: CD, DVD, Blu-ray, and more.
Cache Memory in Computer Organization

Cache memory is a small, high-speed storage area in a computer. It stores copies of the data from frequently used main memory locations. There
are various independent caches in a CPU, which store instructions and data.

The most important use of cache memory is that it is used to reduce the average time to access data from the main memory.
The concept of cache works because there exists locality of reference (the same items or nearby items are more likely to be accessed next) in
processes.

By storing this information closer to the CPU, cache memory helps speed up the overall processing time. Cache memory is much faster than the
main memory (RAM). When the CPU needs data, it first checks the cache. If the data is there, the CPU can access it quickly. If not, it must fetch
the data from the slower main memory.

Characteristics of Cache Memory


Extremely fast memory type that acts as a buffer between RAM and the CPU.
Holds frequently requested data and instructions, ensuring that they are immediately available to the CPU when needed.
Costlier than main memory or disk memory but more economical than CPU registers.
Used to speed up processing and synchronize with the high-speed CPU.

[Figure: Cache Memory]

Levels of Memory
Level 1 or Registers: Registers hold the data, instructions, and addresses that the CPU is working on at that moment. The most
commonly used registers are the Accumulator, Program Counter, Address Register, etc.
Level 2 or Cache memory: The fastest memory after registers, with a short access time, where data is temporarily stored for faster access.
Level 3 or Main Memory: It is the memory on which the computer works currently. It is small in size, and once power is off, data no longer
stays in this memory.
Level 4 or Secondary Memory: It is external memory that is not as fast as the main memory, but data stays permanently in this memory.

Cache Performance
When the processor needs to read or write a location in the main memory, it first checks for a corresponding entry in the cache.

If the processor finds that the memory location is in the cache, a Cache Hit has occurred and data is read from the cache.
If the processor does not find the memory location in the cache, a cache miss has occurred. For a cache miss, the cache allocates a new
entry and copies in data from the main memory, then the request is fulfilled from the contents of the cache.

The performance of cache memory is frequently measured in terms of a quantity called Hit ratio.

Hit Ratio(H) = hit / (hit + miss) = no. of hits/total accesses


Miss Ratio = miss / (hit + miss) = no. of miss/total accesses = 1 - hit ratio(H)
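As a quick worked example of these formulas in Python (the hit and miss counts are invented):

hits, misses = 950, 50
hit_ratio = hits / (hits + misses)      # H = 950 / 1000 = 0.95
miss_ratio = 1 - hit_ratio              # 0.05
print(hit_ratio, miss_ratio)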

Cache performance can be improved by using a larger cache block size, higher associativity, reducing the miss rate, reducing the miss penalty,
and reducing the time to hit in the cache.

Cache Mapping
Cache mapping refers to the method used to store data from main memory into the cache. It determines how data from memory is mapped to
specific locations in the cache.

There are three different types of mapping used for the purpose of cache memory, which are as follows:

Direct Mapping
Fully Associative Mapping
Set-Associative Mapping
1. Direct Mapping

Direct mapping is a simple and commonly used cache mapping technique where each block of main memory is mapped to exactly one location in
the cache, called a cache line. If two memory blocks map to the same cache line, one will overwrite the other, leading to potential cache misses.
Direct mapping's performance is directly proportional to the hit ratio.

Memory block is assigned to cache line using the formula below:

i = j modulo m = j % m
where,
i = cache line number
j = main memory block number
m = number of lines in the cache

For example, consider a memory with 8 blocks(j) and a cache with 4 lines(m). Using direct mapping, block 0 of memory might be stored in cache
line 0, block 1 in line 1, block 2 in line 2, and block 3 in line 3. If block 4 of memory is accessed, it would be mapped to cache line 0 (as i = j modulo
m i.e. i = 4 % 4 = 0), replacing memory block 0.

The Main Memory consists of memory blocks, and these blocks are made up of a fixed number of words. A typical address in main memory is split
into two parts:

1. Index Field: It represents the block number. The index field bits tell us the location of the block where a word can be.
2. Block Offset: It represents a word in a memory block. These bits determine the location of the word within a memory block.

The Cache Memory consists of cache lines. These cache lines have the same size as memory blocks. The address in cache memory consists of:

1. Block Offset: This is the same block offset we use in Main Memory.
2. Index: It represents the cache line number. This part of the memory address determines which cache line (or slot) the data will be placed in.
3. Tag: The tag is the remaining part of the address that uniquely identifies which block is currently occupying the cache line.

[Figure: Memory Structure in Direct Mapping]

The index field in main memory maps directly to the index in cache memory, which determines the cache line where the block will be stored. The
block offset in both main memory and cache memory indicates the exact word within the block. In the cache, the tag identifies which memory block
is currently stored in the cache line. This mapping ensures that each memory block is mapped to exactly one cache line, and the data is accessed
using the tag and index while the block offset specifies the exact word in the block.
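The tag/index/offset split can be sketched in a few lines of Python; the cache geometry below (4 lines, 8 words per block) is an assumed example, not a fixed standard:

NUM_LINES = 4                       # m cache lines  -> 2 index bits
WORDS_PER_BLOCK = 8                 # block size     -> 3 offset bits
OFFSET_BITS = WORDS_PER_BLOCK.bit_length() - 1   # works for powers of two

def split_address(addr):
    offset = addr & (WORDS_PER_BLOCK - 1)        # word within the block
    block = addr >> OFFSET_BITS                  # main memory block number j
    index = block % NUM_LINES                    # cache line i = j % m
    tag = block // NUM_LINES                     # identifies which block occupies the line
    return tag, index, offset

print(split_address(0b101_10_011))  # block 0b10110 -> tag 0b101 (5), index 0b10 (2), offset 0b011 (3)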

2. Fully Associative Mapping

Fully associative mapping is a type of cache mapping where any block of main memory can be stored in any cache line. Unlike direct-mapped
cache, where each memory block is restricted to a specific cache line based on its index, fully associative mapping gives the cache the flexibility to
place a memory block in any available cache line. This improves the hit ratio but requires a more complex system for searching and managing
cache lines.

The address structure of cache memory in fully associative mapping differs from direct mapping. In fully associative mapping, the cache address does
not have an index field; it only has a tag (alongside the block offset), which covers what would be the index and tag fields in direct mapping. Any block
of memory can be placed in any cache line. This flexibility means that there's no fixed position for memory blocks in the cache.

[Figure: Cache Memory Structure in Fully Associative Mapping]


To determine whether a block is present in the cache, the tag is compared with the tags stored in all cache lines. If a match is found, it is a cache
hit, and the data is retrieved from that cache line. If no match is found, it's a cache miss, and the required data is fetched from main memory.

[Figure: Fully Associative Mapping]
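In code, a fully associative lookup amounts to comparing the tag against every line; hardware does this in parallel, so the sequential loop below is only an illustrative Python sketch:

cache_lines = [{"valid": False, "tag": None, "data": None} for _ in range(4)]

def lookup(tag):
    for line in cache_lines:                 # hardware compares all tags at once
        if line["valid"] and line["tag"] == tag:
            return line["data"]              # cache hit
    return None                              # cache miss: fetch from main memory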

3. Set-Associative Mapping

Set-associative mapping is a compromise between direct-mapped and fully-associative mapping in cache systems. It combines the flexibility of
fully associative mapping with the efficiency of direct mapping. In this scheme, multiple cache lines (typically 2, 4, or more) are grouped into sets.

v = m / k
where,
m = number of cache lines in the cache memory
k = number of cache lines we want in each set
v = number of sets

As in direct mapping, each memory block maps to a fixed location, but here that location is a set, and the block can be placed in any cache line within that set.

i = j modulo v = j % v
where,
j = main memory block number
v = number of sets
i = cache line set number

The Cache address structure is as follows:

Cache Memory in Set Associative Mapping

This reduces the conflict misses that occur in direct mapping while still limiting the search space compared to fully-associative mapping.

For example, consider a 2-way set-associative cache, which means 2 cache lines make a set in this cache structure. There are 8 memory blocks
and 4 cache lines, so the number of sets is 4/2 = 2. Using the direct mapping strategy first, block 0 maps to set 0, block 1 to set 1, block 2 back to
set 0, and so on (i = j % v). Then, the tag is used to search through all cache lines in that set to find the correct block (associative mapping).
[Figure: Two Way Set Associative Cache]
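A minimal sketch of the two-step lookup, with sizes mirroring the 2-way example above (pick the set by modulo, then search associatively within it):

K_WAY, NUM_SETS = 2, 2                       # 4 cache lines grouped into 2 sets
sets = [[{"valid": False, "tag": None} for _ in range(K_WAY)]
        for _ in range(NUM_SETS)]

def lookup(block_number):
    s = block_number % NUM_SETS              # i = j % v picks the set (direct-mapped step)
    tag = block_number // NUM_SETS
    for line in sets[s]:                     # associative search within the set only
        if line["valid"] and line["tag"] == tag:
            return True                      # hit
    return False                             # miss: fetch block and place it in set s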

For more, you can refer to the Difference between Types of Cache Mapping.

Application of Cache Memory


Here are some of the applications of Cache Memory.

Primary Cache: A primary cache is always located on the processor chip. This cache is small and its access time is comparable to that of
processor registers.
Secondary Cache: Secondary cache is placed between the primary cache and the rest of the memory. It is referred to as the level 2 (L2)
cache. Often, the Level 2 cache is also housed on the processor chip.
Spatial Locality of Reference: There is a high chance that elements near a recently referenced location will be accessed soon;
this is why, on a miss or page fault, the complete block or page containing the requested word is loaded rather than the word alone.
Temporal Locality of Reference: A recently accessed item is likely to be accessed again soon; this is why replacement policies
such as Least Recently Used (LRU) keep recently referenced blocks in the cache.

Advantages
Cache Memory is faster in comparison to main memory and secondary memory.
Programs stored by Cache Memory can be executed in less time.
The data access time of Cache Memory is less than that of the main memory.
Cache Memory stored data and instructions that are regularly used by the CPU, therefore it increases the performance of the CPU.

Disadvantages
Cache Memory is costlier than primary memory and secondary memory.
Data is stored on a temporary basis in Cache Memory.
Whenever the system is turned off, data and instructions stored in cache memory get destroyed.
The high cost of cache memory increases the price of the Computer System.
Concept of Cache Memory Design

Cache Memory plays a significant role in reducing the processing time of a program by providing swift access to
data and instructions. Cache memory is small and fast, while the main memory is big and slow. The concept of caching is
explained below.

Caching Principle: The intent of cache memory is to provide the fastest access to resources without
compromising on the size and price of the memory. When the processor attempts to read a byte of data, it first looks in the cache
memory. If the byte does not exist in cache memory, it searches for the byte in the main memory. Once the byte is found
in the main memory, the block containing a fixed number of bytes is read into the cache memory and then on to the
processor. The probability of finding subsequent bytes in the cache memory increases, because the block read into the cache
memory earlier contains bytes relevant to the process. This phenomenon is called Locality of Reference
or the Principle of Locality.

Cache Memory Design:

1. Cache Size and Block Size - To align with the processor speed, cache memories are very small, so that it takes less
time to find and fetch data. They are usually divided into multiple layers based on the architecture. The size of the cache
should accommodate the size of the blocks, which is in turn determined by the processor's architecture. When block size
increases, the hit ratio initially increases because of the principle of locality. Increasing the block size further, however,
decreases the hit ratio, because beyond a certain point the probability of using the new data brought in by the new block
is less than the probability of reusing the data that is flushed out to make room for newer blocks.
2. Mapping Function - When a block of data is read from the main memory, the mapping function decides which
location in the cache gets occupied by the read-in main memory block. If the cache is full, a cache block must be
replaced by the main memory block, and this gives rise to complexities: which cache block should
be replaced? Care should be taken not to replace a cache block that is more likely to be referenced soon by the
processor. The replacement algorithm depends directly on the mapping function: the more flexible the mapping function,
the better the hit ratio the replacement algorithm can provide. But in order to provide more flexibility, the
complexity of the circuitry that searches the cache to determine whether a block is present increases.
3. Replacement Algorithm - It decides which block in the cache gets replaced by the read-in block from main memory when
the cache is full, within certain constraints from the mapping function. Ideally, the block of cache that will not be referenced in
the near future should be replaced, but it is highly improbable to determine which block that is. Hence,
the block in the cache that has not been referenced for the longest time is replaced by the new read-in block from the
main memory. This is called the Least-Recently-Used (LRU) algorithm.
4. Write Policy - One of the most important aspects of memory caching. The block of data from cache that is chosen to
be replaced by the new read-in main memory block should first be placed back in the main memory, to prevent
loss of data. A decision should be made about when the cache memory block is put back in the main memory. The
two available options are as follows:
1. Place the cache memory block in the main memory only when it is chosen to be replaced by a new read-in block from
main memory (write-back).
2. Update the main memory every time the cache memory block is modified, so the main memory always holds the latest
copy (write-through).
Caching
Let's start at the beginning and talk about what caching even is.

Caching is the process of storing some data near where it's supposed to be used, rather than fetching it from an expensive origin
every time a request comes in.

Caches are everywhere, from your CPU to your browser, so there's no doubt that caching is extremely useful. But implementing a high-
performance cache system comes with its own set of challenges. In this post, we'll focus on cache replacement algorithms.


Cache Replacement Algorithms


We talked about what caching is and how we can utilize it, but there's a dinosaur in the room: our cache storage is finite, especially in
caching environments where high-performance, expensive storage is used. So, in short, we have no choice but to evict some objects and
keep others.

Cache replacement algorithms do just that. They decide which objects can stay and which objects should be evicted.

After reviewing some of the most important algorithms, we'll go through some of the challenges that we might encounter.

LRU
The least recently used (LRU) algorithm is one of the most famous cache replacement algorithms and for good reason!

As the name suggests, LRU evicts the least recently used objects: objects that haven't been used in a while are dropped when the list
reaches its maximum capacity.

So it's simply an ordered list where objects are moved to the top every time they're accessed, pushing other objects down.

LRU is simple and provides a nice cache-hit rate for lots of use-cases.
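A compact LRU cache can be sketched on top of Python's OrderedDict, which remembers insertion order; the capacity of 2 is just for the demo:

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None                       # miss
        self.items.move_to_end(key)           # move to the "top" (most recent)
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)    # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")                                # "a" is now most recent
cache.put("c", 3)                             # evicts "b"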


LFU
The least frequently used (LFU) algorithm works similarly to LRU, except it keeps track of how many times an object was accessed instead of
how recently it was accessed.
Each object has a counter that counts how many times it was accessed. When the list reaches the maximum capacity, objects with the
lowest counters are evicted.

LFU has a famous problem. Imagine an object was repeatedly accessed for a short period only. Its counter grows by a magnitude
compared to the others, so it's very hard to evict this object even if it's not accessed again for a long time.
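A minimal LFU sketch using plain access counters; real implementations use frequency lists to make eviction faster than this linear scan:

class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.values, self.counts = {}, {}

    def get(self, key):
        if key not in self.values:
            return None                       # miss
        self.counts[key] += 1                 # count every access
        return self.values[key]

    def put(self, key, value):
        if key not in self.values and len(self.values) >= self.capacity:
            victim = min(self.counts, key=self.counts.get)   # lowest counter
            del self.values[victim]
            del self.counts[victim]
        self.values[key] = value
        self.counts[key] = self.counts.get(key, 0) + 1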

FIFO
FIFO (first-in-first-out) is also used as a cache replacement algorithm and behaves exactly as you would expect: objects are added to the
queue and evicted in the same order. Even though it provides a simple and low-cost way to manage the cache, even the most-used
objects are eventually evicted when they're old enough.


Random Replacement (RR)


This algorithm randomly selects an object to evict when the cache reaches maximum capacity. It has the benefit of not keeping any reference
or history of objects, and it is very simple to implement at the same time.

This algorithm has been used in ARM processors and the famous Intel i860.

The Problem of One-hit Wonders


Let's talk about a problem that occurs in large-scale caching solutions, one-hit wonders.

One-hit wonders are objects that are rarely or never accessed twice. This happens quite often in CDNs where the number of unique objects
is huge and most objects are rarely used again.

This becomes a problem when every bit of storage performance matters to us. By caching these objects we basically pollute our storage
with junk since these cache objects are always evicted before they're used again. So we waste a large amount of resources just to persist
some objects that we're not going to use.

So what's the solution? Unfortunately, there's no silver bullet here. The most used solution is just not caching an object when it's first
accessed!

By keeping a list of object signatures, we can cache only the objects that have been seen more than once. This might seem weird at first, but
overall it improves your disk performance significantly.

After accepting this solution, we immediately encounter another challenge. In many scenarios, the number of object signatures is extremely
large so storing the list itself becomes a challenge. In this case, we can use probabilistic data structures such as the bloom filter data
structure.

I've covered probabilistic data structures in a previous post: Rate Limiting in IPv6 Era Using Probabilistic Data Structures


In short, probabilistic data structures like the bloom filter use a lot less memory, but in return they only give a probabilistic answer to our
queries. In our case, the bloom filter offers a nice solution, since we don't need definite answers to improve our cache performance.
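A sketch of that admission policy: cache an object only once its signature has been seen before. The toy bloom filter below uses Python's built-in hash with two salts standing in for independent hash functions; real systems use proper hashes and tuned sizes.

SIZE = 1024
bits = [False] * SIZE

def _positions(key):
    # Two salted hashes stand in for independent hash functions.
    return (hash(("salt1", key)) % SIZE, hash(("salt2", key)) % SIZE)

def seen_before(key):
    return all(bits[p] for p in _positions(key))   # may rarely be a false positive

def record(key):
    for p in _positions(key):
        bits[p] = True

def should_cache(key):
    if seen_before(key):
        return True          # second (or later) access: admit to cache
    record(key)              # first access: remember the signature only
    return False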
Cache Memory Performance

Types of Caches:

L1 Cache: Cache built into the CPU itself is known as L1 or Level 1 cache. This type of cache holds the most recent data, so when the data is required
again, the microprocessor inspects this cache first and does not need to go through main memory or the Level 2 cache. The main significance
behind this concept is "locality of reference", according to which a location just accessed by the CPU has a higher probability of being
required again.
L2 Cache: This type of cache resides on a separate chip next to the CPU and is also known as the Level 2 cache. This cache stores recently used data that
cannot be found in the L1 cache. Some CPUs have both L1 and L2 cache built in and designate the separate cache chip as the Level 3 (L3) cache.

Cache that is built into the CPU is faster than separate cache, and separate cache is faster than RAM. Built-in cache runs at the speed of the
microprocessor.

Disk Cache: It contains the most recently read data from the hard disk. This cache is much slower than RAM, but far faster than reading from the disk itself.
Instruction Cache vs Data Cache: The instruction cache (I-cache) stores instructions only, while the data cache (D-cache) stores only data. Distinguishing the
stored content this way recognizes the different access behavior patterns of instructions and data. For example, instruction fetches involve
few write accesses, and they often exhibit more temporal and spatial locality than the data they process.
Unified Cache vs Split Cache: A cache that stores both instructions and data is referred to as a unified cache. A split cache, on the other hand,
consists of two associated but largely independent units: an I-cache and a D-cache. This type of cache can also be designed to deal with the two
independent units differently.

The performance of the cache memory is measured in terms of a quantity called the Hit Ratio. When the CPU refers to memory and finds the word
in the cache, a hit is said to have occurred. If the word is not found in the cache, the CPU refers to the main memory for
the desired word, and this is referred to as a cache miss.

Hit Ratio (h) :

Hit Ratio (h) = Number of Hits / Total CPU references to memory = Number of hits / ( Number of Hits + Number of Misses )

The Hit ratio is nothing but a probability of getting hits out of some number of memory references made by the CPU. So its range is 0 <= h <= 1.

Miss Ratio: The miss ratio is the probability of getting a miss out of some number of memory references made by the CPU.

Miss Ratio = Number of misses / Total CPU references to memory = Number of misses / (Number of hits + Number of misses) = 1 - Hit Ratio (h)

Average Access Time (tavg):

tavg = h * tc + (1 - h) * (tc + tm) = tc + (1 - h) * tm

Here tc, h, and tm denote the cache access time, hit ratio in cache, and main memory access time respectively.

Average memory access time = Hit Time + Miss Rate X Miss Penalty

Miss Rate: It can be defined as the fraction of accesses that are not in the cache (i.e. 1 - h).

Miss Penalty: It can be defined as the additional clock cycles needed to service the miss, i.e. the extra time needed to bring the desired
information into the cache from main memory in the case of a miss.
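Plugging assumed numbers into the average access time formula above:

h, tc, tm = 0.9, 2, 100        # hit ratio, cache time (ns), main memory time (ns)
tavg = tc + (1 - h) * tm       # 2 + 0.1 * 100 = 12 ns
print(tavg)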

[Figure: Cache Memory Structure]


Here's a more detailed breakdown of how to improve cache performance:

1. Reduce the Miss Rate:


Increase Cache Size:
Larger caches can hold more data, reducing the likelihood of a cache miss.
Increase Associativity:
Higher associativity (e.g., 4-way set-associative) allows a memory block to be placed in more locations within the cache,
reducing conflict misses.
Optimize Cache Block Size:
Larger block sizes can exploit spatial locality but may also increase miss penalty. The optimal size depends on the
access patterns of the application.
Compiler Optimizations:
Compilers can analyze code to improve data access patterns and reduce the chance of cache misses.
Prefetching:
Hardware or software can anticipate future data needs and load them into the cache before they are actually requested,
reducing misses.


2. Reduce the Miss Penalty:


Multi-level Caches:
Using multiple levels of cache (L1, L2, etc.) with varying sizes and speeds can help minimize the time it takes to retrieve
data from slower memory when a miss occurs.
Critical Word First:
When a miss occurs, the requested word can be fetched first, allowing the processor to continue execution while the rest
of the block is retrieved.
Prioritize Read Misses:
Giving read misses priority over write misses can reduce processor stalls, since the CPU typically waits for reads to complete.
Victim Cache:
A small, fast cache can be used to hold recently evicted cache lines, potentially reducing the miss penalty for
subsequent accesses to those lines.
Non-blocking Caches:
Allowing the cache to handle other requests while a miss is being resolved (non-blocking) can reduce the impact of the
miss penalty.

3. Reduce the Hit Time:


Small and Simple Caches:
Smaller caches have a shorter critical path, reducing the time it takes to find data within the cache.
Avoid Address Translation:
Using virtual caches that index directly with virtual addresses (rather than requiring address translation) can reduce the
hit time.
Pipelined Cache Access:
Pipelining allows multiple cache access operations to occur simultaneously, reducing the overall access time.


4. Other Considerations:
Cache Coherency:
Ensuring that multiple processors or cores have consistent views of cached data, especially in multi-core systems.
Monitoring and Measurement:
Regularly monitoring cache performance using tools can help identify bottlenecks and areas for optimization.
Cache Replacement Policies:
Choosing appropriate replacement policies (e.g., Least Recently Used - LRU) can significantly impact performance
based on access patterns.
Content Delivery Networks (CDNs):
Distributing static content across multiple geographically dispersed servers using CDNs can improve performance by
reducing latency for users.
Browser Cache Optimization:
Clearing browser cache regularly can help remove outdated or unnecessary data and improve browser speed.

By carefully considering these techniques and tailoring them to specific application needs, it's possible to
significantly improve overall cache performance and system responsiveness.
Virtual Memory in Operating System

Virtual memory is a memory management technique used by operating systems to give the appearance of a large, continuous block of memory to applications, even if the
physical memory (RAM) is limited. It allows larger applications to run on systems with less RAM.

Objectives of Virtual Memory


To support multiprogramming, it allows more than one program to run at the same time.
A program doesn’t need to be fully loaded in memory to run. Only the needed parts are loaded.
Programs can be bigger than the physical memory available in the system.
Virtual memory creates the illusion of a large memory, even if the actual memory (RAM) is small.
It uses both RAM and disk storage to manage memory, loading only parts of programs into RAM as needed.
This allows the system to run more programs at once and manage memory more efficiently.

[Figure: Virtual Memory]

What is Virtual Memory?


Virtual memory is a way for a computer to pretend it has more RAM than it really does. When the RAM is full, the computer moves some data to the hard drive (or SSD).
This space on the hard drive is used like extra memory. This helps the computer run bigger programs or multiple programs at the same time, even if there isn’t enough
RAM. The part of the hard drive used for this is called a page file or swap space. The computer automatically moves data in and out of RAM and the hard drive as needed.

History of Virtual Memory

Before virtual memory, computers only used RAM and secondary storage (like disks) to store data.
In the 1940s and 1950s, memory was very small and expensive.
Early computers used magnetic-core memory for RAM and magnetic drums for secondary storage.
As programs got bigger, there wasn’t enough memory to run them all at once.
In 1956, Fritz-Rudolf Guntsch, a German physicist, developed the idea of virtual memory.
The first real system using virtual memory was built at the University of Manchester, during the development of the Atlas computer.

How Virtual Memory Works

Virtual memory uses both hardware and software to manage memory.


When a program runs, it uses virtual addresses (not real memory locations).
The computer system converts these virtual addresses into physical addresses (actual locations in RAM) while the program runs.
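
A minimal sketch of this translation, assuming 4 KB pages and a toy page table (the table contents here are made up), might look like this:

    # Sketch of virtual-to-physical address translation (illustrative only).
    PAGE_SIZE = 4096      # assume 4 KB pages
    OFFSET_BITS = 12      # log2(4096)

    page_table = {0: 5, 1: 2, 2: 7}   # virtual page number -> physical frame number

    def translate(virtual_address):
        vpn = virtual_address >> OFFSET_BITS          # virtual page number
        offset = virtual_address & (PAGE_SIZE - 1)    # offset within the page
        if vpn not in page_table:
            raise LookupError(f"page fault: page {vpn} is not in memory")
        frame = page_table[vpn]
        return (frame << OFFSET_BITS) | offset        # physical address

    print(hex(translate(0x1A2C)))   # prints 0x2a2c: page 1 maps to frame 2, same offset

In real hardware this lookup is performed by the MMU, usually accelerated by a translation lookaside buffer (TLB), rather than by software on every access.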

Types of Virtual Memory


In a computer, virtual memory is managed by the Memory Management Unit (MMU), which is often built into the CPU. The CPU generates virtual addresses that the MMU
translates into physical addresses.

There are two main types of virtual memory:

Paging
Segmentation
Paging

Paging divides memory into small fixed-size blocks called pages. When the computer runs out of RAM, pages that aren't currently in use are moved to the hard drive, into
an area called a swap file. The swap file acts as an extension of RAM. When a page is needed again, it is swapped back into RAM, a process known as page swapping.
This ensures that the operating system (OS) and applications have enough memory to run.

Demand Paging: Loading a page into memory only when it is needed (i.e., when a page fault occurs) is known as demand paging. The process
involves the following steps (a small sketch follows the list):

If the CPU tries to refer to a page that is currently not available in the main memory, it generates an interrupt indicating a memory access fault.
The OS puts the interrupted process in a blocking state. For the execution to proceed the OS must bring the required page into the memory.
The OS locates the required page on the backing store (secondary storage).
The page is brought from the backing store into a free frame in physical memory. If no frame is free, a page replacement algorithm
decides which resident page to evict.
The page table will be updated accordingly.
The signal will be sent to the CPU to continue the program execution and it will place the process back into the ready state.
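
Here is a small illustrative model of these steps in Python, using a FIFO replacement policy; the structures and names are hypothetical simplifications, not a real OS interface:

    # Demand paging sketch with FIFO replacement (illustrative only).
    from collections import deque

    NUM_FRAMES = 3
    resident = deque()    # virtual page numbers currently in memory, FIFO order
    page_table = {}       # vpn -> frame index

    def access(vpn):
        if vpn in page_table:
            return "hit"
        # Page fault: the process blocks while the page is fetched from disk.
        if len(resident) < NUM_FRAMES:
            frame = len(resident)              # a free frame is available
        else:
            victim = resident.popleft()        # replacement policy picks a victim
            frame = page_table.pop(victim)     # reuse the victim's frame
        resident.append(vpn)
        page_table[vpn] = frame                # update the page table
        return "page fault"                    # process returns to the ready state

    for vpn in [1, 2, 3, 1, 4, 1]:
        print(vpn, access(vpn))   # 1, 2, 3 fault; second 1 hits; 4 evicts 1; final 1 faults again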

What is Page Fault Service Time?

The time taken to service the page fault is called page fault service time. The page fault service time includes the time taken to perform all the above six steps.

Let the main memory access time be m, the page fault service time be s, and the page fault rate be p. Then:

Effective memory access time = p × s + (1 − p) × m
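
A quick worked example (with made-up but realistic orders of magnitude) shows how heavily even a rare page fault weighs on the average:

    # Worked EAT example (illustrative numbers).
    m = 100e-9    # main memory access time: 100 ns
    s = 8e-3      # page fault service time: 8 ms
    p = 0.0001    # one fault per 10,000 accesses

    eat = p * s + (1 - p) * m
    print(f"{eat * 1e9:.2f} ns")   # ~899.99 ns, almost 9x the raw memory time

Because s is tens of thousands of times larger than m, keeping p extremely small is essential for good performance.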

Page and Frame

A page is a fixed-size block of data in virtual memory, and a frame is a fixed-size block of physical memory in RAM into which pages are loaded. Think of a page as a
piece of a puzzle (virtual memory) and a frame as the spot where it fits on the board (physical memory). When a program runs, its pages are mapped to available frames,
so the program can execute even if it is larger than physical memory.

Segmentation

Segmentation divides virtual memory into segments of different sizes. Segments that aren't currently needed can be moved to the hard drive. The system uses a segment
table to keep track of each segment's status, including whether it's in memory, if it's been modified, and its physical address. Segments are mapped into a process's
address space only when needed.

You can read more about - Segmentation

Swapping
Swapping a process out means removing all of its pages from memory, or marking them so that they will be removed by the normal page replacement process.
Suspending a process ensures that it is not runnable while it is swapped out. At some later time, the system swaps the process back from secondary storage into
main memory. When a process spends most of its time swapping pages in and out rather than executing, the situation is called thrashing.

Thrashing
Only a few pages of each process are kept in main memory at a time, allowing more processes to run simultaneously and saving time by not loading unused pages.
The operating system must carefully manage this to keep memory full of useful pages.
When the OS loads a new page, it must remove another. If it removes a page that will be needed soon, the system wastes time swapping pages in and out, a problem
called thrashing.
To avoid this, an efficient page replacement algorithm is needed.

On a CPU-utilization curve, utilization initially rises with the degree of multiprogramming, and up to a certain point (often labelled λ) system resources are fully
utilized. Beyond that point, increasing the degree of multiprogramming further makes CPU utilization fall drastically: the system spends most of its time on page
replacement, and the time taken to complete process execution grows. This situation is called thrashing.

Causes of Thrashing
Thrashing occurs when the CPU spends more time swapping pages in and out of memory than executing actual processes. This happens when
there is insufficient physical memory, causing frequent page faults and excessive paging activity. Thrashing reduces system performance and makes processes run very
slowly. The main causes are discussed below.

High Degree of Multiprogramming

If the number of processes keeps on increasing in the memory then the number of frames allocated to each process will be decreased. So, fewer frames will be available
for each process. Due to this, a page fault will occur more frequently and more CPU time will be wasted in just swapping in and out of pages and the utilization will keep on
decreasing.

For example, let free frames = 400.

Case 1: Number of processes = 100. Each process gets 4 frames.
Case 2: Number of processes = 400. Each process gets only 1 frame.

Case 2 is a thrashing condition: as the number of processes increases, the frames available per process decrease, and CPU time is consumed mostly by swapping
pages in and out.

Lack of Frames

If a process is allocated too few frames, fewer of its pages can reside in memory, so pages must be swapped in and out more frequently. This can lead to
thrashing; hence each process must be allocated enough frames to prevent it.

Recovery of Thrashing

To prevent thrashing, instruct the long-term scheduler not to bring new processes into memory once a threshold is reached.
If the system is already thrashing, instruct the medium-term scheduler to suspend some processes so the system can recover.
Performance in Virtual Memory

Let p be the page fault rate (0 ≤ p ≤ 1).

If p = 0, there are no page faults; if p = 1, every reference is a fault.

Effective access time (EAT) = (1 − p) × memory access time + p × page fault time
Page fault time = page fault overhead + swap out + swap in + restart overhead

The performance of a virtual memory management system depends on the total number of page faults, which in turn depends on the paging policies and the frame allocation strategy.
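
As an illustration, here is the same formula with the fault time broken into its components; every number below is hypothetical:

    # Illustrative page fault time breakdown (all numbers made up).
    overhead = 0.10e-3   # fault-handling overhead: 0.10 ms
    swap_out = 4.0e-3    # write a dirty victim page to disk: 4 ms
    swap_in  = 4.0e-3    # read the required page from disk: 4 ms
    restart  = 0.05e-3   # restart the faulting instruction: 0.05 ms

    fault_time = overhead + swap_out + swap_in + restart   # 8.15 ms
    mem_access = 100e-9                                    # 100 ns
    p          = 1e-6                                      # one fault per million accesses

    eat = (1 - p) * mem_access + p * fault_time
    print(f"{eat * 1e9:.2f} ns")   # ~108.15 ns

Even at one fault per million references, paging adds roughly 8% to the average access time in this example.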

Read more about - Techniques to handle Thrashing

Frame Allocation

The number of frames allocated to each process is determined either statically or dynamically (a small allocation sketch follows):

Static Allocation: The number of frames allocated to a process is fixed.
Dynamic Allocation: The number of frames allocated to a process can change during execution.
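
For instance, two common static schemes are equal and proportional allocation; the process sizes below are invented for the example:

    # Static frame allocation sketch (illustrative numbers).
    free_frames = 400
    processes = {"A": 100, "B": 300, "C": 400}   # process -> size in pages

    # Equal allocation: every process gets the same share.
    equal = free_frames // len(processes)        # 133 frames each

    # Proportional allocation: share proportional to process size.
    total = sum(processes.values())
    proportional = {name: free_frames * size // total for name, size in processes.items()}
    print(equal, proportional)   # 133 {'A': 50, 'B': 150, 'C': 200}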

Paging Policies

Fetch Policy: Decides when a page should be loaded into memory.
Replacement Policy: Decides which page in memory should be replaced.
Placement Policy: Decides where in memory a page should be loaded.

Applications of Virtual memory


Virtual memory has the following important characteristics that increase the capabilities of the computer system.

Increased Effective Memory: Virtual memory enables a computer to address more memory than is physically installed by using disk space. This allows larger
applications and more programs to run at once without an equivalent amount of DRAM.

Memory Isolation: Virtual memory allocates a unique address space to each process. This separation increases safety and reliability, because one process cannot
read or modify another's memory space, whether by mistake or by deliberate attack.

Efficient Memory Management: Through techniques such as paging and segmentation, virtual memory improves utilization of physical memory. Infrequently used
pages can be moved to disk, freeing RAM for active processes and improving both memory usage and overall system performance.

Simplified Program Development: Programmers do not have to account for the exact amount of physical memory in a system. They can write programs as if one
large, contiguous block of memory were available, which makes development simpler and supports more complex applications.

Management of Virtual Memory


Here are five key ways to manage virtual memory:

Adjust the Page File Size

Automatic Management: Contemporary operating systems, including Windows, size the page file automatically based on the amount of installed RAM, although the
user can adjust it manually if required.

Manual Configuration: Advanced users can sometimes improve performance by setting a custom page file size, typically choosing the initial size in proportion to
the amount of installed RAM.

Place the Page File on a Fast Drive

SSD Placement: Where feasible, place the page file on an SSD rather than an HDD. SSDs have much faster read and write times, so paging to an SSD is
considerably less costly.

Separate Drive: On systems with multiple drives, placing the page file on a drive other than the one holding the OS can improve performance.

Monitor and Optimize Usage

Performance Monitoring: Use system monitoring tools to track virtual memory usage. Consistently high page file usage may indicate a shortage of physical RAM,
or that the virtual memory settings need adjusting.

Regular Maintenance: Remove unneeded background applications and browser toolbars to free memory.

Disable Virtual Memory for SSD


Sufficient RAM: If a system has a large amount of physical memory (for example, 16 GB or more), the page file can be reduced or disabled to minimize SSD wear.
This should be done cautiously, and only if the applications in use are unlikely to exhaust the available RAM.

Optimize System Settings

System Configuration: Adjust the operating system settings that affect virtual memory. On Windows this is done through the advanced system settings; other
operating systems, such as Linux, provide their own tools and parameters for tuning how virtual memory is used.

Regular Updates: Keep the operating system and drivers up to date, as new releases often include memory management improvements and fixes.

Benefits of Using Virtual Memory

Supports Multiprogramming & Larger Programs: Virtual memory allows multiple processes to reside in memory at once by using demand paging. Even
programs larger than physical memory can be executed efficiently.

Maximizes Application Capacity: With virtual memory, systems can run more applications simultaneously, including multiple large ones. It also allows only
portions of programs to be loaded at a time, improving speed and reducing memory overhead.

Eliminates Physical Memory Limitations: There is no immediate need to upgrade RAM, as virtual memory compensates using disk space.

Boosts Security & Isolation: By isolating the memory space of each process, virtual memory enhances system security. This prevents interference between
applications and reduces the risk of data corruption or unauthorized access.

Improves CPU & System Performance: Virtual memory helps the CPU by managing logical partitions and memory usage more effectively. It allows for cost-
effective, flexible resource allocation, keeping CPU workloads optimized and ensuring smoother multitasking.

Enhances Memory Management Efficiency: Virtual memory automates memory allocation, moving data between RAM and disk without user
intervention. It also avoids external fragmentation, using more of the available memory effectively and simplifying OS-level memory management.

Limitation of Virtual Memory

Slower Performance:
Virtual memory can slow down the system because it frequently moves data between RAM and the hard drive. Hard drives are much slower than RAM, so the
computer may respond more slowly, especially when running many programs.

Risk of Data Loss:


There is a higher risk of losing data if something goes wrong, like a power failure or hard disk crash, while the system is moving data between RAM and the disk. This can
lead to data corruption or loss.

More Complex System:


Managing virtual memory makes the operating system more complex. It has to keep track of both real memory (RAM) and virtual memory and make sure everything is in
the right place. This adds to the workload of the system.

Read more about - Virtual Memory Questions

Virtual Memory vs Physical Memory


Let us compare the virtual memory with the physical memory.

Definition: Virtual memory is an abstraction that extends the available memory by using disk storage; physical memory (RAM) is the actual hardware that stores the data and instructions currently being used by the CPU.

Location: Virtual memory resides on the hard drive or SSD; physical memory sits on the computer's motherboard.

Speed: Virtual memory is slower (due to disk I/O operations); physical memory is faster (accessed directly by the CPU).

Capacity: Virtual memory is larger, limited by disk space; physical memory is smaller, limited by the amount of RAM installed.

Cost: Virtual memory is cheaper (the cost of additional disk storage); physical memory is more expensive (the cost of RAM modules).

Data Access: Virtual memory is accessed indirectly (via paging and swapping); physical memory is accessed directly by the CPU.

Volatility: Virtual memory is non-volatile (data persists on disk); physical memory is volatile (data is lost when power is off).
