## CH 04 Notes-OS

The document covers memory management in operating systems, detailing concepts such as main memory, logical and physical address spaces, and various memory allocation techniques including contiguous and non-contiguous methods. It explains the importance of memory management for efficient utilization, fragmentation issues, and techniques like swapping and paging. Additionally, it discusses the structure and function of page tables in translating logical addresses to physical addresses.


OPERATING SYSTEM

UNIT 04 MEMORY MANAGEMENT

Sr. No. Contents


Main Memory
4.1 Background
4.2 Swapping
4.3 Contiguous Memory Allocation
4.4 Paging
4.5 Structure of the Page Table
4.6 Segmentation
Virtual Memory
4.7 Background
4.8 Demand Paging
4.9 Copy-on-Write
4.10 Page Replacement
4.11 Allocation of Frames
4.12 Thrashing

PROF. RAHUL P. BEMBADE, CSE, SOC, MIT ADTU



4.1 Main Memory:


The main memory is central to the operation of a modern computer. Main memory is a large array of words or bytes, ranging in size from hundreds of thousands to billions. It is a repository of rapidly available information shared by the CPU and I/O devices, and it is where programs and data are kept while the processor is actively using them. Because main memory is closely coupled to the processor, moving instructions and data into and out of the processor is extremely fast. Main memory is also known as RAM (Random Access Memory). This memory is volatile: RAM loses its data when power is interrupted.

In a multiprogramming computer, the Operating System resides in a part of memory, and the rest is
used by multiple processes. The task of subdividing the memory among different processes is called
Memory Management. Memory management is a method in the operating system to manage
operations between main memory and disk during process execution. The main aim of memory
management is to achieve efficient utilization of memory.
Memory management is required to:
 Allocate and de-allocate memory before and after process execution.
 Keep track of the memory space used by each process.
 Minimize fragmentation.
 Ensure proper utilization of main memory.
 Maintain data integrity during process execution.

Logical and Physical Address Space


 Logical Address Space: An address generated by the CPU is known as a "Logical Address", also called a virtual address. The set of all logical addresses generated by a program is its logical address space, which can be defined by the size of the process. The mapping of a logical address can change.
 Physical Address Space: An address seen by the memory unit (i.e., the one loaded into the memory address register of the memory) is known as a "Physical Address", also called a real address. The set of all physical addresses corresponding to the logical addresses is the physical address space. A physical address is computed by the MMU: the run-time mapping from virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU). The physical location referenced remains fixed.
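The run-time mapping performed by the MMU can be sketched with a simple base (relocation) register scheme; the function name and numbers below are illustrative, not part of the notes:

```python
# Illustrative sketch: a relocation-register MMU. The MMU adds the base
# (relocation) register to every CPU-generated logical address; the limit
# register guards the bounds of the process's address space.

def mmu_translate(logical: int, base: int, limit: int) -> int:
    """Translate a logical address to a physical address."""
    if not 0 <= logical < limit:
        raise MemoryError(f"trap: logical address {logical} out of range")
    return base + logical

# A process loaded at physical address 14000 with a 3000-byte logical space:
print(mmu_translate(346, base=14000, limit=3000))   # 14346
```

A logical address outside the limit causes a trap to the operating system, which is how memory protection is enforced.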
Static and Dynamic Loading
Loading a process into the main memory is done by a loader. There are two different types of loading
:
 Static Loading: Static Loading is basically loading the entire program into a fixed address. It
requires more memory space.
 Dynamic Loading: Without dynamic loading, the entire program and all data of a process must be in physical memory for the process to execute, so the size of a process is limited to the size of physical memory. To obtain better memory utilization, dynamic loading is used: a routine is not loaded until it is called. All routines reside on disk in a relocatable load format. The advantage of dynamic loading is that an unused routine is never loaded, which is especially useful when large amounts of code are needed to handle infrequently occurring cases.
Static and Dynamic Linking
To perform a linking task a linker is used. A linker is a program that takes one or more object files
generated by a compiler and combines them into a single executable file.
 Static Linking: In static linking, the linker combines all necessary program modules into a single
executable program. So there is no runtime dependency. Some operating systems support only
static linking, in which system language libraries are treated like any other object module.
 Dynamic Linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic linking, a "stub" is included for each library routine reference. A stub is a small piece of code. When the stub is executed, it checks whether the needed routine is already in memory; if not, the program loads the routine into memory.
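As a loose analogy (the names and mechanism here are hypothetical, not a real linker implementation), a stub can be modeled as a small function that loads the real routine only on its first call:

```python
# Illustrative sketch of a dynamic-linking "stub": resolve the real routine
# on first call, cache it, and dispatch to it on every later call.

import importlib

_loaded = {}  # routines already brought into memory

def make_stub(module_name: str, func_name: str):
    def stub(*args, **kwargs):
        key = (module_name, func_name)
        if key not in _loaded:                         # routine not yet in memory?
            module = importlib.import_module(module_name)
            _loaded[key] = getattr(module, func_name)  # "load" it exactly once
        return _loaded[key](*args, **kwargs)           # jump to the real routine
    return stub

sqrt = make_stub("math", "sqrt")  # nothing loaded yet
print(sqrt(16.0))                 # first call triggers the load -> 4.0
```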


4.2 Swapping:
When a process is executed, it must reside in main memory. Swapping is the process of temporarily moving a process from main memory to secondary memory (the backing store) and later bringing it back; main memory is fast compared to secondary memory. Swapping allows more processes to be run than can fit into memory at one time. The major cost of swapping is transfer time, and the total transfer time is directly proportional to the amount of memory swapped. Swapping is also known as roll-out, roll-in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process, then load and execute the higher-priority process. When the higher-priority process finishes, the lower-priority process is swapped back into memory and continues execution.
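Because transfer time dominates, the swap cost can be estimated from the process size and the backing-store transfer rate. The numbers below are illustrative, not from the notes:

```python
# Illustrative estimate: time to swap a process out and back in.
# Assumed numbers: a 100 MB process and a 50 MB/s backing store.

def swap_time_seconds(process_mb: float, transfer_rate_mb_s: float) -> float:
    one_way = process_mb / transfer_rate_mb_s  # transfer time ~ memory swapped
    return 2 * one_way                          # swap out + swap back in

print(swap_time_seconds(100, 50))  # 4.0 seconds total
```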


Benefits of Swapping
Here are the major benefits of swapping:

 It offers a higher degree of multiprogramming.
 It allows dynamic relocation: if address binding is done at execution time, a swapped-out process can be swapped back into a different location; with compile-time or load-time binding, the process must be moved back to the same location.
 It helps achieve better utilization of memory.
 It minimizes wastage of CPU time, so it can easily be applied to priority-based scheduling to improve performance.


4.3 Contiguous Memory Allocation:


The main memory must accommodate both the operating system and the various user processes, so the allocation of memory is an important task of the operating system. Memory is usually divided into two partitions: one for the resident operating system and one for the user processes. We normally need several user processes to reside in memory simultaneously, so we must consider how to allocate available memory to the processes in the input queue waiting to be brought into memory. In contiguous memory allocation, each process is contained in a single contiguous section of memory.
Memory Allocation
To achieve proper memory utilization, memory must be allocated in an efficient manner. One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions, with each partition containing exactly one process; the degree of multiprogramming is then bounded by the number of partitions.
 Multiple partition allocation: In this method, a process is selected from the input queue and loaded into a free partition. When the process terminates, the partition becomes available for other processes.
 Variable partition allocation: In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory, known as a "hole". When a process arrives and needs memory, we search for a hole large enough to hold it. If one is found, we allocate only as much memory as is needed, keeping the rest available to satisfy future requests.
While allocating memory, the dynamic storage-allocation problem arises: how to satisfy a request of size n from a list of free holes. Common solutions to this problem are:

First Fit

In First Fit, the first free hole that is large enough to satisfy the process's request is allocated.

Here, in this diagram, a 40 KB memory block is the first available free hole that can store process A (size 25 KB), because the first two blocks do not have sufficient memory space.

Best Fit
In Best Fit, the smallest hole that is big enough for the process's request is allocated. This requires searching the entire list, unless the list is kept ordered by size.

Here, in this example, we traverse the complete list and find that the last hole, 25 KB, is the best-suited hole for process A (size 25 KB). Because leftover holes are kept as small as possible, memory utilization is high compared with the other allocation techniques.

Worst Fit
In Worst Fit, the largest available hole is allocated to the process. This method produces the largest leftover hole.


Here, in this example, process A (size 25 KB) is allocated to the largest available memory block, which is 60 KB. Inefficient memory utilization is the major issue with worst fit.
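The three placement strategies above can be sketched as simple searches over a list of free holes. The hole sizes match the running examples; the function names are illustrative:

```python
# Sketch (not from the notes): first/best/worst fit over a list of free holes.
# Each function returns the index of the chosen hole, or None if no hole fits.

def first_fit(holes, size):
    # First hole large enough, scanning from the start.
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    # Smallest hole that is still large enough.
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # Largest available hole (leaves the largest leftover).
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [10, 20, 40, 60, 25]   # KB; first two are too small for process A
print(first_fit(holes, 25))    # 2 -> the 40 KB hole
print(best_fit(holes, 25))     # 4 -> the 25 KB hole
print(worst_fit(holes, 25))    # 3 -> the 60 KB hole
```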

Difference between Contiguous and Non-contiguous Memory Allocation:

| S.No. | Contiguous Memory Allocation | Non-Contiguous Memory Allocation |
|---|---|---|
| 1 | Allocates consecutive blocks of memory to a file/process. | Allocates separate (scattered) blocks of memory to a file/process. |
| 2 | Faster in execution. | Slower in execution. |
| 3 | Easier for the OS to control. | More difficult for the OS to control. |
| 4 | Overhead is minimal, as few address translations are needed while executing a process. | More overhead, as there are more address translations. |
| 5 | Both internal and external fragmentation can occur. | External fragmentation is avoided; internal fragmentation can still occur (e.g., in the last page under paging). |
| 6 | Includes single-partition and multi-partition allocation. | Includes paging and segmentation. |
| 7 | Memory wastage is common. | Little memory wastage. |
| 8 | Swapped-in processes are placed back in the originally allocated space. | Swapped-in processes can be placed anywhere in memory. |
| 9 | Two types: 1. Fixed (static) partitioning, 2. Dynamic partitioning. | Five types: 1. Paging, 2. Multilevel paging, 3. Inverted paging, 4. Segmentation, 5. Segmented paging. |
| 10 | Can be visualized and implemented using arrays. | Can be implemented using linked lists. |
| 11 | Degree of multiprogramming is fixed (fixed partitions). | Degree of multiprogramming is not fixed. |

Fragmentation
Fragmentation occurs as processes are loaded into and removed from memory: the free memory space gets broken into small holes. These holes often cannot be assigned to new processes, either because they are not combined or because individually they do not satisfy a process's memory requirement. To maintain a good degree of multiprogramming, we must reduce this wasted memory. Operating systems experience two types of fragmentation:
1. Internal fragmentation: Internal fragmentation occurs when the memory block allocated to a process is larger than its requested size. The unused space left over inside the allocated block creates the internal fragmentation problem. Example: Suppose fixed partitioning is used and memory has free blocks of 3MB, 6MB, and 7MB. A new process p4 of size 2MB arrives and demands a block of memory. It gets the 3MB block, but 1MB of that block is wasted and cannot be allocated to any other process. This is called internal fragmentation.
2. External fragmentation: In external fragmentation, there is free memory available, but it cannot be assigned to a process because the free blocks are not contiguous. Example: Continuing the above example, suppose three processes p1, p2, and p3 arrive with sizes 2MB, 4MB, and 7MB respectively, and are allocated the memory blocks of 3MB, 6MB, and 7MB respectively. The allocations of p1 and p2 leave 1MB and 2MB unused, respectively. Now suppose a new process p4 arrives and demands a 3MB block of memory. That much free space is available in total, but we cannot assign it because the free space is not contiguous. This is called external fragmentation.
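The two examples above can be checked with a few lines of arithmetic (sizes taken directly from the examples, in MB):

```python
# Sketch of the running example (sizes from the notes, in MB).
blocks = [3, 6, 7]            # fixed partitions in memory
procs  = [2, 4, 7]            # p1, p2, p3 request sizes

# Internal fragmentation: unused space inside each allocated partition.
internal = [b - p for b, p in zip(blocks, procs)]
print(internal)               # [1, 2, 0] -> 3 MB wasted inside partitions
print(sum(internal))          # 3

# External fragmentation: p4 needs 3 MB; total free space is 3 MB, but no
# single contiguous hole is >= 3 MB, so the request cannot be satisfied.
p4 = 3
print(sum(internal) >= p4)               # True  (enough in total)
print(any(h >= p4 for h in internal))    # False (no contiguous hole)
```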


4.4 Paging:
Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. The process of retrieving pages of a process from secondary storage into main memory is known as paging. The basic idea is to divide each process into pages, while main memory is divided into frames. This scheme permits the physical address space of a process to be non-contiguous.

In paging, the physical memory is divided into fixed-size blocks called page frames, which are the
same size as the pages used by the process. The process’s logical address space is also divided into
fixed-size blocks called pages, which are the same size as the page frames. When a process requests
memory, the operating system allocates one or more page frames to the process and maps the
process’s logical pages to the physical page frames.

The mapping between logical pages and physical page frames is maintained by the page table, which
is used by the memory management unit to translate logical addresses into physical addresses. The
page table maps each logical page number to a physical page frame number.

In a paging scheme, the logical address space is split into fixed-length pages, and each page is mapped to a corresponding frame in the physical address space. The operating system keeps a page table for every process, which maps the process's logical addresses to the corresponding physical addresses. When a process accesses memory, the CPU generates a logical address, which is translated to a physical address using the page table. The memory controller then uses the physical address to access memory.
 Logical Address or Virtual Address: An address generated by the CPU and used by a process to access memory. It is called logical or virtual because it is not a physical location in memory, but a reference to a location within the process's logical address space.
 Logical Address Space or Virtual Address Space: The set of all logical addresses generated by a program. It is normally measured in words or bytes and is split into fixed-length pages in a paging scheme.
 Physical Address: An address that corresponds to a physical location in memory. It is the actual address available on the memory unit and is used by the memory controller to access memory.
 Physical Address Space: The set of all physical addresses that correspond to the logical addresses in the process's logical address space. It is usually measured in words or bytes and is split into fixed-size frames in a paging scheme.
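The translation described above can be sketched as follows; the page size and page-table contents are assumed for illustration:

```python
# Sketch (not from the notes): translating a logical address under paging.
# Assumed: 1 KB pages, and a small page table mapping page -> frame.

PAGE_SIZE = 1024                      # bytes
page_table = {0: 5, 1: 2, 2: 7}      # hypothetical page -> frame numbers

def translate(logical: int) -> int:
    page, offset = divmod(logical, PAGE_SIZE)  # split into (page, offset)
    if page not in page_table:
        raise MemoryError(f"page fault: page {page} not in memory")
    frame = page_table[page]
    return frame * PAGE_SIZE + offset          # physical address

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2*1024 + 6 = 2054
```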


4.5 Structure of Page Table:


The data structure used by the virtual memory system of an operating system to store the mapping between logical and physical addresses is known as the page table. The logical address generated by the CPU is translated into a physical address with the help of the page table.

 Thus, the page table mainly provides the corresponding frame number (the base address of the frame) where each page is stored in main memory.

The diagram above shows the paging model of physical and logical memory.

Characteristics of the Page Table

Some of the characteristics of the Page Table are as follows:

 It is stored in the main memory.


 Generally, the number of entries in the page table equals the number of pages into which the process is divided.

 PTBR (page-table base register) holds the base address of the page table of the current process.
 Each process has its own independent page table.

Techniques used for Structuring the Page Table

Some of the common techniques that are used for structuring the Page table are as follows:

1. Hierarchical Paging
2. Hashed Page Tables
3. Inverted Page Tables

Let us cover these techniques one by one;

Hierarchical Paging

Another name for Hierarchical Paging is multilevel paging.

 There might be a case where the page table is too big to fit in a contiguous space, so we may have a hierarchy with several levels.
 In this type of paging, the logical address space is broken up into multiple page tables.

Hierarchical paging is one of the simplest techniques; for this purpose, a two-level or a three-level page table can be used.

Two Level Page Table

Consider a system having 32-bit logical address space and a page size of 1 KB and it is further divided
into:

 Page Number consisting of 22 bits.


 Page Offset consisting of 10 bits.

Since the page table is itself paged, the 22-bit page number is further divided into:

 An outer page number (P1) consisting of 12 bits.
 An inner page number (P2) consisting of 10 bits.

Thus the logical address is split as P1 (12 bits) | P2 (10 bits) | page offset (10 bits).


In the logical address above,

P1 is an index into the outer page table.

P2 is the displacement within the page of the inner page table.

Because address translation works from the outer page table inward, this is known as a forward-mapped page table.

The figure below shows the address-translation scheme for a two-level page table.
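A minimal sketch of the two-level translation, using the 12/10/10-bit split described above (the table contents are hypothetical):

```python
# Sketch (not from the notes): two-level translation for a 32-bit address
# with 1 KB pages: P1 = 12 bits, P2 = 10 bits, offset = 10 bits.

# Hypothetical tables: outer[p1] -> inner table; inner[p2] -> frame number.
inner_0 = {3: 42}                 # page (p1=0, p2=3) lives in frame 42
outer   = {0: inner_0}

def translate2(logical: int) -> int:
    p1     = logical >> 22                 # top 12 bits
    p2     = (logical >> 10) & 0x3FF       # next 10 bits
    offset = logical & 0x3FF               # low 10 bits
    frame  = outer[p1][p2]                 # two table lookups
    return (frame << 10) | offset          # frame * page size + offset

addr = (0 << 22) | (3 << 10) | 17          # p1=0, p2=3, offset=17
print(translate2(addr))                    # 42*1024 + 17 = 43025
```

Note the cost of the hierarchy: each extra level adds one more memory access per translation, which is why a TLB is used in practice.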

Three-Level Page Table

For a system with a 64-bit logical address space, a two-level paging scheme is no longer appropriate. Suppose the page size in this case is 4 KB. Even with a two-level scheme, the outer page table would still be far too large. To avoid such a large table, the solution is to divide the outer page table further, resulting in a three-level page table.


Hashed Page Tables

This approach is used to handle address spaces that are larger than 32 bits.

 In this scheme, the virtual page number is hashed into a page table.
 Each entry of this page table contains a chain of elements that hash to the same location.

Each element mainly consists of :

1. The virtual page number


2. The value of the mapped page frame.
3. A pointer to the next element in the linked list.

The figure below shows the address translation scheme of the hashed page table.

The virtual page numbers in the chain are compared while searching for a match; if a match is found, the corresponding physical frame is extracted.
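The chain search described above can be sketched with a small hash table; the table size and entries are illustrative:

```python
# Sketch (not from the notes): a hashed page table with chaining.
# Each bucket holds (virtual_page, frame) pairs that hash to the same slot.

TABLE_SIZE = 8
buckets = [[] for _ in range(TABLE_SIZE)]   # chains of (vpn, frame)

def insert(vpn: int, frame: int) -> None:
    buckets[vpn % TABLE_SIZE].append((vpn, frame))

def lookup(vpn: int):
    for v, frame in buckets[vpn % TABLE_SIZE]:  # walk the chain
        if v == vpn:                            # compare virtual page numbers
            return frame                        # match: extract the frame
    return None                                 # no match: page fault

insert(3, 10)
insert(11, 20)          # 11 % 8 == 3: collides with vpn 3, so it is chained
print(lookup(11))       # 20
print(lookup(5))        # None
```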

For 64-bit address spaces, a variation of this scheme uses clustered page tables.

Clustered Page Tables

 These are similar to hashed page tables, except that each entry refers to several pages (for example, 16) rather than one.
 They are mainly used for sparse address spaces, where memory references are non-contiguous and scattered.

Inverted Page Tables

The inverted page table combines a page table and a frame table into a single data structure.

 There is one entry for each real frame of memory.
 Each entry consists of the virtual address of the page stored in that real memory location, along with information about the process that owns the page.
 Although this technique decreases the memory needed to store the page tables, it increases the time needed to search the table whenever a page reference occurs.

The figure below shows the address translation scheme of the inverted page table.

In this scheme, we need to keep track of the process id in each entry, because many processes may use the same logical addresses.

Also, many entries can map into the same index in the page table after going through the hash
function. Thus chaining is used in order to handle this.
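A minimal sketch of an inverted page table lookup (the entries are illustrative); note that the frame number is the position of the matching entry, which is why the search is needed:

```python
# Sketch (not from the notes): an inverted page table, one entry per frame.
# Each entry records which (pid, virtual_page) currently occupies the frame.

inverted = [               # index = frame number
    ("p1", 0),             # frame 0 holds page 0 of process p1
    ("p2", 0),             # frame 1 holds page 0 of process p2
    ("p1", 7),             # frame 2 holds page 7 of process p1
]

def lookup(pid: str, vpn: int):
    # Linear search: the frame number is the *position* of the match.
    for frame, entry in enumerate(inverted):
        if entry == (pid, vpn):
            return frame
    return None            # no entry: page fault

print(lookup("p1", 7))     # 2
print(lookup("p2", 7))     # None -> same vpn under another pid is distinct
```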


4.6 Segmentation:
In segmentation, a process is divided into segments: the chunks into which a program is divided, which are not necessarily all the same size. Segmentation supports the user's view of the process, which paging does not; this user's view is mapped onto physical memory.
Types of Segmentation in Operating System
 Virtual Memory Segmentation: Each process is divided into a number of segments, but the
segmentation is not done all at once. This segmentation may or may not take place at the run
time of the program.
 Simple Segmentation: Each process is divided into a number of segments, all of which are
loaded into memory at run time, though not necessarily contiguously.
There is no simple relationship between logical addresses and physical addresses in segmentation. A
table stores the information about all such segments and is called Segment Table.
What is a Segment Table?
It maps a two-dimensional logical address (segment number, offset) into a one-dimensional physical address. Each table entry has:
 Base Address: the starting physical address where the segment resides in memory.
 Segment Limit: the length of the segment.
The address generated by the CPU is divided into:
 Segment number (s): the bits that select the segment.
 Segment offset (d): the bits that give the displacement within the segment.
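Segment-table translation can be sketched as follows; the base/limit values are illustrative:

```python
# Sketch (not from the notes): segment-table address translation.
# Each entry is (base, limit); the offset must be < limit, else a trap.

segment_table = [
    (1400, 1000),   # segment 0: base 1400, limit 1000
    (6300, 400),    # segment 1
    (4300, 1100),   # segment 2
]

def translate(segment: int, offset: int) -> int:
    base, limit = segment_table[segment]
    if offset >= limit:                    # protection check
        raise MemoryError("trap: segment offset out of range")
    return base + offset

print(translate(2, 53))    # 4300 + 53 = 4353
```

The limit check is what gives segmentation its per-segment protection: an out-of-range offset traps to the operating system instead of touching another segment's memory.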

Advantages of Segmentation in Operating System


 No Internal fragmentation.
 Segment Table consumes less space in comparison to Page table in paging.
 As a complete module is loaded all at once, segmentation can improve CPU utilization.
 Segmentation matches the user's view of memory: users can divide programs into modules, and each module becomes a separate segment.
 The user specifies the segment size, whereas in paging the hardware determines the page size.
 Segmentation can be used to separate data from security-sensitive operations.
 Flexibility: Segmentation provides a higher degree of flexibility than paging. Segments can be of
variable size, and processes can be designed to have multiple segments, allowing for more fine-
grained memory allocation.
 Sharing: Segmentation allows for sharing of memory segments between processes. This can be
useful for inter-process communication or for sharing code libraries.
 Protection: Segmentation provides a level of protection between segments, preventing one
process from accessing or modifying another process’s memory segment. This can help increase
the security and stability of the system.


Disadvantages of Segmentation in Operating System


 As processes are loaded into and removed from memory, the free memory space is broken into little pieces, causing external fragmentation.
 Overhead is associated with keeping a segment table for each process.
 Access time to fetch an instruction increases due to the need for two memory accesses: one for the segment table and one for main memory.
 Fragmentation: As mentioned, segmentation can lead to external fragmentation as memory
becomes divided into smaller segments. This can lead to wasted memory and decreased
performance.
 Overhead: Using a segment table can increase overhead and reduce performance. Each segment
table entry requires additional memory, and accessing the table to retrieve memory locations
can increase the time needed for memory operations.
 Complexity: Segmentation can be more complex to implement and manage than paging. In
particular, managing multiple segments per process can be challenging, and the potential for
segmentation faults can increase as a result.


4.7 Virtual Memory:


Virtual memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory. The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program-generated addresses are translated automatically to the corresponding machine addresses. The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main-storage locations.
Virtual memory is a technique implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, onto physical addresses in computer memory.
1. All memory references within a process are logical addresses that are dynamically translated
into physical addresses at run time. This means that a process can be swapped in and out of the
main memory such that it occupies different places in the main memory at different times
during the course of execution.
2. A process may be broken into a number of pieces and these pieces need not be continuously
located in the main memory during execution. The combination of dynamic run-time address
translation and the use of a page or segment table permits this.
If these two characteristics are present, then it is not necessary for all the pages or segments of a process to be in main memory during execution: each page is loaded only when it is required. Virtual memory is commonly implemented using demand paging or demand segmentation.
Advantages of Virtual Memory
 More processes may be maintained in the main memory: Because we are going to load only
some of the pages of any particular process, there is room for more processes. This leads to
more efficient utilization of the processor because it is more likely that at least one of the more
numerous processes will be in the ready state at any particular time.
 A process may be larger than all of the main memory: One of the most fundamental restrictions
in programming is lifted. A process larger than the main memory can be executed because of
demand paging. The OS itself loads pages of a process in the main memory as required.
 It allows greater multiprogramming levels by using less of the available (primary) memory for each process.
 The virtual address space can be larger than physical memory.
 It makes it possible to run more applications at once.
 Users are spared from having to add memory modules when RAM space runs out, and applications are freed from managing shared memory themselves.
 Programs can begin executing faster, because only a portion of a program is required for execution to start.
 Memory isolation increases security.
 It makes it possible for several larger applications to run at once.
 Memory allocation is comparatively cheap.
 It avoids external fragmentation.
 It is efficient for managing logical-partition workloads on the CPU.

 Automatic data movement is possible.
Disadvantages of Virtual Memory
 It can slow down the system performance, as data needs to be constantly transferred between
the physical memory and the hard disk.
 It can increase the risk of data loss or corruption, as data can be lost if the hard disk fails or if
there is a power outage while data is being transferred to or from the hard disk.
 It can increase the complexity of the memory management system, as the operating system
needs to manage both physical and virtual memory.


4.8 Demand Paging:


Demand paging can be described as a memory management technique that is used in operating
systems to improve memory usage and system performance. Demand paging is a technique used in
virtual memory systems where pages enter main memory only when requested or needed by the CPU.
In demand paging, the operating system loads only the necessary pages of a program into memory at
runtime, instead of loading the entire program into memory at the start.
A page fault occurs when the program needs to access a page that is not currently in memory. The operating system then loads the required page from disk into memory and updates the page tables accordingly. This process is transparent to the running program, which continues to run as if the page had always been in memory.
Pure Demand Paging
Pure demand paging is a specific implementation of demand paging in which the operating system loads a page into memory only when the program needs it: no pages are loaded when the program starts, and all pages are initially marked as being on disk.
Benefits of the Demand Paging
So in the Demand Paging technique, there are some benefits that provide efficiency of the operating
system.
 Efficient use of physical memory: Demand paging allows more efficient use of memory because only the necessary pages are loaded at any given time.
 Support for larger programs: Programs can be larger than the physical memory available on the system, because only the necessary pages are loaded into memory.
 Faster program start: Because only part of a program is initially loaded, programs can start faster than if the entire program were loaded at once.
 Reduced memory usage: Demand paging can reduce the amount of memory a program needs, which can improve system performance by reducing the amount of disk I/O required.
Drawbacks of the Demand Paging
 Page fault overhead: The process of swapping pages between memory and disk causes a performance overhead, especially if the program frequently accesses pages that are not currently in memory.
 Degraded performance: If a program frequently accesses pages that are not in memory, the system spends much of its time swapping pages, which degrades performance.
 Fragmentation: Demand paging can cause physical memory fragmentation, degrading system performance over time.
 Complexity: Implementing demand paging in an operating system is complex, requiring careful algorithms and data structures to manage page tables and swap space.

Working Process of Demand Paging
Let us understand this with the help of an example. Suppose we want to run a process P which has
four pages P0, P1, P2, and P3. Currently, the page table holds pages P1 and P3.

[Figure: Demand Paging]

The following steps are carried out in the working process of demand paging in the operating
system.
 Program Execution: When a program starts, the operating system creates a process for the
program and allocates a portion of memory to the process.
 Creating page tables: The operating system creates page tables for processes, which track which
program pages are currently in memory and which are on disk.
 Page fault handling: A page fault occurs when the program attempts to access a page that is
not currently in memory. The operating system interrupts the program and checks the page table
to see if the required page is on disk.

 Page Fetch: If the required page is on disk, the operating system fetches the page from the disk
and loads it into memory.
The page table is then updated to reflect the page’s new location in memory.
 Resuming the program: Once the required pages have been loaded into memory, the operating
system resumes execution of the program where it left off. The program continues to run as if the
page had always been in memory.
 Page replacement: If there is not enough free memory to hold all the pages a program needs, the
operating system may need to replace one or more pages currently in memory with pages that are
currently on disk. The page replacement algorithm used by the operating system determines which
pages are selected for replacement.
 Page cleanup: When a process terminates, the operating system frees the memory allocated to
the process and cleans up the corresponding entries in the page tables.
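The steps above can be sketched as a toy page-table walk in Python, where every page starts "on disk" and is loaded only on first access (pure demand paging). The class and method names here are illustrative teaching aids, not a real OS API.

```python
class PageTable:
    def __init__(self, npages):
        self.present = [False] * npages   # valid/invalid bit per page
        self.faults = 0

    def access(self, page):
        if not self.present[page]:        # page fault: trap to the OS
            self.faults += 1
            self.load_from_disk(page)     # fetch the page, then resume
        # the program continues as if the page had always been in memory

    def load_from_disk(self, page):
        self.present[page] = True         # page fetched; table updated

pt = PageTable(4)                         # process P with pages P0..P3
for p in [1, 3, 1, 3, 0]:
    pt.access(p)
print(pt.faults)                          # 3: first touch of P1, P3, P0
```

Only the first reference to each page faults; repeated references find the present bit set and proceed without OS involvement.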


4.9 Copy-on-Write:
Copy-on-Write, or simply COW, is a resource management technique. One of its main uses is in the
implementation of the fork system call, which shares the virtual memory (pages) of the OS.
In UNIX-like operating systems, the fork() system call creates a duplicate of the parent process,
called the child process.
The idea behind copy-on-write is that when a parent process creates a child process, both processes
initially share the same pages in memory, and these shared pages are marked as copy-on-write. If
either process tries to modify a shared page, a copy of that page is created and the modification is
made on the copy by that process, leaving the other process unaffected.
Suppose a process P creates a new process Q, and then process P modifies page 3. The figures below
show what happens before and after process P modifies page 3.
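The user-visible behavior can be observed on a UNIX-like system with fork(). In this sketch the child writes to shared data (the example in the text has the parent write; the mechanism is symmetric): the write is applied to the child's private copy of the page, and the parent's view is unchanged. The bytearray stands in for "page 3".

```python
import os

data = bytearray(b"page3")      # pretend this lives in a shared COW page

pid = os.fork()                 # parent P creates child Q
if pid == 0:                    # child process Q
    data[0:5] = b"PAGE3"        # kernel copies the page before the write
    os._exit(0)                 # exit without running the parent's code
os.waitpid(pid, 0)
print(data)                     # parent P still sees bytearray(b'page3')
```

This requires a UNIX-like OS, since os.fork is not available on Windows.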


4.10 Page Replacement:


In an operating system that uses paging for memory management, a page replacement algorithm is
needed to decide which page needs to be replaced when a new page comes in.
Page Fault: A page fault happens when a running program accesses a memory page that is mapped
into the virtual address space but not loaded in physical memory. Since actual physical memory is
much smaller than virtual memory, page faults happen. In case of a page fault, the operating system
might have to replace one of the existing pages with the newly needed page. Different page
replacement algorithms suggest different ways to decide which page to replace. The goal of all
algorithms is to minimize the number of page faults.

Page Replacement Algorithms:

1. First In First Out (FIFO): This is the simplest page replacement algorithm. The operating system
keeps all pages in memory in a queue, with the oldest page at the front. When a page needs to be
replaced, the page at the front of the queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number
of page faults.

Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 Page
Faults.
When 3 comes, it is already in memory, so —> 0 Page Faults. Then 5 comes; it is not in memory, so
it replaces the oldest page, i.e. 1 —> 1 Page Fault. 6 comes; it is also not in memory, so it replaces
the oldest page, i.e. 3 —> 1 Page Fault. Finally, when 3 comes, it is not in memory, so it replaces 0
—> 1 Page Fault. Total: 6 page faults.
Belady’s anomaly shows that it is possible to get more page faults when increasing the number of
page frames while using the First In First Out (FIFO) page replacement algorithm. For example, for
the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 with 3 frames we get 9 total page faults, but if
we increase the number of frames to 4, we get 10 page faults.
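The FIFO policy can be checked with a short simulator (a teaching sketch, not OS code). It reproduces the 6 faults of Example 1 and the 9-versus-10 counts of Belady's anomaly.

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement for a reference string."""
    memory = deque()                  # oldest page sits at the left
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()      # evict the oldest page
            memory.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))           # 6, as in Example 1
belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))  # 9 10 (Belady's anomaly)
```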
2. Optimal Page Replacement: In this algorithm, the page that will not be used for the longest
duration of time in the future is replaced.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find the number of page faults.

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page
Faults.
0 is already there, so —> 0 Page Fault. When 3 comes, it takes the place of 7 because 7 is not used
for the longest duration of time in the future —> 1 Page Fault. 0 is already there, so —> 0 Page
Fault. 4 takes the place of 1 —> 1 Page Fault.
For the remaining page references —> 0 Page Faults, because those pages are already in memory.
Total: 6 page faults.
Optimal page replacement is perfect, but not possible in practice, because the operating system
cannot know future requests. Its use is to set a benchmark so that other replacement algorithms
can be analyzed against it.
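In an exercise the full reference string is known in advance, so the Optimal policy can be simulated: on a fault with full frames, evict the resident page whose next use lies farthest in the future (or that is never used again). This sketch reproduces the count of Example 2.

```python
def optimal_faults(refs, frames):
    """Count page faults under Optimal (farthest-future-use) replacement."""
    memory = []
    faults = 0
    for i, page in enumerate(refs):
        if page in memory:
            continue                           # hit: nothing to do
        faults += 1
        if len(memory) < frames:
            memory.append(page)                # free frame available
            continue
        # Evict the page whose next use is farthest away (or never occurs).
        future = refs[i + 1:]
        victim = max(memory,
                     key=lambda p: future.index(p) if p in future else len(future))
        memory[memory.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))                 # 6, as in Example 2
```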
3. Least Recently Used (LRU): In this algorithm, the page that has been least recently used is
replaced.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames.
Find the number of page faults.


Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page
Faults.
0 is already there, so —> 0 Page Fault. When 3 comes, it takes the place of 7 because 7 is the least
recently used —> 1 Page Fault.
0 is already in memory, so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
For the remaining page references —> 0 Page Faults, because those pages are already in memory.
Total: 6 page faults.
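A minimal LRU simulator keeps the most recently used page at the end of a list and evicts from the front; it reproduces the count of Example 3 (a teaching sketch, not OS code).

```python
def lru_faults(refs, frames):
    """Count page faults under LRU replacement for a reference string."""
    memory = []                   # most recently used page at the end
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)   # hit: refresh this page's recency
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)     # evict the least recently used page
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_faults(refs, 4))        # 6, as in Example 3
```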
4. Most Recently Used (MRU): In this algorithm, the page that has been used most recently is
replaced. Belady’s anomaly can occur in this algorithm.

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page
Faults.
0 is already there, so —> 0 Page Fault.
When 3 comes, it takes the place of 0 because 0 is the most recently used —> 1 Page Fault.
When 0 comes, it takes the place of 3 —> 1 Page Fault.

When 4 comes, it takes the place of 0 —> 1 Page Fault.
2 is already in memory, so —> 0 Page Fault.
When 3 comes, it takes the place of 2 —> 1 Page Fault.
When 0 comes, it takes the place of 3 —> 1 Page Fault.
When 3 comes, it takes the place of 0 —> 1 Page Fault.
When 2 comes, it takes the place of 3 —> 1 Page Fault.
When 3 comes, it takes the place of 2 —> 1 Page Fault. Total: 12 page faults.
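The MRU trace above can be checked with a short simulator that evicts the page used most recently (a teaching sketch, not OS code); it yields 12 faults for the same reference string and 4 frames.

```python
def mru_faults(refs, frames):
    """Count page faults under MRU replacement for a reference string."""
    memory = []                   # most recently used page at the end
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)   # hit: this page becomes the MRU
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop()      # evict the MOST recently used page
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(mru_faults(refs, 4))        # 12, matching the trace above
```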


4.11 Allocation of Frames:


Virtual memory, an important aspect of operating systems, is implemented using demand paging.
Demand paging requires both a page-replacement algorithm and a frame allocation algorithm.
Frame allocation algorithms are used when there are multiple processes; they decide how many
frames to allocate to each process.
There are various constraints to the strategies for the allocation of frames:
 You cannot allocate more than the total number of available frames.
 At least a minimum number of frames should be allocated to each process. This constraint is
supported by two reasons. First, when fewer frames are allocated, the page fault ratio increases,
decreasing the performance of the executing process. Second, there should be enough frames to
hold all the different pages that any single instruction can reference.
Frame allocation algorithms –
The two algorithms commonly used to allocate frames to a process are:
1. Equal allocation: In a system with x frames and y processes, each process gets an equal number
of frames, i.e. x/y (rounded down). For instance, if the system has 48 frames and 9 processes,
each process will get 5 frames. The three frames not allocated to any process can be used as a
free-frame buffer pool.
 Disadvantage: In systems with processes of varying sizes, it does not make much sense to
give each process equal frames. Allocation of a large number of frames to a small process
will eventually lead to the wastage of a large number of allocated unused frames.
2. Proportional allocation: Frames are allocated to each process according to the process size.
For a process pi of size si, the number of allocated frames is ai = (si/S)*m, where S is the sum of
the sizes of all the processes and m is the number of frames in the system. For instance, in a
system with 62 frames, if there is a process of 10KB and another process of 127KB, then the first
process will be allocated (10/137)*62 = 4 frames and the other process will get (127/137)*62 =
57 frames.
 Advantage: All the processes share the available frames according to their needs, rather
than equally.
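Both strategies, with the numbers from the examples above, can be sketched in a few lines; integer division models the rounding down used in the text.

```python
def equal_allocation(m, n):
    """Split m frames equally among n processes; leftovers form a free pool."""
    share = m // n
    return [share] * n, m - share * n          # per-process frames, pool size

def proportional_allocation(m, sizes):
    """a_i = (s_i / S) * m, truncated to an integer, with S = sum of sizes."""
    S = sum(sizes)
    return [(s * m) // S for s in sizes]

print(equal_allocation(48, 9))                 # 5 frames each, 3 in the pool
print(proportional_allocation(62, [10, 127]))  # [4, 57]
```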
Global vs Local Allocation –
The number of frames allocated to a process can also dynamically change depending on whether
you have used global replacement or local replacement for replacing pages in case of a page fault.
1. Local replacement: When a process needs a page which is not in the memory, it can bring in the
new page and allocate it a frame from its own set of allocated frames only.
 Advantage: The set of pages in memory for a particular process and its page fault ratio are
affected by the paging behavior of only that process.
 Disadvantage: A low priority process may hinder a high priority process by not making its
frames available to the high priority process.

2. Global replacement: When a process needs a page which is not in the memory, it can bring in
the new page and allocate it a frame from the set of all frames, even if that frame is currently
allocated to some other process; that is, one process can take a frame from another.
 Advantage: Does not hinder the performance of processes and hence results in greater
system throughput.
 Disadvantage: The page fault ratio of a process cannot be controlled by the process alone; the
pages in memory for a process depend on the paging behavior of other processes as well.


4.12 Thrashing:
Thrashing is a condition in which the system spends a major portion of its time servicing page
faults, while the actual useful processing done is negligible.
Causes of thrashing:
1. High degree of multiprogramming.
2. Lack of frames.
3. Page replacement policy.
Thrashing’s Causes
Thrashing has a serious impact on the operating system’s execution performance. When CPU
utilization is low, the process scheduling mechanism tries to load multiple processes into memory
at the same time, increasing the degree of multiprogramming.
In this case, the number of processes in the memory exceeds the number of frames available in the
memory. Each process is given a set number of frames to work with.
If a high-priority process arrives in memory and the frame is not vacant at the moment, the other
process occupying the frame will be moved to secondary storage, and the free frame will be allotted
to a higher-priority process.
We may also say that as soon as the memory is full, processes begin to take a long time to swap in
their required pages. Because most of the processes are then waiting for pages, CPU utilization
drops again.
As a result, a high degree of multiprogramming and a lack of frames are the two most common
causes of thrashing in the operating system.

The basic concept involved is that if a process is allocated too few frames, then there will be too
many and too frequent page faults. As a result, no useful work would be done by the CPU and the

CPU utilization would fall drastically. The long-term scheduler would then try to improve the CPU
utilization by loading some more processes into the memory, thereby increasing the degree of
multiprogramming. This would result in a further decrease in CPU utilization, triggering a chain
reaction of higher page faults followed by an increase in the degree of multiprogramming, called
thrashing.
Locality Model –
A locality is a set of pages that are actively used together. The locality model states that as a process
executes, it moves from one locality to another. A program is generally composed of several
different localities which may overlap.
For example, when a function is called, it defines a new locality where memory references are made
to the instructions of the function, its local and global variables, etc. Similarly, when the
function is exited, the process leaves this locality.
Techniques to handle:
1. Working Set Model –
This model is based on the above-stated concept of the Locality Model.
The basic principle states that if we allocate enough frames to a process to accommodate its current
locality, it will fault only when it moves to a new locality. But if the allocated frames are fewer
than the size of the current locality, the process is bound to thrash.
According to this model, based on a parameter ‘A’, the working set is defined as the set of pages in
the most recent ‘A’ page references. Hence, all the actively used pages always end up being part of
the working set.
The accuracy of the working set depends on the value of the parameter A. If A is too large, the
working set may span several localities. On the other hand, for smaller values of A, the current
locality might not be covered entirely.

If D is the total demand for frames and WSSi is the working-set size for process i, then
D = Σ WSSi
Now, if ‘m’ is the number of frames available in the memory, there are 2 possibilities:
 (i) D > m, i.e. the total demand exceeds the number of available frames: thrashing will occur, as
some processes will not get enough frames.
 (ii) D <= m: there will be no thrashing.
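Under this model, the working set at time t with window A is simply the set of distinct pages among the last A references. The sketch below uses hypothetical reference strings for two processes and an assumed frame count m; the names are illustrative, not from the text.

```python
def working_set(refs, t, delta):
    """Pages referenced in the most recent `delta` references up to time t."""
    return set(refs[max(0, t - delta + 1): t + 1])

# Hypothetical reference strings for two processes, with window A = 4
p1 = [1, 2, 1, 5, 7, 7, 7, 7, 5, 1]
p2 = [3, 4, 3, 4, 3, 4, 3, 4, 3, 4]
wss = [len(working_set(r, len(r) - 1, 4)) for r in (p1, p2)]
D = sum(wss)                  # total demand for frames: D = sum of WSS_i
m = 8                         # frames available in memory (assumed)
print(wss, D, "thrashing" if D > m else "no thrashing")
```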
2. Page Fault Frequency –
A more direct approach to handling thrashing is the one that uses the Page-Fault Frequency
concept.


The problem associated with Thrashing is the high page fault rate and thus, the concept here is to
control the page fault rate.
If the page fault rate is too high, it indicates that the process has too few frames allocated to it. On
the contrary, a low page fault rate indicates that the process has too many frames.
Upper and lower limits can be established on the desired page fault rate as shown in the diagram.
If the page fault rate falls below the lower limit, frames can be removed from the process. Similarly,
if the page fault rate exceeds the upper limit, more frames can be allocated to the process.
In other words, the graphical state of the system should be kept limited to the rectangular region
formed in the given diagram.
Here too, if the page fault rate is high with no free frames, then some of the processes can be
suspended and frames allocated to them can be reallocated to other processes. The suspended
processes can then be restarted later.
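The control rule above can be sketched as a tiny function. The upper and lower fault-rate limits used here are illustrative values chosen for the sketch, not standard constants.

```python
def adjust_frames(fault_rate, frames, lower=0.02, upper=0.10):
    """Hypothetical PFF controller: grow on a high fault rate, shrink on low."""
    if fault_rate > upper:
        return frames + 1             # too many faults: allocate another frame
    if fault_rate < lower:
        return max(1, frames - 1)     # too few faults: reclaim a frame
    return frames                     # within the desired band: no change

print(adjust_frames(0.15, 4))   # 5
print(adjust_frames(0.01, 4))   # 3
print(adjust_frames(0.05, 4))   # 4
```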
