## CH 04 Notes-OS
In a multiprogramming computer, the operating system resides in one part of memory and the rest is used by multiple processes. The task of subdividing memory among the different processes is called memory management: the method by which the operating system manages the movement of data between main memory and disk during process execution. The main aim of memory management is efficient utilization of memory.
Memory management is required to:
- Allocate and de-allocate memory before and after process execution.
- Keep track of the memory space used by each process.
- Minimize fragmentation.
- Make proper use of main memory.
- Maintain data integrity while a process executes.
4.2 Swapping:
A process must reside in main memory while it executes. Swapping is the act of temporarily moving a process from main memory to secondary storage (main memory being much faster than secondary storage) and bringing it back later. Swapping allows more processes to be run than can fit into memory at one time. The main cost of swapping is transfer time, and the total transfer time is directly proportional to the amount of memory swapped. Swapping is also known as roll-out, roll-in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process and then load and execute the higher-priority one. After the higher-priority work finishes, the lower-priority process is swapped back into memory and continues execution.
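Since transfer time scales linearly with the amount of memory moved, the cost of a swap can be estimated directly. A minimal sketch in Python, assuming an illustrative 10 MB process image and a 50 MB/s transfer rate (both made-up numbers):

```python
# Back-of-the-envelope swap-time estimate.
# Assumed numbers for illustration: a 10 MB process image and a
# 50 MB/s disk transfer rate (both hypothetical).
process_size_mb = 10
transfer_rate_mb_per_s = 50

# Swapping a process out and later back in moves the image twice.
swap_out_s = process_size_mb / transfer_rate_mb_per_s
total_swap_s = 2 * swap_out_s

print(f"swap-out time: {swap_out_s * 1000:.0f} ms")   # 200 ms
print(f"round trip   : {total_swap_s * 1000:.0f} ms") # 400 ms
```

Doubling the process size doubles both figures, which is why swapping large processes wholesale is expensive.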
Benefits of Swapping
The major benefits of swapping are that it raises the degree of multiprogramming, allows the total size of the processes in the system to exceed physical memory, and keeps main memory occupied by ready processes rather than idle ones.
When allocating memory to a process in a partitioned scheme, the operating system must choose which free hole to satisfy the request from. The common strategies are First Fit, Best Fit, and Worst Fit.
First Fit
In First Fit, the allocator scans the list of holes from the beginning and allocates the first hole that is big enough. Here, in this diagram, a 40 KB memory block is the first available free hole that can store process A (size 25 KB), because the first two blocks did not have sufficient memory space.
Best Fit
In Best Fit, we allocate the smallest hole that is big enough for the process's requirements. For this, we must search the entire list, unless the list is ordered by size.
Here in this example, we first traverse the complete list and find that the last hole, 25 KB, is the best-suited hole for process A (size 25 KB). In this method, memory utilization is maximum compared to the other allocation techniques.
Worst Fit
In Worst Fit, we allocate the largest available hole to the process. This method produces the largest leftover hole.
Here in this example, process A (size 25 KB) is allocated to the largest available memory block, which is 60 KB. Inefficient memory utilization is the major issue with Worst Fit.
Memory allocation is broadly contiguous or non-contiguous. Contiguous allocation is of two types:
1. Fixed (or static) partitioning
2. Dynamic partitioning
Non-contiguous allocation is of five types:
1. Paging
2. Multilevel Paging
3. Inverted Paging
4. Segmentation
5. Segmented Paging
Fragmentation
Fragmentation occurs when processes are loaded into and removed from memory, leaving behind small free holes. These holes cannot be assigned to new processes because they are not combined or do not fulfill the memory requirement of the process. To achieve a high degree of multiprogramming, we must reduce this waste of memory. In operating systems there are two types of fragmentation:
1. Internal fragmentation: Internal fragmentation occurs when the memory block allocated to a process is larger than its requested size. The unused space left over inside the block creates the internal fragmentation problem. Example: Suppose fixed partitioning is used and memory contains blocks of sizes 3 MB, 6 MB, and 7 MB. Now a new process p4 of size 2 MB arrives and demands a block of memory. It gets the 3 MB block, but 1 MB of that block is wasted and cannot be allocated to any other process. This is called internal fragmentation.
2. External fragmentation: In external fragmentation, we have free memory blocks, but we cannot assign them to a process because the blocks are not contiguous. Example: Continuing the example above, suppose three processes p1, p2, and p3 of sizes 2 MB, 4 MB, and 7 MB are placed in the 3 MB, 6 MB, and 7 MB blocks. The leftover holes of 1 MB and 2 MB total 3 MB of free memory, yet a new 3 MB process cannot be loaded because no single contiguous hole is large enough.
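Both kinds of waste can be checked with a little arithmetic. The sketch below reuses the block sizes from the internal-fragmentation example; the placement of p1, p2, and p3 into the 3/6/7 MB blocks is an assumed continuation of that example:

```python
# Internal fragmentation: fixed blocks of 3, 6 and 7 MB; a 2 MB
# process placed in the 3 MB block wastes 1 MB inside that block.
block = 3
request = 2
internal_waste = block - request
print(internal_waste)  # 1 MB lost inside the allocated block

# External fragmentation: after p1 (2 MB), p2 (4 MB) and p3 (7 MB)
# occupy the 3/6/7 MB blocks, the leftover holes are 1 MB and 2 MB.
leftover_holes = [3 - 2, 6 - 4, 7 - 7]
total_free = sum(h for h in leftover_holes if h > 0)
largest_hole = max(leftover_holes)
print(total_free)      # 3 MB free in total...
print(largest_hole)    # ...but the largest single hole is only 2 MB,
                       # so a 3 MB request still cannot be satisfied.
```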
4.4 Paging:
Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. The process of retrieving processes in the form of pages from secondary storage into main memory is known as paging. The basic purpose of paging is to divide each process into pages; main memory, in turn, is split into frames. This scheme permits the physical address space of a process to be non-contiguous.
In paging, the physical memory is divided into fixed-size blocks called page frames, which are the
same size as the pages used by the process. The process’s logical address space is also divided into
fixed-size blocks called pages, which are the same size as the page frames. When a process requests
memory, the operating system allocates one or more page frames to the process and maps the
process’s logical pages to the physical page frames.
The mapping between logical pages and physical page frames is maintained by the page table, which
is used by the memory management unit to translate logical addresses into physical addresses. The
page table maps each logical page number to a physical page frame number.
In a paging scheme, the logical address space is split into fixed-length pages, and each page is mapped to a corresponding frame in the physical address space. The operating system keeps a page table for every process, which maps the process's logical addresses to the corresponding physical addresses. When a process accesses memory, the CPU generates a logical address, which is translated to a physical address using the page table. The memory controller then uses the physical address to access memory.
Logical Address or Virtual Address: An address generated by the CPU and used by a process to access memory. It is called logical or virtual because it is not a physical location in memory but a reference to a location within the process's logical address space.
Logical Address Space or Virtual Address Space: The set of all logical addresses generated by a program. It is normally measured in words or bytes and is split into fixed-length pages in a paging scheme.
Physical Address: An address that corresponds to an actual location in main memory. It is the address that reaches the memory unit and is used by the memory controller to access memory.
Physical Address Space: The set of all physical addresses that correspond to the logical addresses in a process's logical address space. It is usually measured in words or bytes and is split into fixed-size frames in a paging scheme.
Thus the page table mainly provides the corresponding frame number (the base address of the frame) where each page is stored in main memory.
The above diagram shows the paging model of Physical and logical memory.
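The translation the page table performs can be sketched in a few lines of Python; the page size and the page-to-frame mapping below are made-up illustrative values:

```python
PAGE_SIZE = 1024  # 1 KB pages (illustrative)

# A tiny per-process page table: key = page number, value = frame number.
# The mapping itself is invented for illustration.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Split a logical address into (page, offset) and map it to a
    physical address via the page table."""
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page]          # raises KeyError on an unmapped page
    return frame * PAGE_SIZE + offset

# Page 1, offset 100 -> frame 2, same offset.
print(translate(1 * PAGE_SIZE + 100))  # 2*1024 + 100 = 2148
```

The offset is carried over unchanged; only the page number is replaced by a frame number.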
Some of the common techniques that are used for structuring the Page table are as follows:
1. Hierarchical Paging
2. Hashed Page Tables
3. Inverted Page Tables
Hierarchical Paging
There might be a case where the page table is too big to fit in a contiguous space, so we may have a hierarchy with several levels.
In this type of paging, the logical address space is broken up into multiple page tables.
Hierarchical paging is one of the simplest techniques; for this purpose, a two-level or a three-level page table can be used.
Consider a system having a 32-bit logical address space and a page size of 1 KB. The address is divided into a 22-bit page number and a 10-bit page offset.
Since we page the page table itself, the page number is further divided into P1, an index into the outer page table, and P2, the displacement within the page of the inner page table.
As address translation works from the outer page table inward, this is known as a forward-mapped page table.
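The field split for this two-level scheme can be expressed with shifts and masks. A Python sketch, assuming the 12-bit P1 / 10-bit P2 / 10-bit offset division of a 32-bit address described above:

```python
# 32-bit logical address, 1 KB pages: 10-bit offset, 22-bit page number.
# The page number is split into a 12-bit outer index (P1) and a
# 10-bit inner index (P2).
OFFSET_BITS = 10
P2_BITS = 10
P1_BITS = 12

def split(addr):
    """Break a 32-bit logical address into (p1, p2, offset)."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    p2 = (addr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)
    p1 = (addr >> (OFFSET_BITS + P2_BITS)) & ((1 << P1_BITS) - 1)
    return p1, p2, offset

print(split(0xFFFFFFFF))  # (4095, 1023, 1023) -- every field at its maximum
print(split(1 << 20))     # (1, 0, 0) -- first entry of the second outer slot
```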
The figure below shows the address translation scheme for a two-level page table.
For a system with a 64-bit logical address space, a two-level paging scheme is not appropriate. Let us suppose that the page size in this case is 4 KB. If we use the two-level scheme, then the addresses will look like this:
Thus, in order to avoid such a large table, the solution is to divide the outer page table, which results in a three-level page table:
This approach is used to handle address spaces that are larger than 32 bits.
Hashed Page Tables
In a hashed page table, the virtual page number is hashed into a hash table, and each table entry contains a chain of elements that hash to the same location. The figure below shows the address translation scheme of the hashed page table:
The Virtual Page numbers are compared in this chain searching for a match; if the match is found then
the corresponding physical frame is extracted.
Clustered page tables are similar to hashed page tables, but here each entry refers to several pages (e.g., 16) rather than one.
They are mainly used for sparse address spaces, where memory references are non-contiguous and scattered.
Inverted Page Tables
The inverted page table basically combines a page table and a frame table into a single data structure.
There is one entry for each real page (frame) of memory, and each entry consists of the virtual address of the page stored in that real memory location, along with information about the process that owns the page.
Though this technique decreases the memory needed to store the page tables, it increases the time needed to search the table whenever a page reference occurs.
The figure below shows the address translation scheme of the inverted page table:
Here we need to keep track of the process id in each entry, because many processes may use the same logical addresses.
Also, many entries can map to the same index in the table after going through the hash function; chaining is used to handle this.
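A minimal Python sketch of an inverted-table lookup; the table contents and process ids are invented for illustration, and a linear scan stands in for the hashed search a real system would use:

```python
# Inverted page table sketch: one entry per physical frame, holding the
# (process id, virtual page number) currently stored in that frame.
inverted_table = [
    ("p1", 0),   # frame 0 holds page 0 of process p1
    ("p2", 3),   # frame 1 holds page 3 of process p2
    ("p1", 7),   # frame 2 holds page 7 of process p1
]

def lookup(pid, vpn):
    """Linear search over frames; returns the frame number or None.
    A real implementation hashes (pid, vpn) to avoid scanning."""
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, vpn):
            return frame
    return None

print(lookup("p1", 7))  # 2
print(lookup("p2", 9))  # None -> page fault
```

The process id in each entry is what lets two processes safely use the same virtual page number.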
4.6 Segmentation:
A process is divided into segments: the chunks that a program is divided into, which are not necessarily all of the same size. Segmentation gives the user's view of the process, which paging does not provide; this user view is mapped onto physical memory.
Types of Segmentation in Operating System
Virtual Memory Segmentation: Each process is divided into a number of segments, but the
segmentation is not done all at once. This segmentation may or may not take place at the run
time of the program.
Simple Segmentation: Each process is divided into a number of segments, all of which are
loaded into memory at run time, though not necessarily contiguously.
There is no simple relationship between logical addresses and physical addresses in segmentation. A
table stores the information about all such segments and is called Segment Table.
What is a Segment Table?
It maps a two-dimensional logical address into a one-dimensional physical address. Each table entry has:
Base Address: the starting physical address where the segment resides in memory.
Segment Limit: the length of the segment.
The address generated by the CPU is divided into:
Segment number (s): the number of bits required to represent the segment.
Segment offset (d): the number of bits required to represent the offset within the segment.
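Translation through a segment table is a base addition plus a limit check. A Python sketch with invented (base, limit) values:

```python
# Segment table sketch: each entry is (base, limit). Values are
# illustrative only.
segment_table = [
    (1400, 1000),  # segment 0
    (6300, 400),   # segment 1
    (4300, 1100),  # segment 2
]

def translate(segment, offset):
    """Map (segment, offset) to a physical address, trapping on an
    out-of-range offset."""
    base, limit = segment_table[segment]
    if offset >= limit:                 # offset must be within the segment
        raise MemoryError("segmentation fault: offset out of range")
    return base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
```

Accessing, say, offset 400 of segment 1 (limit 400) raises the trap, which is how the hardware catches out-of-bounds references.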
Demand Paging
In demand paging, a page is brought into memory only when it is first referenced. The following steps are followed in the working of demand paging in the operating system:
Program Execution: When a program starts, the operating system creates a process for the
program and allocates a portion of memory to the process.
Creating page tables: The operating system creates page tables for processes, which track which
program pages are currently in memory and which are on disk.
Page fault handling: A page fault occurs when the program attempts to access a page that is not currently in memory. The operating system interrupts the program and checks the page tables to see if the required page is on disk; if it is, the page is brought into a free frame and the program resumes.
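The steps above can be condensed into a toy simulation: pages are fetched only on first reference, and each first reference counts as a page fault (no replacement is modeled here, so memory is unbounded):

```python
# Demand-paging sketch: pages start on "disk"; a page is loaded into
# memory only when it is first referenced (a page fault).
memory = set()            # pages currently resident
faults = 0

def access(page):
    global faults
    if page not in memory:
        faults += 1       # page fault: the OS fetches the page from disk
        memory.add(page)

for p in [0, 1, 0, 2, 1, 3]:
    access(p)

print(faults)  # 4 distinct pages referenced -> 4 faults
```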
4.9 Copy-on-Write:
Copy on Write or simply COW is a resource management technique. One of its main use is in the
implementation of the fork system call in which it shares the virtual memory(pages) of the OS.
In UNIX like OS, fork() system call creates a duplicate process of the parent process which is called as
the child process.
The idea behind a copy-on-write is that when a parent process creates a child process then both of
these processes initially will share the same pages in memory and these shared pages will be
marked as copy-on-write which means that if any of these processes will try to modify the shared
pages then only a copy of these pages will be created and the modifications will be done on the
copy of pages by that process and thus not affecting the other process.
Suppose a process P creates a new process Q, and then P modifies page 3. The figures below show what happens before and after P modifies page 3.
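The fork() behaviour described above can be observed on a POSIX system with Python's os.fork. The page copying itself happens inside the kernel; this sketch only demonstrates the visible effect, namely that the child's write does not reach the parent:

```python
import os

# Copy-on-write in action (POSIX only): after fork(), parent and child
# share pages until one of them writes. The child's write below forces
# a private copy, so the parent's value stays unchanged.
page3 = "original"

pid = os.fork()
if pid == 0:                      # child process
    page3 = "modified by child"   # write -> kernel copies the page
    os._exit(0)                   # exit child without cleanup
else:                             # parent process
    os.waitpid(pid, 0)            # wait for the child to finish
    print(page3)                  # still "original"
```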
1. First In First Out (FIFO): This is the simplest page replacement algorithm. The operating system keeps all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.
Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 page faults.
When 3 comes, it is already in memory —> 0 page faults. Then 5 comes; it is not in memory, so it replaces the oldest page, 1 —> 1 page fault. 6 comes; it is also not in memory, so it replaces the oldest page, 3 —> 1 page fault. Finally, when 3 comes it is not in memory, so it replaces 0 —> 1 page fault. Total: 6 page faults.
Belady's anomaly shows that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. For example, for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 with 3 frames we get 9 total page faults, but if we increase the number of frames to 4, we get 10 page faults.
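Both Example 1 and the Belady's anomaly reference string can be checked with a short FIFO simulation:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory = deque()
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()      # evict the oldest resident page
            memory.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6 (Example 1)

belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3))  # 9
print(fifo_faults(belady, 4))  # 10 -- more frames, yet more faults
```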
PROF. RAHUL P. BEMBADE,CSE,SOC,MIT ADTU 27
OPERATING SYSTEM
2. Optimal Page Replacement: In this algorithm, the page replaced is the one that will not be used for the longest duration of time in the future.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page faults.
Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults.
0 is already there —> 0 page faults. When 3 comes, it takes the place of 7, because 7 is not used for the longest duration of time in the future —> 1 page fault. 0 is already there —> 0 page faults. 4 takes the place of 1 —> 1 page fault.
The remaining references cause 0 page faults because those pages are already in memory. Total: 6 page faults.
Optimal page replacement is perfect, but not possible in practice, as the operating system cannot know future requests. Its use is to provide a benchmark against which other replacement algorithms can be analyzed.
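A short simulation of the optimal policy reproduces the 6 faults of Example 2 (a linear look-ahead over the rest of the string stands in for the clairvoyance a real OS cannot have):

```python
def optimal_faults(refs, frames):
    """Count page faults for the clairvoyant optimal algorithm: evict
    the resident page whose next use lies farthest in the future."""
    memory = []
    faults = 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            rest = refs[i + 1:]
            def next_use(p):
                # Position of p's next reference; inf if never used again.
                return rest.index(p) if p in rest else float("inf")
            memory.remove(max(memory, key=next_use))
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))  # 6 (as in Example 2)
```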
3. Least Recently Used (LRU): In this algorithm, the page replaced is the one that was least recently used.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page faults.
Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults.
0 is already there —> 0 page faults. When 3 comes, it takes the place of 7, because 7 is the least recently used —> 1 page fault.
0 is already in memory —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
The remaining references cause 0 page faults because those pages are already in memory. Total: 6 page faults.
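An LRU simulation confirms the 6 faults of Example 3:

```python
def lru_faults(refs, frames):
    """Count page faults under LRU: evict the page unused longest."""
    memory = []          # kept in recency order, most recent last
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)       # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)         # evict the least recently used
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_faults(refs, 4))  # 6 (as in Example 3)
```

On this string LRU and the optimal algorithm happen to agree; in general LRU only approximates it using past behaviour.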
4. Most Recently Used (MRU): In this algorithm, the page replaced is the one that was used most recently. Belady's anomaly can occur in this algorithm.
Using the same reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames:
Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults.
0 is already there —> 0 page faults.
When 3 comes, it takes the place of 0, because 0 is the most recently used —> 1 page fault.
When 0 comes, it takes the place of 3 —> 1 page fault.
4.12 Thrashing:
Thrashing is a condition in which the system spends a major portion of its time servicing page faults, while the actual useful processing done is negligible.
Causes of thrashing:
1. High degree of multiprogramming.
2. Lack of frames.
3. Page replacement policy.
Thrashing's Causes
Thrashing degrades the operating system's execution performance and causes serious performance issues. When CPU utilization is low, the process scheduling mechanism tries to load multiple processes into memory at the same time, increasing the degree of multiprogramming.
In this case, the number of processes in the memory exceeds the number of frames available in the
memory. Each process is given a set number of frames to work with.
If a high-priority process arrives in memory and the frame is not vacant at the moment, the other
process occupying the frame will be moved to secondary storage, and the free frame will be allotted
to a higher-priority process.
We may also say that as soon as memory fills up, processes start spending a long time swapping in their required pages. Because most of the processes are waiting for pages, CPU utilization drops again.
As a result, a high level of multi programming and a lack of frames are two of the most common
reasons for thrashing in the operating system.
1. Working Set Model –
The basic concept is that if a process is allocated too few frames, there will be too many and too frequent page faults. As a result, no useful work will be done by the CPU, and CPU utilization will fall drastically.
If D is the total demand for frames and WSSi is the working set size for process i, then D = Σ WSSi.
Now, if m is the number of frames available in memory, there are 2 possibilities:
(i) D > m, i.e. total demand exceeds the number of frames; thrashing will occur, as some processes will not get enough frames.
(ii) D <= m; there will be no thrashing.
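The working-set test is a one-line comparison of total demand against available frames. A Python sketch with made-up working-set sizes:

```python
# Working-set check: thrashing is predicted when total demand D (the
# sum of the per-process working-set sizes) exceeds the available
# frames m. The sizes below are illustrative values, not measurements.
wss = {"p1": 20, "p2": 35, "p3": 60}
m = 100

D = sum(wss.values())
print(D)       # 115
print(D > m)   # True -> thrashing likely; suspend a process to reduce D
```

Suspending p1 would bring D down to 95 <= m, which is exactly the remedy the model suggests.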
2. Page Fault Frequency –
A more direct approach to handling thrashing is the one that uses the Page-Fault Frequency
concept.
The problem associated with Thrashing is the high page fault rate and thus, the concept here is to
control the page fault rate.
If the page fault rate is too high, it indicates that the process has too few frames allocated to it. On
the contrary, a low page fault rate indicates that the process has too many frames.
Upper and lower limits can be established on the desired page fault rate as shown in the diagram.
If the page fault rate falls below the lower limit, frames can be removed from the process. Similarly,
if the page fault rate exceeds the upper limit, more frames can be allocated to the process.
In other words, the graphical state of the system should be kept limited to the rectangular region
formed in the given diagram.
Here too, if the page fault rate is high with no free frames, then some of the processes can be
suspended and frames allocated to them can be reallocated to other processes. The suspended
processes can then be restarted later.