Module 4 Ppt

The document provides an overview of memory management concepts, including logical and physical addresses, address binding types, and memory allocation techniques. It discusses paging, overlays, swapping, and fragmentation, along with their advantages and disadvantages. Various memory allocation strategies such as first-fit, best-fit, and worst-fit are also explained, highlighting their efficiency and potential issues.

MODULE 4

Memory Management
• Logical address: It is a virtual address generated by the CPU that can be viewed by the user.
• The logical address is generated by the CPU during program execution.
• The logical address is virtual, as it does not exist physically; hence it is also called a virtual address.
• This address is used as a reference to access the physical memory location (physical address).
• Logical address space: the set of all logical addresses generated by the CPU in reference to a program is referred to as the logical address space.
• Physical address: a location in a memory unit.
• The user can never view the physical address of a program.
• The user cannot directly access the physical address. Instead, the physical address is reached through its corresponding logical address.
• Physical address space: the set of physical addresses corresponding to the logical addresses is called the physical address space.
Address binding
• Address binding refers to the mapping of computer instructions and data to physical memory locations.
• Both logical and physical addresses are used in computer memory.
• It assigns a physical memory region to a logical pointer by mapping a logical address, known as a virtual address, to a physical address.
• The Memory Management Unit (MMU) translates logical addresses into physical (RAM) addresses. Mapping a logical address to a physical address is called address binding or address mapping.
• There are mainly three types of address binding in the OS:
• Compile Time Address Binding
• Load Time Address Binding
• Execution Time (Dynamic) Address Binding
Compile Time Address Binding
• It is the first type of address binding.
• It occurs when the compiler is responsible for performing address binding; the compiler interacts with the operating system's memory manager to perform it.
• The binding is done before the program is loaded into memory.
• The address binding assigns an address to the beginning of the memory segment that stores the object code.
Load Time Address Binding
• It is another type of address binding. It is done after loading the program into memory, and it is performed by the operating system's memory manager, i.e., the loader.
• If the memory allocation is fixed when the program is loaded, a program in its compiled state can never be moved from one location (or computer) to another.
Execution Time or Dynamic Address
Binding
• Execution-time address binding is the most common type of binding for programs that are not compiled ahead of time, because it applies to the variables in the program as they are encountered.
• When a variable is encountered during the processing of instructions, the program seeks memory space for that variable.
• The memory space is assigned to that variable until the program finishes, or until a specific instruction releases the memory address bound to it.
• This dynamic type of address binding is done by the processor at the time of program execution.
logical and physical address space
• In operating systems, logical and physical
addresses are used to manage and access
memory.
• Logical address: It is a virtual address generated by the CPU while a program is running. It is referred to as a virtual address because it does not exist physically.
• Using this address, the CPU accesses the actual address (physical address) inside the memory, and data is fetched from there.
• Physical address: Physical Address is the actual
address of the data inside the memory. The
logical address is a virtual address and the
program needs physical memory for its
execution.
• The user never deals with the physical address. The user program generates the logical address, which is mapped to the physical address by the Memory Management Unit (MMU).
• The translation from logical to physical
addresses is performed by the operating
system’s memory management unit.
• The MMU uses a page table to translate
logical addresses into physical addresses. The
page table maps each logical page number to
a physical frame number.
Difference between Logical Address and Physical Address in Operating System

S.No | Logical Address | Physical Address
1 | Logical address is rendered (generated) by the CPU. | Physical address is a location present in the main memory.
2 | It is the collection of all logical addresses rendered by the CPU. | It is the collection of all physical addresses mapped to the corresponding logical addresses.
3 | The logical address of the program is visible to the users. | We cannot view the physical address of the program.
4 | Logical address is generated by the CPU. | Physical address is computed by the MMU.
5 | We can easily use the logical address to access the physical address. | We can use the physical address only indirectly.
logical address space of a program
• In the above diagram, the base register is termed
the Relocation register. The relocation register is
a special register in the CPU and is used for the
mapping of logical addresses used by a program
to physical addresses of the system's main
memory.
• The value in the relocation register is added to
every address that is generated by the user
process at the time when the address is sent to
the memory.
• Suppose the base is at 14000, then an attempt
by the user to address location 0 is relocated
dynamically to 14000; thus access to location
356 is mapped to 14356.
• User program always deals with the logical
addresses. The Memory Mapping unit mainly
converts the logical addresses into the
physical addresses.
• The user program never sees the real physical
address space, it always deals with the Logical
addresses
• Thus we have two different types of addresses: logical addresses in the range (0 to max) and physical addresses in the range (R to R + max), where R is the value in the relocation register.
• Base and Limit Registers
• A pair of base and limit registers define the
logical address space.
• The base register holds the smallest legal
physical memory address; the limit register
specifies the size of the range
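The relocation and limit registers described above can be sketched in a few lines of Python. This is a minimal illustration, not real MMU hardware; the limit value of 1000 is an assumed example, while the base of 14000 comes from the slide's own example.

```python
def translate(logical_addr, relocation_reg, limit_reg):
    """Map a logical address to a physical one using a relocation register.

    The limit register bounds the legal logical range (0 to limit-1);
    an address outside it would trap to the OS as an addressing error.
    """
    if not (0 <= logical_addr < limit_reg):
        raise MemoryError("trap: logical address out of range")
    return logical_addr + relocation_reg

# Slide example: base (relocation register) = 14000, assumed limit = 1000.
print(translate(0, 14000, 1000))    # location 0 is relocated to 14000
print(translate(356, 14000, 1000))  # location 356 maps to 14356
```

Every address the process generates goes through this addition, so the process itself never needs to know where in physical memory it was placed.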
Paging
• Paging is a memory management technique in which
process address space is broken into blocks of the
same size called pages (size is power of 2, between 512
bytes and 8192 bytes).
• The size of the process is measured in the number of
pages.
• Similarly, main memory is divided into small fixed-sized
blocks of (physical) memory called frames and the size
of a frame is kept the same as that of a page to have
optimum utilization of the main memory and to avoid
external fragmentation.
• Consider a logical address space of 64 pages of 1024 words each, mapped onto a physical memory of 32 frames.
a)How many bits are there in the logical
address?
b)How many bits are there in the physical
address?
a) How many bits are there in the logical address?

Let m be the number of bits in the logical address.

Size of logical address space = 2^m = number of pages × page size
2^m = 64 × 1024 = 2^6 × 2^10 = 2^16

»» m = 16 bits

b) How many bits are there in the physical address?

Let x be the number of bits in the physical address.

Size of physical address space = 2^x = number of frames × frame size (frame size = page size)
2^x = 32 × 1024 = 2^5 × 2^10 = 2^15

»» number of required bits in the physical address = x = 15 bits


Q)Consider a logical address space of 32 pages
of 1024 words per page, mapped onto a physical
memory of 16 frames.
• a. How many bits are required in the logical
address?
• b. How many bits are required in the physical
address?
overlays
• Overlaying is defined as "the process of
inserting a block of computer code or other
data into internal memory, replacing what is
already there."
• Overlaying is a method that permits
applications to be larger than the primary
memory.
• Overlays are typically used in embedded systems because of physical memory limitations (such as limited internal memory on a system-on-chip) and the lack of virtual memory support.
• In memory management, overlays work in the following steps:
1. The programmer divides the program into many logical sections.
2. A small portion of the program remains in memory at all times, while the remaining sections (overlays) are loaded only when needed.
3. Overlays allow programmers to write programs much larger than physical memory, although memory usage is managed by the programmer rather than the operating system.
Swapping in Operating System
• Swapping is a memory management scheme in which any process can be temporarily swapped from main memory to secondary memory, so that main memory can be made available for other processes.
• It is used to improve main memory utilization. The place in secondary memory where the swapped-out process is stored is called swap space.
• Swapping is divided into two operations:
• Swap-out: removing a process from RAM and placing it on the hard disk.
• Swap-in: bringing a process back from the hard disk into main memory (RAM).
Advantages of Swapping
• It helps the CPU manage multiple processes within a single main memory.
• It helps to create and use virtual memory.
• Swapping allows the CPU to work on multiple tasks concurrently, so processes do not have to wait very long before they are executed.
• It improves main memory utilization.
Disadvantages of Swapping
• If the computer system loses power during substantial swapping activity, the user may lose all information related to the program.
• The number of page faults may increase.
• Processing performance may be reduced.
Contiguous memory allocation
• Allocating space to software applications is
referred to as memory allocation.
• Memory is a sizable collection of bytes.
• Contiguous and non-contiguous memory
allocation are the two basic types of memory
allocation
• Contiguous memory allocation is a memory
management technique used by operating
systems to allocate memory to processes in
contiguous blocks.
• In this technique, a process is allocated a single block of memory whose locations are contiguous (adjacent to each other).
• This ensures that memory is efficiently utilized,
with minimal fragmentation and wasted memory.
Memory allocation
• Memory allocation is an action of assigning
the physical or the virtual memory address
space to a process (its instructions and data).
• The two fundamental methods of memory
allocation are
• static and dynamic memory allocation.
• The static memory allocation method assigns the
memory to a process, before its execution.
• On the other hand, the dynamic memory
allocation method assigns the memory to a
process, during its execution.
• With dynamic allocation, the actual size of the data required is known at run time, so the exact memory space is allocated to the program, reducing memory wastage.
• In both fixed and dynamic memory allocation schemes, the operating system must keep a list of memory locations, noting which are free and which are busy.
• These partitions may be allocated in 4 ways:
1. First-Fit Memory Allocation
2. Best-Fit Memory Allocation
3. Worst-Fit Memory Allocation
4. Next-Fit Memory Allocation
Dynamic storage allocation problem
• How to satisfy a request of size n from a list of free holes?
• First fit: allocate the first hole that is big enough.
• Best fit: allocate the smallest hole that is big enough; the entire list must be searched.
• Worst fit: allocate the largest hole; the entire list must also be searched.
[Figure: allocated processes P1 (15 KB), P2 (30 KB), P3 (10 KB), P4 (5 KB) separated by free holes of 20 KB, 10 KB, 35 KB, 20 KB, and 50 KB]
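The three placement strategies can be sketched as one Python function; the hole sizes below are the free holes from the figure, and a 30 KB request is an assumed example.

```python
def allocate(holes, request, strategy):
    """Choose a hole index for a `request`-KB allocation.

    `holes` is a list of free-hole sizes (KB); returns the chosen
    index, or None if no hole is big enough.
    """
    # All holes that can satisfy the request, as (size, index) pairs.
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][1]      # earliest hole that fits
    if strategy == "best":
        return min(candidates)[1]    # smallest hole that fits
    if strategy == "worst":
        return max(candidates)[1]    # largest hole
    raise ValueError("unknown strategy: " + strategy)

holes = [20, 10, 35, 20, 50]            # free holes from the figure, in KB
print(allocate(holes, 30, "first"))     # index 2 (the 35 KB hole)
print(allocate(holes, 30, "best"))      # index 2 (35 KB is the tightest fit)
print(allocate(holes, 30, "worst"))     # index 4 (the 50 KB hole)
```

First fit only needs to scan until a fit is found, while best fit and worst fit must examine every hole, which is why the slides describe them as slower.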
Best-Fit Allocation
• Best-Fit Allocation is a memory allocation
technique used in operating systems to allocate
memory to a process.
• In Best-Fit, the operating system searches
through the list of free blocks of memory to find
the block that is closest in size to the memory
request from the process.
• Once a suitable block is found, the operating
system splits the block into two parts: the portion
that will be allocated to the process, and the
remaining free block.
• Advantages
• Memory efficient: the operating system allocates the job the minimum possible space in memory, making memory management very efficient.
• It is the best method for keeping memory from being wasted.
• Improved memory utilization.
• Reduced memory fragmentation.
• Minimizes external fragmentation.

Disadvantages
• It is a slow process: checking the whole memory for each job makes the operating system slow, and allocation takes a lot of time.
• Increased computational overhead.
• May lead to increased internal fragmentation.
• Can result in slow memory allocation times.
Worst fit allocation
• In this allocation technique, the allocator traverses the whole memory, always searches for the largest hole/partition, and then places the process in that hole/partition.
• It is a slow process because it has to traverse the entire memory to find the largest hole.
• Example: process P1 = 30 K is allocated with the worst-fit technique, so the allocator traverses the entire memory and selects the 400 K partition, which is the largest. This leaves an internal fragmentation of 370 K, which is so large that other processes can also utilize the leftover space.
• Advantages of Worst-Fit Allocation:
• Since this technique chooses the largest hole/partition, the leftover (internal) fragmentation is large.
• Because this leftover space is quite big, other small processes can also be placed in that partition.
• Disadvantages of Worst-Fit Allocation :
• It is a slow process because it traverses all the
partitions in the memory and then selects the
largest partition among all the partitions,
which is a time-consuming process.
First-Fit Allocation
• First-Fit Allocation is a memory allocation
technique used in operating systems to
allocate memory to a process.
• In First-Fit, the operating system searches
through the list of free blocks of memory,
starting from the beginning of the list, until it
finds a block that is large enough to
accommodate the memory request from the
process.
• As illustrated above, the system assigns J1 the first (nearest) partition in memory.
• As a result, no partition with sufficient space is available for J3, and it is placed on the waiting list.
• Advantages of First-Fit Allocation include its
simplicity and efficiency, as the search for a
suitable block of memory can be performed
quickly and easily.
• Additionally, First-Fit can also help to minimize
memory fragmentation, as it tends to allocate
memory in larger blocks.
• Disadvantages of First-Fit Allocation include
poor performance in situations where the
memory is highly fragmented, as the search
for a suitable block of memory can become
time-consuming and inefficient.
• Additionally, First-Fit can also lead to poor
memory utilization, as it may allocate larger
blocks of memory than are actually needed by
a process.
Fragmentation.
• As processes are loaded and removed from
memory, the free memory space is broken
into little pieces.
• It often happens that processes cannot be allocated to memory blocks because the blocks are too small, so the memory blocks remain unused.
• This problem is known as fragmentation.
• There are two types of fragmentation in OS
which are given as
• Internal fragmentation
• External fragmentation.
Internal Fragmentation
• When a process is allocated a memory block and the process is smaller than the block it is given, free space is left over in that block.
• This leftover free space in the memory block goes unused, which causes internal fragmentation.
• Let's suppose a process P1 with a size of 3MB
arrives and is given a memory block of 4MB.
As a result, the 1MB of free space in this block
is unused and cannot be used to allocate
memory to another process.
• It is known as internal fragmentation.
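The wasted space in the P1 example above can be computed directly: round the process size up to whole blocks and subtract what the process actually uses. A minimal Python sketch (block and process sizes in MB):

```python
def internal_fragmentation(process_size, block_size):
    """Unused space when a process occupies whole fixed-size blocks."""
    blocks = -(-process_size // block_size)   # ceiling division
    return blocks * block_size - process_size

# Slide example: a 3 MB process placed in a 4 MB block wastes 1 MB.
print(internal_fragmentation(3, 4))  # 1
print(internal_fragmentation(4, 4))  # 0 - an exact fit wastes nothing
```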
How to avoid internal fragmentation?
• The problem of internal fragmentation may
arise due to the fixed sizes of the memory
blocks. It may be solved by assigning space to
the process via dynamic partitioning.
• Dynamic partitioning allocates only the
amount of space requested by the process. As
a result, there is no internal fragmentation.
External Fragmentation
• External fragmentation happens when a dynamic memory allocation method allocates some memory but leaves small pieces of memory unusable.
• If there is too much external fragmentation, the quantity of usable memory is substantially reduced.
• There may be enough total memory space to satisfy a request, but it is not contiguous; this is known as external fragmentation.
• Example of external fragmentation: in the above diagram, there is sufficient total free space (50 KB) to run process P5 (which needs 45 KB), but the memory is not contiguous.
• Compaction, paging, or segmentation can be used to make the free space usable by a process.
How to remove external fragmentation?
• This problem occurs when RAM is allocated to processes contiguously. In paging and segmentation, memory is allocated to processes non-contiguously.
• As a result, removing the contiguity requirement decreases external fragmentation.
Difference between Internal Fragmentation and External Fragmentation

• Definition: Internal fragmentation is the problem of a difference between the required memory space and the allotted memory space. External fragmentation is the problem of small, non-contiguous memory blocks that cannot be assigned to any process.
• Memory block size: Internal fragmentation occurs when allotted memory blocks are of fixed size. External fragmentation occurs when allotted memory blocks are of varying size.
• Occurrence: Internal fragmentation occurs when a process needs more space than the allotted block size, or uses less space than the block provides. External fragmentation occurs when a process is removed from main memory.
• Solution: Best-fit block search is the solution for internal fragmentation; compaction is the solution for external fragmentation.
• Process: Internal fragmentation occurs when paging is employed; external fragmentation occurs when segmentation is employed.
Compaction
• Compaction is another method for removing
external fragmentation.
• External fragmentation may be decreased when
dynamic partitioning is used for memory
allocation by combining all free memory into a
single large block.
• The larger memory block is used to allocate space
based on the requirements of the new processes.
This method is also known as defragmentation.
• It does that by moving all the processes towards
one end of the memory and all the available free
space towards the other end of the memory so
that it becomes contiguous.
• Before compaction, the main memory has some
free space between occupied space. This
condition is known as external fragmentation.
Due to less free space between occupied spaces,
large processes cannot be loaded into them.
Given six memory partitions of 300 KB, 600 KB, 350 KB, 200 KB, 750 KB, and 125 KB (in order), how would the first-fit, best-fit, and worst-fit algorithms place processes of size 115 KB, 500 KB, 358 KB, 200 KB, and 375 KB (in order)?

• In the first-fit algorithm, the first available partition large enough is allotted to the process.
• In the best-fit algorithm, the partition in which the least amount of space will be wasted (left over) is allotted.
• In the worst-fit algorithm, the largest partition is allotted.
Memory partitions (in order): 300 KB, 600 KB, 350 KB, 200 KB, 750 KB, 125 KB
Processes of size (in order): P1 = 115 KB, P2 = 500 KB, P3 = 358 KB, P4 = 200 KB, P5 = 375 KB
• First Fit Algorithm :
• 115 KB is put in 300 KB partition, leaving (185
KB, 600 KB, 350 KB,200 KB, 750 KB, 125 KB)
• 500 KB is put in 600 KB partition, leaving (185 KB,
100 KB, 350 KB,200 KB, 750 KB, 125 KB)
• 358 KB is put in 750 KB partition, leaving (185
KB, 100 KB, 350 KB,200 KB, 392 KB, 125 KB)
• 200 KB is put in 350 KB partition, leaving (185 KB,
100 KB, 150 KB,200 KB, 392 KB, 125 KB)
• 375 KB is put in 392 KB partition, leaving (185
KB, 100 KB, 150 KB,200 KB, 17 KB, 125 KB)
First-fit allocation summary:

Memory size | Process | Process size | Internal fragmentation
300 KB | P1 | 115 KB | 185 KB
600 KB | P2 | 500 KB | 100 KB
350 KB | P4 | 200 KB | 150 KB
200 KB | - | - | -
750 KB | P3, then P5 | 358 KB, 375 KB | 392 KB, then 17 KB
125 KB | - | - | -
• Best Fit Algorithm :
• 115 KB is put in 125 KB partition, leaving (300 KB, 600
KB, 350 KB,200 KB, 750 KB, 10 KB)
• 500 KB is put in 600 KB partition, leaving (300 KB, 100
KB, 350 KB,200 KB, 750 KB, 10 KB)
• 358 KB is put in 750 KB partition, leaving (300 KB, 100
KB, 350 KB,200 KB, 392 KB, 10 KB)
• 200 KB is put in 200 KB partition, leaving (300 KB, 100
KB, 350 KB, 0KB, 392 KB, 10 KB)
• 375 KB is put in 392 KB partition, leaving (300 KB, 100
KB, 350 KB, 0KB, 17 KB, 10 KB)
• Worst Fit Algorithm :
• 115 KB is put in 750 KB partition, leaving (300 KB, 600 KB, 350 KB, 200 KB, 635 KB, 125 KB)
• 500 KB is put in 635 KB partition, leaving (300 KB,
600 KB, 350 KB,200 KB, 135 KB, 125 KB)
• 358 KB is put in 600 KB partition, leaving (300 KB,
242 KB, 350 KB,200 KB, 135 KB, 125 KB)
• 200 KB is put in 350 KB partition, leaving (300 KB,
242 KB, 150 KB,200 KB, 135 KB, 125 KB)
• 375 KB has to wait as no space is available which
is having 375KB of Free Memory
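The three traces above can be checked mechanically. The sketch below simulates allocation against the partition list from the question, shrinking each chosen partition in place; it is an illustration of the same logic, not code from the slides.

```python
def place(partitions, processes, strategy):
    """Simulate first/best/worst fit; returns chosen partition index per process.

    `partitions` is mutated: each chosen partition shrinks by the process size.
    A process that fits nowhere gets None (it must wait).
    """
    placements = []
    for size in processes:
        fits = [(p, i) for i, p in enumerate(partitions) if p >= size]
        if not fits:
            placements.append(None)              # no partition large enough
            continue
        if strategy == "first":
            chosen = fits[0][1]
        elif strategy == "best":
            chosen = min(fits)[1]                # tightest fit
        else:                                    # "worst"
            chosen = max(fits)[1]                # largest partition
        partitions[chosen] -= size
        placements.append(chosen)
    return placements

procs = [115, 500, 358, 200, 375]
print(place([300, 600, 350, 200, 750, 125], procs, "first"))  # [0, 1, 4, 2, 4]
print(place([300, 600, 350, 200, 750, 125], procs, "best"))   # [5, 1, 4, 3, 4]
print(place([300, 600, 350, 200, 750, 125], procs, "worst"))  # [4, 4, 1, 2, None]
```

The worst-fit run confirms the trace: the 375 KB process is left waiting, since the largest remaining hole (242 KB) is too small.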
Memory partitioning
1. Fixed Partitioning : (Multiple Partitioning)
• Multi-programming with fixed partitioning is a
contiguous memory management technique in which
the main memory is divided into fixed sized partitions
which can be of equal or unequal size.
• Whenever we have to allocate a process memory then
a free partition that is big enough to hold the process
is found.
• Then the memory is allocated to the process. If no free partition is available, the process waits in the queue to be allocated memory. It is one of the oldest memory management techniques and is easy to implement.
2. Variable Partitioning :
• Multi-programming with variable partitioning is a
contiguous memory management technique in
which the main memory is not divided into
partitions and the process is allocated a chunk of
free memory that is big enough for it to fit.
• The space which is left is considered as the free
space which can be further used by other
processes. It also provides the concept of
compaction.
• In compaction, the spaces that are free and not allocated to any process are combined into a single large memory space.
S.No | Fixed partitioning | Variable partitioning
1. In multi-programming with fixed partitioning, the main memory is divided into fixed-sized partitions. | In multi-programming with variable partitioning, the main memory is not divided into fixed-sized partitions.
2. Only one process can be placed in a partition. | The process is allocated a chunk of free memory.
3. It does not utilize the main memory effectively. | It utilizes the main memory effectively.
4. Both internal and external fragmentation are present. | There is external fragmentation.
5. The degree of multi-programming is lower. | The degree of multi-programming is higher.
6. It is easier to implement. | It is harder to implement.
Paging
• Paging is a memory management scheme that
eliminates the need for contiguous allocation
of physical memory.
• The process of retrieving processes in the
form of pages from the secondary storage into
the main memory is known as paging.
• For implementing paging, the physical and logical memory spaces are divided into fixed-size blocks of the same size.
• These fixed-sized blocks of physical memory are
called frames, and the fixed-sized blocks of logical
memory are called pages.
• Each page of the process is stored in one of the frames of memory. The pages can be stored at different locations in memory, but the priority is always to find contiguous frames or holes.
• The address generated by CPU for accessing
the frame is divided into two parts i.e. page
number and page offset.
Page Table
• Page table is a data structure.
• It maps the page number referenced by the CPU
to the frame number where that page is stored.
• Page table is stored in the main memory.
• Number of entries in a page table = Number of
pages in which the process is divided.
• Page Table Base Register (PTBR) contains the base
address of page table.
• Each process has its own independent page table.
• Page Table Base Register (PTBR) provides the
base address of the page table.
• The base address of the page table is added
with the page number referenced by the CPU.
• It gives the entry of the page table containing
the frame number where the referenced page
is stored.
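The lookup described above (page number indexes the page table; the entry gives the frame) can be sketched in Python. The page size of 1024 words matches the earlier examples; the page-table contents are hypothetical.

```python
PAGE_SIZE = 1024  # words per page (size taken from the earlier examples)

# Hypothetical page table: index = page number, value = frame number.
page_table = {0: 5, 1: 2, 2: 7}

def to_physical(logical_addr):
    """Split a logical address into (page, offset) and look up the frame."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]          # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

# Logical address 1030 -> page 1, offset 6 -> frame 2 -> physical 2054.
print(to_physical(1030))  # 2054
```

The `divmod` split is exactly the page number / page offset division of the CPU-generated address mentioned earlier: the high-order bits select the page, the low-order bits are the offset.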
• Page Table Entry
• A page table entry contains several pieces of information about the page.
• The information contained in a page table entry varies from operating system to operating system.
• The most important information in a page table entry is the frame number.

1. Frame Number-
• The frame number specifies the frame where the page is stored in main memory.
• The number of bits in the frame number depends on the number of frames in main memory.
• Present/Absent bit – says whether the page you are looking for is present in memory or absent. If it is not present, that is called a page fault. The bit is set to 0 if the corresponding page is not in memory. It is used by the operating system to handle page faults and to support virtual memory. This bit is also known as the valid/invalid bit.
• Protection bit – says what kind of protection applies to the page. These bits control access to the page frame (read, write, etc.).
• Referenced bit – says whether this page has been referenced in the last clock cycle or not. It is set to 1 by hardware when the page is accessed.
• Caching enabled/disabled – sometimes fresh data is needed. Suppose the user is typing input from the keyboard and the program must act on that input; the information comes into main memory, so main memory contains the latest information typed by the user. Caching is disabled for such pages so that the latest data is always read.
• Modified bit – says whether the page has been modified, i.e., whether something has been written to the page. This bit is also called the dirty bit. [To reduce page-fault service time, a special dirty bit is associated with each page; it is set to 1 by the hardware whenever the page is modified.]
Segmentation
• Like Paging, Segmentation is also a memory
management scheme.
• It supports the user’s view of the memory.
The process is divided into the variable size
segments and loaded to the logical memory
address space.
• The logical address space is the collection of
variable size segments.
• Each segment has its name and length. For the
execution, the segments from logical memory
space are loaded to the physical memory
space.
• The segment number is used as an index into the segment table, and the offset value is checked against the length (limit) of the segment.
• The segment number and the offset together combine to generate the address of the segment in physical memory space.
There are two parts to the segment table in OS:

1. Segment Base
• A segment's base address is also referred to as the
segment base. The memory segments' starting physical
addresses are contained in the segment base.

2. Segment Limit
• Another name for the segment limit is segment offset.
The segment's precise length is contained within it.
• Segment Table Base Register (STBR): the segment table's base address is kept in the STBR.
• Segment Table Length Register (STLR): the number of segments a program uses is stored in the STLR. The segment table itself is kept in main memory as a separate segment. If there are many segments, the segment table may occupy a lot of memory.
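The segment-table lookup with base and limit can be sketched as follows; the table contents are hypothetical example values, not from the slides.

```python
# Hypothetical segment table: segment number -> (base, limit).
segment_table = {
    0: (1400, 1000),   # e.g., code segment
    1: (6300, 400),    # e.g., stack segment
    2: (4300, 1100),   # e.g., heap segment
}

def segment_to_physical(segment, offset):
    """Translate (segment, offset); the offset must be below the segment limit."""
    base, limit = segment_table[segment]
    if offset >= limit:
        # Referencing beyond a segment's length traps to the OS.
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(segment_to_physical(1, 53))  # 6300 + 53 = 6353
```

Unlike paging, the offset here is checked against a per-segment limit, because segments have variable sizes.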
TLB (Translation Lookaside Buffer)
• A translation lookaside buffer (TLB) is a type of memory cache that stores recent translations of virtual memory addresses to physical addresses to enable faster retrieval.
• The TLB is based on the idea of "locality of reference": it contains only the entries of those pages that the central processing unit (CPU) needs to access frequently.
• Locality of reference
• In operating systems, the concept of locality of reference states that instead of loading the entire process into main memory, the OS can load only those pages that are frequently accessed by the CPU, along with only the page table entries corresponding to those pages.
• A translation lookaside buffer is a memory cache used to reduce the time taken to access the page table again and again.
• It is a memory cache that sits closer to the CPU, and the time taken by the CPU to access the TLB is less than the time taken to access main memory.
• In other words, the TLB is faster and smaller than main memory, but cheaper and bigger than a register.
• The TLB follows the concept of locality of reference: it contains only the entries of those pages that are frequently accessed by the CPU.
• In a translation lookaside buffer, tags and keys are used to perform the mapping.
• A TLB hit is the condition where the desired entry is found in the TLB. If this happens, the CPU simply accesses the actual location in main memory.
• However, if the entry is not found in the TLB (a TLB miss), the CPU has to access the page table in main memory and then access the actual frame in main memory.
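The hit/miss behaviour can be sketched with two dictionaries standing in for the TLB and the page table; all entries here are hypothetical.

```python
# Hypothetical full page table and a small TLB cache in front of it.
page_table = {page: 100 + page for page in range(64)}   # page -> frame
tlb = {3: 103, 7: 107}                                  # recently used entries

def lookup(page):
    """Return (frame, hit): consult the TLB first, then the page table."""
    if page in tlb:
        return tlb[page], True      # TLB hit: no page-table access needed
    frame = page_table[page]        # TLB miss: extra main-memory access
    tlb[page] = frame               # cache the translation for next time
    return frame, False

print(lookup(7))   # (107, True)  - hit
print(lookup(9))   # (109, False) - miss, entry is now cached
print(lookup(9))   # (109, True)  - hit on the second access
```

A real TLB is a small associative hardware cache with an eviction policy; the unbounded dictionary here is only for illustrating the hit/miss flow.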
• Therefore, in the case of a TLB hit, the effective access time is less than in the case of a TLB miss.
• EAT = p(t + m) + (1 - p)(t + k·m + m)
• p → TLB hit rate
• t → time taken to access the TLB
• m → time taken to access main memory
• k = 1 (a single-level page table)
Question
• Consider paging hardware with a TLB. Assume that the entire page table and all the pages are in physical memory. It takes 10 milliseconds to search the TLB and 80 milliseconds to access physical memory. If the TLB hit ratio is 0.6, what is the effective memory access time (in milliseconds)?
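Plugging the question's numbers into the EAT formula from the previous slide gives the answer directly:

```python
def effective_access_time(hit_ratio, tlb_time, mem_time, k=1):
    """EAT = p(t + m) + (1 - p)(t + k*m + m), with k page-table levels."""
    p, t, m = hit_ratio, tlb_time, mem_time
    return p * (t + m) + (1 - p) * (t + k * m + m)

# TLB search 10 ms, memory access 80 ms, hit ratio 0.6:
# hit path  = 10 + 80  = 90 ms, weighted by 0.6
# miss path = 10 + 80 + 80 = 170 ms, weighted by 0.4
print(effective_access_time(0.6, 10, 80))  # 122.0 ms
```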
Segmented Paging
• In segmented paging, the main memory is divided into variable-size segments, which are further divided into fixed-size pages.
• Pages are smaller than segments.
• Each segment has a page table, which means every program has multiple page tables.
• The logical address is represented as a segment number (base address), a page number, and a page offset.
• Segment Number → selects the appropriate segment.
• Page Number → points to the exact page within the segment.
• Page Offset → used as an offset within the page frame.
• Each page table contains information about every page of the segment, while the segment table contains information about every segment.
• Each segment table entry points to a page table, and every page table entry is mapped to one of the pages within the segment.
Translation of logical address to physical address
• The CPU generates a logical address, which is divided into two parts: Segment Number and Segment Offset. The Segment Offset must be less than the segment limit. The offset is further divided into Page Number and Page Offset. To locate the entry for that page in the page table, the page number is added to the page table base.
• The resulting frame number, combined with the page offset, is mapped into main memory to get the desired word in the page of that segment of the process.
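The split of the logical address can be illustrated with a small sketch. The bit widths below (4-bit segment number, 8-bit page number, 12-bit offset) are hypothetical, chosen only for illustration; real hardware uses different layouts.

```python
# Hypothetical layout, for illustration only: 4-bit segment number,
# 8-bit page number, 12-bit page offset (a 24-bit logical address).
SEG_BITS, PAGE_BITS, OFFSET_BITS = 4, 8, 12

def split_logical_address(addr):
    """Split a logical address into (segment number, page number, page offset)."""
    offset = addr & ((1 << OFFSET_BITS) - 1)               # low 12 bits
    page = (addr >> OFFSET_BITS) & ((1 << PAGE_BITS) - 1)  # next 8 bits
    segment = addr >> (OFFSET_BITS + PAGE_BITS)            # top 4 bits
    return segment, page, offset

seg, page, off = split_logical_address(0x345678)
print(seg, hex(page), hex(off))  # 3 0x45 0x678
```

The segment number indexes the segment table to find that segment's page table; the page number then indexes the page table to get a frame number, and the offset selects the word within the frame.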
Advantages of Segmented Paging
• It reduces memory usage.
• Page table size is limited by the segment size.
• Segment table has only one entry corresponding to one
actual segment.
• External Fragmentation is not there.
• It simplifies memory allocation.
Disadvantages of Segmented Paging
• Internal Fragmentation will be there.
• The complexity level will be much higher as compared to paging.
• Page Tables need to be contiguously stored in the memory.
Virtual Memory
• Virtual memory is a storage scheme that gives the user the illusion of having a very large main memory. This is done by treating a part of secondary memory as if it were main memory.
• Demand paging is a popular method of virtual memory management. In demand paging, the pages of a process that are least used are stored in secondary memory.
• Virtual memory is commonly implemented by
demand paging. It can also be implemented in
a segmentation system.
• Demand segmentation can also be used to
provide virtual memory.
Demand Paging

• A demand paging system is quite similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance.
• It suggests keeping all pages in secondary memory until they are required; in other words, do not load any page into main memory until it is needed.
• While executing a program, if the program references a page that is not available in main memory because it was swapped out earlier, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system, which demands the page back into memory.
Advantages
• Large virtual memory.
• More efficient use of memory.
• There is no limit on the degree of multiprogramming.
Disadvantages
• The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques.
Limitations of virtual memory

• Virtual memory runs slower than physical memory, so most computers prioritize using physical memory when possible.
• Moving data between a computer's virtual and physical memory places extra demands on the computer's hardware.
• The amount of storage that virtual memory can provide depends on the amount of secondary storage a computer has.
• If a computer has only a small amount of RAM, virtual memory can cause thrashing (the system spends an excessive amount of time on page swapping rather than doing useful work): the computer must constantly swap data between virtual and physical memory, resulting in significant performance delays.
• It can take longer for applications to load, or for a computer to switch between applications, when using virtual memory.
Page fault
• When the page referenced by the CPU is not
found in the main memory then the situation
is termed as Page Fault.
• Whenever any page fault occurs, then the
required page has to be fetched from the
secondary memory into the main memory.
• The page fault generates an exception, which is used to notify the operating system that it must retrieve the "pages" from virtual memory in order to continue execution.
• Once the data has been moved into physical memory, the program continues its execution normally.
• First of all, the internal table (usually kept with the process control block) for this process is checked to determine whether the reference was a valid or an invalid memory access.
• If the reference is invalid, we terminate the process. If the reference is valid but we have not yet brought in that page, we now page it in.
• Then we consult the free-frame list in order to find a free frame.
• Next, a disk operation is scheduled to read the desired page into the newly allocated frame.
• When the disk read is complete, the internal table kept with the process is modified, along with the page table, to indicate that the page is now in memory.
• Finally, we restart the instruction that was interrupted by the trap. The process can now access the page as though it had always been in memory.
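The steps above can be sketched as a toy model. Every name here (`handle_page_fault` and the dictionaries standing in for the internal table, the free-frame list and the disk) is an illustrative assumption, not a real OS API.

```python
def handle_page_fault(page, valid_pages, page_table, free_frames, disk):
    """Toy model of the page-fault handling steps above (not a real OS routine)."""
    # 1. Check the internal table: was the reference valid?
    if page not in valid_pages:
        raise MemoryError(f"invalid reference to page {page}: terminate process")
    # 2. Locate a free frame from the free-frame list.
    frame = free_frames.pop()
    # 3. "Schedule" a disk read of the desired page into that frame.
    data = disk[page]
    # 4. Update the page table: the page is now in memory.
    page_table[page] = frame
    # 5. The faulting instruction can now be restarted.
    return frame, data
```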
Thrashing and its causes
• Thrashing occurs when a Process spends more
time in paging or Swapping activities rather
than its execution.
• In thrashing, the CPU is so busy with swapping that it cannot respond to the user program as much as is required.
• Thrashing in an operating system degrades execution performance. The causes of thrashing are as follows.
• Initially, when the CPU utilization is low, then the process
scheduling mechanism loads many processes into the Memory
simultaneously so that the Degree of Multiprogramming can be
increased.
• In this situation, there are more processes than available frames in memory, so only a limited number of frames can be allocated to each process.
• When a higher-priority process arrives in memory and no frame is freely available at that time, a process that currently occupies a frame is moved to secondary storage, and the freed frame is allocated to the newly arrived higher-priority process.
• In other words, as memory fills up, processes start to spend a lot of time waiting for the required pages to be swapped in; CPU utilization again becomes low because most of the processes are waiting for pages.
Page Replacement Algorithms
• A page replacement algorithm determines
how the victim page (the page to be replaced)
is selected when a page fault occurs.
• The aim is to minimize the page fault rate.
The efficiency of a page replacement
algorithm is evaluated by running it on a
particular string of memory references and
computing the number of page faults.
There are three types of page replacement algorithms.
They are:
• Optimal Page Replacement Algorithm
• First In First Out (FIFO) Page Replacement Algorithm
• Least Recently Used (LRU) Page Replacement Algorithm
FIFO

• This is the simplest page replacement algorithm. The operating system keeps track of all pages in memory in a queue, with the oldest page at the front of the queue.
• When a page needs to be replaced, the page at the front of the queue is selected for removal.
• Consider page reference string 1, 3, 0, 3, 5, 6,
3 with 3 page frames.
• Find the number of page faults.
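The FIFO policy can be simulated in a few lines; this is a sketch, and `fifo_faults` is a name chosen here for illustration. Running it on the reference string 1, 3, 0, 3, 5, 6, 3 with 3 frames gives the answer to the exercise.

```python
from collections import deque

def fifo_faults(refs, capacity):
    """Count page faults under FIFO replacement with `capacity` frames."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                         # hit: FIFO does nothing on a hit
        faults += 1
        if len(frames) == capacity:
            frames.discard(queue.popleft())  # evict the oldest resident page
        frames.add(page)
        queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6 page faults
```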
Advantages of the FIFO Page Replacement Algorithm
• Easy to understand and implement: the FIFO page replacement algorithm is very straightforward, making it a simple algorithm to implement.
• Low overhead: the algorithm has low overhead and does not require any additional data structures to maintain information about page references.
Disadvantages
• Poor performance: the FIFO page replacement algorithm can suffer from poor performance, especially when the number of page faults is high.
• Belady’s Anomaly: FIFO page replacement algorithm
can result in a situation known as Belady’s Anomaly,
where the number of page faults can increase as the
number of frames increases.
• Does not consider page usage frequency: The FIFO
algorithm does not take into account how frequently a
page is used, and thus, pages that are heavily used may
be replaced by pages that are rarely used.
Optimal Page replacement

• The Optimal page replacement algorithm is also known as the MIN (MINimum) page replacement algorithm.
• It aims to minimize the number of page faults by always replacing the page that will not be used for the longest time in the future.
• Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page faults.
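Although Optimal cannot be implemented in a real OS (it needs future knowledge), it can be simulated offline when the whole reference string is known in advance. A sketch, with `optimal_faults` as a name chosen here; on the exercise string with 4 frames it reports the answer.

```python
def optimal_faults(refs, capacity):
    """Count page faults under Optimal (MIN) replacement with `capacity` frames."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < capacity:
            frames.append(page)
        else:
            future = refs[i + 1:]
            # Evict the resident page whose next use lies farthest in the
            # future (pages never used again count as infinitely far away).
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future else float("inf"))
            frames[frames.index(victim)] = page
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6 page faults
```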
Advantages of Optimal Page
Replacement Algorithm
• Good performance: Optimal page replacement
algorithm tends to provide the best performance
in terms of reducing page faults, as it replaces the
page that will not be used for the longest time in
the future.
• Avoids Belady’s Anomaly: Optimal page
replacement algorithm avoids Belady’s Anomaly.
• Considers future page usage: Optimal page
replacement algorithm takes into account future
page usage, providing an accurate prediction of
which pages are likely to be used in the future.
Disadvantages of Optimal Page Replacement Algorithm
• Impossible to implement: the Optimal page replacement algorithm is impossible to implement in practice, as it requires knowledge of future page usage, which is not available.
• Theoretical algorithm only: Optimal page replacement
algorithm is a theoretical algorithm used primarily for
performance comparison and evaluation purposes, rather
than as a practical algorithm for use in operating systems.
• No real-world implementation: The Optimal page
replacement algorithm does not have a real-world
implementation and is used only for comparison purposes
with other page replacement algorithms.
LRU Page Replacement Algorithm

• The LRU (Least Recently Used) page replacement algorithm is a more sophisticated algorithm that attempts to minimize the number of page faults by replacing the page that has not been used for the longest time.
• To implement LRU, the operating system can use a data structure, such as a stack or a queue, to keep track of the pages that are currently loaded in memory.
• When a page is accessed, it is moved to the top of the stack or the head of the queue, indicating that it has been recently used.
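The stack-like bookkeeping described above maps naturally onto an ordered dictionary (a sketch; `lru_faults` is a name chosen here): the least recently used page always sits at the front.

```python
from collections import OrderedDict

def lru_faults(refs, capacity):
    """Count page faults under LRU replacement with `capacity` frames."""
    frames = OrderedDict()  # order: least recently used first, most recent last
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1
            if len(frames) == capacity:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3))  # 8 page faults
```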
Advantages of LRU Page Replacement
Algorithm
• Good performance: LRU page replacement algorithm
tends to perform well in most cases and provides good
results in reducing page faults.
• Avoids Belady’s Anomaly: LRU page replacement
algorithm avoids Belady’s Anomaly, a situation where
the number of page faults can increase as the number
of frames increases, which is a problem that occurs
with the FIFO page replacement algorithm.
• Reflects page usage recency: the LRU page replacement algorithm takes into account how recently pages have been used, replacing the page that was least recently used.
Disadvantages
• Complex implementation: LRU page replacement
algorithm can be complex to implement, especially
when implemented using linked lists or stacks, which
may require additional data structures to maintain
information about page references.
• High overhead: LRU page replacement algorithm may
have higher overhead compared to other page
replacement algorithms, due to the need to keep
track of page usage information.
• Poor performance in some cases: In some cases, LRU
page replacement algorithm may not perform as well
as other algorithms, such as the Optimal page
replacement algorithm, when page access patterns
are not consistent.
Most Recently Used (MRU)
• In this algorithm, the page that has been used most recently is replaced.
• Belady's anomaly can occur in this algorithm.
• For a cyclic reference string such as the one below, both the Optimal and MRU page replacement algorithms replace the most recently used page, because the most recently used page will be required again only after the longest time.
• Hence, for such access patterns, both algorithms give the same optimal performance.

• Reference string: 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6. Number of page faults?
