Memory Management

The document discusses various concepts related to memory management including memory partitioning, virtual memory, and paging. It covers fixed and dynamic partitioning, fragmentation, paging, segmentation, virtual memory concepts like swapping and page tables, and issues like thrashing. Placement strategies for dynamic partitioning like first fit, best fit, and next fit are also summarized.


Memory Management


Memory Management concepts:


Memory Management requirements,
Memory Partitioning: Fixed, Dynamic Partitioning,
Buddy Systems,
Fragmentation, Paging, Segmentation,
Address translation.

Virtual Memory:
Concepts,
Swapping,
VM with Paging, Page Table Structure,
Inverted Page Table, Translation Lookaside Buffer, Page Size,
VM with Segmentation,
VM with combined paging and segmentation.

Swapping issues: Thrashing

Memory Management
■ In a uniprogramming system, main memory is divided into two parts:

• one part for the operating system (resident monitor, kernel) and

• one part for the program currently being executed.

■ In a multiprogramming system, the “user” part of memory must be further subdivided to accommodate multiple processes.

■ This task of subdivision, carried out dynamically by the OS, is known as memory management.

■ Memory needs to be allocated so as to ensure a reasonable supply of ready processes to consume available processor time.

■ Memory management also involves swapping blocks of data between main memory and secondary storage.
Execution of a Program
■ The operating system brings only a few pieces of the program into main memory.

■ Resident set – the portion of the process that is in main memory.

■ An interrupt is generated when an address is needed that is not in main memory.

■ The operating system then places the process in a blocked state.


Memory Management Requirements

1. Relocation

2. Protection

3. Sharing

4. Logical organization

5. Physical organization
Memory Management Terms

Term      Description
Frame     A fixed-length block of main memory.
Page      A fixed-length block of data in secondary memory (e.g. on disk).
Segment   A variable-length block of data that resides in secondary memory.
Memory Management Requirements
1. Relocation:

❑ The programmer does not know where the program will be placed in memory when it is executed.

❑ While the program is executing, it may be swapped to disk and returned to main memory at a different location (relocated).

❑ Memory references in the code must therefore be translated to actual physical memory addresses.
Memory Management Requirements
2. Protection
❑ Each process should be protected against unwanted interference by other
processes, whether accidental or intentional.
❑ Processes should not be able to reference memory locations in another
process without permission.

❑ The memory protection requirement must be satisfied by the processor (hardware) rather than the operating system (software).
Memory Management Requirements

3. Sharing

❑ Allow several processes to access the same portion of memory.

▪ e.g. a number of processes executing the same program.

❑ It is better to allow each process access to the same copy of the program rather than have its own separate copy.

❑ The protection mechanism must be flexible enough to allow several processes to access the same portion of memory.

❑ Processes that are cooperating on some task may need to share access
to the same data structure.
Memory Management Requirements
4. Logical Organization

❑ Memory is (usually) organized linearly:

- Main memory is usually organized as a linear, or 1-D, address space consisting of a sequence of bytes or words.

- Secondary memory, at its physical level, is similarly organized.


Memory Management Requirements
5. Physical Organization

❑ Memory is organized into two levels: 1. main memory and 2. secondary memory.

❑ Main memory: fast, expensive, volatile.

❑ Secondary memory: slow, less expensive, non-volatile.

❑ The flow of information between main and secondary memory is a major concern.

❑ The memory available for a program plus its data may be insufficient.

❑ The programmer does not know how much space will be available, or where that space will be.
Memory Management
❑ The principal operation of memory management is to bring a process from secondary memory into main memory.

❑ This typically involves virtual memory, which in turn involves segmentation and/or paging.

❑ Prior to virtual memory, other, simpler techniques were used:

❑ Partitioning

▪ Fixed

▪ Variable / Dynamic

❑ Concept of paging

❑ Concept of segmentation
Contiguous Memory Partitioning
❑ Partition the available memory into regions with fixed boundaries.

1. Fixed Partitioning:

Equal-size Partitions

Unequal-size Partitions

2. Variable Partitioning
Fixed Partitioning
■ Equal-size partitions

❑ Any process whose size is less than or equal to the partition size can be loaded into an available partition.

❑ If all partitions are full, the operating system can swap a process out of a partition.
Fixed Partitioning
■ Problems:

❑ A program may be too big to fit in any partition.

❑ Main memory use is inefficient:

▪ Any program, no matter how small, occupies an entire partition.

▪ In our example, there may be a program whose length is less than 2 Mbytes; yet it occupies an 8-Mbyte partition whenever it is swapped in.

▪ This phenomenon, in which there is wasted space internal to a partition because the block of data loaded is smaller than the partition, is referred to as internal fragmentation.
Solution: Unequal Size Partitions
■ Lessens both problems

❑ but doesn’t solve them completely.

■ In the figure:

❑ Programs up to 16M can be accommodated without overlay.

❑ Smaller programs can be placed in smaller partitions, reducing internal fragmentation.
Placement Algorithm with Partitions
■ Equal-size partitions

▪ Because all partitions are of equal size, it does not matter which partition is used.

▪ If all partitions are occupied by processes that are not ready to run, swap one out and load the new process in. Which one to swap out is a scheduling decision.

■ Unequal-size partitions

❑ Can assign each process to the smallest partition within which it will fit.

❑ A queue is kept for each partition.

❑ Processes are assigned in such a way as to minimize wasted memory within a partition.
Fixed Partitioning
❑ Advantages

▪ Simple; requires minimal OS and processing overhead.

❑ Disadvantages

▪ The number of partitions, specified at system generation time, limits the number of active processes the system can support.

▪ Internal fragmentation cannot be completely eliminated:

• it is always possible to get small jobs that do not utilize partitions fully.

❑ Fixed partitioning is nowhere to be seen today.

❑ Example: IBM mainframe OS/MFT (Multiprogramming with a Fixed number of Tasks).
Dynamic Partitioning
■ Purpose: to overcome the difficulties of fixed partitioning.

■ Partitions are of variable length and number.

■ A process is allocated exactly as much memory as it requires.

■ This method starts out well, but eventually it leads to a situation in which there are many small holes in memory. As time goes on, memory becomes more and more fragmented, and memory utilization declines. This phenomenon is referred to as external fragmentation, indicating that the memory that is external to all partitions becomes increasingly fragmented.

■ Example: IBM mainframe OS/MVT (Multiprogramming with a Variable number of Tasks).
Dynamic Partitioning Example

• External fragmentation: memory external to all processes becomes fragmented.

• This can be resolved using compaction.

[Figure: successive memory maps showing an 8M OS region and processes P1 (20M), P2 (14M), P3 (18M) and P4 (8M), with empty holes of 6M, 6M and 4M left between them. Refer to Figure 7.4.]
Dynamic Partitioning
- One technique for overcoming external fragmentation is compaction: from time to time, the OS shuffles all occupied areas of memory to one end, so that the processes are contiguous and all of the free memory is together in one large block.

• Problems with compaction:

o Extra overhead in terms of resource utilization and longer response times.
o Expensive.
o Needs dynamic relocation of processes in memory.
Contents

■ Memory Management requirements

■ Memory Partitioning: Fixed and Variable Partitioning,

■ Allocation Strategies (First Fit, Best Fit, and Worst Fit), Fragmentation,
Swapping.

■ Virtual Memory: Concepts, Segmentation, Paging, Address Translation,

■ Page Replacement Policies (FIFO, LRU, Optimal, Other Strategies),

■ Thrashing.

■ OS Services layer in the Mobile OS: Multimedia and Graphics Services, Connectivity Services.
Dynamic Partitioning Placement Algorithm
■ Purpose: to place processes so as to reduce the need for compaction.

■ Operating system must decide which free block to allocate to a process

■ When more than one choice available, OS must decide cleverly which hole
to fill

■ Hole must be big enough to accommodate the to-be-loaded process

■ Placement algorithms:

1. First fit

2. Best fit

3. Next fit
Dynamic Partitioning Placement Algorithm
■ First-fit algorithm
❑ Scans memory from the beginning and chooses the first available block
that is large enough

❑ Simplest, Fastest

❑ Many processes may be loaded at the front end of memory, and these must be searched over when trying to find a free block.
Dynamic Partitioning Placement Algorithm
Best-fit algorithm
❑ Scans all holes to see which is best.

❑ Chooses the block that is closest in size to the request.

❑ Worst performer overall.

❑ Since the smallest adequate block is found for each process, the smallest amount of fragmentation is left.

❑ Memory compaction must be done more often.

Dynamic Partitioning Placement Algorithm
■ Next-fit
❑ Scans memory from the location of the last placement.

❑ More often allocates a block of memory at the end of memory, where the largest block is found.

❑ The largest block of memory is thereby broken up into smaller blocks.

❑ Compaction is required to obtain a large block at the end of memory.

• In the figure, the last block that was used was a 22-Mbyte block from which a 14-Mbyte partition was created.
Comparison
• Depends on exact sequence of process swapping and size of processes
• First fit
– Simplest and usually the best and fastest
– Splits regions towards beginning requiring more searches
• Next fit produces slightly worse results than first fit
– Tends to allocate more frequently towards the end of the memory, thus
largest block of free memory which usually appears at the end is quickly
fragmented requiring more frequent compaction
• Best fit is usually the worst performer
– Every allocation leaves behind the smallest fragment, too small to be of any use
– Requires compaction even more frequently
• How about a worst-fit strategy?
■ Worst-fit: allocate the largest hole; this must also search the entire list.

❑ Produces the largest leftover hole.

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
1. Given memory partitions of 100 KB, 500 KB, 200 KB, 300
KB, and 600 KB. How would each of the First fit, Best-Fit
and Worst-Fit algorithms place processes of 212 KB, 417
KB, 112 KB, and 426 KB ?

2. Given memory partition of 100 KB, 500 KB, 200 KB and 600
KB (in order). Show with neat sketch how would each of the
first-fit, best-fit and worst fit algorithms place processes of
412 KB, 317 KB, 112 KB and 326 KB (in order). Which
algorithm is most efficient in memory allocation?
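As a sketch of how these strategies behave (not part of the original slides), exercise 1 can be simulated in Python. Each hole shrinks as a process is carved out of it, and `None` marks a request that must wait:

```python
def pick(holes, size, strategy):
    """Return the index of the hole to use, or None if nothing fits."""
    fits = [i for i, h in enumerate(holes) if h >= size]
    if not fits:
        return None
    if strategy == "first":
        return fits[0]
    if strategy == "best":
        return min(fits, key=lambda i: holes[i])   # tightest fit
    return max(fits, key=lambda i: holes[i])       # "worst": largest hole

def place_all(holes, requests, strategy):
    holes = list(holes)
    placements = []
    for r in requests:
        i = pick(holes, r, strategy)
        if i is not None:
            holes[i] -= r                          # carve the process out of the hole
        placements.append(i)                       # None means "must wait"
    return placements

parts = [100, 500, 200, 300, 600]                  # KB, from exercise 1
procs = [212, 417, 112, 426]
for s in ("first", "best", "worst"):
    print(s, place_all(parts, procs, s))
```

Running this, only best fit manages to place all four processes; first fit and worst fit both leave the 426 KB request waiting.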
Buddy System
• Overcomes the drawbacks of the fixed and variable partitioning schemes.
• Memory blocks are available in sizes of 2^K words, where L <= K <= U:
– 2^L is the smallest size block that is allocated,
– 2^U is the largest block that is allocated.
• Generally 2^U is the size of the entire memory available for allocation.
• The entire space available is treated as a single block of size 2^U.
• For a request of size s where 2^(U-1) < s <= 2^U:
– the entire block is allocated.
• Otherwise the block is split into two equal buddies of size 2^(U-1).
– The process continues until the smallest block greater than or equal to s is generated.
Buddy System
• The buddy system of partitioning relies on the fact that space
allocations can be conveniently handled in sizes of power of 2.
• There are two ways in which the buddy system allocates space.
• Suppose we have a free hole whose size is exactly the required power of two. In that case, that hole is used for the allocation.
• Otherwise, we look for the next larger power-of-2 hole size, split it into two equal halves, and allocate one of these.
• Because we always split holes into two equal halves, the two halves are “buddies”. Hence, the name buddy system.
• The buddy system has the advantage that it minimizes internal fragmentation.

• In practice, some Linux flavors use it.


Example of Buddy System
Tree Representation of Buddy System
• Figure 7.7 shows a binary tree representation of the buddy allocation immediately after the Release B request.

• The leaf nodes represent the current partitioning of the memory.

• If two buddies are both leaf nodes, then at least one must be allocated;

– otherwise they would be coalesced into a larger block.

• The buddy system is a reasonable compromise that overcomes the disadvantages of both the fixed and variable partitioning schemes,

• but in contemporary operating systems, virtual memory based on paging and segmentation is superior.

• However, the buddy system has found application in parallel systems as an efficient means of allocation and release for parallel programs. A modified form of the buddy system is used for UNIX kernel memory allocation.
Example on Buddy system
• A minicomputer uses the buddy system for memory management. Initially it has one block of 256K at address 0. After successive requests of 7K, 26K, 34K and 19K come in, how many blocks are left, and what are their sizes and addresses?

• Solution:
• 7K: we recursively break the address space into halves until an 8K block appears: 8K - 8K - 16K - 32K - 64K - 128K. The first 8K segment satisfies the 7K request.
• 26K: we use the 32K block: [8K] - 8K - 16K - [32K] - 64K - 128K (allocated blocks in brackets).
• 34K: we use the 64K block: [8K] - 8K - 16K - [32K] - [64K] - 128K.
• 19K: since the free 8K and 16K blocks cannot satisfy the request on their own, we recursively break the big 128K block until we get the size we need. The blocks now look like: [8K] - 8K - 16K - [32K] - [64K] - [32K] - 32K - 64K.
• Four free blocks remain: 8K at address 8K, 16K at 16K, 32K at 160K, and 64K at 192K.
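The splitting steps above can be checked with a minimal buddy-allocator sketch (sizes in KB; release and coalescing are omitted, since this example only allocates):

```python
def buddy_alloc(free, need):
    """Allocate the smallest power-of-two block >= need.
    `free` maps block size (KB) -> sorted list of start addresses."""
    size = 1
    while size < need:          # round the request up to a power of two
        size *= 2
    s = size
    while s not in free or not free[s]:   # find the smallest splittable block
        s *= 2
        if s > 1 << 20:
            raise MemoryError("no block large enough")
    addr = free[s].pop(0)
    while s > size:                        # split down, freeing the upper buddy
        s //= 2
        free.setdefault(s, []).append(addr + s)
        free[s].sort()
    return addr

free = {256: [0]}                          # one 256 KB block at address 0
for req in (7, 26, 34, 19):
    buddy_alloc(free, req)

leftover = sorted((a, s) for s, addrs in free.items() for a in addrs)
print(leftover)                            # (address, size) pairs of free blocks
```

This reproduces the answer above: free blocks of 8K, 16K, 32K and 64K at addresses 8K, 16K, 160K and 192K.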
Example on Buddy system
• A 1MB block of memory is allocated using the buddy system. Show the result of the following sequence: Request A 70, Request B 35, Request C 80, Return A, Request D 60, Return B, Return D, Return C.

• Show the binary tree representation following Return B.
Contents

■ Memory Management requirements

■ Memory Partitioning: Fixed and Variable Partitioning,

■ Allocation Strategies (First Fit, Best Fit, and Worst Fit), Fragmentation,
Swapping.

■ Paging, Segmentation, Address Translation,

■ Page Replacement Policies (FIFO, LRU, Optimal, Other Strategies),

■ Thrashing.
Addresses
■ Logical
❑ Reference to a memory location independent of the current
assignment of data/instruction to memory. Generated by CPU
■ Relative
❑ Type of logical address wherein address is specified as a
location relative to some known point, say a value in a register
■ Physical or Absolute
❑ The absolute address or actual location in main memory.
■ Typically, all of the memory references in a loaded process
are relative to the base address
❑ Hardware mechanism is used to translate logical/relative to
physical at the time of execution of instruction that contains the
reference
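A minimal sketch of this hardware translation (base-register relocation with a limit check for protection; the numbers are hypothetical):

```python
def translate(relative, base, limit):
    """Relocation: add the base register; the limit check provides protection."""
    if relative >= limit:
        raise MemoryError("protection fault: address outside this process")
    return base + relative

# A process loaded at physical address 4000, occupying 1000 bytes:
print(translate(120, base=4000, limit=1000))   # physical address 4120
```

Because the base is added at execution time, the same process can be swapped back in at a different base without changing its code.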
Paging
■ Fixed- and variable-size partitions are inefficient, as they suffer from internal or external fragmentation.

■ What were the two problems with equal sized fixed partitions?
▪ Program too large for a partition
▪ Program too small for a partition
■ Partition memory into small equal fixed-size chunks and divide each process
into the same size chunks.

■ The chunks of a process are called pages and chunks of memory are called
frames.

■ Operating system maintains a page table for each process

❑ Contains the frame location for each page in the process

❑ A memory address consists of a page number and an offset within the page.

❑ The page number is used as an index into the page table.


Paging
■ Each process has its own page table.

■ Each page table entry contains the frame number of the corresponding page
in main memory

■ A bit is needed to indicate whether the page is in main memory or not.

■ No external fragmentation

■ all frames (physical memory) can be used by processes

■ Possibility of Internal fragmentation

■ The physical memory used by a process is no longer contiguous

■ The logical memory of a process is still contiguous

■ The logical and physical addresses are separated

■ the process does not see the translation or the difference to having physical
memory
Advantages of Breaking up a Process

■ More processes may be maintained in main memory:

❑ only some of the pieces of each process need be loaded;

❑ with so many processes in main memory, it is very likely that some process will be in the Ready state at any particular time.

■ A process may be larger than all of main memory.

Processes and Frames

[Figure: the frames of main memory holding the pages of four processes — A.0–A.3, B.0–B.2, C.0–C.3 and D.0–D.4.]

Assignment of Process Pages to Free Frames
Page Tables for Example
Paging

● The page size is defined by the hardware.

● The page size is typically a power of 2, ranging from 512 bytes to 16 MB.

● The selection of a power of 2 as the page size makes translation of a logical address into a page number and page offset easy.

● An address generated by the CPU is divided into:

● Page number (p) – used as an index into a page table, which contains the base address of each page in physical memory.

● Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit.

For a given logical address space of 2^m bytes and page size of 2^n bytes, the page number is the high-order m − n bits and the page offset is the low-order n bits.
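The split can be sketched as two bit operations (the address 0x2ABC and the 16-bit / 1 KB geometry are hypothetical examples):

```python
def split(addr, m, n):
    """Split an m-bit logical address into (page number, offset) with 2**n-byte pages."""
    assert 0 <= addr < (1 << m)
    return addr >> n, addr & ((1 << n) - 1)   # high m-n bits, low n bits

# Hypothetical geometry: 16-bit addresses (m=16), 1 KB pages (n=10).
print(split(0x2ABC, m=16, n=10))   # (10, 700)
```

This is why a power-of-2 page size matters: the split costs only a shift and a mask, with no division hardware needed.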


Paging- Summary

● With simple paging, main memory is divided into many small equal-size frames.
● Each process is divided into frame-size pages.
● Smaller processes require fewer pages; larger processes require more.
● When a process is brought in, all of its pages are loaded into available frames, and a page table is set up.
● This approach solves many of the problems inherent in partitioning.
Segmentation
■ The user program and associated data are now divided not into pages but into segments, which can be of unequal size.

■ All segments of all programs do not have to be the same length; they may be of unequal, dynamic size.

■ There is a maximum segment length. Segmentation simplifies the handling of growing data structures.

■ A logical address consists of two parts: a segment number and an offset.

■ Similar to dynamic partitioning, except that:

– a program can now occupy more than one partition, and
– these partitions need not be contiguous.
Segmentation
• Paging is invisible to the programmer, segmentation is usually visible and
is provided as a convenience for organizing programs and data
– Compiler or programmer assigns programs and data to different
segments
– One program may be further broken down into multiple segments for
purposes of modular programming
• Allows programs to be altered and recompiled independently
• Simplifies handling of growing data structures
• Logical-to-physical address translation is now a little more complicated, but similar:
– a segment table holds the length and starting physical address of each segment.
Segment Organization

■ Each entry contains the starting address of the corresponding segment in main memory

■ and the length of the segment.

■ A bit is needed to determine whether the segment is already in main memory,

■ and another bit to determine whether the segment has been modified since it was loaded into main memory.
Address translation
• Extract the segment number from the logical address (leftmost n bits).

• Find the base address of this segment in the segment table.

• Compare the offset (rightmost m bits) with the segment length.

• If OK, desired physical address = base address + offset.

Segment no   Start   Length   End (start + length)
S2           222     198      420
S0           660     248      908
S3           996     604      1600
S1           1752    422      2174

• (0, 198): 660 + 198 = 858

• (2, 156): 222 + 156 = 378

• (1, 530): invalid, as offset > length

• (3, 444): 996 + 444 = 1440

• (0, 222): 660 + 222 = 882 (valid, since 222 < 248)
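These lookups can be checked with a short sketch that uses the segment table given above:

```python
# Segment table from above: segment number -> (start, length)
segtab = {0: (660, 248), 1: (1752, 422), 2: (222, 198), 3: (996, 604)}

def seg_translate(seg, offset):
    base, length = segtab[seg]
    if offset >= length:                 # the offset must lie inside the segment
        raise MemoryError("invalid: offset exceeds segment length")
    return base + offset

print(seg_translate(0, 198))   # 660 + 198 = 858
print(seg_translate(2, 156))   # 222 + 156 = 378
print(seg_translate(3, 444))   # 996 + 444 = 1440
```

An out-of-range offset such as (1, 530) raises the protection error instead of producing an address.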

Example
• 8-bit virtual addresses, 10-bit physical addresses, and each page is 64 bytes.

• How many virtual pages? 2^8 / 64 = 4 pages.

• How many physical frames? 2^10 / 64 = 16 frames.

• How many entries in the page table? 4 PTEs.

• Given page table = [2, 5, 1, 8], what is the physical address for virtual address 241? 561.
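The arithmetic behind 561 can be verified with a short sketch:

```python
page_table = [2, 5, 1, 8]       # page number -> frame number, from the example
PAGE = 64                        # 64-byte pages

def physical(va):
    p, d = divmod(va, PAGE)      # page number and offset within the page
    return page_table[p] * PAGE + d

print(physical(241))             # page 3, offset 49 -> frame 8 -> 8*64 + 49 = 561
```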
Virtual Memory : Paging & Segmentation
Addresses
Key points in Memory Management

1) Memory references are logical addresses that are dynamically translated into physical addresses at run time.

❑ A process may be swapped in and out of main memory, occupying different regions at different times during execution.

2) A process may be broken up into pieces that do not need to be located contiguously in main memory.

❑ The portion of a process that is in memory at any given time is called the resident set of the process.
Breakthrough in Memory Management

■ If both of those two characteristics are present,

❑ then it is not necessary that all of the pages or all of the segments of a process be in main memory during execution.

■ If the next instruction and the next data location are in memory, then execution can proceed,

❑ at least for a time.


Background
■ Virtual memory – separation of user logical memory from physical memory.

❑ Only part of the program needs to be in memory for execution

❑ The logical address space can therefore be much larger than the physical address space.

❑ Allows address spaces to be shared by several processes

❑ Allows for more efficient process creation

■ Virtual memory can be implemented via:

❑ Demand paging

❑ Demand segmentation
Virtual Memory

Demand paging and demand segmentation mean that, when a program is being executed, part of the program is in memory and part is on disk.
This means that, for example, a memory size of 10 MB can execute 10 programs, each of size 3 MB, for a total of 30 MB.
At any moment, 10 MB of the 10 programs are in memory and 20 MB are on disk. There is therefore an actual memory size of 10 MB, but a virtual memory size of 30 MB. Figure 7.11 shows the concept. Virtual memory, which implies demand paging, demand segmentation or both, is used in almost all operating systems today.
Virtual Memory

Virtual Memory
• If a ‘piece’ is not in memory when it is required, the processor generates an interrupt indicating a memory access fault.

– The OS takes charge and puts the process in the blocked state.

– The OS issues a disk I/O request to bring in the desired piece

– and schedules another process in the meantime.

– Once the piece is brought in, an I/O interrupt is raised; the OS takes control again and puts the process in the Ready queue.

• Note: the abbreviation “VM” is used both for Virtual Memory and for Virtual Machine; here it means virtual memory.

Advantages of having only a portion
• More processes can be in memory at a given time, increasing CPU utilization and throughput with no increase in response time or turnaround time.

• Processes can now be larger than main memory, without any effort (overlaying) on the part of the programmer.

• Why waste memory on portions of a program or its data that are used only rarely?

• Time is saved, as unused pieces are not swapped in and out.

• Less I/O is needed to load or swap user programs into memory, so each user program runs faster.
Steps in handling a page fault
Issues
• To bring in a piece, some other piece needs to be thrown out.
– If a piece is thrown out just before it is used,
– the OS must go get that piece again almost immediately.
– Too much of this leads to thrashing:
• the system spends too much time swapping pieces rather than executing instructions.
• The problem is worsened if the OS mistakes this activity for an indicator that it should increase the level of multiprogramming.

• Solution: the OS tries to guess which pieces are least likely to be used in the near future.
Virtual Memory Requirements
• Hardware must support paging and/or segmentation

• OS must support swapping of pages and/or segments


Virtual Memory + Paging
• With earlier (simple) paging, when all pages of a process are loaded into memory, its page table is created and loaded.

• A Page Table Entry (PTE) now needs to have:

– an extra bit to indicate whether the page is in memory or not (P);

– another bit to indicate whether the page has been modified since it was last loaded (M):

• if the page is unmodified when it is chosen for replacement, it need not be written back to disk;

– some control bits,

• for example for protection or sharing at the page level.

Virtual Memory + Paging /Demand Paging

Page Table Entry

Present bit Modify bit


Page protection

• Implemented by associating protection bits with each virtual page in the page table.
• Protection bits:
– present bit: does the entry map to a valid physical page?
– read/write/execute bits: can the page be read/written/executed?
– user bit: can the page be accessed in user mode?
• x86: PTE_P, PTE_W, PTE_U.
• Checked by the MMU on each memory access.
Example:Valid (v) or Invalid (i) Bit In A Page Table

Demand paging
• A page is brought in only when a reference is made to it:

– invalid reference ⇒ abort;

– not in memory ⇒ bring it to memory.

• Lazy swapper – never swaps a page into memory unless the page will be needed.

• A swapper that deals with pages is a pager.

Transfer of a Paged Memory to Contiguous Disk Space
Address Translation in Paging
(The page-number field is larger than the frame-number field: n > m.)
Page Tables
■ Generally there is one page table per process.

■ Say a process requires 2 GB (2^31 bytes) of virtual memory. How many 512-byte pages will it contain? 2^31 / 2^9 = 2^22 pages per process.

■ As the size of the page table increases, the amount of memory required by it could be unacceptably high.

■ Page tables are therefore also stored in virtual memory

■ and are subject to paging like other pages.

■ When a process is running, at least part of its page table must be in main memory.

■ The page-table base register (PTBR) points to the page table.

■ The page-table length register (PTLR) indicates the size of the page table.

■ In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction.
Translation Lookaside Buffer
■ Each virtual memory reference can cause two physical memory
accesses
❑ One to fetch the appropriate page table entry
❑ One to fetch the data itself
■ Effect of doubling the memory access time!

■ To overcome this problem a high-speed cache is set up for page


table entries
❑ Called a Translation Lookaside Buffer (TLB)

❑ Contains page table entries that have been most recently used
TLB Operation
■ Given a virtual address, the processor examines the TLB.

■ If the page table entry is present (a TLB hit), the frame number is retrieved and the real address is formed.
■ If the page table entry is not found in the TLB (a TLB miss), the page number is used to index the process page table:

▪ first check whether the page is already in main memory;

• if yes, proceed, and update the TLB to include the new entry;

• if no, a page fault is issued to get the page (the OS takes over), the page table is updated, and the instruction is re-executed.

■ A TLB typically has between 64 and 1,024 entries.
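A toy model of this flow (Python dictionaries stand in for the hardware TLB and for the resident page table; the mappings are hypothetical):

```python
tlb = {}                            # a tiny fully-associative TLB: page -> frame
page_table = {0: 7, 1: 3, 2: 9}     # hypothetical resident pages of one process

def lookup(page):
    """Return (frame, 'hit' or 'miss'). A page absent from page_table
    would be a page fault, which this sketch does not model."""
    if page in tlb:
        return tlb[page], "hit"
    frame = page_table[page]        # the extra memory access on a TLB miss
    tlb[page] = frame               # update the TLB with the new entry
    return frame, "miss"

print(lookup(1))   # first reference: a miss
print(lookup(1))   # repeated reference: a hit
```

The second reference to the same page avoids the page-table access entirely, which is the whole point of the TLB.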


Translation Lookaside Buffer
TLB Operation
Page table size issues
• Given:
– a 32-bit address space (4 GB),

– 4 KB pages,

– a page table entry of 4 bytes.

• Implication: the page table is 4 MB per process!
– (2^32 / 2^12 = 2^20 pages, × 4 bytes each = 4 MB.)

• Observation: address spaces are often sparse.

– Few programs use all of the 2^32 bytes.

• Change the page table structure to save memory:

– trade translation time for page table space.

• Solution: hierarchical page tables.

Two Level Hierarchical Page Table
• Some processors make use of a two-level scheme to organize large page tables.

• There is a page directory:

– each entry points to a page table.

• If the length of the page directory is X and the maximum length of a page table is Y,

• a process can consist of up to X × Y pages.

• Typically the maximum length of a page table is restricted to one page.
Two Level Hierarchical Page Table
• Page size: 4 KB = 2^12 bytes; PTE size: 4 = 2^2 bytes.
• A 2^32-byte address space therefore has 2^32 / 2^12 = 2^20 pages,
• so the full page table occupies 2^20 × 2^2 = 2^22 bytes = 2^10 pages,
• and the page directory needs 2^10 entries (which fit in one page).
Address Translation in Two-Level Paging System
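With the 10 + 10 + 12-bit split used by this scheme, extracting the directory index, the page-table index, and the offset is a pair of shifts and masks (the example address is hypothetical):

```python
def indices(va):
    """32-bit VA -> (directory index, page-table index, offset), a 10+10+12 split."""
    return (va >> 22) & 0x3FF, (va >> 12) & 0x3FF, va & 0xFFF

print(indices(0x00ABCDEF))   # hypothetical address -> (2, 700, 3567)
```

The directory index selects one of 2^10 page tables, the middle index selects a PTE within it, and the offset addresses a byte within the 4 KB page.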
Inverted Page Table
• Used in the PowerPC, UltraSPARC, and IA-64 architectures.

• The page number portion of a virtual address is mapped into a hash value.

• The hash value points into the inverted page table.

• A fixed proportion of real memory is required for the table, regardless of the number of processes.
Inverted Page Table
• Page number

• Process identifier

• Control bits

• Chain pointer
Inverted Page Table
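A toy inverted page table with hashing and chain pointers (the hash function and sizes are hypothetical; as in the real structure, there is one entry per physical frame and the chain pointer lives in the entry itself):

```python
NFRAMES = 8
frames = [None] * NFRAMES   # frame i -> (pid, page number, next frame in chain)
head = {}                   # hash bucket -> first frame index in the chain

def h(pid, page):
    return (pid * 31 + page) % NFRAMES     # toy hash of (process id, page number)

def ipt_insert(pid, page, frame):
    b = h(pid, page)
    frames[frame] = (pid, page, head.get(b))   # chain pointer to the old head
    head[b] = frame

def ipt_lookup(pid, page):
    f = head.get(h(pid, page))
    while f is not None:
        p, g, nxt = frames[f]
        if (p, g) == (pid, page):
            return f            # the frame number is the table index itself
        f = nxt                 # follow the chain pointer on a collision
    raise KeyError("page fault")

ipt_insert(1, 5, 0)             # process 1, page 5 -> frame 0
ipt_insert(2, 5, 3)             # process 2, page 5 -> frame 3
print(ipt_lookup(1, 5), ipt_lookup(2, 5))
```

Because the table has one entry per frame rather than one per virtual page, its size is fixed by physical memory, not by the number or size of processes.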
Page Size
• Effect on internal fragmentation?

– Smaller page size ⇒ less internal fragmentation.

• For optimal memory utilization we want less internal fragmentation:

– keep the page size small.

• But a small page size ⇒ more pages per process ⇒ larger page tables ⇒ page tables themselves paged in virtual memory ⇒ potential double page fault!

• Further, a larger page size ⇒ more efficient block transfer of data,

– due to the physical characteristics of rotational devices.

Vishal Kaushal
Page Size
■ Secondary memory is designed to efficiently transfer large blocks of
data so a large page size is better
■ Small page size, large number of pages will be found in main
memory
■ As time goes on during execution, the pages in memory will all
contain portions of the process near recent references => Page
faults low.

■ An increased page size causes pages to contain locations further from any recent reference => page faults rise.
Effect of page size on number of page faults
• With a very small page size:
– more pages fit in memory, thus fewer page faults;
– greater effect of the principle of locality:
• each page refers to nearby locations.
• As the page size increases:
– fewer pages fit in memory, thus more page faults;
– the effect of locality is reduced:
• each page will contain locations further and further away from recent references.
• The page fault rate is also determined by the number of frames allocated to a process,
• and by the size of physical memory relative to the program size.
Page Size
Example Page Size
Ordinary Paging vs VM Paging

Common to both:
• Main memory is partitioned into small fixed-size chunks called frames.
• The program is broken into pages by the compiler or memory management system.
• Internal fragmentation within frames; no external fragmentation.
• The OS must maintain a page table for each process, showing which frame each page occupies.
• The OS must maintain a free frame list.
• The processor uses the page number and offset to calculate the absolute address.

Differences:
• Simple paging: all the pages of a process must be in main memory for the process to run, unless overlays are used.
• Virtual memory paging: not all the pages of a process must be in main memory; a page may be read in as needed, and reading a page into main memory may require writing a page out to disk.
Segmentation
■ Memory consists of multiple address spaces/segments.
■ Segments may be of unequal/dynamic size.
■ A memory reference consists of a (segment number, offset) pair that forms the address.
■ Have advantages to programmer over non segmented memory
Easier to relocate segment than entire program
Simplifies handling of growing data structures
Allows programs to be altered and recompiled independently
Sharing among processes
Provides protection
Avoids allocating unused memory (no internal fragmentation)
Efficient translation -> Segment table small ( fit in MMU )
❑ Disadvantages
Segments have variable lengths -> how to fit?
Segments can be large -> external fragmentation
VM + Segmentation
• With simple segmentation, each process has its own segment table

– When all of its segments are loaded into main memory, the segment
table is created and loaded

• With VM, segment table entries (STEs) become more complex

– P bit (present/absent)

– M bit (modify)

– Other control bits

• For example to support sharing or protection at segment


level
VM + Segmentation

Segment Table Entries

Present bit Modify bit


Address Translation in Segmentation
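The translation pictured here amounts to a segment-table lookup plus a bounds check. A simplified sketch (field and function names are my own; the P bit is modelled as `present`):

```python
class SegmentEntry:
    def __init__(self, base, length, present=True):
        self.base = base        # load address of the segment in main memory
        self.length = length    # segment length, used for the bounds check
        self.present = present  # P bit: is the segment in main memory?

def translate(seg_table, seg_num, offset):
    """Translate a (segment number, offset) pair to a physical address."""
    entry = seg_table[seg_num]
    if not entry.present:
        # In VM segmentation this triggers the OS to read the segment in
        raise RuntimeError("segment fault: segment not in main memory")
    if offset >= entry.length:
        raise RuntimeError("protection fault: offset beyond segment length")
    return entry.base + offset

table = [SegmentEntry(base=0x4000, length=0x1000)]
print(hex(translate(table, 0, 0x20)))  # -> 0x4020
```

Unlike paging, the offset must be compared against the segment length, because segments are variable-sized.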
Ordinary Segmentation vs VM Segmentation

Common to both simple segmentation and VM segmentation:
• Main memory is not partitioned
• Program segments are specified by the programmer to the compiler
• No internal fragmentation
• External fragmentation
• OS must maintain a segment table for each process showing the load
address and length of each segment
• OS must maintain a list of holes
• Processor uses (segment number, offset) to calculate the absolute address

Simple segmentation only:
• All the segments of a process must be in main memory for the process to
run, unless overlays are used

VM segmentation only:
• Not all the segments of a process must be in main memory; segments may
be read in as needed
• Reading a segment into main memory may require writing one or more
segments out to disk
Contents

■ Memory Management requirements

■ Memory Partitioning: Fixed and Variable Partitioning

■ Allocation Strategies (First Fit, Best Fit, and Worst Fit), Fragmentation,
Swapping

■ Virtual Memory: Concepts, Segmentation, Paging, Address Translation

■ Page Replacement Policies (FIFO, LRU, Optimal, Other Strategies)

■ Thrashing

■ OS Services layer in the Mobile OS: Multimedia and Graphics Services,
Connectivity Services
What happens when there is no free frame?
Page Table When Some Pages Are Not in Main Memory
Page Fault
■ The first reference to a page that is not in memory traps to the
operating system: a page fault
■ The operating system looks at another table to decide:
❑ Invalid reference ⇒ abort the process
❑ Page is valid but just not in memory ⇒ handle the fault:
1. Get an empty frame
2. Swap the page into the frame
3. Reset the tables
4. Set the validation bit = v
5. Restart the instruction that caused the page fault
Steps in handling a page fault
What happens if there is no free frame?

■ Page replacement – find some page in memory that is not really in use and
swap it out
• Different algorithms give different performance
• We want an algorithm that results in the minimum number of page faults
■ The same page may be brought into memory several times
Page Replacement
■ Prevent over-allocation of memory by modifying page-fault service routine to
include page replacement

■ Use modify (dirty) bit to reduce overhead of page transfers – only modified
pages are written to disk

■ Page replacement completes separation between logical memory and


physical memory – large virtual memory can be provided on a smaller
physical memory
Basic Page Replacement
1. Find the location of the desired page on disk

2. Find a free frame:
   - If there is a free frame, use it
   - If there is no free frame, use a page-replacement
     algorithm to select a victim frame

3. Bring the desired page into the (newly) free frame; update the
   page and frame tables

4. Restart the process
Page Replacement
Page-Replacement Algorithms
• Want lowest page-fault rate.

• Evaluate an algorithm by running it on a particular string of memory
references (a reference string) and computing the number of page faults
on that string.

• In all our examples, the reference string is

ABCABDADBCB
Basic Replacement Algorithms
■ First-in, first-out (FIFO)

❑ Treats page frames allocated to a process as a circular buffer

❑ Pages are removed in round-robin style

❑ Simplest replacement policy to implement

❑ Page that has been in memory the longest is replaced

❑ These pages may be needed again very soon

❑ How would you implement FIFO strategy?


• Keep a queue and do round robin
Example: FIFO
• Suppose we have 3 page frames, 4 virtual pages, and the
following reference stream:
– ABCABDADBCB
• Consider FIFO page replacement:

Ref:     A  B  C  A  B  D  A  D  B  C  B
Frame 1: A  A  A  A  A  D  D  D  D  C  C
Frame 2:    B  B  B  B  B  A  A  A  A  A
Frame 3:       C  C  C  C  C  C  B  B  B
Fault:   *  *  *        *  *     *  *

– FIFO: 7 faults.
– When referencing D, replacing A is a bad choice, since we
need A again right away
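The queue-based FIFO strategy above can be simulated in a few lines. A sketch (the helper name `fifo_faults` is my own):

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Simulate FIFO page replacement; return the number of page faults."""
    frames = deque()              # oldest resident page sits at the left
    faults = 0
    for page in refs:
        if page in frames:
            continue              # hit: FIFO order is NOT updated on a hit
        faults += 1
        if len(frames) == n_frames:
            frames.popleft()      # evict the page resident the longest
        frames.append(page)
    return faults

print(fifo_faults("ABCABDADBCB", 3))  # -> 7, matching the trace above
```

Note that a hit does not touch the queue: FIFO considers only when a page was brought in, not when it was last used.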
Basic Replacement Algorithms
■ Optimal policy

❑ Selects for replacement that page for which the time to the next
reference is the longest

❑ Impossible to have perfect knowledge of future events


Example: Optimal (MIN)
• Suppose we have the same reference stream:
– ABCABDADBCB
• Consider optimal page replacement:

Ref:     A  B  C  A  B  D  A  D  B  C  B
Frame 1: A  A  A  A  A  A  A  A  A  C  C
Frame 2:    B  B  B  B  B  B  B  B  B  B
Frame 3:       C  C  C  D  D  D  D  D  D
Fault:   *  *  *        *           *

– MIN: 5 faults
– Where is D brought in? Replace the page not referenced for
the farthest time into the future (here C).
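Though unrealizable in practice (it needs the future reference string), MIN is easy to simulate for comparison studies. A sketch (helper names are my own):

```python
def opt_faults(refs, n_frames):
    """Simulate the optimal (MIN) policy; return the number of page faults."""
    faults = 0
    frames = []
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)
            continue

        def next_use(p):
            # Position of p's next reference; pages never used again
            # score past the end and are evicted first.
            idx = refs.find(p, i + 1)
            return idx if idx != -1 else len(refs)

        victim = max(frames, key=next_use)   # farthest future use
        frames[frames.index(victim)] = page
    return faults

print(opt_faults("ABCABDADBCB", 3))  # -> 5, matching the trace above
```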
Basic Replacement Algorithms
■ Least Recently Used (LRU)

❑ Replaces the page that has not been referenced for the longest time

❑ By the principle of locality, this should be the page least likely to be


referenced in the near future

❑ Each page could be tagged with the time of last reference. This would
require a great deal of overhead.
Example: LRU
• Suppose we have the same reference stream:
– ABCABDADBCB
• Consider LRU page replacement:

Ref:     A  B  C  A  B  D  A  D  B  C  B
Frame 1: A  A  A  A  A  A  A  A  A  C  C
Frame 2:    B  B  B  B  B  B  B  B  B  B
Frame 3:       C  C  C  D  D  D  D  D  D
Fault:   *  *  *        *           *

– LRU: 5 faults
• What will LRU do?
– The same decisions as MIN here, but that won't always be true!
LRU Algorithm (Cont.)
■ How would you implement LRU strategy?

■ Stack implementation – keep a stack of page numbers in a doubly linked form:

❑ Any time a page is referenced, move it to the top

❑ Replace the page at the bottom of the stack

❑ No search is needed to find the replacement victim
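The doubly linked stack described above can be approximated with an ordered map. A sketch (my own illustration; `OrderedDict` stands in for the linked stack, with the most recently used page at the right end):

```python
from collections import OrderedDict

def lru_faults(refs, n_frames):
    """Simulate LRU page replacement; return the number of page faults."""
    frames = OrderedDict()                 # left = LRU, right = MRU
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)       # referenced: move to the "top"
            continue
        faults += 1
        if len(frames) == n_frames:
            frames.popitem(last=False)     # evict from the "bottom" (LRU)
        frames[page] = None
    return faults

print(lru_faults("ABCABDADBCB", 3))  # -> 5, same as MIN on this string
```

Both operations are O(1), which is exactly the "no search for replacement" property claimed for the stack implementation.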


Comparison of Algorithms

                   FIFO              OPT/MIN            LRU
Data structure     Queue             N/A                Doubly linked list,
for implementation                                      counter or stack method
Traversal of       Forward           Forward            Backward
reference string
Implementation     Easy to           Difficult to       Approximation of OPT;
                   understand and    implement          implementation is
                   implement                            possible
Performance        Not good          Mainly used for    Good
                                     comparison study
Time considered    Time the page     Time the page      Time the page was
                   was brought       will next be       last used
                   into memory       used
Thrashing
■ If a process does not have “enough” pages, the page-fault rate is very high.
This leads to:

❑ low CPU utilization

❑ operating system thinks that it needs to increase the degree of


multiprogramming

❑ another process added to the system

■ Thrashing ≡ a process is busy swapping pages in and out

■ Swapping out a piece of a process just before that piece is needed

■ The processor spends most of its time swapping pieces rather than
executing user instructions
Thrashing (Cont.)

■ With a global page-replacement algorithm, a thrashing process can steal
frames from other processes, spreading the thrashing to them

■ A local page-replacement algorithm limits the effect of thrashing to the
faulting process
How to avoid thrashing?
• Provide processes with as many frames as they need

• How do we determine this?

– Look at how many frames a process actually "uses"

• Locality model

• Working set model

• Page-fault frequency model
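In the working set model, W(t, Δ) is the set of pages referenced in the last Δ references ending at time t; allocating at least |W| frames keeps the locality resident. A minimal sketch (my own illustration, not from the slides):

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the last `delta` references
    ending at time t (1-based)."""
    start = max(0, t - delta)
    return set(refs[start:t])

# With delta = 4, after the first 6 references of ABCABDADBCB the
# working set is the pages touched at positions 3..6 (C, A, B, D).
print(sorted(working_set("ABCABDADBCB", 6, 4)))  # -> ['A', 'B', 'C', 'D']
```

A process whose working set cannot be kept resident is a candidate for suspension rather than for more aggressive replacement.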
