Memory Management
Virtual Memory:
Concepts,
Swapping,
VM with Paging, Page Table Structure,
Inverted Page Table, Translation Lookaside Buffer, Page Size,
VM with Segmentation,
VM with combined paging and segmentation.
Memory Management
■ In a uniprogramming system, main memory is divided into two parts:
• one part for the operating system (resident monitor, kernel) and
• one part for the program currently being executed.
1. Relocation
2. Protection
3. Sharing
4. Logical organization
5. Physical organization
Memory Management Terms
Term      Description
Frame     Fixed-length block of main memory.
Page      Fixed-length block of data in secondary memory (e.g., on disk).
Segment   Variable-length block of data that resides in secondary memory.
Memory Management Requirements
1. Relocation:
❑ Programmer does not know where the program will be placed in memory
when it is executed.
3. Sharing
❑ Better to allow each process access to the same copy of the program
rather than have their own separate copy
❑ Processes that are cooperating on some task may need to share access
to the same data structure.
Memory Management Requirements
4. Logical Organization
❑ Main memory is organized linearly, but programs are written as collections of modules; memory management should support this modular view (e.g., via segmentation).
5. Physical Organization
❑ Programmer does not know how much space will be available and where that space will be.
Memory Management
❑ The principal operation of memory management is to bring processes from secondary memory into main memory for execution by the processor.
❑ Partitioning
▪ Fixed
▪ Variable / Dynamic
❑ Concept of paging
❑ Concept of segmentation
Contiguous Memory Partitioning
❑ Partition the available memory into regions with fixed boundaries.
1. Fixed Partitioning:
Equal-size Partitions
Unequal-size Partitions
2. Variable Partitioning
Fixed Partitioning
■ Equal-size partitions
■ In Figure
▪ Because all partitions are of equal size, it does not matter which partition is used.
▪ If all partitions are occupied by processes that are not ready to run, swap one out and load the new process in; which process to swap out is a scheduling decision.
■ Unequal-size partitions
❑ Can assign each process to the smallest partition within which it will fit.
Dynamic Partitioning
❑ Partitions are of variable length and number; each process is allocated exactly as much memory as it requires.
❑ Disadvantage
■ This method starts out well, but eventually it leads to a situation in which there are a lot of small holes in memory. As time goes on, memory becomes more and more fragmented, and memory utilization declines. This phenomenon is referred to as external fragmentation, indicating that the memory that is external to all partitions becomes increasingly fragmented.
• External Fragmentation
(Figure: memory map in which process P3 (18M) is followed by a 4M hole too small to be of use.)
■ Allocation Strategies (First Fit, Best Fit, and Worst Fit), Fragmentation,
Swapping.
■ Thrashing.
■ When more than one choice available, OS must decide cleverly which hole
to fill
■ Placement algorithms:
1. First fit
2. Best fit
3. Next fit
Dynamic Partitioning Placement Algorithm
■ First-fit algorithm
❑ Scans memory from the beginning and chooses the first available block
that is large enough
❑ Simplest, Fastest
❑ May leave many processes loaded at the front end of memory that must be searched over when trying to find a free block
Dynamic Partitioning Placement Algorithm
■ Best-fit algorithm
❑ Scans all holes and chooses the one closest in size to the request
❑ Leaves the smallest possible leftover fragment, but must search the entire list on every allocation
■ Next-fit algorithm
❑ Scans from the location of the last placement and chooses the next available block that is large enough
❑ More often allocates a block at the end of memory, where the largest free block is found
■ Simulations show that first-fit and best-fit are better than worst-fit in terms of both speed and storage utilization
1. Given memory partitions of 100 KB, 500 KB, 200 KB, 300
KB, and 600 KB. How would each of the First fit, Best-Fit
and Worst-Fit algorithms place processes of 212 KB, 417
KB, 112 KB, and 426 KB ?
2. Given memory partition of 100 KB, 500 KB, 200 KB and 600
KB (in order). Show with neat sketch how would each of the
first-fit, best-fit and worst fit algorithms place processes of
412 KB, 317 KB, 112 KB and 326 KB (in order). Which
algorithm is most efficient in memory allocation?
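Exercise 1 can be checked with a short simulation (a sketch; the function name and the convention that each partition simply shrinks as processes are placed into it are my own, not from the exercise):

```python
def place(holes, procs, strategy):
    """Place each process into a hole by the given strategy; holes
    shrink as they are used.  Returns (process, hole_index or None)."""
    holes = list(holes)                       # remaining space per hole
    result = []
    for p in procs:
        fits = [i for i, h in enumerate(holes) if h >= p]
        if not fits:
            result.append((p, None))          # must wait: nothing fits
            continue
        if strategy == "first":
            i = fits[0]                       # first hole big enough
        elif strategy == "best":
            i = min(fits, key=lambda j: holes[j])   # tightest fit
        else:                                 # "worst": loosest fit
            i = max(fits, key=lambda j: holes[j])
        result.append((p, i))
        holes[i] -= p
    return result

parts, procs = [100, 500, 200, 300, 600], [212, 417, 112, 426]
for s in ("first", "best", "worst"):
    print(s, place(parts, procs, s))
```

For Exercise 1, only best-fit manages to place all four processes; first-fit and worst-fit both leave the 426 KB process waiting.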
Buddy System
• Overcomes the drawbacks of the fixed and variable partitioning schemes.
• Memory blocks are available in sizes of 2^K words, where L <= K <= U
• 2^L = smallest size of block that is allocated
• 2^U = largest size of block that is allocated; generally, 2^U is the size of the entire memory available for allocation
• The entire space available is treated as a single block of size 2^U
• For a request of size s where 2^(U-1) < s <= 2^U:
  – the entire block is allocated
• Otherwise the block is split into two equal buddies of size 2^(U-1)
  – Splitting continues until the smallest block greater than or equal to s is generated
Buddy System
• The buddy system of partitioning relies on the fact that space
allocations can be conveniently handled in sizes of power of 2.
• There are two ways in which the buddy system allocates space.
• Suppose we have a hole which is the closest power of two. In that
case, that hole is used for allocation.
• In case we do not have that situation then we look for the next
power of 2 hole size, split it in two equal halves and allocate one of
these.
• Because we always split the holes in two equal sizes, the two are
“buddies”. Hence, the name buddy system.
• The buddy system has the advantage that splitting and coalescing are fast and external fragmentation is limited, at the cost of some internal fragmentation from rounding each request up to a power of 2.
Example on Buddy system
• A buddy system manages 256K of memory; requests of 7K, 26K, 34K, and 19K arrive in order. After these allocations, how many free blocks are left, and what are their sizes and addresses?
• (Note: if two buddies are both leaf nodes of the allocation tree, at least one must be allocated; otherwise they would have been coalesced.)
• Solution:
• 7K: Recursively break the 256K space into halves until an 8K block exists:
  8K (A: 7K) - 8K - 16K - 32K - 64K - 128K
• 26K: Use the 32K block:
  8K (A) - 8K - 16K - 32K (B: 26K) - 64K - 128K
• 34K: Use the 64K block:
  8K (A) - 8K - 16K - 32K (B) - 64K (C: 34K) - 128K
• 19K: The free 8K and 16K blocks cannot satisfy the request, so the 128K block is broken recursively until a 32K block is produced:
  8K (A) - 8K - 16K - 32K (B) - 64K (C) - 32K (D: 19K) - 32K - 64K
• Four free blocks remain: 8K (at address 8K), 16K (at 16K), 32K (at 160K), and 64K (at 192K).
Example on Buddy system
• A 1-Mbyte block of memory is allocated using the buddy system. Show the result of the following sequence: Request A = 70K, Request B = 35K, Request C = 80K, Return A, Request D = 60K, Return B, Return D, Return C.
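The splitting and coalescing rules above can be sketched as a toy allocator (class and method names are mine; addresses are byte offsets from the start of the arena):

```python
class Buddy:
    """Toy buddy allocator over a 2**U-byte arena (a sketch, not
    production code).  free[k] holds start addresses of free 2**k blocks."""

    def __init__(self, U, L):
        self.U, self.L = U, L
        self.free = {k: [] for k in range(L, U + 1)}
        self.free[U].append(0)                # the whole arena, at address 0

    def alloc(self, size):
        k = max(self.L, (size - 1).bit_length())   # smallest order that fits
        j = k
        while j <= self.U and not self.free[j]:
            j += 1                            # search for a larger free block
        if j > self.U:
            return None                       # no block large enough
        addr = self.free[j].pop()
        while j > k:                          # split down to the needed size
            j -= 1
            self.free[j].append(addr + 2 ** j)     # upper half is the buddy
        return addr

    def free_block(self, addr, size):
        k = max(self.L, (size - 1).bit_length())
        while k < self.U:
            buddy = addr ^ 2 ** k             # buddy address differs in bit k
            if buddy not in self.free[k]:
                break                         # buddy still in use: stop
            self.free[k].remove(buddy)        # coalesce into a 2**(k+1) block
            addr = min(addr, buddy)
            k += 1
        self.free[k].append(addr)

b = Buddy(U=20, L=12)             # 1-Mbyte arena, 4K minimum block
print(b.alloc(70 * 1024))         # A -> address 0 (rounded up to a 128K block)
print(b.alloc(35 * 1024))         # B -> 131072, i.e. 128K (a 64K block)
print(b.alloc(80 * 1024))         # C -> 262144, i.e. 256K (a 128K block)
```

For the 1-Mbyte exercise these placements (A at 0, B at 128K, C at 256K) follow directly from the splitting rules; after returning A, request D (60K) reuses the 64K buddy at 192K.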
Addresses
■ Logical
❑ Reference to a memory location independent of the current
assignment of data/instruction to memory. Generated by CPU
■ Relative
❑ Type of logical address wherein address is specified as a
location relative to some known point, say a value in a register
■ Physical or Absolute
❑ The absolute address or actual location in main memory.
■ Typically, all of the memory references in a loaded process
are relative to the base address
❑ Hardware mechanism is used to translate logical/relative to
physical at the time of execution of instruction that contains the
reference
Paging
■ Fixed- and variable-size partitions are inefficient: fixed partitions cause internal fragmentation, while variable partitions cause external fragmentation.
■ What were the two problems with equal sized fixed partitions?
▪ Program too large for a partition
▪ Program too small for a partition
■ Partition memory into small equal fixed-size chunks and divide each process
into the same size chunks.
■ The chunks of a process are called pages and chunks of memory are called
frames.
❑ Memory address consist of a page number and offset within the page
■ Each page table entry contains the frame number of the corresponding page
in main memory
■ No external fragmentation
■ The process does not see the translation; to the process, memory appears contiguous, as if it had physical memory to itself
Advantages of Breaking up a Process
(Figure: pages A.0–A.3, B.0–B.2, C.0–C.3, and D.0–D.4 of four processes assigned to free main-memory frames.)
Assignment of Process Pages to Free Frames
Page Tables for Example
Paging
● The page size is typically a power of 2, commonly ranging from 512 bytes to 16 MB.
● Page number (p) – used as an index into a page table which contains
base address of each page in physical memory
● Page offset (d) – combined with base address to define the physical
memory address that is sent to the memory unit.
● With simple paging, main memory is divided into many small equal-size frames.
● Each process is divided into frame-size pages.
● Smaller processes require fewer pages; larger processes require more.
● When a process is brought in, all of its pages are loaded into available
frames, and a page table is set up.
● This approach solves many of the problems inherent in partitioning
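The page-number/offset split described above can be sketched in a few lines (a toy example; the 4-KB page size and the page table contents are assumptions, not from the slides):

```python
PAGE_SIZE = 4096                       # assume 2**12-byte pages

def translate(logical, page_table):
    """Split a logical address into (page, offset) and map it through a
    one-level page table to the corresponding physical address."""
    p, d = divmod(logical, PAGE_SIZE)  # page number p, page offset d
    frame = page_table[p]              # frame holding that page
    return frame * PAGE_SIZE + d

page_table = {0: 5, 1: 2, 2: 7}        # made-up page -> frame mapping
print(translate(PAGE_SIZE + 100, page_table))   # page 1, offset 100 -> frame 2
```

Because the page size is a power of 2, the split is just the high and low bits of the address, which is why hardware can do it with no arithmetic at all.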
Segmentation
■ User program and associated data now divided not into pages, but
segments which could be of unequal size
■ The segments of a program need not all be the same length; they may be unequal and of dynamic size
■ Example translations, (segment, offset) → base + offset:
• (0, 198): 660 + 198 = 858
• (2, 156): 222 + 156 = 378
• (3, 444): 996 + 444 = 1440
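A minimal sketch of this translation, using the segment bases implied by the example above (segment limits are omitted here; a real MMU would also check that the offset is within the segment length):

```python
# Segment table from the example: segment number -> base address.
seg_table = {0: 660, 2: 222, 3: 996}

def seg_translate(segment, offset):
    """Translate a (segment, offset) pair to a physical address by
    adding the offset to the segment's base address."""
    return seg_table[segment] + offset

print(seg_translate(0, 198))   # 858
print(seg_translate(2, 156))   # 378
print(seg_translate(3, 444))   # 1440
```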
❑ Demand paging
❑ Demand segmentation
Virtual Memory
Virtual Memory
• If a ‘piece’ is not in memory and is required, processor generates
interrupt indicating memory access fault
Page Table Entry
Demand paging
• A page is brought into main memory only when a reference to it is made
■ Suppose a process has 2 GB (2^31 bytes) of virtual address space; with 4-KB pages that is 2^19 pages per process, so page tables can be very large.
■ When a process is running, part of its page table must be in main memory.
Translation Lookaside Buffer
❑ A special high-speed cache for page table entries, containing the entries that have been most recently used
TLB Operation
■ Given a virtual address, processor examines the TLB
■ If page table entry is present (TLB hit), the frame number is retrieved
and the real address is formed
■ If page table entry is not found in the TLB (TLB miss), the page
number is used to index the process page table
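The hit/miss flow above can be sketched with a dict standing in for the TLB (a toy model; no TLB size limit or eviction policy is shown, and all names are mine):

```python
def lookup(page, tlb, page_table, stats):
    """TLB-then-page-table lookup: on a hit the frame number comes
    straight from the TLB; on a miss we walk the page table and
    install the entry in the TLB for next time."""
    if page in tlb:
        stats["hits"] += 1
        return tlb[page]
    stats["misses"] += 1
    frame = page_table[page]          # TLB miss: index the page table
    tlb[page] = frame                 # cache the entry in the TLB
    return frame

page_table = {0: 9, 1: 4, 2: 7}       # made-up page -> frame mapping
tlb, stats = {}, {"hits": 0, "misses": 0}
for p in [0, 1, 0, 2, 1]:
    lookup(p, tlb, page_table, stats)
print(stats)                          # {'hits': 2, 'misses': 3}
```

The first reference to each page misses; repeated references hit, which is why locality of reference makes even a small TLB effective.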
– Example: 32-bit virtual addresses (2^32 bytes) with 4-KB pages → 2^20 pages per process
– Each page table entry (PTE) is 4 bytes (2^2), so the full page table occupies 2^22 bytes = 4 MB
– The page table is therefore itself kept in virtual memory: 2^10 pages of 4 KB each, indexed by a 4-KB root page table
Address Translation in Two-Level Paging System
Inverted Page Table
• Used on the PowerPC, UltraSPARC, and IA-64 architectures
• One entry per physical frame; each entry includes:
  – Page number
  – Process identifier
  – Control bits
  – Chain pointer (to resolve hash collisions)
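A hashed inverted-page-table lookup might be sketched like this (toy hash function and sizes are my own; entries that hash to the same bucket are linked through their chain pointers):

```python
NFRAMES = 8
ipt = [None] * NFRAMES            # frame i -> (pid, page, chain) or None
anchor = [None] * NFRAMES         # hash bucket -> first frame in chain

def h(pid, page):
    return (pid * 31 + page) % NFRAMES    # toy hash function

def insert(pid, page, frame):
    b = h(pid, page)
    ipt[frame] = (pid, page, anchor[b])   # chain pointer = old bucket head
    anchor[b] = frame

def lookup(pid, page):
    f = anchor[h(pid, page)]
    while f is not None:                  # follow the collision chain
        epid, epage, chain = ipt[f]
        if (epid, epage) == (pid, page):
            return f                      # frame holding this page
        f = chain
    return None                           # not resident: page fault

insert(1, 5, 3)                   # process 1, page 5 lives in frame 3
insert(2, 5, 6)                   # process 2, page 5 lives in frame 6
print(lookup(1, 5), lookup(2, 5), lookup(1, 0))   # 3 6 None
```

The table size is fixed by the number of physical frames, not by the size of each virtual address space, which is the whole point of the inverted organization.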
Inverted Page Table
Page Size
• Smaller pages → less internal fragmentation
• But a small page size means more pages per process → larger page tables → parts of the page tables themselves kept in virtual memory → a single reference can cause a double page fault (one for the page-table page, one for the page itself)
Vishal Kaushal
Page Size
■ Secondary memory is designed to efficiently transfer large blocks of
data so a large page size is better
■ Small page size, large number of pages will be found in main
memory
■ As time goes on during execution, the pages in memory will all
contain portions of the process near recent references => Page
faults low.
Page Size
Example Page Size
Ordinary Paging vs VM Paging
Paging | Virtual memory paging
Main memory partitioned into small fixed-size chunks called frames | Main memory partitioned into small fixed-size chunks called frames
Program broken into pages by the compiler or memory management system | Program broken into pages by the compiler or memory management system
OS must maintain a page table for each process showing which frame each page occupies | OS must maintain a page table for each process showing which frame each page occupies
OS must maintain a free frame list | OS must maintain a free frame list
Processor uses page number, offset to calculate absolute address | Processor uses page number, offset to calculate absolute address
All the pages of a process must be in main memory for the process to run, unless overlays are used | Not all pages of a process must be in main memory; a page may be read in as needed
– When all of its segments are loaded into main memory, segment
table is created and loaded
– P bit (present/absent)
– M bit (modify)
Segment Table Entries
Segmentation | Virtual memory segmentation
OS must maintain a segment table for each process showing the load address and length of each segment | OS must maintain a segment table for each process showing the load address and length of each segment
Processor uses segment number, offset to calculate absolute address | Processor uses segment number, offset to calculate absolute address
All the segments of a process must be in main memory for the process to run, unless overlays are used | Not all segments of a process must be in main memory for the process to run; segments may be read in as needed
■ Page replacement – find some page in memory, but not really in use,
swap it out
• Different algorithms, different performance
• Want an algorithm which will result in minimum number of page faults
■ Same page may be brought into memory several times
Page Replacement
■ Prevent over-allocation of memory by modifying page-fault service routine to
include page replacement
■ Use modify (dirty) bit to reduce overhead of page transfers – only modified
pages are written to disk
■ Example reference string: A B C A B D A D B C B
Basic Replacement Algorithms
■ First-in, first-out (FIFO)
Ref: A  B  C  A  B  D  A  D  B  C  B
F1:  A              D           C
F2:     B              A
F3:        C              B
– FIFO: 7 faults.
– When referencing D, replacing A is a bad choice, since A is needed again right away.
Basic Replacement Algorithms
■ Optimal policy
❑ Selects for replacement that page for which the time to the next
reference is the longest
Ref: A  B  C  A  B  D  A  D  B  C  B
F1:  A                          C
F2:     B
F3:        C        D
– MIN (optimal): 5 faults.
– Where will D be brought in? Replace the page not referenced for the farthest time in the future (here C).
Basic Replacement Algorithms
■ Least Recently Used (LRU)
❑ Replaces the page that has not been referenced for the longest time
❑ Each page could be tagged with the time of last reference. This would
require a great deal of overhead.
Example: LRU
• Suppose we have the same reference stream:
– ABCABDADBCB
• Consider LRU Page replacement:
Ref: A  B  C  A  B  D  A  D  B  C  B
F1:  A                          C
F2:     B
F3:        C        D
– LRU: 5 faults
• What will LRU do?
– Same decisions as MIN here, but won’t always be true!
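All three policies can be checked against the reference string A B C A B D A D B C B with three frames (a sketch; function names and structure are mine):

```python
def fifo_faults(refs, n):
    mem, order, faults = set(), [], 0
    for r in refs:
        if r in mem:
            continue
        faults += 1
        if len(mem) == n:
            victim = order.pop(0)          # evict the oldest-loaded page
            mem.remove(victim)
        mem.add(r)
        order.append(r)
    return faults

def lru_faults(refs, n):
    mem, faults = [], 0                    # least recently used page first
    for r in refs:
        if r in mem:
            mem.remove(r)
            mem.append(r)                  # move to most-recent end
            continue
        faults += 1
        if len(mem) == n:
            mem.pop(0)                     # evict least recently used
        mem.append(r)
    return faults

def opt_faults(refs, n):
    mem, faults = [], 0
    for i, r in enumerate(refs):
        if r in mem:
            continue
        faults += 1
        if len(mem) == n:
            future = refs[i + 1:]
            # evict the page whose next reference is farthest away (or never)
            victim = max(mem, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
            mem.remove(victim)
        mem.append(r)
    return faults

refs = list("ABCABDADBCB")
print(fifo_faults(refs, 3), opt_faults(refs, 3), lru_faults(refs, 3))   # 7 5 5
```

This reproduces the fault counts from the slides: FIFO 7, optimal 5, LRU 5 for this particular string.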
LRU Algorithm (Cont.)
■ How would you implement LRU strategy?
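One possible software answer: keep resident pages in recency order and move a page to the back on every reference (a sketch using Python's OrderedDict; real hardware instead approximates LRU with use bits, since timestamping every reference is too costly):

```python
from collections import OrderedDict

class LRUFrames:
    """LRU page replacement via an OrderedDict that keeps resident
    pages in recency order, least recently used first."""

    def __init__(self, nframes):
        self.n = nframes
        self.pages = OrderedDict()

    def access(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)       # now most recently used
            return "hit"
        if len(self.pages) == self.n:
            self.pages.popitem(last=False)     # evict the LRU page
        self.pages[page] = True
        return "fault"

lru = LRUFrames(3)
results = [lru.access(p) for p in "ABCABDADBCB"]
print(results.count("fault"))                  # 5 faults, as in the example
```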
Thrashing
■ The processor spends most of its time swapping pieces in and out rather than executing user instructions.
Thrashing (Cont.)
• Locality model