virtual memory

The document discusses the principles and mechanisms of virtual memory in operating systems, highlighting the importance of logical to physical address translation and the ability to manage non-contiguous memory allocation. It covers concepts such as paging, segmentation, and the various policies and algorithms for memory management, including replacement and cleaning strategies. The goal is to optimize performance by maintaining as many processes in memory as possible while minimizing page faults and thrashing.


Operating Systems: Internals and Design Principles
Seventh Edition, William Stallings

Chapter 8
Virtual Memory

Hardware and Control Structures
 Two characteristics fundamental to
memory management:
1) all memory references are logical addresses that
are dynamically translated into physical
addresses at run time
2) a process may be broken up into a number of
pieces that don’t need to be contiguously
located in main memory during execution
 If these two characteristics are present, it
is not necessary that all of the pages or
segments of a process be in main memory
Terminology
Execution of a
Process
 Operating system brings into main memory a few
pieces of the program
 Resident set - portion of process that is in main
memory
 An interrupt is generated when an address is needed
that is not in main memory
 Operating system places the process in the Blocked state

 Piece of process that contains the logical address is
brought into main memory
 operating system issues a disk I/O Read request
 another process is dispatched to run while the disk
I/O takes place
 an interrupt is issued when disk I/O is complete,
which causes the operating system to place the
affected process in the Ready state
Implications
 More processes may be maintained in main memory
 only load in some of the pieces of each process
 with so many processes in main memory, it is very
likely a process will be in the Ready state at any
particular time
 A process may be larger than all of main memory (the programmer need not be concerned with available memory size)
Real and Virtual
Memory
 Real memory
• main memory, the actual RAM
 Virtual memory
• memory on disk
• allows for effective multiprogramming and relieves the user of the tight constraints of main memory
Thrashing
 A state in which the system spends most of its time swapping process pieces rather than executing instructions
 To avoid this, the operating system tries to guess, based on recent history, which pieces are least likely to be used in the near future
Principle of Locality
 Program and data references within a process tend to cluster
 Only a few pieces of a process will be needed over a short
period of time
 Therefore it is possible to make intelligent guesses about which
pieces will be needed in the future
 Avoids thrashing
 Locality of reference refers to the phenomenon in which a computer program tends to access the same set of memory locations over a particular time period; in other words, the tendency of a program to access instructions whose addresses are near one another
Paging Behavior
 During the lifetime of the process, references are confined to a subset of pages
Support Needed for
Virtual Memory
For virtual memory to be practical and
effective:
• hardware must support paging and
segmentation
• operating system must include
software for managing the movement
of pages and/or segments between
secondary memory and main memory
Paging
 The term virtual memory is usually associated with
systems that employ paging
 Use of paging to achieve virtual memory was first
reported for the Atlas computer
 Each process has its own page table
 each page table entry contains the frame number
of the corresponding page in main memory
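The translation described above can be sketched in a few lines of Python; the page size and page-table contents here are invented purely for illustration:

```python
PAGE_SIZE = 1024  # assumed 1 KB pages, for illustration only

# Hypothetical per-process page table: index = page number, value = frame number
page_table = [5, 2, 7, 0]

def translate(logical_addr):
    """Split a logical address into page number and offset, then map the
    page to its frame via the page table (page-fault handling omitted)."""
    page = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    frame = page_table[page]
    return frame * PAGE_SIZE + offset
```

For example, logical address 2060 falls in page 2 at offset 12; page 2 maps to frame 7, giving physical address 7180.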
Memory Management Formats
Address Translation
Inverted Page Table
 Fixed proportion of real memory is required for the
tables regardless of the number of processes or virtual
pages supported
 Structure is called inverted because it indexes page
table entries by frame number rather than by virtual
page number
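As a hedged sketch of the idea (the frame contents are invented, and real hardware hashes the (process, page) pair into this fixed-size structure and follows a collision chain rather than scanning):

```python
# Inverted page table sketch: one entry per physical frame, recording which
# (process id, virtual page) currently occupies it. All values are made up.
frames = {0: (1, 3), 4: (1, 9), 5: (2, 3)}  # frame number -> (pid, page)

def lookup(pid, page):
    """Return the frame holding (pid, page); None means a page fault.
    A linear scan stands in for the hardware's hashed lookup."""
    for frame, owner in frames.items():
        if owner == (pid, page):
            return frame
    return None
```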
Translation Lookaside Buffer (TLB)
 Each virtual memory reference can cause two physical memory accesses:
 one to fetch the page table entry
 one to fetch the data
 To overcome the effect of doubling the memory access time, most virtual memory schemes make use of a special high-speed cache called a translation lookaside buffer
Use of a TLB
TLB Operation
Associative Mapping
 The TLB only contains some of the page table entries
so we cannot simply index into the TLB based on page
number
 each TLB entry must include the page number as
well as the complete page table entry
 The processor is equipped with hardware that allows it
to interrogate simultaneously a number of TLB entries
to determine if there is a match on page number
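A toy model of that behaviour, with a dictionary standing in for the associative hardware; the capacity, page table, and reference sequence are invented:

```python
# Sketch of a TLB in front of a page table. All values are illustrative.
page_table = {0: 3, 1: 8, 2: 1, 3: 6}  # page -> frame
tlb = {}                               # cached page -> frame entries
TLB_CAPACITY = 2
stats = {"hits": 0, "misses": 0}

def lookup(page):
    if page in tlb:                 # associative match on page number
        stats["hits"] += 1
        return tlb[page]
    stats["misses"] += 1
    frame = page_table[page]        # the extra memory access on a miss
    if len(tlb) >= TLB_CAPACITY:    # evict the oldest entry; real TLBs
        tlb.pop(next(iter(tlb)))    # use an LRU-like hardware policy
    tlb[page] = frame
    return frame

for p in [0, 1, 0, 2]:
    lookup(p)
# the second access to page 0 hits; the other three accesses miss
```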
Direct Versus
Associative Lookup
TLB and Cache Operation
Page Size
 The smaller the page size, the smaller the amount of internal fragmentation
 however, more pages are required per process
 more pages per process means larger page tables
 for large programs in a heavily multiprogrammed
environment some portion of the page tables of
active processes must be in virtual memory instead of
main memory
 the physical characteristics of most secondary-
memory devices (hard disks) favor a larger page size
for more efficient block transfer of data
Paging Behavior of a
Program
Page Size
 The design issue of page size is related to the size of physical main memory and program size
 main memory is getting larger and the address space used by applications is also growing
 Contemporary programming techniques used in large programs tend to decrease the locality of references within a process
 most obvious on personal computers where applications are becoming increasingly complex
Segmentation
 Segmentation allows the programmer to view memory as consisting of multiple address spaces or segments
 Advantages:
• simplifies handling of growing data structures
• allows programs to be altered and recompiled independently
• lends itself to sharing data among processes
• lends itself to protection
Segment
Organization
 Each segment table entry contains the starting
address of the corresponding segment in main
memory and the length of the segment
 A bit is needed to determine if the segment is already
in main memory
 Another bit is needed to determine if the segment has
been modified since it was loaded in main memory
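A minimal sketch of segment-based translation using those fields; the base addresses, lengths, and bit values are invented:

```python
# Hypothetical segment table: each entry holds the segment's starting
# address and length, plus the present and modified bits described above.
seg_table = [
    {"base": 4000, "length": 1200, "present": True, "modified": False},
    {"base": 8000, "length": 600,  "present": True, "modified": True},
]

def translate(seg, offset):
    """Map (segment, offset) to a physical address with a bounds check."""
    entry = seg_table[seg]
    if not entry["present"]:
        raise RuntimeError("segment fault: bring segment into main memory")
    if offset >= entry["length"]:
        raise IndexError("offset beyond segment length")  # protection check
    return entry["base"] + offset
```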
Address
Translation
Combined Paging and Segmentation
 In a combined paging/segmentation system a user's address space is broken up into a number of segments. Each segment is broken up into a number of fixed-size pages which are equal in length to a main memory frame
 Segmentation is visible to the programmer
 Paging is transparent to the programmer
Address Translation
Protection and
Sharing
 Segmentation lends itself to the implementation of
protection and sharing policies
 Each entry has a base address and length so
inadvertent memory access can be controlled
 Sharing can be achieved by a segment being referenced by multiple processes
Protection Relationships
Operating System
Software
The design of the memory
management portion of an operating
system depends on three fundamental
areas of choice:
• whether or not to use virtual memory
techniques
• the use of paging or segmentation or both
• the algorithms employed for various
aspects of memory management
Policies for Virtual
Memory
 Key issue: Performance
 minimize page faults
Fetch Policy
 Determines when a page should be brought into memory
 Two main types:
• Demand paging
• Prepaging
Demand Paging
 Demand Paging
 only brings pages into main memory when a
reference is made to a location on the page
 many page faults when process is first started
 principle of locality suggests that as more and more
pages are brought in, most future references will be
to pages that have recently been brought in, and
page faults should drop to a very low level
Prepaging
 Prepaging
 pages other than the one demanded by a page fault
are brought in
 exploits the characteristics of most secondary
memory devices
 if pages of a process are stored contiguously in
secondary memory it is more efficient to bring in a
number of pages at one time
 ineffective if extra pages are not referenced
Placement Policy
 Determines where in real memory a process piece is to
reside
 Important design issue in a segmentation system
 For pure paging or combined paging with segmentation, placement is irrelevant because the translation hardware handles any page-frame assignment with equal efficiency
 For NUMA systems, an automatic placement
strategy is desirable to assign pages to the memory
module that provides the best performance
(nearest memory part to the processor)
Replacement Policy
 Deals
with the selection of a page in main
memory to be replaced when a new page
must be brought in
 objective is that the page that is removed be
the page least likely to be referenced in the
near future
 The more elaborate the replacement policy, the greater the hardware and software overhead to implement it
Frame Locking
 When a frame is locked the page currently stored in
that frame may not be replaced
 kernel of the OS as well as key control structures
are held in locked frames
 I/O buffers and time-critical areas may be locked
into main memory frames
 locking is achieved by associating a lock bit with
each frame
Basic Algorithms
 Algorithms used for the selection of a page to replace:
• Optimal
• Least recently used (LRU)
• First-in-first-out (FIFO)
• Clock
Optimal Policy
 Selects the page for which the time to the
next reference is the longest
 In the textbook's example reference string, it produces three page faults after the frame allocation has been filled
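A small simulation of the optimal policy, assuming perfect knowledge of the future reference string; the string and frame count below are illustrative:

```python
def optimal_faults(refs, num_frames):
    """Count page faults under the optimal (farthest-future-use) policy."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # evict the resident page whose next use is farthest away (or never)
        def next_use(p):
            rest = refs[i + 1:]
            return rest.index(p) if p in rest else len(refs)
        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults
```

With the reference string [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2] and three frames, this yields six faults in total: three to fill the frames and three afterwards.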
Least Recently Used
(LRU)
 Replaces the page that has not been referenced for
the longest time
 By the principle of locality, this should be the page
least likely to be referenced in the near future
 Difficult to implement
 one approach is to tag each page with the time of
last reference
 this requires a great deal of overhead
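In software, one way to approximate the tagging scheme without per-reference timestamps is an ordered map that is reordered on every hit; this is a sketch, not how hardware implements it:

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    """Count page faults with LRU replacement; the OrderedDict's ordering
    plays the role of the last-reference timestamps mentioned above."""
    frames = OrderedDict()
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)      # mark as most recently used
            continue
        faults += 1
        if len(frames) >= num_frames:
            frames.popitem(last=False)    # evict the least recently used page
        frames[page] = True
    return faults
```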
LRU Example
First-in-First-out
(FIFO)
 Treats page frames allocated to a process as a circular
buffer
 Pages are removed in round-robin style
 simple replacement policy to implement
 Page that has been in memory the longest is replaced
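The circular-buffer behaviour can be sketched with a queue; the reference string in the test is an illustrative example:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults with FIFO replacement: frames form a circular
    buffer and the longest-resident page is always the one replaced."""
    frames = deque()
    faults = 0
    for page in refs:
        if page in frames:
            continue
        faults += 1
        if len(frames) >= num_frames:
            frames.popleft()   # the oldest resident page leaves
        frames.append(page)
    return faults
```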
FIFO Example
Clock Policy
 Requires the association of an additional bit with each frame
 referred to as the use bit
 When a page is first loaded in memory or referenced, the use bit is set to 1
 The set of frames is considered to be a circular buffer
 Any frame with a use bit of 1 is passed over by the algorithm, and its use bit is reset to 0 as the pointer passes; the first frame found with a use bit of 0 is replaced
 Page frames visualized as laid out in a circle
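A sketch of the clock policy, in which the pointer clears each use bit it passes and replaces the first frame whose use bit is 0 (frame count and reference string are illustrative):

```python
def clock_faults(refs, num_frames):
    """Count page faults with the clock policy: a circular buffer of
    [page, use_bit] entries and a hand that sweeps around it."""
    frames = [None] * num_frames   # each slot: [page, use_bit] or None
    hand = 0
    faults = 0
    for page in refs:
        resident = next((f for f in frames if f and f[0] == page), None)
        if resident:
            resident[1] = 1        # referenced again: set the use bit
            continue
        faults += 1
        while frames[hand] and frames[hand][1] == 1:
            frames[hand][1] = 0    # give the page a second chance
            hand = (hand + 1) % num_frames
        frames[hand] = [page, 1]   # load new page into the chosen frame
        hand = (hand + 1) % num_frames
    return faults
```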
Clock Policy Example
Clock
Policy
Comparison of
Algorithms
Page Buffering
 Improves paging performance and allows the use of a simpler page replacement policy
 A replaced page is not lost, but rather assigned to one of two lists:
• Free page list: a list of page frames available for reading in pages
• Modified page list: pages are written out in clusters
Resident Set
Management
 The OS must decide how many pages to bring into
main memory
 the smaller the amount of memory allocated to each
process, the more processes can reside in memory
 small number of pages loaded increases page faults
 beyond a certain size, further allocations of pages will not affect the page fault rate
Resident Set Size
 Fixed-allocation
• gives a process a fixed number of frames in main memory within which to execute
• when a page fault occurs, one of the pages of that process must be replaced
 Variable-allocation
• allows the number of page frames allocated to a process to be varied over the lifetime of the process
Replacement Scope
 The scope of a replacement strategy can be
categorized as global or local
 both types are activated by a page fault when there
are no free page frames
 Local
• chooses only among the resident pages of the process that generated the page fault
 Global
• considers all unlocked pages in main memory
Cleaning Policy
 Concerned with determining when a modified page
should be written out to secondary memory

 Demand cleaning: a page is written out to secondary memory only when it has been selected for replacement
 Precleaning: allows the writing of pages in batches
Load Control
 Determines the number of processes that will be
resident in main memory
 multiprogramming level

 Critical in effective memory management
 With too few processes, there will be many occasions when all resident processes are blocked and much time will be spent in swapping
 Too many processes will lead to thrashing
Multiprogramming
Process Suspension
 If the degree of multiprogramming is to be reduced,
one or more of the currently resident processes must
be swapped out

Possibilities include:
• Lowest-priority process
• Faulting process
• Last process activated
• Process with the smallest resident
set
• Largest process
Summary
 Desirable to:
 maintain as many processes in main memory as
possible
 free programmers from size restrictions in program
development
 With virtual memory:
 all address references are logical references that are
translated at run time to real addresses
 a process can be broken up into pieces
 two approaches are paging and segmentation
 management scheme requires both hardware and
software support
