Chapter 3 OS

Chapter 3 discusses memory management in both uniprogramming and multiprogramming systems, highlighting the need for dynamic memory subdivision by the operating system. Key concepts include memory requirements such as relocation, protection, sharing, logical organization, and physical organization, along with various memory management techniques like fixed and dynamic partitioning, paging, and segmentation. The chapter also addresses challenges like fragmentation and the importance of efficient memory allocation algorithms.

Uploaded by

yahyaomar2210
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
26 views14 pages

Chapter 3 OS

Chapter 3 discusses memory management in both uniprogramming and multiprogramming systems, highlighting the need for dynamic memory subdivision by the operating system. Key concepts include memory requirements such as relocation, protection, sharing, logical organization, and physical organization, along with various memory management techniques like fixed and dynamic partitioning, paging, and segmentation. The chapter also addresses challenges like fragmentation and the importance of efficient memory allocation algorithms.

Uploaded by

yahyaomar2210
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd

Chapter 3
Memory Management

Introduction
In a uniprogramming system
• main memory is divided into two parts: one part for the operating system (resident monitor, kernel) and the other part for the program currently being executed.
In a multiprogramming system
• the “user” part of memory must be further subdivided to accommodate multiple processes. The task of subdivision is carried out dynamically by the operating system and is known as memory management.

Memory Management Terms


Frame
• A fixed-length block of main memory.

Page
• A fixed-length block of data that resides in secondary memory (such as a disk).
• A page of data may temporarily be copied into a frame of main memory.

Segment
• A variable-length block of data that resides in secondary memory.
• An entire segment may temporarily be copied into an available region of main memory (segmentation), or the segment may be divided into pages, which can be individually copied into main memory (combined segmentation and paging).
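To make the page/frame relationship concrete, here is a minimal sketch (not taken from the chapter) of how a logical address is translated under simple paging. The 4 KB page size and the page-to-frame mapping are illustrative assumptions.

```python
# Illustrative sketch (not from the chapter): logical-to-physical address
# translation under simple paging, assuming a 4 KB page size and a made-up
# page table that maps this process's page numbers to main-memory frames.
PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 7}   # hypothetical mapping: page number -> frame number

def translate(logical_address: int) -> int:
    page_number = logical_address // PAGE_SIZE   # which page the address falls in
    offset = logical_address % PAGE_SIZE         # position inside that page
    frame_number = page_table[page_number]       # frame currently holding the page
    return frame_number * PAGE_SIZE + offset     # same offset inside the frame

print(translate(5000))   # page 1, offset 904 -> frame 2 -> 2*4096 + 904 = 9096
```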

MEMORY MANAGEMENT REQUIREMENTS

1. Relocation: In a multiprogramming system, the available main memory is generally shared among a
number of processes. Typically, it is not possible for the programmer to know in advance which other
programs will be resident in main memory at the time of execution of his or her program.
2. Protection: Each process should be protected against unwanted interference by other processes,
whether accidental or intentional. Thus, programs in other processes should not be able to reference
memory locations in a process for reading or writing purposes without permission.
3. Sharing: Any protection mechanism must have the flexibility to allow several processes to access the
same portion of main memory.
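The relocation and protection requirements are often met in hardware with a base register and a limit (bounds) register. The chapter only states the requirements, so the following is an illustrative sketch of that mechanism; the base and limit values are made up.

```python
# Illustrative sketch of relocation and protection using base/limit registers
# (one common hardware mechanism; the base and limit values below are made up).
class MemoryViolation(Exception):
    """Raised when a process references an address outside its own region."""

def relocate(logical_address: int, base: int, limit: int) -> int:
    # Protection: every reference must fall inside the process's allocated region.
    if not 0 <= logical_address < limit:
        raise MemoryViolation(f"address {logical_address} is outside the process bounds")
    # Relocation: logical addresses are offset by wherever the process was loaded.
    return base + logical_address

print(relocate(100, base=20_000, limit=4_096))   # -> 20100
```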

MEMORY MANAGEMENT REQUIREMENTS

4. Logical Organization: Most programs are organized into modules, some of which are unmodifiable (read
only, execute only) and some of which contain data that may be modified. If the operating system and
computer hardware can effectively deal with user programs and data in the form of modules of some
sort, then a number of advantages can be realized:
1) Modules can be written and compiled independently, with all references from one module to another
resolved by the system at run time.
2) With modest additional overhead, different degrees of protection (read only, execute only) can be
given to different modules.
3) It is possible to introduce mechanisms by which modules can be shared among processes.

MEMORY MANAGEMENT REQUIREMENTS

5. Physical Organization: As we discussed, computer memory is organized into at least two levels:

Main memory
• provides fast access at relatively high cost
• is volatile
• does not provide permanent storage
• holds programs and data currently in use

Secondary memory
• is slower and cheaper than main memory
• is usually not volatile
• provides large capacity for long-term storage of programs and data

The organization of the flow of information between main and secondary memory is a major system concern. The responsibility for this flow could be assigned to the individual programmer, but this is impractical and undesirable for two reasons:
1) The main memory available for a program and its data may be insufficient. In that case, the programmer must engage in a practice known as overlaying, in which the program and data are organized in such a way that various modules can be assigned the same region of memory, with a main program responsible for switching the modules in and out as needed (see the sketch below). Even with the aid of compiler tools, overlay programming wastes programmer time.
2) In a multiprogramming environment, the programmer does not know at the time of coding how much space will be available or where that space will be.
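A toy sketch of the overlay idea described in reason 1: two modules take turns occupying the same region of memory, with the main program swapping in whichever one is needed. The module names and the single shared region are illustrative assumptions.

```python
# Toy sketch of overlaying: two modules share one fixed region of memory, and
# the main program loads whichever is needed (names and region are illustrative).
overlay_region = None   # stands in for a single region of main memory

def load_overlay(module_name: str) -> None:
    global overlay_region
    overlay_region = module_name   # overwrite whatever module was there before
    print(f"{module_name} now occupies the overlay region")

load_overlay("pass1")   # first phase of the program runs from the shared region
load_overlay("pass2")   # second phase replaces it in the very same region
```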
Memory Management Techniques
➢ MEMORY PARTITIONING: The principal operation of memory management is to bring processes
into main memory for execution by the processor.
Fixed Partitioning
• Main memory is divided into a number of static partitions at system generation time.
• A process may be loaded into a partition of equal or greater size.

Dynamic Partitioning
• Partitions are created dynamically, so each process is loaded into a partition of exactly the same size as that process.

Simple Paging
• Main memory is divided into a number of equal-size frames.
• Each process is divided into a number of equal-size pages of the same length as frames.
• A process is loaded by loading all of its pages into available, not necessarily contiguous, frames.

Simple Segmentation
• Each process is divided into a number of segments.
• A process is loaded by loading all of its segments into dynamic partitions that need not be contiguous.

Virtual Memory Paging
• As with simple paging, except that it is not necessary to load all of the pages of a process.
• Nonresident pages that are needed are automatically brought in later.

Virtual Memory Segmentation
• As with simple segmentation, except that it is not necessary to load all of the segments of a process.
• Nonresident segments that are needed are automatically brought in later.

Fixed Partitioning
In most schemes for memory management, we can assume the OS occupies some fixed portion of main memory, and the rest of main memory is available for use by multiple processes. The simplest scheme for managing this available memory is to partition it into regions with fixed boundaries.

PARTITION SIZES: There are two alternatives for fixed partitioning.
One possibility is to make use of equal-size partitions. In this case, any process whose size is less than or equal to the partition size can be loaded into any available partition. If all partitions are full, and no process is in the Ready or Running state, the operating system can swap a process out of any of the partitions and load in another process, so there is some work for the processor.

Fixed Partitioning
• Problems with this technique
A program may be too big to fit into a partition. The programmer must design the program with the use of overlays, so only a portion of the program need be in main memory at any one time. When a module is needed that is not present, the user’s program must load that module into the program’s partition, overlaying whatever programs or data are there.
Main memory utilization is extremely inefficient. Any program, no matter how small, occupies an entire partition: even if the program is smaller than the partition, it still takes up the whole partition. This phenomenon, in which there is wasted space internal to a partition because the block of data loaded is smaller than the partition, is referred to as internal fragmentation (see the sketch below).
• Both of these problems can be lessened, though not solved, by using unequal-size partitions. Programs as large as 16 Mbytes can be accommodated without overlays, and partitions smaller than 8 Mbytes allow smaller programs to be accommodated with less internal fragmentation.
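A small numeric sketch of internal fragmentation in fixed partitioning; the 3 MB program and 8 MB partition are illustrative values, not figures from the chapter.

```python
# Illustrative numbers only: internal fragmentation in fixed partitioning.
# A loaded program occupies a whole partition, so the unused remainder is wasted.
def internal_fragmentation(process_size_mb: float, partition_size_mb: float) -> float:
    if process_size_mb > partition_size_mb:
        raise ValueError("process does not fit; it would need overlays")
    return partition_size_mb - process_size_mb

# e.g. a 3 MB program loaded into an 8 MB partition wastes 5 MB inside the partition
print(internal_fragmentation(3, 8))   # -> 5
```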

PLACEMENT ALGORITHM

Equal-size partitions
The placement of processes in memory is trivial. As long as there is any available partition, a process can be loaded into that partition. Because all partitions are of equal size, it does not matter which partition is used. If all partitions are occupied with processes that are not ready to run, then one of these processes must be swapped out to make room for a new process.

Unequal-size partitions
There are two possible ways to assign processes to partitions. The simplest way is to assign each process to the smallest partition within which it will fit (sketched below). In this case, a scheduling queue is needed for each partition to hold swapped-out processes destined for that partition. The advantage of this approach is that processes are always assigned in such a way as to minimize wasted memory within a partition (internal fragmentation).

Disadvantages of fixed partitioning (even with unequal-size partitions):
• The number of partitions specified at system generation time limits the number of active (not suspended) processes in the system.
• Because partition sizes are preset at system generation time, small jobs will not utilize partition space efficiently. In an environment where the main storage requirement of all jobs is known beforehand, this may be reasonable, but in most cases it is an inefficient technique.
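A sketch of the “smallest partition that fits” assignment rule with one scheduling queue per partition. The partition sizes and process names are illustrative assumptions.

```python
from collections import deque

# Sketch of the "smallest partition that fits" rule for unequal-size fixed
# partitions, with one scheduling queue per partition (all sizes illustrative).
partition_sizes = [2, 4, 8, 16]                        # MB, fixed at system generation
queues = {size: deque() for size in partition_sizes}   # swapped-out processes per partition

def enqueue_process(name: str, size_mb: int) -> None:
    # Choose the smallest partition the process fits in, minimizing internal fragmentation.
    for psize in sorted(partition_sizes):
        if size_mb <= psize:
            queues[psize].append(name)
            print(f"{name} ({size_mb} MB) queued for the {psize} MB partition")
            return
    print(f"{name} ({size_mb} MB) is too big for any partition and would need overlays")

enqueue_process("P1", 3)    # -> queued for the 4 MB partition
enqueue_process("P2", 12)   # -> queued for the 16 MB partition
```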

Dynamic Partitioning

• The partitions are of variable length and number. When a process is brought into main memory, it is allocated exactly as much memory as it requires and no more.
• This method starts out well, but eventually it leads to a situation in which there are a lot of small holes in memory. As time goes on, memory becomes more and more fragmented, and memory utilization declines. This phenomenon is referred to as external fragmentation, indicating that the memory external to all partitions becomes increasingly fragmented (a toy simulation follows below).
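A toy simulation of dynamic partitioning showing how holes (external fragmentation) appear: each allocation carves off exactly the requested amount, and freeing a process leaves a hole behind. The 64 MB memory size and the process sizes are illustrative.

```python
# Toy simulation of dynamic partitioning: each allocation carves off exactly the
# requested amount, and frees leave behind holes (external fragmentation).
# Memory is modelled as a list of (start, size, owner) blocks; sizes are in MB.
memory = [(0, 64, None)]          # one 64 MB free block initially

def allocate(owner: str, size: int) -> None:
    for i, (start, blk_size, blk_owner) in enumerate(memory):
        if blk_owner is None and blk_size >= size:
            # Carve the request off the front of the hole; the remainder stays free.
            memory[i] = (start, size, owner)
            if blk_size > size:
                memory.insert(i + 1, (start + size, blk_size - size, None))
            return
    print(f"no hole large enough for {owner}")

def free(owner: str) -> None:
    # Freed blocks become holes; adjacent holes are not merged here, so
    # memory gradually fragments into many small free blocks.
    global memory
    memory = [(s, sz, None if o == owner else o) for (s, sz, o) in memory]

allocate("P1", 20); allocate("P2", 14); allocate("P3", 18)
free("P2")                        # leaves a 14 MB hole between P1 and P3
print(memory)                     # [(0,20,'P1'), (20,14,None), (34,18,'P3'), (52,12,None)]
```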

Techniques for overcoming external fragmentation

Compaction
From time to time, the OS shifts the processes so they are contiguous and all of the free memory is together in one block. The difficulty with compaction is that it is a time-consuming procedure and wasteful of processor time.

Placement algorithms
Because memory compaction is time consuming, the OS designer must be clever in deciding how to assign processes to memory (how to plug the holes). When it is time to load or swap a process into main memory, and if there is more than one free block of memory of sufficient size, then the operating system must decide which free block to allocate (see the sketch below).
• Best-fit chooses the block that is closest in size to the request.
• First-fit begins to scan memory from the beginning and chooses the first available block that is large enough.
• Next-fit begins to scan memory from the location of the last placement and chooses the next available block that is large enough.
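A sketch of the three placement algorithms operating on a list of free holes; the hole sizes and request size are illustrative, and each function only returns the chosen hole rather than performing the allocation.

```python
# Sketch of the three placement algorithms over a list of free holes.
# Holes are (start, size) pairs; the values below are illustrative.
holes = [(10, 6), (24, 14), (50, 9), (70, 30)]

def first_fit(request: int):
    # First hole large enough, scanning from the start of memory.
    return next((h for h in holes if h[1] >= request), None)

def best_fit(request: int):
    # Hole whose size is closest to the request (leaves the smallest fragment).
    candidates = [h for h in holes if h[1] >= request]
    return min(candidates, key=lambda h: h[1], default=None)

def next_fit(request: int, last_index: int):
    # First hole large enough, starting from where the previous search stopped.
    n = len(holes)
    for step in range(n):
        h = holes[(last_index + step) % n]
        if h[1] >= request:
            return h
    return None

print(first_fit(8))      # -> (24, 14)
print(best_fit(8))       # -> (50, 9)
print(next_fit(8, 2))    # -> (50, 9), search resumes at index 2
```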

Techniques for overcoming external fragmentation

The first-fit algorithm
• is not only the simplest but usually the best and fastest as well.
• may litter the front end with small free partitions that need to be searched over on each subsequent first-fit pass.

The next-fit algorithm
• tends to produce slightly worse results than first-fit. It will more frequently lead to an allocation from a free block at the end of memory. The result is that the largest block of free memory, which usually appears at the end of the memory space, is quickly broken up into small fragments. Thus compaction may be required more frequently with next-fit.

The best-fit algorithm
• despite its name, is usually the worst performer. Because this algorithm looks for the smallest block that will satisfy the requirement, it guarantees that the fragment left behind is as small as possible. Although each memory request always wastes the smallest amount of memory, the result is that main memory is quickly littered by blocks too small to satisfy memory allocation requests. Thus, memory compaction must be done more frequently than with the other algorithms.
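Since all three algorithms eventually force compaction, here is a sketch of what compaction does: allocated blocks are shifted down so that all free memory coalesces into a single block. The block layout is illustrative.

```python
# Sketch of compaction: shift allocated blocks down so that all free memory
# coalesces into one block at the top (block layout is illustrative, sizes in MB).
blocks = [(0, 20, "P1"), (20, 14, None), (34, 18, "P3"), (52, 12, None)]

def compact(blocks):
    compacted, next_start = [], 0
    for start, size, owner in blocks:
        if owner is not None:                     # keep processes, packed contiguously
            compacted.append((next_start, size, owner))
            next_start += size
    total = sum(size for _, size, _ in blocks)
    compacted.append((next_start, total - next_start, None))   # one big free block
    return compacted

print(compact(blocks))   # -> [(0, 20, 'P1'), (20, 18, 'P3'), (38, 26, None)]
```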

REPLACEMENT ALGORITHM

• In a multiprogramming system using dynamic partitioning, there will come a time when all of the processes in main memory are in a blocked state, and there is insufficient memory for an additional process, even after compaction.
• To avoid wasting processor time waiting for an active process to become unblocked, the OS will swap one of the processes out of main memory to make room for a new process or for a process in a Ready-Suspend state (a minimal sketch follows below).
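A minimal sketch of the swap-out decision described above. The slides do not say how the victim process is chosen, so picking the process that has been blocked the longest is purely an assumption for illustration.

```python
# Minimal sketch of choosing a process to swap out when every resident process
# is blocked. The "longest blocked" criterion and the process data are assumptions.
in_memory = [("P1", "blocked", 40), ("P2", "blocked", 75), ("P3", "blocked", 10)]

def choose_victim(processes):
    blocked = [p for p in processes if p[1] == "blocked"]
    return max(blocked, key=lambda p: p[2])   # process blocked the longest (ms)

print(choose_victim(in_memory))   # -> ('P2', 'blocked', 75)
```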


Thank you

