Memory Management
Memory
A large array of addressable words or bytes. A data repository shared by the CPU and I/O devices.
OS responsibility
OS responsibilities for memory management: allocate and deallocate memory space as requested; use memory efficiently when the resource is heavily contended; keep track of which parts of memory are currently being used and by whom.
Examples of names the OS keeps track of: file names, program names, printer/device names, user names, inodes, job numbers, major/minor device numbers, process ids (pid), uid, gid, ..
Address Binding
At Compile Time
If the location of the process and its storage is known and fixed ahead of time
In this case, the mapping of logical to physical addresses can be done statically. A change in the physical address map will require a recompilation of the program.
This is rare for general programs, but sometimes done for OS components
At Load Time
The compiler generates relocatable code; the binding is done by the loader.
At Run-time
Binding must be delayed until the program actually executes
This is because parts of the program and its storage move around all the time. Special hardware support for "page management" and the like is needed to accomplish this efficiently.
Memory-Management Unit (MMU)
A hardware device that maps virtual to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory. The user program deals with logical addresses; it never sees the real physical addresses.
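The relocation-register scheme can be sketched in a few lines; the register values here are illustrative, not from the source:

```python
# Minimal sketch of MMU relocation with limit checking.
# Register values are hypothetical examples.

RELOCATION_REGISTER = 14000   # base physical address of the process
LIMIT_REGISTER = 3000         # size of the process's logical address space

def translate(logical_address):
    """Map a logical address to a physical address, trapping on overflow."""
    if logical_address >= LIMIT_REGISTER:
        raise MemoryError("addressing error: trap to OS")
    # The relocation register is added to every CPU-generated address.
    return logical_address + RELOCATION_REGISTER

print(translate(346))   # -> 14346
```

A logical address of 346 lands at physical address 14346; an address at or beyond the limit register traps to the OS before reaching memory.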
Dynamic Loading
A routine is not loaded until it is called. This gives better memory-space utilization: an unused routine is never loaded. It is useful when large amounts of code are needed to handle infrequently occurring cases. No special support from the operating system is required; it is implemented through program design.
Dynamic Linking
Linking is postponed until execution time. A small piece of code, the stub, is used to locate the appropriate memory-resident library routine. The stub replaces itself with the address of the routine and executes the routine. The operating system is needed to check whether the routine is in the process's memory address space. Dynamic linking is particularly useful for libraries; such a system is also known as shared libraries.
Overlays
Advantages: needed when a process is larger than the amount of memory allocated to it; no special support needed from the operating system. Problems: programming design of the overlay structure is complex.
Swapping
A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution.
Backing store: a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images. Roll out, roll in: a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.
Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
Contiguous Allocation
Main memory is usually divided into two partitions: the resident operating system, usually held in low memory with the interrupt vector, and user processes, held in high memory. Relocation registers are used to protect user processes from each other, and from changing operating-system code and data. The base register contains the value of the smallest physical address; the limit register contains the range of logical addresses; each logical address must be less than the limit register. The MMU maps logical addresses dynamically.
Multiple-partition allocation. A hole is a block of available memory; holes of various sizes are scattered throughout memory. When a process arrives, it is allocated memory from a hole large enough to accommodate it. The operating system maintains information about: a) allocated partitions, b) free partitions (holes).
(Figure: successive memory snapshots; as processes 8, 9, and 10 arrive and depart, holes of varying size open up between the OS, process 5, and process 2.)
First-fit: allocate the first hole that is big enough. Next-fit: like first-fit, but search starting from the last allocation. Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size; produces the smallest leftover hole. Worst-fit: allocate the largest hole; must also search the entire list; produces the largest leftover hole.
First-fit and best-fit better than worst-fit in terms of speed and storage utilization
(Figures: example allocations under First-Fit, Best-Fit, and Worst-Fit.)
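The four placement strategies can be sketched over a free list of (start, size) holes; the hole list below is a made-up example:

```python
# Sketch of the placement strategies; each returns the index of the
# chosen hole in the free list, or None if nothing fits.

def first_fit(holes, size):
    for i, (_, hole_size) in enumerate(holes):
        if hole_size >= size:
            return i                      # first hole big enough
    return None

def next_fit(holes, size, last):
    n = len(holes)
    for k in range(n):
        i = (last + k) % n                # resume from last allocation
        if holes[i][1] >= size:
            return i
    return None

def best_fit(holes, size):
    fits = [i for i, (_, s) in enumerate(holes) if s >= size]
    return min(fits, key=lambda i: holes[i][1]) if fits else None

def worst_fit(holes, size):
    fits = [i for i, (_, s) in enumerate(holes) if s >= size]
    return max(fits, key=lambda i: holes[i][1]) if fits else None

holes = [(0, 5), (10, 12), (30, 7)]       # hypothetical (start, size) list
print(first_fit(holes, 6))   # -> 1 (first hole >= 6)
print(best_fit(holes, 6))    # -> 2 (smallest hole that fits)
print(worst_fit(holes, 6))   # -> 1 (largest hole)
```

Note how best-fit and worst-fit must examine every hole, while first-fit stops at the first match.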
Fragmentation
External fragmentation: total memory space exists to satisfy a request, but it is not contiguous. Internal fragmentation: allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
Internal Fragmentation
i.e. wasted memory within a partition. It occurs with fixed-size partitions, when a process does not use the entire partition.
(Figure: fixed partitions of 2M, 4M, and 8M, each holding a smaller process; the unused space inside each partition is internal fragmentation.)
External Fragmentation
i.e. wasted memory outside of any partition. It occurs with variable-size partitions, when the holes are not large enough to hold a new process. Solution: compaction.
(Figure: a 32M memory with variable partitions; after several allocations and releases only small holes remain, so a new process does not fit even though enough total free memory exists.)
Compaction
The operating system shuffles memory contents: free space is combined in one place and allocated space in another, giving one contiguous free region. Compaction eliminates holes by moving processes.
(Figure: the same memory before and after compaction; after compaction the processes are packed together below the operating system and the scattered holes merge into one contiguous free block.)
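Compaction can be sketched as sliding every allocated block toward low memory so the free space coalesces; block names and sizes here are illustrative:

```python
# Sketch of compaction: move all allocated blocks to low memory so that
# the remaining free space forms a single contiguous hole at the top.

def compact(blocks, memory_size):
    """blocks: list of (name, size). Returns (layout, hole) where layout
    is [(name, new_start, size)] and hole is (start, size)."""
    addr = 0
    layout = []
    for name, size in blocks:
        layout.append((name, addr, size))   # block now starts at `addr`
        addr += size
    return layout, (addr, memory_size - addr)

blocks = [("P1", 8), ("P2", 3), ("P3", 11)]   # sizes in MB, hypothetical
layout, hole = compact(blocks, 32)
print(hole)   # -> (22, 10): one 10 MB hole starting at 22 MB
```

The cost the slides allude to is implicit in the loop: every block may have to be copied, which is why compaction is expensive in practice.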
Disadvantages of Partitioning
Fragmentation is unavoidable. The entire process must be loaded into memory, which limits the number of active processes. Solution: paging and segmentation.
Buddy system
This allocation and deallocation strategy is called the buddy system. The size of each allocatable block is a power of 2. Placement algorithm: 1. find the best-fit block; 2. repeatedly divide the block in half while the process still fits in the smaller half. As blocks become free, buddies should be coalesced.
Example
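A minimal sketch of the two buddy-system rules above, assuming a hypothetical 1024-unit maximum block:

```python
# Buddy-system sketch: repeatedly halve a block while the request still
# fits, and locate a block's buddy for coalescing.

def buddy_block_size(request, max_size=1024):
    """Size of the power-of-two block the allocator would hand out."""
    size = max_size
    while size // 2 >= request:
        size //= 2                      # split the block in half
    return size

def buddy_of(addr, size):
    """Address of the buddy: the other half of the parent block.
    Flipping the bit at the block's size gives the sibling's address."""
    return addr ^ size

print(buddy_block_size(100))   # -> 128 (smallest power of 2 >= 100)
print(buddy_block_size(240))   # -> 256
print(buddy_of(128, 128))      # -> 0 (0 and 128 are 128-unit buddies)
```

A request for 100 units wastes 28 units of its 128-unit block: the internal fragmentation the buddy system trades for fast coalescing.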
Paging
The logical address space of a process can be noncontiguous; the process is allocated physical memory wherever it is available. Divide physical memory into fixed-size blocks called frames (the size is a power of 2, between 512 bytes and 8,192 bytes). Divide logical memory into blocks of the same size called pages. Keep track of all free frames. To run a program of size n pages, find n free frames and load the program. Set up a page table to translate logical to physical addresses.
Paging (Example)
Load pages into empty frames. Sufficient frames must exist for the entire process. No compaction is needed, since pages can be loaded into non-contiguous frames.
(Figure: main memory frames 0-15. Process D's pages 1-4 occupy frames 0-3, previously held by process A; process B's pages 1-6 occupy frames 4-9; process C's pages 1-4 occupy frames 10-13; process D's pages 5-6 occupy frames 14-15.)
Page Tables
One page table for each process. The page table translates logical addresses to physical addresses.
Process B's page table:
Page 0 -> Frame 4, Page 1 -> Frame 5, Page 2 -> Frame 6, Page 3 -> Frame 7, Page 4 -> Frame 8, Page 5 -> Frame 9

Process D's page table:
Page 0 -> Frame 0, Page 1 -> Frame 1, Page 2 -> Frame 2, Page 3 -> Frame 3, Page 4 -> Frame 14, Page 5 -> Frame 15

(Figure: main memory frames 0-15 holding the pages of processes B, C, and D at the frames listed above.)
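The translation a page table performs can be sketched with Process B's mapping from the figure (pages 0-5 in frames 4-9); the 1 KB page size is an assumption for the example:

```python
# Sketch of logical-to-physical translation through a page table.
# Page size of 1024 bytes is assumed for illustration.

PAGE_SIZE = 1024
page_table_B = {0: 4, 1: 5, 2: 6, 3: 7, 4: 8, 5: 9}   # page -> frame

def translate(logical_address, page_table):
    page = logical_address // PAGE_SIZE    # page number (high-order bits)
    offset = logical_address % PAGE_SIZE   # offset within the page
    frame = page_table[page]               # page-table lookup
    return frame * PAGE_SIZE + offset      # same offset, new frame

print(translate(2100, page_table_B))   # page 2, offset 52 -> 6196
```

Logical address 2100 falls in page 2 at offset 52; page 2 lives in frame 6, so the physical address is 6 * 1024 + 52 = 6196.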
Paging Hardware
Paging Example
The page table is kept in main memory. The page-table base register (PTBR) points to the page table; the page-table length register (PTLR) indicates the size of the page table. In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction. The two-memory-access problem can be solved with a special fast-lookup hardware cache called the translation look-aside buffer (TLB). Some TLBs store an address-space identifier (ASID) in each TLB entry; it uniquely identifies each process, providing address-space protection for that process.
With TLB: Teff = h * (Ttlb + Tm) + (1 - h) * (Ttlb + 2Tm) = 0.80 * (20 + 100) + 0.20 * (20 + 200) = (0.80 * 120) + (0.20 * 220) = 140 nanoseconds. Without TLB: Teff = 2Tm = 2 * 100 = 200 nanoseconds.
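The effective-access-time arithmetic above can be reproduced directly:

```python
# Effective memory access time with a TLB, as in the slide's example.
# h = TLB hit ratio, t_tlb = TLB lookup time, t_m = memory access time (ns).

def effective_access_time(h, t_tlb, t_m):
    hit = t_tlb + t_m          # TLB hit: one memory access
    miss = t_tlb + 2 * t_m     # TLB miss: page-table access, then data
    return h * hit + (1 - h) * miss

print(effective_access_time(0.80, 20, 100))   # ~ 140 ns, vs 200 ns without a TLB
```

Raising the hit ratio pulls the effective time toward the single-access case, which is why even a small TLB pays off.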
Memory Protection
A valid-invalid bit is attached to each entry in the page table: valid indicates that the associated page is in the process's logical address space and is thus a legal page; invalid indicates that the page is not in the process's logical address space.
Segmentation
A memory-management scheme that supports the user view of memory. A program is a collection of segments. A segment is a logical unit such as:
main program, procedure, function, local variables, global variables, common block, stack, symbol table, arrays
Protect each entity independently Allow each segment to grow independently Share each segment independently
(Figure: user view of a program as numbered segments in logical address space, mapped into physical memory.)
Segmentation Architecture
A logical address consists of a two-tuple: <segment-number, offset>. The segment table maps these two-dimensional addresses to one-dimensional physical addresses; each table entry has: base, containing the starting physical address where the segment resides in memory, and limit, specifying the length of the segment. The segment-table base register (STBR) points to the segment table's location in memory; the segment-table length register (STLR) indicates the number of segments used by a program.
Segmentation Hardware
Example of Segmentation
Protection. With each entry in the segment table associate: a validation bit (0 means an illegal segment) and read/write/execute privileges. Protection bits are associated with segments; code sharing occurs at the segment level. Since segments vary in length, memory allocation is a dynamic storage-allocation problem.
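Segment-table translation with the base/limit check can be sketched as follows; the table values are illustrative, not taken from the source:

```python
# Sketch of segmentation hardware: each segment-table entry is
# (base, limit); offsets at or beyond the limit trap to the OS.

segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # hypothetical

def translate(segment, offset):
    """Map <segment-number, offset> to a physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))   # -> 4353 (segment 2 starts at 4300)
```

Unlike a page-table lookup, the limit must be checked explicitly here because segments vary in length.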