Synchronization in Multiprocessor Systems

The document provides an overview of process synchronization techniques in operating systems. It discusses the critical section problem that can occur when multiple processes access shared data concurrently. It introduces Peterson's solution and synchronization hardware like test-and-set and swap instructions to solve the critical section problem. It also explains semaphores as a classic synchronization primitive that uses wait() and signal() operations to control access to shared resources and critical sections. The document provides examples of using binary and counting semaphores to synchronize processes and ensure mutual exclusion.

Uploaded by

Huzaifa Anjum

UNIT- 3

Chapter 1: Process Synchronization

 Background

 The Critical-Section Problem

 Peterson’s Solution

 Synchronization Hardware

 Semaphores

 Classic Problems of Synchronization

 Monitors

Background

 A cooperating process can affect or be affected by other processes.

 Concurrent access to shared data may result in data inconsistency.

 Maintaining data consistency requires mechanisms to ensure the orderly execution of


cooperating processes.

Cooperating Processes: Shared Memory

#define BUFFER_SIZE 10

typedef struct {
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int count = 0;   /* number of filled slots; shared by producer and consumer */

Producer and Consumer: Producer routine


while (true) {
    /* produce an item and put it in nextProduced */
    while (count == BUFFER_SIZE)
        ;   // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}

Producer and Consumer: Consumer routine

while (true) {
    while (count == 0)
        ;   // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}

Producer and Consumer

 Both producer and consumer routines are correct separately.

 They may not function correctly when run concurrently.

 If count is 5 and both routines execute “count++” and “count--” concurrently, the final value of
count may be 4, 5, or 6 (while the only correct value is 5).

 count++ could be implemented as

register1 = count
register1 = register1 + 1
count = register1

 count-- could be implemented as

register2 = count
register2 = register2 - 1
count = register2

 Consider this execution interleaving with “count = 5” initially (many other interleavings are
possible):

T0: producer execute register1 = count {register1= 5}


T1: producer execute register1 = register1 + 1 {register1= 6}
T2: consumer execute register2 = count {register2 = 5}
T3: consumer execute register2 = register2 - 1 {register2 = 4}
T4: producer execute count = register1 {count = 6 }
T5: consumer execute count = register2 {count = 4}

 An incorrect state occurs when both processes are allowed to manipulate the variable count
concurrently.

 A situation where several processes access and manipulate the same data concurrently and the
outcome of the execution depends on the particular order in which the execution takes place, is
called race condition.

 Thus only one process should be allowed to manipulate count at a time; hence synchronization
is required.

The Critical-Section Problem

 n processes compete to use some shared data.

 Each process has a code segment, called critical section, in which the shared data is accessed
e.g. changing common variables, updating a table, writing a file etc.

 It is necessary to ensure that when one process is executing in its critical section, no other
process is allowed to execute in its critical section.

Solution to CS using LOCKS

 Race conditions can be prevented by protecting the critical section by LOCK

 A process must acquire a lock before entering a critical section and releases the lock when it
exits the critical section.
Solution to Critical-Section Problem

General structure of process Pi

do {

entry section

critical section

exit section

remainder section

} while (true);

The Critical-Section Problem

 Each process must request permission to enter its critical section which is implemented as entry
section.

 Critical section is followed by exit section.

 The remaining code is remainder section.

 A solution to Critical-Section Problem must satisfy three requirements.

Three requirements for the solution to Critical-Section Problem

1. Mutual Exclusion:
If process Pi is executing in its critical section, then no other processes can be executing in their
critical sections.

2. Progress:

If no process is executing in its critical section and there exist some processes that wish to enter
their critical section,

then the selection of the processes that will enter the critical section next cannot be postponed
indefinitely

(only those processes can enter which are not executing in their remainder section)

3. Bounded Waiting:

A bound must exist on the number of times that other processes are allowed to enter their
critical sections after a process has made a request to enter its critical section and before that request is
granted

Critical-Section Problem

 Many kernel mode processes may be involved in race condition.

 Two general approaches are used to handle critical sections in OS

 Preemptive kernels

 It allows a process to be preempted while it is running in kernel mode.

 Nonpreemptive kernels

 A process will run until it exits kernel mode.

 Nonpreemptive kernels are free from race condition on kernel data structures.

 Nonpreemptive kernels: Windows XP, 2000, prior to Linux 2.6

 Preemptive kernels must be carefully designed to ensure that shared kernel data are free from
race conditions.

 Preemptive kernels are difficult to design for SMP.

 A preemptive kernel is suitable for real-time programming, which requires short response times.

 Preemptive kernels: Linux 2.6 onwards, Solaris, IRIX

Peterson’s Solution
 Two process solution

 The two processes share two variables:

 int turn;

 Boolean flag[2]

 The variable turn indicates whose turn it is to enter the critical section.

 The flag array is used to indicate if a process is ready to enter the critical section. flag[i] = true
implies that process Pi is ready.

Algorithm for Process Pi
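The slide with the algorithm itself is not reproduced above. The following is a sketch of Peterson's solution for process Pi (i is 0 or 1; the entry and exit sections are written as functions, and the busy wait is shown as an empty loop):

```c
#include <stdbool.h>

// Shared variables, as described above.
bool flag[2] = {false, false};  // flag[i] == true means Pi wants to enter
int turn = 0;                   // whose turn it is to enter

void peterson_enter(int i) {
    int j = 1 - i;
    flag[i] = true;   // announce intent to enter
    turn = j;         // give the other process priority
    while (flag[j] && turn == j)
        ;             // busy-wait while Pj is interested and has the turn
}

void peterson_exit(int i) {
    flag[i] = false;  // leave the critical section
}
```

With no contention, a process passes straight through the entry section; with contention, the last writer of turn is the one that waits, which gives both mutual exclusion and bounded waiting.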


Synchronization Hardware
 Many systems provide hardware support for protecting critical-section code

 Disabling interrupts on uniprocessor systems

 Prevents interrupts from occurring while a shared variable is being modified.

 Such code executes without preemption; this is the approach used by nonpreemptive kernels.

 This approach is inefficient on multiprocessor systems

 Operating systems using it are not broadly scalable

 Modern machines provide special atomic hardware instructions

 TestAndSet()

 Swap()

TestAndSet Instruction

 TestAndSet() instruction is executed atomically.

boolean TestAndSet (boolean *target) {
    boolean temp = *target;
    *target = TRUE;
    return temp;
}

Solution using TestAndSet

 Shared boolean variable lock, initialized to false.
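The slide code is not reproduced above; a sketch of the lock built from TestAndSet() follows (atomicity of test_and_set is assumed here, as the hardware instruction provides it; the function names are ours):

```c
#include <stdbool.h>

// Models the atomic hardware instruction: returns the old value
// of *target and sets it to true in one indivisible step.
bool test_and_set(bool *target) {
    bool old = *target;
    *target = true;
    return old;
}

bool lock = false;   // shared; false means the lock is free

void acquire(void) {
    while (test_and_set(&lock))
        ;            // spin until the old value was false (lock was free)
    /* critical section follows */
}

void release(void) {
    lock = false;    // remainder section follows
}
```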

Swap Instruction
 Swap() is executed atomically

void Swap (boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

Solution using Swap

 Shared Boolean variable lock initialized to FALSE

 Each process has a local Boolean variable key.
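The slide code is not reproduced above; a sketch of the Swap-based lock follows (the swap is assumed atomic, as the hardware instruction provides; function names are ours):

```c
#include <stdbool.h>

bool lock = false;   // shared; false means the lock is free

// Models the atomic hardware Swap() instruction.
void swap(bool *a, bool *b) {
    bool temp = *a;
    *a = *b;
    *b = temp;
}

void acquire_swap(void) {
    bool key = true;          // local to each process
    while (key)               // keep swapping until we pull out "false"
        swap(&lock, &key);    // lock becomes true; key gets old lock value
    /* critical section follows */
}

void release_swap(void) {
    lock = false;
}
```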

Bounded waiting algorithm using TestAndSet()

 The TestAndSet() and Swap() algorithms above provide mutual exclusion but do not guarantee
bounded waiting.

 The bounded-waiting algorithm using the TestAndSet() instruction satisfies all three
requirements of the critical-section problem.

 Following data structures are initialized as false

boolean waiting [n];

boolean lock;
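The algorithm itself is on a slide not reproduced above; the following is a sketch of the standard bounded-waiting entry and exit sections using TestAndSet() (n fixed at 4 here for illustration; function names are ours):

```c
#include <stdbool.h>

#define N 4              // number of processes (illustrative)

bool waiting[N];         // all false initially
bool lock = false;

// Models the atomic hardware instruction.
bool test_and_set(bool *target) {
    bool old = *target;
    *target = true;
    return old;
}

void bw_enter(int i) {
    waiting[i] = true;
    bool key = true;
    while (waiting[i] && key)
        key = test_and_set(&lock);  // spin until lock was free or we were handed the CS
    waiting[i] = false;
    /* critical section follows */
}

void bw_exit(int i) {
    int j = (i + 1) % N;
    while (j != i && !waiting[j])   // scan for the next waiting process, in order
        j = (j + 1) % N;
    if (j == i)
        lock = false;               // nobody waiting: free the lock
    else
        waiting[j] = false;         // hand the critical section directly to Pj
}
```

Because the exit section scans processes in cyclic order, any waiter gets in within n − 1 turns, which is what gives bounded waiting.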
Semaphore

 Semaphore S: integer variable

 Two standard indivisible (atomic) operations can access semaphore

 wait() and signal()

(Originally called P() and V())

 All the modifications to the semaphore in the wait() and signal() must be executed atomically.

wait()

signal()
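The definitions themselves are on a slide not reproduced above; a minimal busy-wait sketch follows (the names sema_wait/sema_signal are ours; each operation is assumed to execute atomically, as stated above):

```c
typedef struct {
    int value;   // the semaphore's integer value
} semaphore;

// wait(): busy-wait until the value is positive, then decrement.
void sema_wait(semaphore *S) {
    while (S->value <= 0)
        ;        // do nothing
    S->value--;
}

// signal(): increment the value, potentially releasing a waiter.
void sema_signal(semaphore *S) {
    S->value++;
}
```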
Semaphore: usage

 Counting semaphore: integer value can range over an unrestricted domain

 Binary semaphore: integer value can range only between 0 and 1, it is simpler to implement

 Also known as mutex locks

Semaphore: usage-1

 Binary semaphore can be used to deal with critical section problem for multiple processes.

 Semaphore mutex is initialized as 1.

Semaphore: usage-2

 Counting semaphore can be used to control access to a given resource consisting of a finite
number of instances.

 Semaphore is initialized as number of instances of a resource type.

 Each process that wishes to use the resource performs a wait() operation, decrementing the
semaphore by one.

 When a process releases the resource, it performs a signal() operation, incrementing the
semaphore by one.

 Zero value of semaphore means no instance of the resource is available.
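The counting-semaphore usage above can be sketched as follows (3 instances chosen for illustration; busy-wait semaphore operations and the function names are ours):

```c
#define INSTANCES 3          // illustrative number of resource instances

int sem = INSTANCES;         // semaphore initialized to the instance count

void sema_wait(int *s)   { while (*s <= 0) ; (*s)--; }  // assumed atomic
void sema_signal(int *s) { (*s)++; }                    // assumed atomic

void use_resource(void) {
    sema_wait(&sem);         // take one instance; blocks (spins) if none left
    /* ... use one instance of the resource ... */
}

void release_resource(void) {
    sema_signal(&sem);       // return the instance
}
```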

Semaphore: usage-3

 Two concurrent processes, P1 executing statement S1 and P2 executing statement S2.

 S1 should be followed by S2.

 Semaphore synch is initialized as zero.
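The ordering scheme above can be sketched as follows: P1 runs S1 and then signals; P2 waits before running S2, so S2 cannot start until S1 has finished. The variable last_stmt and the function names are ours, added only to make the order observable:

```c
int synch = 0;       // initialized to zero, as stated above

void sema_wait(int *s)   { while (*s <= 0) ; (*s)--; }  // assumed atomic
void sema_signal(int *s) { (*s)++; }                    // assumed atomic

int last_stmt = 0;   // records which statement ran last (illustration only)

void p1(void) {
    last_stmt = 1;           /* S1 */
    sema_signal(&synch);     // allow S2 to proceed
}

void p2(void) {
    sema_wait(&synch);       // blocks until P1 has signalled
    last_stmt = 2;           /* S2 */
}
```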


Semaphore: Implementation

 The semaphore definition given so far requires busy waiting.

 While one process is in its critical section, any other process that tries to enter its own
critical section must loop continuously, checking the condition S ≤ 0.

 This type of semaphore is called a spinlock.

 Busy waiting wastes CPU cycle.

Modified definition of Semaphore

 When a process executes wait() and finds that value of semaphore is not positive, process blocks
itself rather than busy waiting by block() operation.

 The block operation places a process into a waiting queue associated with semaphore, and the
state of the process is switched to the waiting state.

Modified definition of Semaphore

 A blocked process on a semaphore will be restarted when some other process executes a
signal() operation.

 Blocked process is restarted by a wakeup() operation, which changes the process from waiting
state to ready state.

 block() and wakeup() are provided as basic system calls by the operating system.

Modified definition of Semaphore

 Each semaphore has an integer value and a list of processes.

 When a process must wait on a semaphore, it is added to the list of processes.

 A signal() removes one process from the list of waiting processes and awakens that process.
Semaphore Implementation with no Busy waiting

 With each semaphore there is an associated waiting queue. Each semaphore has
two data items:

 value (of type integer)

 pointer to the list of waiting processes

 Two operations:

 block() – place the process invoking the operation on the appropriate waiting queue.

 wakeup() – remove one of processes in the waiting queue and place it in the ready
queue.

Semaphore: Implementation

 Implementation must guarantee that no two processes can execute wait () and signal () on the
same semaphore at the same time

 Thus, it becomes the critical section problem where the wait and signal code are placed in the
critical section.

 In uniprocessor system, interrupts can be disabled during wait() and signal() operations.
 In SMP, wait() and signal() should be performed atomically.

 If the critical-section code is short, a spinlock is a good solution

 There will be little busy waiting if the critical section is rarely occupied

 If applications spend a lot of time in critical sections, then the block() and wakeup() solution is
better, even though it incurs context switching.

Deadlock and Starvation

 Deadlock

 Two or more processes are waiting indefinitely for an event that can be caused by only
one of the waiting processes

 Starvation

 Indefinite blocking: a process may never be removed from the semaphore queue in
which it is suspended (e.g., if the queue is handled in LIFO order).

Semaphore: Deadlocks

Classical Problems of Synchronization

 Bounded-Buffer Problem

 Readers and Writers Problem

 Dining-Philosophers Problem
Bounded-Buffer Problem

 Two processes share a common fixed size buffer. One of them, the producer, puts information
in buffer. Other one, the consumer, takes it out.

Problems:

 Producer wants to put a new item but buffer is already full.

 Consumer wants to take a new item but buffer is empty.

Solutions:

 When buffer is full, producer is sent to sleep and gets awakened when consumer takes an item
from buffer.

 When buffer is empty, consumer is sent to sleep and gets awakened when producer puts an
item in buffer.

 Shared data

#define N 10           /* buffer size */

semaphore full = 0;    /* counts filled buffer slots */

semaphore empty = N;   /* counts empty buffer slots */

semaphore mutex = 1;   /* controls access to critical regions */


Producer Process:

do {
    ...
    /* produce an item in nextp */
    wait (empty);
    wait (mutex);
    /* add nextp to buffer */
    signal (mutex);
    signal (full);
} while (1);

Consumer Process:

do {
    wait (full);
    wait (mutex);
    /* remove an item from buffer to nextc */
    signal (mutex);
    signal (empty);
    /* consume the item in nextc */
} while (1);

Readers-Writers Problem

 A data object can be shared among several concurrent processes. Some processes (readers) only
read the content of the object but some processes (writers) write the content in object.

Problems:

 When two readers access the object simultaneously, no adverse effect results.

 When a writer and a reader, or two writers, access the object simultaneously,
inconsistency may occur.

Solutions:

 Writers should have exclusive access of the object.

Shared data
 semaphore mutex = 1 /* ensures mutual exclusion when readcount is updated */

 semaphore wrt = 1 /* controls access to the shared object */

 int readcount = 0 /* number of processes reading the shared object */

Readers-Writers Problem (Writer Process)

Readers-Writers Problem(Reader Process)
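The slide code for the writer and reader processes is not reproduced above; a busy-wait sketch of both, using the shared data just listed, follows (function names are ours; semaphore operations are assumed atomic):

```c
int mutex = 1;       // protects readcount
int wrt = 1;         // writers' exclusive access to the object
int readcount = 0;   // number of readers currently reading

void sema_wait(int *s)   { while (*s <= 0) ; (*s)--; }  // assumed atomic
void sema_signal(int *s) { (*s)++; }                    // assumed atomic

void writer_enter(void) { sema_wait(&wrt); /* ... write ... */ }
void writer_exit(void)  { sema_signal(&wrt); }

void reader_enter(void) {
    sema_wait(&mutex);
    readcount++;
    if (readcount == 1)      // first reader locks out writers
        sema_wait(&wrt);
    sema_signal(&mutex);
    /* ... read ... */
}

void reader_exit(void) {
    sema_wait(&mutex);
    readcount--;
    if (readcount == 0)      // last reader lets writers back in
        sema_signal(&wrt);
    sema_signal(&mutex);
}
```

Note that only the first reader acquires wrt and only the last releases it, so any number of readers overlap while writers get exclusive access.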


Unit 3- Chapter-2
Deadlocks

 System Model

 Deadlock Characterization

 Methods for Handling Deadlocks

 Deadlock Prevention

 Deadlock Avoidance

 Deadlock Detection

 Recovery from Deadlock

Basic Concepts

 In multiprogramming environment, several processes may compete for a finite number of


resources.

 A set of blocked processes, each holding a resource and waiting to acquire a resource held by
another process in the set, creates a deadlock.

 Example:

 System has 2 tape drives.


 P1 and P2 each hold one tape drive and each needs another one.

 A system consists of finite number of resources to be distributed among competing processes.

 The resources are partitioned into several types, each consisting of several instances.

 Processes may compete for same type of resources or different type of resources.

System Model

 Resource types R1, R2, . . ., Rm

e.g. CPU cycles, memory space, I/O devices (printers, tape drives), logical resources like semaphore,
monitor, files

 Each resource type Ri has Wi instances.

 Each process utilizes a resource in following sequence:

 request

 use

 release

 request

 A process must request a resource before using it. If the request cannot be granted
immediately, then the requesting process must wait until it can acquire the resource

 use

 The process can operate on the resource

 release

 The process releases the resource.

 Request and release of resources are system calls.

Deadlock Characterization

 Deadlock can arise if four conditions hold simultaneously:

1. Mutual exclusion

2. Hold and wait


3. No preemption

4. Circular wait

Mutual exclusion

 At least one resource must be held in a non-sharable mode, i.e. only one process at a time can
use the resource.

 The requesting process must be delayed until the resource has been released.

 But mutual exclusion is required to ensure consistency and integrity of a database.

Hold and wait

 A process must be holding at least one resource and waiting to acquire additional resources
held by other processes.

No preemption

 A resource can be released only voluntarily by the process holding it after that process has
completed its task i.e. no resource can be forcibly removed from a process holding it.

Circular wait

 There exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that
is held by P1, P1 is waiting for a resource that is held by P2,…… …, Pn–1 is waiting for a resource
that is held by Pn, and Pn is waiting for a resource that is held by P0.

Resource-Allocation Graph

 Deadlocks can be described more precisely in terms of a directed graph called a system
resource allocation graph.

 A set of vertices V and a set of edges E.

 The set of vertices V is partitioned into two types of nodes:

 P = {P1, P2, …, Pn}, the set consisting of all the active processes in the system.

 R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.

 Request edge: a directed edge Pi → Rj

 Assignment edge: a directed edge Rj → Pi

 Process

 Resource type with 4 instances

 Pi requests an instance of Rj: request edge Pi → Rj

 Pi is holding an instance of Rj: assignment edge Rj → Pi

 Resource allocation modeled by directed graphs

 Example 1:

 Resource R assigned to process A

 Example 2:

 Process B is requesting / waiting for resource S

 Example 3:

 Process C holds T, waiting for U

 Process D holds U, waiting for T

 C and D are in deadlock!


Example of a Resource Allocation Graph

• The sets P, R, and E

• P = {P1, P2, P3}

• R = {R1, R2, R3, R4}

• E = {P1R1, P2R3, R1P2, R2P1, R2P2, R3P3 }

• Resource instances:

• One instance of resource type R1

• Two instances of resource type R2

• One instance of resource type R3

• Three instance of resource type R4


• Process States:

• Process P1 is holding an instance of resource type R2, and is waiting for an instance of
resource type R1

• Process P2 is holding an instance of resource types R1 and R2, and is waiting for an instance
of resource type R3

• Process P3 is holding an instance of resource type R3


Resource Allocation Graph with a Deadlock
Resource Allocation Graph with a Cycle but no Deadlock

Basic Facts

 If graph contains no cycles  no deadlock.

 If graph contains a cycle 

 if only one instance per resource type, then deadlock.

 if several instances per resource type, possibility of deadlock.

Getting into deadlock


Three processes A, B, C acquire resources R, S, T in the following order:

A: Acquire R; Acquire S; Release R; Release S
B: Acquire S; Acquire T; Release S; Release T
C: Acquire T; Acquire R; Release T; Release R

If A acquires R, B acquires S, and C acquires T, then A's request for S, B's request for T,
and C's request for R all block on one another: Deadlock!
Methods for Handling Deadlocks

 Ensure that the system will never enter a deadlock state.

 Schemes are Deadlock Prevention and Deadlock Avoidance

 Allow the system to enter a deadlock state and then recover.

 Ignore the problem and pretend that deadlocks never occur in the system; used by most
operating systems, including UNIX.

Deadlock Prevention

Deadlock Prevention can be implemented by ensuring that at least one of four necessary
conditions for deadlock cannot hold.

Preventing Mutual Exclusion

 It is not required for sharable resources

e.g. opening a file in read mode by many processes.

 Deadlock may occur if request is made for non-sharable resources.


 Thus deadlock can be prevented if resources are sharable

Problem:

 But mutual exclusion must hold for non-sharable resources, thus it can not be avoided.
e.g. write permission for a file by many processes.

Preventing Hold and Wait

 It must be guaranteed that whenever a process requests a resource, it does not hold any other
resources.

 It can be implemented in two ways

 A process requests all its resources before it begins execution.

 A process can request resources only when it holds none; to request additional
resources, it must first release all the resources it currently holds.

 Problem: Low resource utilization; starvation possible.

Preventing Nonpreemption

 If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources currently being held should be released.

 Preempted resources are added to the list of resources for which the process is waiting.

 Process will be restarted only when it can regain its old resources, as well as the new ones
that it is requesting.

 Possible only if state of the process can be saved e.g. CPU registers, memory space.

Preventing Circular Wait

 A total ordering of all resource types is imposed.

 Resource Ri precedes Rj in the ordering if i < j.

 Each process requests resources in an increasing order of enumeration.

F(tape drive) = 1

F(disk drive) = 5

F(printer) = 12
Deadlock Prevention: summary

 Mutual exclusion

 Allow resources to be shared (may not be possible)

 Hold and wait

 Request all resources initially

 No preemption

 Take resources away

 Circular wait

 Order resources numerically

Deadlock Avoidance

 This method requires that the system has some additional prior information available.

 Each process declares the maximum number of resources of each type that it may need.

 The deadlock-avoidance algorithm dynamically examines the resource-allocation state to
ensure that allocating a resource cannot lead to deadlock.

 Resource-allocation state is defined by

 maximum demands of the processes.

 number of allocated resources,

 number of available resources

 Thus each request requires that the system consider

 the resources currently available,

 the resources currently allocated to each process,

 the future requests and releases of each process,

 to decide whether the current request can be satisfied or must wait to avoid a
possible future deadlock.
Safe State

 A state is safe if the system can allocate resources to each process (up to its maximum) in
some order and still avoid a deadlock.

 A system is in a safe state only if there exists a safe sequence.

 Maximum available tape drives in system = 12

      Maximum Needs   Allocated
P0         10             5
P1          4             2
P2          9             2

Available tape drives in system = 12

      Maximum Needs   Allocated   Needs
P0         10             5         5
P1          4             2         2
P2          9             2         7

Now available tape drives = 3

 At time t0, the system is in a safe state. The sequence <P1, P0, P2> satisfies the safety condition.
Explanation:

 P1 can complete with the currently available drives (needs 2 ≤ 3)

 P0 can then complete with the current drives plus those released by P1 (needs 5 ≤ 3 + 2)

 P2 can then complete with those plus the drives released by P0 (needs 7 ≤ 5 + 5)

 Available tape drives in system = 12

      Maximum Needs   Allocated
P0         10             5
P1          4             2
P2          9             3
Available tape drives in system = 12

      Maximum Needs   Allocated   Needs
P0         10             5         5
P1          4             2         2
P2          9             3         6

Available tape drives = 2

 Now, the system is not in a safe state.

 Only P1 can finish with the current 2 available drives; after P1 releases them, only 4 drives are
available, which satisfies neither P0 (needs 5 more) nor P2 (needs 6 more).

 System is in safe state if there exists a safe sequence of all processes i.e. all the currently
running processes can finish their execution in definite time in any possible sequence.

 A safe state is not a deadlock state (or a deadlock state is an unsafe state).

 An unsafe state may lead the system to a deadlock state.

A sequence <P1, P2, …, Pn> is a safe sequence for the current allocation state if, for each Pi,

 the resources that Pi may still request can be satisfied by the currently available resources plus
the resources held by all Pj with j < i.

 If Pi's resource needs are not immediately available, then Pi can wait until all such Pj have finished.

 When Pj is finished, Pi can obtain its needed resources, execute, return its allocated resources, and
terminate.

 When Pi terminates, Pi+1 can obtain its needed resources, and so on.

If no such sequence exists, the system is said to be unsafe.

Safe State: Basic Facts

 If a system is in safe state  no deadlocks.

 If a system is in unsafe state  possibility of deadlock.

 Avoidance  ensure that a system will never enter an unsafe state.


Safe, Unsafe , Deadlock State

 On the basis of concept of a safe state, avoidance algorithms can be defined as

 to ensure that the system will never deadlock.

 to ensure that the system will always remain in a safe state.

 Whenever a process requests a resource that is currently available, the system must decide
whether the resource can be allocated immediately or whether the process must wait.

 The request is granted only if the allocation leaves the system in a safe state.

Resource-Allocation Graph Algorithm

 If we have a resource-allocation system with only one instance of each resource type, a
variant of the resource-allocation graph can be used for deadlock avoidance.

 request edge

 assignment edge

 claim edge.

 Claim edge Pi ⇢ Rj indicates that process Pi may request resource Rj; it is represented by a
dashed line.

 A claim edge converts to a request edge when the process actually requests the resource (Pi → Rj).

 When a resource is released by a process, the assignment edge reconverts to a claim edge.

 The request can be granted only if converting the request edge Pi → Rj to an assignment edge
Rj → Pi does not result in the formation of a cycle in the resource-allocation graph.
 We check for safety by using a cycle-detection algorithm.

 An algorithm for detecting a cycle in this graph requires an order of n² operations, where n is
the number of processes in the system.

 allocation edge Ri → Pj if Ri is currently held by Pj

 request edge Pi → Rj if Pi has requested Rj

 claim edge Pi ⇢ Rj if Pi may eventually request Rj

Resource-Allocation Graph For Deadlock Avoidance

 Avoidance rule: do not allocate a resource if the allocation creates a cycle

 allocating R2 to P1 is OK

 allocating R2 to P2 is NOT ALLOWED because it leads to a cycle in the graph (unsafe state)


Banker’s Algorithm

 Resource-Allocation Graph Algorithm can not be applied to multiple instances

 The Banker's algorithm can be applied to multiple instances, but it is less efficient than the
resource-allocation graph scheme.

 When a new process enters the system, it must declare the maximum number of instances of
each resource type that it may need.

 This number should not exceed the total number of resources in the system

 When a process gets all its resources it must return them in a finite amount of time.

 The banker's algorithm can avoid deadlocks with multiple instances

 Looks at each request for resources and tests if the request moves the system into an unsafe
state

 If the system is still safe, then the request is granted

 If the system would become unsafe, then the request is denied

Data Structures for the Banker’s Algorithm

 n = number of processes, m = number of resource types

 Max [n][m]: maximum demand of Pn for Rm

 Allocation [n][m]: number of instances of Rm currently allocated to Pn

 Available [m]: number of instances of Rm that are unallocated

 Need [n][m]: number of instances of Rm that may still be needed by Pn

 Note: Need [n, m] = Max [n, m] – Allocation [n, m]

 Let n = number of processes,

m = number of resources types.

Data Structures:

 Available: (R1, R2,..…, Rm):

 A vector of length m indicates the number of available resources of each type.


 If Available [j] = k, there are k instances of resource type Rj available.

 Max:

 n x m matrix defines maximum demand of each process

 If Max [i, j] = k, then process Pi may request at most k instances of resource type Rj.

R0 R1 R2

P0 3 1 4

P1 2 4 3

 Allocation:

 n x m matrix defines number of resources of each type currently allocated to each


process

 If Allocation [i, j] = k then Pi is currently allocated k instances of Rj.

R0 R1 R2

P0 2 0 2

P1 1 3 2

 Need:

 n x m matrix (number of remaining resource need of each process)

 If Need [i, j] = k, then Pi may need k more instances of Rj to complete its task.

 Need [i, j] = Max [i, j] – Allocation [i, j].

R0 R1 R2

P0 1 1 2

P1 1 1 1
Safety Algorithm

1. Let Work and Finish be vectors of length m and n, respectively. Initialize:

Work := Available

Finish [ i] := false for i = 1,2,3, …, n.

2. Find an i such that both:

(a) Finish [i] = false

(b) NeediWork

If no such i exists, go to step 4.

3. Work := Work + Allocationi


Finish [i ] := true
go to step 2.

4. If Finish [i ] = true for all i, then the system is in a safe state.

 This algorithm may require an order of m × n² operations to decide whether a state is safe.
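The safety algorithm above can be sketched directly in C (the dimensions NP and NR, and all function and variable names, are ours; the data layout matches the Max/Allocation/Need/Available structures just described):

```c
#include <stdbool.h>

#define NP 5   // number of processes (matches the worked example later)
#define NR 3   // number of resource types

// Returns true if the state is safe; on success, seq[] holds a safe sequence.
bool is_safe(int available[NR], int alloc[NP][NR], int need[NP][NR],
             int seq[NP]) {
    int work[NR];
    bool finish[NP] = {false};
    for (int r = 0; r < NR; r++)
        work[r] = available[r];          // Step 1: Work := Available

    int count = 0;
    bool progress = true;
    while (progress) {                   // Step 2: find an unfinished Pi with Needi <= Work
        progress = false;
        for (int i = 0; i < NP; i++) {
            if (finish[i]) continue;
            bool ok = true;
            for (int r = 0; r < NR; r++)
                if (need[i][r] > work[r]) { ok = false; break; }
            if (ok) {                    // Step 3: Pi finishes and releases its resources
                for (int r = 0; r < NR; r++)
                    work[r] += alloc[i][r];
                finish[i] = true;
                seq[count++] = i;
                progress = true;
            }
        }
    }
    return count == NP;                  // Step 4: safe iff all processes finished
}
```

Run on the snapshot used in the worked example later in this chapter, this yields the safe sequence <P1, P3, P4, P0, P2>.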

Resource-Request Algorithm for Process Pi

 Requesti is request vector for process Pi.

 If Requesti[ j] = k then process Pi wants k instances of resource type Rj.

1. If RequestiNeedigo to step 2.

Otherwise, raise error condition, since process has exceeded its maximum claim.

1. If RequestiAvailable, go to step 3.

Otherwise Pi must wait, since resources are not available.

3. Pretend to allocate requested resources to Pi by modifying the state as follows:

Available := Available -Requesti

Allocationi:= Allocationi + Requesti

Needi:= Needi – Requesti

 If resulting resource allocation is safe  the resources are allocated to Pi.


 If resulting resource allocation is unsafe  Pi must wait, and the old resource-
allocation state is restored
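The three steps above can be sketched as follows (names are ours; a full Banker's implementation would re-run the safety algorithm on the pretended state and roll back if it is unsafe):

```c
#define NR 3   // number of resource types (illustrative)

// Steps 1-3 of the resource-request algorithm for one process Pi.
// Returns -1 on error (request exceeds maximum claim), 0 if Pi must
// wait, and 1 after the pretend-allocation has been applied.
int try_request(int request[NR], int available[NR],
                int alloc_i[NR], int need_i[NR]) {
    for (int r = 0; r < NR; r++)
        if (request[r] > need_i[r])
            return -1;                   // step 1: exceeded maximum claim
    for (int r = 0; r < NR; r++)
        if (request[r] > available[r])
            return 0;                    // step 2: resources not available
    for (int r = 0; r < NR; r++) {       // step 3: pretend to allocate
        available[r] -= request[r];
        alloc_i[r]   += request[r];
        need_i[r]    -= request[r];
    }
    return 1;                            // caller must now check safety
}
```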

Summary of Banker’s Algorithm

To solve the problems, first run safety algorithm:

 Need = Max – Allocation

 Check the following:

 If Needi ≤ Available,

 Execute each i & finishi becomes true

 Available := Available (before execution) + Allocationi

 If Finish is true for all processes, they form the safe sequence

 If not all processes can be placed in such a sequence, the system state is not safe

If a request is made, run Resource-Request algorithm

Check the following:

 If Requesti ≤ Needi

 If Requesti ≤ Available,

Pretend to allocate request, change

 Available := Available - Requesti

 Allocationi := Allocationi + Requesti

 Needi := Needi – Requesti

 Again run Safety algorithm on updated data structure, find safe sequence

 If safe sequence exist, above request can be allocated, otherwise not

Example of Banker’s Algorithm

 5 processes P0 through P4

 3 resource types

 A (9 instances)

 B (5 instances)
 C (7 instances)

Snapshot at time T0 (Need = Max − Allocation):

           Max        Allocation      Need      Available
          A B C         A B C        A B C        A B C
   P0     6 3 3         1 1 0        5 2 3        2 2 3
   P1     4 3 2         2 1 0        2 2 2
   P2     8 1 2         2 0 2        6 1 0
   P3     2 1 2         2 1 0        0 0 2
   P4     3 0 3         0 0 2        3 0 1

 For P0

  Need0 (5 2 3) is not ≤ Available (2 2 3)

  Finish0 remains false

 For P1

 Need1 (2 2 2) ≤ Available (2 2 3)
 Finish1 := true

 Available := Available (2 2 3) + Allocation1 (2 1 0)

:= 4 3 3

 For P2

 Need2 (6 1 0) is not ≤ Available (4 3 3)

 Finish2 remains false

 For P3

 Need3 (0 0 2) ≤ Available (4 3 3)

 Finish3 := true

 Available := Available (4 3 3) + Allocation3 (2 1 0)

:= 6 4 3

 For P4

 Need4 (3 0 1) ≤ Available (6 4 3)

 Finish4 := true

 Available := Available (6 4 3) + Allocation4 (0 0 2)

:= 6 4 5

 For P0

 Need0 (5 2 3) ≤ Available (6 4 5)

 Finish0 := true

 Available := Available (6 4 5) + Allocation0 (1 1 0)

:= 7 5 5

 For P2

 Need2 (6 1 0) ≤ Available (7 5 5)

 Finish2 := true

 Available := Available (7 5 5) + Allocation2 (2 0 2)


:= 9 5 7

 The system is in a safe state since the sequence <P1, P3, P4, P0, P2> satisfies safety criteria.
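The hand trace above can be reproduced with a short Python sketch of the safety algorithm (illustrative only; a pass-based scan happens to visit the processes in the same order as the trace):

```python
# Safety-algorithm sketch for the worked example (vectors as plain lists).
available  = [2, 2, 3]
allocation = [[1, 1, 0], [2, 1, 0], [2, 0, 2], [2, 1, 0], [0, 0, 2]]
max_claim  = [[6, 3, 3], [4, 3, 2], [8, 1, 2], [2, 1, 2], [3, 0, 3]]
need = [[m - a for m, a in zip(mi, ai)] for mi, ai in zip(max_claim, allocation)]

work = available[:]                  # Work := Available
finish = [False] * 5
sequence = []
progress = True
while progress:                      # repeat passes until no process can run
    progress = False
    for i in range(5):
        if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
            work = [w + a for w, a in zip(work, allocation[i])]  # reclaim
            finish[i] = True
            sequence.append(f"P{i}")
            progress = True

print(sequence)   # ['P1', 'P3', 'P4', 'P0', 'P2'] -> safe state
```

Other scan orders can yield different (equally valid) safe sequences; safety only requires that one exists.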

Deadlock Detection

 Allow system to enter into deadlock state

 Detection algorithm

 Single Instance of Each Resource Type

 Several Instances of a Resource Type

 Recovery scheme

 Process Termination

 Resource Preemption

Single Instance of Each Resource Type

 A wait-for graph is a variant of the resource allocation graph.

 In wait-for graph

 Nodes are processes.

 PiPj (if Piis waiting for Pj to release a resource)

 Wait-for graph is obtained from the resource allocation graph by removing the nodes of
resource types and collapsing the appropriate edges.

 To detect deadlock, the system needs to maintain the wait-for graph.

 Periodically, the system invokes an algorithm that searches for a cycle in the graph.
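One way to search for such a cycle is a depth-first traversal. The sketch below is illustrative: the wait-for graph is represented as a hypothetical adjacency map from each process to the processes it is waiting for.

```python
# DFS cycle detection in a wait-for graph (nodes are processes; an edge
# Pi -> Pj means Pi is waiting for Pj).

def has_deadlock(wait_for):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on the DFS stack / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:          # back edge -> cycle
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits for P2, P2 for P3, P3 for P1: a cycle, hence a deadlock.
print(has_deadlock({'P1': ['P2'], 'P2': ['P3'], 'P3': ['P1']}))   # True
```

A back edge to a node still on the DFS stack (GRAY) is exactly a cycle in the graph, which for single-instance resources is both necessary and sufficient for deadlock.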

Resource-Allocation Graph and Wait-for Graph

[Figure: a resource-allocation graph (left) and its corresponding wait-for graph (right)]

Single Instance of Each Resource Type

 An edge PiPj implies that Process Pi is waiting for the Process Pj to release a resource that Pi
needs.

 An edge PiPj exists in a wait-for graph if and only if the corresponding resource allocation
graph contains two edges PiRq& RqPj for some resource type Rq.

Several Instances of a Resource Type

 The wait-for graph is not applicable when resource types have several instances; instead, the
detection algorithm uses the following data structures:

 Available: A vector of length m indicates the number of available resources of each type.

 Allocation: An n x m matrix defines the number of resources of each type currently allocated to
each process.

 Request: An n x m matrix indicates the current request of each process. If Request [i,j] = k, then
process Pi is requesting k more instances of resource type Rj.
Detection Algorithm

1. Let Work and Finish be vectors of length m and n, respectively. Initialize:

(a) Work := Available

(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then


Finish [i] := false; otherwise, Finish [i] := true.

2. Find an index i such that both:

(a) Finish [i] = false

(b) RequestiWork

If no such i exists, go to step 4.

3. Work := Work + Allocationi


Finish [i] := true
go to step 2.

4. If Finish [i] = false for some i, 1 ≤ i ≤ n,

then the system is in a deadlock state.

Moreover, each process Pi with Finish [i] = false is deadlocked.

Example of Detection Algorithm

 5 processes P0 through P4

 3 resource types

 A (6 instances)

 B (2 instances)

 C (5 instances)

Snapshot at time T0:


Allocation Request Available

A B C A B C A B C

P0 0 1 1 0 0 0 0 0 0

P1 3 0 1 1 0 2

P2 2 0 1 0 0 0

P3 0 1 0 1 1 0

P4 1 0 2 1 0 2

 Sequence <P0, P2, P3, P4, P1> will result in Finish [i] = true for all i.

Example of Detection Algorithm

Suppose process P2 now makes one additional request for an instance of type C. Snapshot at time T1:

Allocation Request Available

A B C A B C A B C

P0 0 1 1 0 0 0 0 0 0

P1 3 0 1 1 0 2

P2 2 0 1 0 0 2

P3 0 1 0 1 1 0

P4 1 0 2 1 0 2
 State of system?

 P0 will execute, resources held by process P0 will be added to available

 New available is (0 1 1) but is insufficient to fulfill other processes’ requests.

 Deadlock exists, consisting of processes P1, P2, P3, and P4.
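Both snapshots above can be checked with a sketch of the detection algorithm; note that it differs from the safety algorithm only in comparing Request (rather than Need) against Work. The function name and list representation are illustrative.

```python
# Deadlock-detection sketch: returns the list of deadlocked processes.

def detect_deadlock(available, allocation, request):
    n = len(allocation)
    work = available[:]                              # Work := Available
    # Step 1(b): a process holding no resources is trivially finished.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(r <= w for r, w in zip(request[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progress = True
    return [f"P{i}" for i in range(n) if not finish[i]]

allocation = [[0, 1, 1], [3, 0, 1], [2, 0, 1], [0, 1, 0], [1, 0, 2]]

# First snapshot: every process can finish, so no deadlock.
req1 = [[0, 0, 0], [1, 0, 2], [0, 0, 0], [1, 1, 0], [1, 0, 2]]
print(detect_deadlock([0, 0, 0], allocation, req1))   # []

# Second snapshot: P2 also requests an instance of C.
req2 = [[0, 0, 0], [1, 0, 2], [0, 0, 2], [1, 1, 0], [1, 0, 2]]
print(detect_deadlock([0, 0, 0], allocation, req2))   # ['P1', 'P2', 'P3', 'P4']
```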

Recovery from Deadlock

 Process Termination

 Resource Preemption

Recovery from Deadlock: Process Termination

 Abort all deadlocked processes

 Method will break the deadlock

 Very expensive

 Abort one process at a time until the deadlock cycle is eliminated

 It is also expensive because after aborting each process, a deadlock detection algorithm
must be invoked to find whether processes are still deadlocked.

 Issues related to aborting processes:

 If process was updating the file, it may lead to inconsistent state.

 Decision about the target process to be terminated

 Factors which determine the order in which processes should be aborted:

 What is the priority of the process to be terminated?

 How long the process has computed, and how much longer until completion.

 How many and what type of resources the process has used.

 How many more resources the process needs to complete execution.

 How many processes will need to be terminated.

 Nature of process: interactive or batch?

 Some resources can be preempted to break deadlock.

 Selecting a victim

 Decide which resources are to be preempted, and from which processes

 The cost of preemption is an important factor (e.g., how much execution has completed)

 Rollback

 Either the process is aborted completely,

 or the process is rolled back to some safe state.

 Starvation

 The same process may always be picked as the victim, causing starvation.

 The number of rollbacks can be included in the cost factor to avoid starvation.

Unit 4
Chapter 1: Memory Management Strategies

 Background

 Basic Hardware

 Address Binding

 Logical vs Physical Address Space

 Dynamic Loading

 Dynamic Linking and Shared Libraries

 Swapping

 Contiguous Allocation

 Paging

 Segmentation

Background

 Memory consists of a large array of words or bytes, each with its own address.

 The CPU fetches instructions from memory according to the value of the program counter.

 These instructions may cause additional loading from and storing to specific memory addresses.

 CPU fetch-execute cycle


 fetch next instruction from memory (based on program counter)

 decode the instruction

 possibly fetch operands from memory

 execute the instruction

 store the result in memory

 The memory unit sees only a stream of memory addresses

 It does not distinguish between instructions and data

 It does not care how the address was generated

Basic Hardware

 The CPU can directly access only storage such as main memory and its own registers.

 Thus any instruction to be executed and data being used by the instructions must be in one of
these storages.

 Registers that are built into CPU are accessible within one cycle of the CPU clock.

 CPU can decode instructions and perform operations on registers contents at the rate of one or
more operations per CPU clock tick.

 Main memory is accessed via a transaction on the memory bus, which takes many CPU cycles to
complete; the CPU must wait until the data or instructions are available.

 The speed of memory access does not match the speed at which the CPU works.

 The remedy is to add a faster memory, i.e., a cache, between the CPU and main memory.

 The operating system must be protected from user programs, and user programs from one another.

 Each process should have separate memory space.

 There should be a range of legal addresses that each process may access.

 This protection can be provided by using two registers i.e. base register and limit register

 The base register holds smallest legal physical address

 The limit register specifies the size of the range.

 For example:

 Base register: 30004


 Limit register: 12090

 The program can legally access all addresses from 30004 through 42093 (inclusive)

 Protection of memory space is accomplished by having the CPU hardware compare every address
generated in user mode with these registers.

 Any attempt by a program executing in user mode to access operating system memory or other
users’ memory results in a trap to the operating system, which treats the attempt as a fatal
error.

 The base and limit registers can be loaded only by the operating system, using a special
privileged instruction.

 Privileged instructions can be executed only in kernel mode, and only the OS executes in kernel
mode.

 Thus only the OS can load the base and limit registers.

 This scheme allows OS to change the value of the registers but prevents user programs from
changing registers’ contents.
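The hardware comparison can be sketched as follows. This is an illustration only, using the example register values from the text; `check_address` is a hypothetical name standing in for what the hardware does on every user-mode memory reference.

```python
# Sketch of the base/limit protection check performed by the CPU hardware.

BASE, LIMIT = 30004, 12090   # example register values from the text

def check_address(addr):
    """Return addr if legal; otherwise trap (modeled here as an exception)."""
    if BASE <= addr < BASE + LIMIT:
        return addr
    raise MemoryError(f"trap to OS: illegal access to address {addr}")

check_address(30004)   # lowest legal address (= base)
check_address(42093)   # highest legal address (= base + limit - 1)
```

Any address below the base or at or beyond base + limit traps to the operating system, which treats the attempt as a fatal error.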

Address Binding

 A program resides on a disk as a binary executable file.

 Program must be brought into memory and placed within a process for it to be executed.

 The process may be moved between disk and memory during its execution depending on the
memory management in use.

 Input queue – collection of processes on the disk that are waiting to be brought into memory to
run the program.

 The binding of instructions and data to memory addresses can be done at any step along the
way.

 Address binding of instructions and data to memory addresses can happen at three different
stages:

 Compile time

 Load time

 Execution time

 Compile time

 If memory location is known at compile time, then absolute code can be generated

 Compiled code will start at that location and extend up from there.

 If, at some later time, the starting location changes, then it will be necessary to
recompile this code.

 Load time

 Compiler must generate relocatable code if memory location is not known at compile
time

 Final binding is delayed until load time

 Execution time

 Binding is delayed until run time if the process can be moved during its execution from
one memory segment to another.
 Special hardware must be available for address mapping (e.g., base and limit registers).

 Most general-purpose operating systems use this method.

Multistep Processing of a User Program

Logical vs. Physical Address Space

 The concept of a logical address space that is bound to a separate physical address space is
central to proper memory management

 Logical address – generated by the CPU; also referred to as virtual address

 Physical address – address seen by the memory unit

 The set of all logical addresses generated by a program is a logical-address space.

 The set of all physical addresses corresponding to these logical addresses is a physical-address
space.
 Logical and physical addresses are the same in compile-time and load-time address-binding
schemes.

 Logical (virtual) and physical addresses differ in execution-time address-binding scheme.

 The user program deals with logical addresses; it never sees the real physical addresses.

Memory-Management Unit (MMU)

 The memory-mapping hardware device (MMU) converts logical (virtual) addresses into physical
addresses.

 In MMU general scheme, the value of the relocation register (same as base register) is added to
every logical address generated by a user process at the time it is sent to memory
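This relocation scheme can be sketched as below. The values are hypothetical illustrations: the relocation register plays the role of the base register, and a limit check (from the base/limit scheme described earlier) is shown before the addition.

```python
# Dynamic relocation: the MMU adds the relocation-register value to every
# logical address generated by a user process before it reaches memory.

RELOCATION = 14000   # hypothetical relocation-register value
LIMIT = 3000         # hypothetical limit-register value (size of the range)

def mmu_translate(logical):
    """Map a logical (virtual) address to a physical address, or trap."""
    if not (0 <= logical < LIMIT):        # limit check precedes relocation
        raise MemoryError("trap to OS: logical address out of range")
    return logical + RELOCATION           # physical address

print(mmu_translate(346))   # 14346
```

The user program only ever sees logical addresses (0 up to LIMIT); the physical addresses 14000 through 16999 are never visible to it.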
