Chapter 7: CPU Scheduling and Deadlocks
CPU Scheduling (Basic Concepts)
Scheduling Criteria
Deadlocks (Introduction)
System Model
Deadlock Characterization
Methods for Handling Deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Deadlock Recovery
CPU Scheduling (Basic Concepts)
CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among multiple processes, the OS can make the computer more productive. The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization. With multiprogramming, several processes are kept in memory at the same time, and when one process has to wait (typically for the completion of some I/O request), the OS takes the CPU away from that process and gives it to another process. This pattern continues.
CPU – I/O Burst Cycle: The success of CPU scheduling depends on the fact that process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states. Process execution begins with a CPU burst that is followed by an I/O burst, then another CPU burst, then another I/O burst, and so on. Eventually, the last CPU burst ends with a system request to terminate execution, rather than with another I/O burst. The figure below shows this alternating sequence of CPU and I/O bursts.
CPU Scheduling (Basic Concepts) Fig: Alternating sequence of CPU and I/O bursts
CPU Scheduling (Basic Concepts)
CPU Scheduler: Whenever the CPU becomes idle, the CPU scheduler (or short-term scheduler) selects one of the processes in the ready queue to be executed. The scheduler selects from among the processes in memory that are ready to execute and allocates the CPU to one of them. The selection of a particular process depends on the scheduling algorithm.
Preemptive Scheduling: CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state
3. When a process switches from the waiting state to the ready state
4. When a process terminates
In the first and last circumstances, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. Scheduling under these two circumstances is called nonpreemptive. In the second and third circumstances, however, there is a choice; the scheduling scheme in these circumstances is preemptive.
Dispatcher: The dispatcher is another component involved in the CPU-scheduling function. The dispatcher module gives control of the CPU to the process selected by the short-term scheduler. This function involves:
- Switching context
- Switching to user mode
- Jumping to the proper location in the user program to restart that program
The time it takes the dispatcher to stop one process and start another running is known as the dispatch latency.
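To tie these pieces together, here is a minimal, purely illustrative sketch of a scheduler/dispatcher loop in Python. The names pick_next and run_for, and the use of a deque as the ready queue, are assumptions made for this example, not part of any real OS interface:

```python
from collections import deque

def pick_next(ready_queue):
    """Scheduling policy: here simply FCFS (take the process at the head)."""
    return ready_queue.popleft()

def dispatch(ready_queue, run_for):
    """Minimal dispatch loop: while processes are ready, select one and run it.

    run_for(process) stands in for the context switch, the jump to user mode,
    and the execution of the process until it blocks or terminates.
    """
    while ready_queue:
        process = pick_next(ready_queue)   # short-term scheduler decision
        run_for(process)                   # dispatcher hands the CPU to it

# Example use, with process names standing in for PCBs
dispatch(deque(["P1", "P2", "P3"]), run_for=lambda p: print("running", p))
```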
Scheduling Criteria
Different CPU-scheduling algorithms have different properties and may favor one class of processes over another. Many criteria have been suggested for comparing these algorithms. The criteria include the following:
- CPU utilization: Keep the CPU as busy as possible.
- Throughput: The number of processes completed per time unit.
- Turnaround time: The amount of time it takes to execute a particular process, from submission to completion.
- Waiting time: The amount of time a process has spent waiting in the ready queue.
- Response time: The amount of time from the submission of a request until the first response is produced, not until the output is complete (relevant for time-sharing environments).
In comparing scheduling algorithms, we want to maximize CPU utilization and throughput and minimize turnaround time, waiting time, and response time. In most cases, we optimize the average measure; in some circumstances, however, we optimize the minimum or maximum values rather than the average.
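As an illustration of waiting and turnaround time, the sketch below computes both for processes served in FCFS order; the burst values are made up for the example and are not from the original slides:

```python
def fcfs_metrics(bursts):
    """Given CPU burst lengths (all processes arriving at time 0, served FCFS),
    return per-process waiting and turnaround times plus their averages."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)            # time spent in the ready queue
        clock += burst
        turnaround.append(clock)         # completion time - arrival time (0)
    n = len(bursts)
    return waiting, turnaround, sum(waiting) / n, sum(turnaround) / n

# Illustrative burst times (ms): P1 = 24, P2 = 3, P3 = 3
w, t, avg_w, avg_t = fcfs_metrics([24, 3, 3])
print(w, t, avg_w, avg_t)   # [0, 24, 27] [24, 27, 30] 17.0 27.0
```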
Deadlocks (Introduction)
In a multiprogramming environment, several processes may compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the process enters a wait state. Waiting processes may never again change state, because the resources they have requested are held by other waiting processes. This situation is called a deadlock. A system is deadlocked if there is a set of processes such that every process in the set is waiting for a resource held by another process in the set.
System Model
A system consists of a finite number of resources to be distributed among a number of competing processes. These resources are partitioned into several types, each of which consists of some number of identical instances. A process must request a resource before using it and must release the resource after using it. A process may request as many resources as it requires to carry out its designated task. Under the normal mode of operation, a process may utilize a resource only in the following sequence:
1. Request: If the request cannot be granted immediately, the requesting process must wait until it can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource.
Deadlock Characterization
Necessary Conditions: A deadlock can arise if the following four conditions hold simultaneously.
1. Mutual exclusion: Only one process at a time can use a resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
2. Hold and wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
3. No preemption: A resource can be released only voluntarily by the process holding it, after that process has completed its task.
4. Circular wait: A set {P0, P1, …, Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
Resource-Allocation Graph: Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned into two different types of nodes:
- P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
- R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj, denoted Pi → Rj, signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource. This directed edge is called a request edge. A directed edge from resource type Rj to process Pi, denoted Rj → Pi, signifies that an instance of resource type Rj has been allocated to process Pi. This directed edge is called an assignment edge.
Deadlock Characterization (Cont.)
Pictorially, we represent each process as a circle and each resource type as a square. Since a resource type may have more than one instance, we represent each instance as a dot within the square. A request edge points only to the square, whereas an assignment edge must also designate one of the dots in the square. The figure below shows a resource-allocation graph.
Deadlock Characterization (Cont.)
The resource-allocation graph shown before depicts the following situation.
The sets P, R, and E:
- P = {P1, P2, P3}
- R = {R1, R2, R3, R4}
- E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}
Resource instances:
- One instance of resource type R1
- Two instances of resource type R2
- One instance of resource type R3
- Three instances of resource type R4
Process states:
- Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.
- Process P2 is holding an instance of R1 and an instance of R2, and is waiting for an instance of resource type R3.
- Process P3 is holding an instance of R3.
Deadlock Characterization (Cont.)
If the graph contains no cycles, then no process in the system is deadlocked. If the graph does contain a cycle and each resource type has only one instance, then a deadlock has occurred. If the graph contains a cycle and some resource types have several instances, then a deadlock may or may not exist. The first figure below shows a resource-allocation graph with a deadlock, and the second figure shows a resource-allocation graph with a cycle but no deadlock.
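Since the single-instance case reduces deadlock detection to cycle detection, a minimal sketch of such a check is shown below. The adjacency-list encoding of the graph is an assumption made for illustration; any cycle-finding method would do:

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [successors]}.
    Nodes may be process or resource names; with one instance per resource
    type, a cycle in the resource-allocation graph means deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:          # back edge -> cycle
                return True
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))

# Encodes a deadlocked situation: P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
graph = {"P1": ["R1"], "R1": ["P2"], "P2": ["R3"],
         "R3": ["P3"], "P3": ["R2"], "R2": ["P1"]}
print(has_cycle(graph))   # True -> deadlock (single instance per resource type)
```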
Methods for Handling Deadlocks
We can deal with the deadlock problem in one of three ways:
1. We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlocked state. To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a deadlock-avoidance scheme. Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions for deadlock cannot hold. Deadlock avoidance, on the other hand, requires that the OS be given in advance additional information concerning which resources a process will request and use during its lifetime.
2. We can allow the system to enter a deadlocked state, detect it, and recover. If a system employs neither a deadlock-prevention nor a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment, the system can provide an algorithm that examines the state of the system to determine whether a deadlock has occurred, and an algorithm to recover from the deadlock.
Methods for Handling Deadlocks (Cont.)
3. We can ignore the problem altogether and pretend that deadlocks never occur in the system. If a deadlock does occur, the system will stop functioning and will need to be restarted manually. This approach is used in systems in which deadlocks occur infrequently, and it is cheaper than the other methods. This solution is used by most operating systems, including UNIX.
Deadlock Prevention
For a deadlock to occur, each of the four necessary conditions must hold. By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of a deadlock.
Mutual Exclusion: The mutual-exclusion condition must hold only for non-sharable resources. Sharable resources, on the other hand, do not require mutually exclusive access and thus cannot be involved in a deadlock.
Hold and Wait: To ensure that the hold-and-wait condition never occurs in the system, we must guarantee that whenever a process requests a resource, it does not hold any other resources. One protocol requires each process to request and be allocated all its resources before it begins execution. An alternative protocol allows a process to request resources only when it holds none; a sketch of this idea appears below. These protocols have two main disadvantages. First, resource utilization may be low, since many of the resources may be allocated but unused for a long period. Second, starvation is possible, since a process that needs several popular resources may have to wait indefinitely, because at least one of the resources it needs is always allocated to some other process.
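The following is a minimal sketch of the all-or-nothing idea, with resources modeled as Python locks; the resource names and the retry loop are purely illustrative assumptions:

```python
import threading

# Two illustrative resources modeled as locks (hypothetical names).
printer = threading.Lock()
scanner = threading.Lock()

def acquire_all_or_none(resources):
    """Try to grab every resource at once; if any is unavailable, release what
    was taken and report failure, so the caller never holds while waiting."""
    taken = []
    for r in resources:
        if r.acquire(blocking=False):
            taken.append(r)
        else:
            for held in reversed(taken):
                held.release()
            return False
    return True

def copy_document():
    # Keep retrying until the whole set is available (a real system would
    # block the process rather than spin like this).
    while not acquire_all_or_none([printer, scanner]):
        pass
    try:
        pass                                   # use printer and scanner together
    finally:
        scanner.release()
        printer.release()
```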
Deadlock Prevention (Cont.)
No Preemption: To ensure that the no-preemption condition does not hold, we can use the following protocols. If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources it currently holds are preempted (released). Alternatively, if a process requests some resources that are not available, we check whether they are allocated to some other process that is itself waiting for additional resources; if so, we preempt the desired resources from the waiting process and allocate them to the requesting process.
Circular Wait: One way to ensure that circular wait never holds is to impose a total ordering of all resource types and to require that each process requests resources in an increasing order of enumeration; the sketch below illustrates this ordering discipline.
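A minimal sketch of the circular-wait rule, using illustrative resource names and Python locks (the ranking itself is arbitrary; what matters is that every process uses the same one):

```python
import threading

# Assign each resource a fixed rank; always acquire in increasing rank order.
# The names and ranks are illustrative, not from the original slides.
RANK = {"disk": 1, "printer": 2, "tape": 3}
LOCKS = {name: threading.Lock() for name in RANK}

def acquire_in_order(names):
    """Acquire the named resources in ascending rank, preventing circular wait:
    no process ever holds a higher-ranked resource while waiting for a lower one."""
    ordered = sorted(names, key=RANK.__getitem__)
    for name in ordered:
        LOCKS[name].acquire()
    return ordered

def release_all(ordered):
    for name in reversed(ordered):
        LOCKS[name].release()

# Both callers end up locking "disk" before "printer", so no cycle can form.
held = acquire_in_order(["printer", "disk"])
release_all(held)
```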
Deadlock Avoidance
Deadlock avoidance requires additional information about how resources are to be requested. The various avoidance algorithms differ in the amount and type of information required. The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need. Given this a priori information, it is possible to construct an algorithm that ensures the system will never enter a deadlocked state; such an algorithm defines the deadlock-avoidance approach. A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular-wait condition can never exist. The resource-allocation state is defined by the number of available and allocated resources and by the maximum demands of the processes. Deadlock avoidance is based on the notion of a safe state and is implemented by the resource-allocation-graph algorithm (for single-instance resource types) and the banker's algorithm (for multiple instances).
Deadlock Avoidance (Cont.)
Safe State: A state is safe if the system can allocate resources to each process, in some order, and still avoid a deadlock. More formally, a system is in a safe state only if there exists a safe sequence of all processes. A sequence of processes <P1, P2, …, Pn> is safe if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj with j < i. If the resources that process Pi needs are not immediately available, then Pi can wait until all Pj have finished. When they have finished, Pi can obtain all of its needed resources, complete its task, return its allocated resources, and terminate. When Pi terminates, Pi+1 can obtain its needed resources, and so on. If a system is in a safe state, there can be no deadlock. If a system is in an unsafe state, there is a possibility of deadlock. The avoidance approach works because it ensures that the system never enters an unsafe state.
Fig: Safe, unsafe, and deadlock state spaces
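To make the definition concrete, consider a small illustrative example (the numbers are not from the original slides). A system has 12 instances of one resource type and three processes P0, P1, and P2 with maximum needs of 10, 4, and 9, currently holding 5, 2, and 2 instances, so 3 instances are free. The sequence <P1, P0, P2> is safe: P1's remaining need (at most 2) fits in the 3 free instances, and when P1 finishes and releases everything it holds, 5 instances are free; P0's remaining need (at most 5) can then be met, and when P0 finishes, 10 instances are free, which covers P2's remaining need (at most 7). If instead P2 had been granted one more instance, only 2 would be free; P1 could still finish, leaving 4 free, but neither P0 (needing up to 5 more) nor P2 (needing up to 6 more) could then be guaranteed to finish, so that state would be unsafe.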
Deadlock Avoidance (Cont.)
Resource-Allocation-Graph Algorithm: In addition to the request and assignment edges, we introduce a new type of edge, called a claim edge, into the resource-allocation graph. A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the future; it is represented by a dashed line. A claim edge converts to a request edge when the process actually requests the resource, and when the resource is later released by the process, the assignment edge reconverts to a claim edge. Resources must be claimed a priori in the system. A request can be granted only if converting the request edge Pi → Rj to an assignment edge Rj → Pi does not result in the formation of a cycle in the resource-allocation graph.
Deadlock Avoidance (Cont.) Fig: (a) Resource allocation graph for deadlock avoidance. (b) An unsafe state in a resource-allocation graph.
Deadlock Avoidance (Cont.)
Banker's Algorithm: The name was chosen because the algorithm could be used in a banking system to ensure that the bank never allocates its available cash in such a way that it can no longer satisfy the needs of all its customers. When a new process enters the system, it must declare the maximum number of instances of each resource type that it may need; this number may not exceed the total number of resources in the system. When a process requests a set of resources, the system must determine whether the allocation of these resources will leave the system in a safe state. If it will, the resources are allocated; otherwise, the process must wait until some other process releases enough resources. Several data structures must be maintained to implement this algorithm. Let n be the number of processes and m the number of resource types.
- Available: A vector of length m that indicates the number of available resources of each type. If Available[j] = k, there are k instances of resource type Rj available.
Deadlock Avoidance (Cont.)
- Max: An n x m matrix that defines the maximum demand of each process. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
- Allocation: An n x m matrix that defines the number of resources of each type currently allocated to each process. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
- Need: An n x m matrix that indicates the remaining resource need of each process. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task. Note that Need[i,j] = Max[i,j] – Allocation[i,j].
The algorithm for finding out whether or not a system is in a safe state (the safety algorithm) can be described as follows:
Deadlock Avoidance (Cont.)
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available and Finish[i] = false for i = 1, 2, …, n.
2. Find an i such that both (a) Finish[i] = false and (b) Needi ≤ Work. If no such i exists, go to step 4.
3. Set Work = Work + Allocationi and Finish[i] = true, then go to step 2.
4. If Finish[i] = true for all i, then the system is in a safe state.
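A minimal Python sketch of this safety check follows. The list-of-lists layout for the matrices and the sample numbers are assumptions made for illustration only:

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return (True, safe_sequence) if the state is
    safe, else (False, []).  available is a vector of length m; allocation and
    need are n x m matrices (lists of lists)."""
    n, m = len(allocation), len(available)
    work = list(available)
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            # Step 2: find an unfinished process whose need can be met by work.
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Step 3: pretend Pi runs to completion and releases everything.
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:                     # Step 4: no candidate found
            return False, []
    return True, sequence

# Illustrative state: 5 processes, 3 resource types.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
available  = [3, 3, 2]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
print(is_safe(available, allocation, need))   # (True, [1, 3, 4, 0, 2])
```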
Deadlock Avoidance (Cont.)
The resource-request algorithm is described as follows. Let Requesti be the request vector for process Pi; if Requesti[j] = k, then process Pi wants k instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
If the resulting resource-allocation state is safe, the resources are allocated to Pi. If the new state is unsafe, Pi must wait and the old resource-allocation state is restored.
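Building on the is_safe sketch above, the resource-request algorithm could be written as follows; again this is only a sketch, and the data layout and example values are assumptions for illustration:

```python
def request_resources(i, request, available, allocation, need):
    """Banker's resource-request algorithm for process Pi (builds on is_safe).
    Returns True and commits the allocation if it keeps the system safe;
    returns False (the request must wait) otherwise."""
    m = len(available)
    # Step 1: the request may not exceed Pi's declared remaining need.
    if any(request[j] > need[i][j] for j in range(m)):
        raise ValueError("process has exceeded its maximum claim")
    # Step 2: if the resources are not free, Pi must wait.
    if any(request[j] > available[j] for j in range(m)):
        return False
    # Step 3: pretend to allocate, then test safety.
    for j in range(m):
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    safe, _ = is_safe(available, allocation, need)
    if not safe:
        # Unsafe: restore the old resource-allocation state; Pi must wait.
        for j in range(m):
            available[j] += request[j]
            allocation[i][j] -= request[j]
            need[i][j] += request[j]
    return safe

# With the illustrative state above, P1 requesting (1, 0, 2) would be granted
# (the resulting state is safe), while P0 requesting (0, 2, 0) would have to
# wait, because the resulting state is unsafe.
```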
Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment, the system must provide:
- An algorithm that examines the state of the system to determine whether a deadlock has occurred.
- An algorithm to recover from the deadlock.
1. Single Instance of Each Resource Type: If all resources have only a single instance, then we can define a deadlock-detection algorithm that uses a variant of the resource-allocation graph, called a wait-for graph. We obtain this graph from the resource-allocation graph by removing the resource nodes and collapsing the appropriate edges. More precisely, an edge Pi → Pj in a wait-for graph implies that process Pi is waiting for process Pj to release a resource that Pi needs. A deadlock exists in the system if and only if the wait-for graph contains a cycle. To detect deadlocks, the system needs to maintain the wait-for graph and periodically invoke an algorithm that searches for a cycle in the graph; a sketch appears below.
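A small sketch of how a wait-for graph might be derived and checked, reusing the has_cycle helper sketched earlier; the dictionary encoding of who holds and who waits is an assumption for illustration:

```python
def wait_for_graph(holder_of, waiting_for):
    """Build a wait-for graph (single instance per resource type).
    holder_of maps each resource to the process holding it; waiting_for maps
    each process to the resources it is blocked on.  Edge Pi -> Pj means Pi
    waits for a resource held by Pj."""
    graph = {}
    for proc, resources in waiting_for.items():
        graph[proc] = [holder_of[r] for r in resources if r in holder_of]
    return graph

# Illustrative state: P1 waits for R1 held by P2, P2 waits for R2 held by P1.
wfg = wait_for_graph({"R1": "P2", "R2": "P1"}, {"P1": ["R1"], "P2": ["R2"]})
print(has_cycle(wfg))   # True -> deadlock (has_cycle from the earlier sketch)
```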
Deadlock Detection (Cont.)
2. Several Instances of a Resource Type: The wait-for-graph scheme is not applicable to a resource-allocation system with multiple instances of each resource type. The algorithm used here employs several time-varying data structures that are similar to those used in the banker's algorithm:
- Available: A vector of length m that indicates the number of available resources of each type.
- Allocation: An n x m matrix that defines the number of resources of each type currently allocated to each process.
- Request: An n x m matrix that indicates the current request of each process. If Request[i,j] = k, then process Pi is requesting k more instances of resource type Rj.
The detection algorithm is described as follows:
Deadlock Detection (Cont.)
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available, and for i = 1, 2, …, n, set Finish[i] = false if Allocationi ≠ 0 and Finish[i] = true otherwise.
2. Find an index i such that both (a) Finish[i] = false and (b) Requesti ≤ Work. If no such i exists, go to step 4.
3. Set Work = Work + Allocationi and Finish[i] = true, then go to step 2.
4. If Finish[i] = false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state. Moreover, if Finish[i] = false, then process Pi is deadlocked.
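The detection algorithm above, written as a Python sketch (lists of lists for the matrices; the sample state is illustrative only):

```python
def detect_deadlock(available, allocation, request):
    """Deadlock-detection algorithm for multiple instances per resource type.
    Returns the list of deadlocked process indices (empty if none)."""
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding nothing cannot be part of a deadlock.
    finish = [all(allocation[i][j] == 0 for j in range(m)) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            # A process whose current request can be met is assumed to finish
            # eventually and return its resources (an optimistic assumption).
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                progressed = True
    return [i for i in range(n) if not finish[i]]

# Illustrative state: P1 and P2 each hold one unit of a resource the other wants.
allocation = [[1, 0], [0, 1]]
request    = [[0, 1], [1, 0]]
print(detect_deadlock([0, 0], allocation, request))   # [0, 1] -> both deadlocked
```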
Deadlock Detection (Cont.)
Detection-Algorithm Usage: When to invoke the detection algorithm depends on two factors:
1. How often is a deadlock likely to occur?
2. How many processes will be affected by a deadlock when it happens?
If deadlocks occur frequently, then the detection algorithm should be invoked frequently. We could also invoke the deadlock-detection algorithm every time a request for allocation cannot be granted immediately, because deadlocks arise only when some process makes a request that cannot be granted immediately. Of course, invoking the deadlock-detection algorithm for every such request may incur considerable overhead in computation time. A less expensive alternative is simply to invoke the algorithm at less frequent intervals, for example once per hour, or whenever CPU utilization drops below 40 percent.
Recovery from Deadlock
When a detection algorithm determines that a deadlock exists, there are two options for breaking it. One is simply to abort one or more processes to break the circular wait. The other is to preempt some resources from one or more of the deadlocked processes.
1. Process Termination: To eliminate a deadlock by aborting processes, we use one of two methods. In both, the system reclaims all resources allocated to the terminated processes.
- Abort all deadlocked processes: this breaks the deadlock, but at great expense.
- Abort one process at a time until the deadlock cycle is eliminated: this incurs considerable overhead, since after each process is aborted, the deadlock-detection algorithm must be invoked again to determine whether any processes are still deadlocked.
Many factors may determine which process is chosen to abort, including:
- Priority of the process.
Recovery from Deadlock (Cont.)
- How long the process has computed, and how much longer it will compute before completing its designated task.
- How many and what types of resources the process has used.
- How many more resources the process needs in order to complete.
- How many processes will need to be terminated.
- Whether the process is interactive or batch.
2. Resource Preemption: To eliminate deadlocks using resource preemption, we successively preempt some resources from processes and give them to other processes until the deadlock cycle is broken. Some issues that need to be addressed are:
1. Selecting a victim: Some process will have to be rolled back to break the deadlock. We select as victim the process that will incur the minimum cost.
2. Rollback: Determine how far to roll back the process. Total rollback: abort the process and then restart it.
Recovery from Deadlock (Cont.)
A more effective approach is to roll back the process only as far as necessary to break the deadlock.
3. Starvation: Starvation can happen if the same process is always chosen as the victim. To avoid this, we include the number of rollbacks in the cost factor.