Mod 2 Process Scheduling

Module 2 covers processor management, including CPU scheduling, inter-process communication, and process states. It discusses various scheduling algorithms like FCFS and SJF, detailing their advantages, disadvantages, and examples of their implementation. The module emphasizes the importance of optimizing CPU utilization, throughput, and minimizing turnaround, waiting, and response times.


Module 2 Syllabus

• Processor management: CPU scheduling -
  review of multiprogramming concepts,
  scheduling concepts, scheduling algorithms,
  multiprocessor scheduling
• Inter-process communication – pipes,
  shared files, shared memory, message-based IPC.
1
Basic Concepts
• Maximum CPU utilization is obtained with
multiprogramming
– Idea is to have some process running at all
times; one process executes until it must wait
(e.g., for I/O)
• What is the CPU–I/O burst cycle?
– Process execution consists of a cycle of CPU
execution and I/O wait
2
Alternating Sequence of CPU And I/O Bursts

3
Process State
• Process – a program in execution
• Process execution must progress in a sequential
fashion
• As a process executes, it changes state
– new: Process is being created.
– running: Instructions are being executed.
– waiting: Process is waiting for some event to occur
(I/O completion or interrupt).
– ready: Process is waiting to be assigned to a
processor.
– terminated: Process has finished execution.
4
Process State

5
Process control Block (PCB)
• Each process is represented in the OS by a PCB
• Also called task control block.

6
Process control Block (PCB)
• Pointer: used to link with other PCBs
• Process state: may be new, ready, running,
waiting, terminated
• Program counter: indicates address of next
instruction to be executed for this process
• CPU registers: depending on the architecture,
registers vary in number and type, e.g. accumulator,
index registers, stack pointers, general-purpose
registers (GPRs), condition codes (CC)
• Memory limits: value of base register, limit
register, page tables, etc
7
Process Scheduling Queues

8
Process Scheduling Queues
• Job queue
– set of all PCBs in the system.

• Ready queue
– set of all processes residing in main memory, ready
and waiting to execute.

• I/O waiting or device queues


– set of processes waiting for an I/O device.

9
Queuing diagram Representation of Process Scheduling

10
CPU Switch From Process to Process

11
Scheduler
• Process migrates between the scheduling queues
throughout its life
• OS must select process from these queues in
some fashion for scheduling purpose
• 2 types of schedulers:
– Long term scheduler or job scheduler
• Selects processes from job pool & loads them into
memory for execution
– Short term scheduler or CPU scheduler
• Selects from the processes that are ready to
execute & allocate CPU to one of them
12
CPU Scheduler
• Otherwise called short-term scheduler

• It selects one process from memory that is ready
to execute and allocates the CPU to it
• The degree of multiprogramming, with a good mix
of CPU-bound and I/O-bound processes, is
controlled by the long-term scheduler

13
CPU Scheduler
• CPU scheduling decisions may take place when a
process:
1. Switches from running to waiting state
• Eg: I/O request
2. Switches from running to ready state
• Eg: Interrupt occurs
3. Switches from waiting to ready state
• Eg: Completion of I/O operation
4. Terminates
• Eg: Process completes
• Scheduling under 1 and 4 is non-preemptive, because
there is no scheduling choice in those cases
14
CPU Scheduler
• Non-preemptive: only schedule a new process when the
current one does not want CPU any more.
• Preemptive: Schedule a new process even when the
current process does not intend to give up the CPU

15
Dispatcher

• Dispatcher module gives control of the CPU to the
process selected by the short-term scheduler
• This involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program
to restart that program
• Dispatch latency – time it takes for the dispatcher to
stop one process and start another running
16
Scheduling Criteria
• For choosing an algorithm to be used in a particular
situation, we must consider the following criteria
– CPU utilization – keep the CPU as busy as possible
– Throughput – no. of processes that complete their
execution per time unit
– Turnaround time – interval from the time of submission to
time of completion of a process
– Waiting time – amount of time a process has been waiting
in the ready queue
– Response time – amount of time it takes from when a
request was submitted until the first response is produced,
not the output (for interactive time-sharing environment)
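These time-based criteria can be computed directly once a schedule is known. Below is a minimal Python sketch (the helper name and argument layout are illustrative, not from the slides): turnaround = completion − arrival, waiting = turnaround − burst, response = first start − arrival.

```python
# Illustrative helper (not from the slides): derive the three
# time-based criteria for one process from a known schedule.
def metrics(arrival, burst, first_start, completion):
    turnaround = completion - arrival        # submission -> completion
    waiting = turnaround - burst             # time spent in the ready queue
    response = first_start - arrival         # submission -> first response
    return turnaround, waiting, response

# A process arriving at 0 with burst 24 that runs 0-24 without waiting:
print(metrics(0, 24, 0, 24))  # → (24, 0, 0)
```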
17
Example

• Suppose we have processes A, B, and C,


submitted at time 0
• We want to know the response time, waiting
time, and turnaround time of process A

A B C A B C A C A C  Time (one time unit per slot)
A runs in slots 1, 4, 7, and 9: response time = 0;
waiting time = 5; turnaround time = 9

Gantt chart: visualize how processes execute.

18
Example

• Suppose we have processes A, B, and C,


submitted at time 0
• We want to know the response time, waiting
time, and turnaround time of process B

A B C A B C A C A C  Time (one time unit per slot)
B runs in slots 2 and 5: response time = 1;
waiting time = 3; turnaround time = 5

19
Example

• Suppose we have processes A, B, and C,


submitted at time 0
• We want to know the response time, waiting
time, and turnaround time of process C

A B C A B C A C A C  Time (one time unit per slot)
C runs in slots 3, 6, 8, and 10: response time = 2;
waiting time = 6; turnaround time = 10

20
Scheduling - Optimization Criteria

• Aim of OS is to maximize
– CPU utilization &
– throughput
• And to minimize
– turnaround time
– waiting time
– response time

21
Scheduling Algorithms
CPU scheduling deals with the problem of deciding which
process in the ready queue is to be allocated the CPU
1. FCFS (First Come First Served):
assigns the CPU based on the order of requests
 implementation is managed by FIFO queue
 Non-preemptive: A process keeps running on a CPU
until it is blocked or terminated
Adv: Simplest algorithm
Disadv: Short jobs can get stuck behind long jobs
22
First-Come, First-Served (FCFS)
• Example 1: time in ms
Process Burst Time
P1 24
P2 3
P3 3

• Assume processes arrive as: P1 , P2 , P3


The Gantt Chart for the schedule is:
P1 P2 P3

0 24 27 30

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17 ms
23
FCFS Scheduling (Cont)
Suppose processes arrive as: P2 , P3 , P1 .
The Gantt chart for the schedule is:
P2 P3 P1

0 3 6 30

• Waiting time for P1 = 6; P2 = 0; P3 = 3


• Average waiting time: (6 + 0 + 3)/3 = 3 ms
• Much better than previous case.
• Convoy effect or head-of-line blocking
– short process behind long process
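The FCFS hand computation above can be reproduced with a short simulation. A minimal Python sketch, assuming (name, arrival, burst) triples and ignoring context-switch cost (the function name is illustrative):

```python
def fcfs(processes):
    """FCFS: run processes in arrival order, non-preemptively.
    processes: list of (name, arrival, burst) triples."""
    time, waits = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)        # CPU may sit idle until arrival
        waits[name] = time - arrival     # time spent in the ready queue
        time += burst                    # run to completion
    return waits

# Order P2, P3, P1 (ties on arrival keep the given order: sort is stable):
w = fcfs([("P2", 0, 3), ("P3", 0, 3), ("P1", 0, 24)])
print(w, sum(w.values()) / 3)  # → {'P2': 0, 'P3': 3, 'P1': 6} 3.0
```

Swapping the order to P1, P2, P3 gives waits 0, 24, 27 and average 17 ms, reproducing the convoy effect from Example 1.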
24
First-Come, First-Served (FCFS)
• Example 2:
Process Arrival Time Burst Time
P1 0 3
P2 2 6
P3 4 4
P4 6 5
P5 8 2

The Gantt Chart for the schedule is:


P1 P2 P3 P4 P5

0 3 9 13 18 20

• Waiting time for P1 = 0; P2 = 3-2 = 1; P3 = 9-4 = 5;
P4 = 13-6 = 7; P5 = 18-8 = 10
• Average waiting time: (0 + 1 + 5 + 7 + 10)/5 = 23/5 = 4.6 ms
25
First-Come, First-Served (FCFS)
• Exercise:
Process Arrival Time Burst Time
P1 1 3
P2 3 5
P3 5 2
P4 7 7
P5 9 6

27
Shortest-Job-First (SJF) Scheduling
• Each process is associated with the length of its next
CPU burst.
• Process with shortest CPU burst time is assigned to
CPU
• If 2 processes have same length of next CPU burst,
FCFS scheduling is used.
• Also called shortest next CPU burst or Shortest
Process Next (SPN)
• SJF is optimal – gives minimum average waiting time
for a given set of processes
– But the difficulty is knowing the length of the next CPU
request
28
Example 1 of SJF
Process Burst Time
P1 6
P2 8
P3 7
P4 3
• SJF scheduling Gantt chart
P4 P1 P3 P2

0 3 9 16 24

• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms


• Average turnaround time = (9 + 24 + 16 + 3) / 4 = 13 ms
29
If it was FCFS
Process Burst Time
P1 6
P2 8
P3 7
P4 3
• FCFS scheduling Gantt chart
P1 P2 P3 P4

0 6 14 21 24

• Average waiting time = (0 + 6 + 14 + 21) / 4 = 10.25 ms
• Average turnaround time = (6 + 14 + 21 + 24) / 4 = 16.25 ms
30
Example 2 of SJF
Process Arrival Time Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 0.0 3
• SJF scheduling Gantt chart
P4 P1 P3 P2

0 3 9 16 24

• Average waiting time = (3 + 14 + 5 + 0) / 4 = 5.5 ms
31
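Non-preemptive SJF with arrival times, as in Example 2, can be sketched as: whenever the CPU is free, pick the ready process with the shortest burst. An illustrative Python sketch (function name and data layout are assumptions):

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst).
    When the CPU is free, run the ready process with the shortest burst."""
    pending, time, waits = list(processes), 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                            # CPU idles to next arrival
            ready = [min(pending, key=lambda p: p[1])]
        job = min(ready, key=lambda p: p[2])     # shortest burst wins
        name, arrival, burst = job
        time = max(time, arrival)
        waits[name] = time - arrival
        time += burst
        pending.remove(job)
    return waits

w = sjf([("P1", 0.0, 6), ("P2", 2.0, 8), ("P3", 4.0, 7), ("P4", 0.0, 3)])
print(w, sum(w.values()) / 4)  # → {'P4': 0.0, 'P1': 3.0, 'P3': 5.0, 'P2': 14.0} 5.5
```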


If it was FCFS Scheduling
Process Arrival Time Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 0.0 3
• FCFS scheduling Gantt chart
P1 P4 P2 P3

0 6 9 17 24

• Average waiting time = (0 + 7 + 13 + 6) / 4 = 6.5 ms
32


Determining Length of Next CPU Burst
• Can only predict the length
• Can be estimated as an exponential average of the
measured lengths of previous CPU bursts:
1. t_n = actual length of the n-th CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α, with 0 ≤ α ≤ 1
4. The exponential average is defined as:

τ_{n+1} = α·t_n + (1 − α)·τ_n

33
Examples of Exponential Averaging
• The α·t_n term weighs the most recent burst

• The (1 − α)·τ_n term stores the past history

• Parameter α controls the relative weight of
recent vs. past history in the prediction
• Most commonly α = ½, which means recent
history and past history are equally weighted
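The exponential average is a one-line recurrence and is easy to tabulate. A sketch in Python; the burst sequence 6, 4, 6, 4 and τ0 = 10 are illustrative values (τ0 = 10 matches the prediction figure two slides below):

```python
def predict(bursts, alpha=0.5, tau0=10):
    """Exponential averaging: tau_{n+1} = alpha*t_n + (1 - alpha)*tau_n.
    Returns the sequence of predictions, starting with tau_0."""
    tau, preds = tau0, [tau0]
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau   # blend newest burst with history
        preds.append(tau)
    return preds

print(predict([6, 4, 6, 4]))  # → [10, 8.0, 6.0, 6.0, 5.0]
```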
34
Examples of Exponential Averaging
• If α = 0, then
τ_{n+1} = 0·t_n + (1 − 0)·τ_n = τ_n
– Recent history has no effect
• If α = 1, then
τ_{n+1} = 1·t_n + (1 − 1)·τ_n = t_n
– Only the most recent CPU burst matters
• If we expand the formula, we get:
τ_{n+1} = α·t_n + (1 − α)·α·t_{n−1} + … + (1 − α)^j·α·t_{n−j} + … + (1 − α)^{n+1}·τ_0
35


Prediction of the Length of the Next CPU Burst

With α = ½ and τ_0 = 10, the exponential-average prediction is shown above


36
SJF algorithm categories
• May be either preemptive or non-preemptive

• Choice arises when a new process arrives at


ready state while the previous process is
executing
• New process may have a shorter next CPU burst
than what is left of the currently executing
process
37
SJF algorithm categories
• Preemptive SJF
– Preempt the currently executing process
– Also called Shortest Remaining Time First scheduling
(SRT or SRTF)

• Non-preemptive SJF
– Allow the currently running process to finish its CPU
burst.
38
Example of Preemptive SJF or SRTF
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
• SRTF scheduling Gantt chart
P1 P2 P4 P1 P3

0 1 5 10 17 26

• Average waiting time = ((10-0-1) +(1-1)+ (17-2)+(5-3))/ 4 =


(9+0+15+2)/4 = 6.5 ms
• Average turnaround time = ((17-0)+(5-1)+(26-2)+(10-3))/4 = (17+4+24+7)/4 = 13 ms
39
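The SRTF schedule above can be reproduced with a millisecond-by-millisecond simulation: at every step, run the ready process with the least remaining time. An illustrative Python sketch (function name and data layout assumed):

```python
def srtf(processes):
    """Preemptive SJF (SRTF), simulated in 1-ms steps.
    processes: list of (name, arrival, burst). Returns completion times."""
    remaining = {name: burst for name, _arrival, burst in processes}
    completion, time = {}, 0
    while remaining:
        ready = [n for n, a, _ in processes if a <= time and n in remaining]
        if not ready:                    # CPU idles until the next arrival
            time += 1
            continue
        n = min(ready, key=lambda x: remaining[x])   # least remaining time
        remaining[n] -= 1                # run the chosen process for 1 ms
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            completion[n] = time
    return completion

c = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(c)  # → {'P2': 5, 'P4': 10, 'P1': 17, 'P3': 26} (keyed in finish order)
```

Turnaround times follow as completion − arrival: 17, 4, 24, 7, averaging 13 ms as computed above.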
Example of Non Preemptive SJF
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
• Non preemptive SJF scheduling Gantt chart
P1 P2 P4 P3

0 8 12 17 26

• Average waiting time = ((0) +(8-1)+ (17-2)+(12-3))/ 4 =


(0+7+15+9)/4 = 7.75 ms
• Average turnaround time = ((8-0)+(12-1)+(26-2)+(17-3))/4 = (8+11+24+14)/4 = 14.25 ms
40
If it was FCFS
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
• FCFS scheduling Gantt chart
P1 P2 P3 P4

0 8 12 21 26

• Average waiting time = ((0) +(8-1)+ (12-2)+(21-3))/ 4 =


(0+7+10+18)/4 = 8.75 ms
• Average turnaround time = ((8-0)+(12-1)+(21-2)+(26-3))/4 = (8+11+19+23)/4 = 15.25 ms
41
Priority Scheduling
• A priority number (integer) is associated with
each process
• The CPU is allocated to the process with the
highest priority (we use smallest integer as
highest priority)
• SJF is special case of general priority scheduling
– where priority is the predicted next CPU burst time
42
Priority Scheduling
• Priority can be defined internally or externally
• Internal Priority Egs:
– time limits,
– memory requirements,
– no. of open files,
– ratio of average I/O burst to average CPU Burst
• External Priority Egs:
– Importance of process
– Type & amount of funds paid for computer use
– Dept. sponsoring the work
– Political or other influences
43
Priority Scheduling
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
• Assume arrival time as 0 ms
• Priority scheduling Gantt chart
P2 P5 P1 P3 P4

0 1 6 16 18 19

• Average waiting time = (6+0+16+18+1)/5 = 41/5 = 8.2 ms


44
• Average turnaround time =(16+1+18+19+6)/5 = 12 ms
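With all arrivals at time 0, non-preemptive priority scheduling is just a sort by priority. A minimal Python sketch of the table above (smaller number = higher priority; function name and layout are assumptions):

```python
def priority_np(processes):
    """Non-preemptive priority scheduling, all processes arriving at 0.
    processes: list of (name, burst, priority); smaller number = higher."""
    time, start = 0, {}
    for name, burst, _priority in sorted(processes, key=lambda p: p[2]):
        start[name] = time    # with arrival 0, start time = waiting time
        time += burst
    return start

s = priority_np([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                 ("P4", 1, 5), ("P5", 5, 2)])
print(s)  # → {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
```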
Priority Scheduling
• Priority can be either preemptive or non-
preemptive
• When a process arrives at ready queue its priority
is compared with the priority of the current
running process
• Preemptive algorithm will preempt the CPU if the
priority of new process is higher
• Non-preemptive will put the new process at the
head of the ready queue.

45
Preemptive Priority Scheduling
Process Arrival Time Burst Time Priority
P1 0 10 3
P2 1 1 1
P3 2 2 4
P4 3 1 5
P5 4 5 2
• Preemptive Priority scheduling Gantt chart

P1 P2 P1 P5 P1 P3 P4

0 1 2 4 9 16 18 19

• Average waiting time =((9-0-3)+(1-1)+(16-2)+(18-3)+(4-4))/ 5 =


(6+0+14+15+0)/5 = 7 ms
• Average turnaround time =((16-0)+(2-1)+(18-2)+(19-3)+(9-4))/5 =
(16+1+16+16+5)/5 = 10.8 ms 46
Priority Scheduling
• Major problem with priority scheduling
algorithms:
– Indefinite blocking or starvation
• This algorithm can leave some low priority
processes waiting indefinitely for the CPU
• In heavily loaded computers, a steady stream of
higher priority processes can prevent a low priority
process from ever getting the CPU
47
Priority Scheduling
• Two things can happen
– Low priority process will eventually run when the
system is lightly loaded (e.g., on a quiet Sunday), or
– System will eventually crash & lose all unfinished
low priority processes.
• Eg: when the IBM 7094 at MIT was shut down in 1973,
a low priority process was found that had been
submitted in 1967 & had not yet been run.
48
Priority Scheduling
• Solution to starvation:
– Aging
• It is the technique of gradually increasing the
priority of processes that wait in the system for a
long time
• Eg: if priorities range from 127 (low) to 0 (high),
we could decrement the priority of a waiting
process by 1 every 15 minutes.
• It would take no more than 32 hours for a
priority-127 process to age to priority 0
– Ie., (15 * 127) / 60 = 31.75 hours
49
Round Robin Scheduling
• Used for time sharing systems
• Similar to FCFS, but preemption is added to switch
between processes.
• Small unit of time called a time quantum or time slice is
defined
• Typically time quantum ranges from 10ms to 100 ms
• Ready queue is treated as circular queue
• New processes are added to the tail of a ready queue.
• CPU scheduler picks the first process from the ready
queue, sets a timer to interrupt after 1 time quantum and
dispatch the process
50
Round Robin Scheduling
• Two things can happen:
1. Process may have a CPU burst of less than 1 time
quantum.
-> Here process itself will release the CPU &
-> scheduler proceeds to the next process in ready
queue.
2. If the CPU burst of current running process is longer
than 1 time quantum, timer expires & will cause an
interrupt to OS
-> Context switching will happen & process will be put
in the ready queue
-> CPU scheduler selects the next process in the ready
queue
51
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
The Gantt chart is:
P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

waiting time of P1 = 26 – (4 * 5) = 6 ms
waiting time of P2 = 4 ms
waiting time of P3 = 7 ms
• Average waiting time = (6+4+7)/3 = 17/3 = 5.66 ms
• Average turnaround time = (30+7+10)/3 = 15.66 ms
• Typically, higher average turnaround than SJF, but better response time
52
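The RR example can be reproduced with a FIFO queue of (name, remaining) pairs. A sketch in Python, assuming all processes arrive at time 0 and ignoring context-switch cost:

```python
from collections import deque

def round_robin(processes, quantum):
    """RR with all arrivals at time 0. processes: list of (name, burst).
    Returns completion times."""
    queue, time, completion = deque(processes), 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(quantum, rem)              # one time slice, at most
        time += run
        if rem == run:
            completion[name] = time          # burst finished within the slice
        else:
            queue.append((name, rem - run))  # preempted: back to the tail
    return completion

c = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(c)  # → {'P2': 7, 'P3': 10, 'P1': 30}
```

Turnaround times are 30, 7, and 10, matching the 15.66 ms average above.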
Round Robin Scheduling
• RR scheduling is preemptive
– If a process exceeds 1 time quantum, that process is
preempted & is put back in ready queue.
• Performance of RR depends heavily on the size of time
quantum
– If the time quantum is very large, RR is same as FCFS
– If the time quantum is very small, RR is called processor
sharing which appears to the user as though each of n
processes has its own processor running at 1/n the
speed of the real processor
• Used in Control Data Corporation (CDC) hardware to
implement 10 peripheral processors with one set of
hardware & 10 sets of registers.
53
Round Robin Scheduling
• Performance of RR also depends on context switching
• Relation between Time Quantum and Context Switch

54
Multilevel Queue
Ready queue is partitioned into separate queues:
-> foreground (interactive)
-> background (batch)
• Each queue has its own scheduling algorithm
– foreground – RR
– background – FCFS
• Possibilities of Scheduling between the queues
1. Fixed priority scheduling; (i.e., serve all from
foreground then from background). Possibility of
starvation.
-> an interactive process preempts batch processes in
execution
55
Multilevel Queue
2. Time slice – each queue gets a certain amount of CPU
time which it can schedule amongst its processes;
– i.e., 80% to foreground in RR
– 20% to background in FCFS

• Advantage of this algorithm:


– Low scheduling overhead

• Disadvantage:
– Inflexible
56
Multilevel Queue Scheduling

57
Multilevel Feedback Queue
• A process can move between the various queues
• Idea is to separate processes with different CPU-
burst characteristics
• If a process uses too much CPU time, it will be
moved to a low priority queue
• This scheme leaves I/O bound & interactive
processes in higher priority queues
• If a process waits too long in a lower priority
queue, it may be moved to a higher priority
queue.
– This form of aging prevents starvation
58
Multilevel Feedback Queues

59
Multilevel Feedback Queue
• Scheduler executes all processes in queue 0
• Only when Queue 0 is empty it will execute
processes in queue 1
• Processes in queue 2 will be executed only if
queue 0 & 1 are empty
• Process that arrives for queue 1 will preempt a
process in queue 2
• Process that arrives for queue 0 will preempt a
process in queue 1
• Process entering ready queue is put in queue 0
60
Example of Multilevel Feedback Queue
• Three queues:
– Q0 – RR with time quantum 8 milliseconds
– Q1 – RR time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling
– A new job enters queue Q0. When it gains CPU, job receives
8 ms. If it does not finish in 8 milliseconds, job is moved to
tail of queue Q1.
– When Q0 is empty, process at Q1 receives 16 ms. If it still
does not complete, it is preempted and moved to queue Q2.
• This algorithm gives the highest priority to any process
with a CPU burst of 8 ms or less
61
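The three-queue scheme above can be sketched as follows. This simplified Python model assumes all jobs arrive at time 0, so it never needs to preempt a lower queue on a new arrival (function name and layout are illustrative):

```python
from collections import deque

def mlfq(jobs, quanta=(8, 16)):
    """Simplified 3-level MLFQ: RR with quantum 8 (Q0), RR with
    quantum 16 (Q1), then FCFS (Q2). Unfinished jobs are demoted.
    jobs: list of (name, burst), all arriving at time 0."""
    queues = [deque(jobs), deque(), deque()]
    time, completion = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, rem = queues[level].popleft()
        run = rem if level == 2 else min(quanta[level], rem)
        time += run
        if rem == run:
            completion[name] = time
        else:
            queues[level + 1].append((name, rem - run))      # demote
    return completion

print(mlfq([("A", 5), ("B", 30)]))  # → {'A': 5, 'B': 35}
```

Here B gets 8 ms in Q0 and 16 ms in Q1, then finishes its last 6 ms in Q2, while A (burst ≤ 8 ms) completes at the highest priority.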
Multilevel Feedback Queue
• Multilevel-feedback-queue scheduler is defined by
the following parameters:
– number of queues
– scheduling algorithms for each queue
– method used to determine when to upgrade a process
– method used to determine when to demote a process
– method used to determine which queue a process will
enter when that process needs service
62
Multilevel Feedback Queue
• Adv:
– Most general CPU scheduling algorithm
– Ie., it can be configured to match a specific system
under design

• Disadv:
– Requires some means of selecting values for all the
parameters to define the best scheduler.
– Most complex scheme.
63
Multiple-Processor Scheduling
• CPU scheduling is more complex when multiple CPUs are
available
• Homogeneous or identical processors systems within a
multiprocessor are systems in which any available
processor can be used to run any processes in the queue
• Heterogeneous or different processor system are
systems in which only programs compiled for a given
processor’s instruction set could be run on that
processor

64
Multiple-Processor Scheduling
• Even within homogeneous multiprocessor, there are
limitations:
1. Consider a system with an I/O device attached to a
private bus of one processor
– Processes wishing to use that device must be scheduled to run
on that processor; otherwise the device would not be available
2. If several identical processors are available, then load
sharing can occur
– Possible to provide separate queue for each processor
– But one processor could be idle with an empty queue, while
another processor was very busy.
65
Multiple-Processor Scheduling
• To prevent this situation, we can use a common ready
queue.
– All processes go into one queue & are scheduled onto any
available processor.
• In this scheme, 2 scheduling approaches may be used
• Approach 1:
– Each processor is self scheduling
– Each processor examines the common ready queue & selects a
process to execute.

66
Multiple-Processor Scheduling
• Approach 2:
– If we have multiple processors trying to access & update a
common data structure, each processor must be programmed
very carefully.
– We must ensure that 2 processors do not choose the same
process & that processes are not lost from the queue.
– So approach 2 avoids this problem by appointing 1 processor
as scheduler for the other processors, thus creating a master
slave structure.
67
Multiple-Processor Scheduling
• Some systems carry this structure one step further:
– 1 processor called master server handles all scheduling decisions,
I/O processing & other system activities
– Other processors execute only user codes.
– Adv: This asymmetric multiprocessing is simpler than
symmetric multiprocessing (SMP)
• Because only 1 processor accesses the system data
structures, reducing the need for data sharing.
– Disadv: not as efficient, because I/O bound processes
may bottleneck on the one processor performing all
the I/O operations
• Typically asymmetric MP is implemented first within an OS,
then upgraded to SMP as the system evolves
68
Real time Scheduling
• Priority scheduling is used
• Dispatch latency must be small; keeping it small adds
complexity
• System calls must be preemptible
• Disallow process aging on real time processes
• Only a few preemption points can practically be added
to a kernel
– A solution is to make the entire kernel preemptible by
protecting kernel data structures against modification
by high priority processes.
– Most effective and most complex method in wide use.
– Used in Solaris 2
69
Real time Scheduling
• If higher priority process needs to read or modify kernel
data currently being accessed by another lower priority
process, the high priority process would be waiting for a
lower priority one to finish.
• This is called priority inversion.
• Solution is to use a priority inheritance protocol
– All these lower priority processes inherit the high
priority until they are done with the resources in use.
– When they are finished, their priority reverts to its
original value.

70
Algorithm Evaluation
• To select the best CPU Scheduling algorithm
• Analytic evaluation:
– Uses the given algorithm & the system workload to produce a
formula or number that evaluates the performance of the
algorithm for that workload
• One type of analytic evaluation is Deterministic modeling
– it takes a particular predetermined workload and defines the
performance of each algorithm for that workload
– But this method is too specific & requires too much exact
knowledge (burst time)
• So, Queuing models, Simulations & Implementations (code
it, put it in an OS & see how it works) give progressively
more accurate algorithm evaluation, in that order
• Implementation gives the best evaluation
71
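Deterministic modeling as described above is simple to carry out in code: fix a workload, then evaluate each algorithm's average waiting time on it. An illustrative sketch (the workload values are made up for the example; all processes arrive at time 0):

```python
def avg_wait(order, bursts):
    """Deterministic modeling: average waiting time for a fixed
    non-preemptive run order, all processes arriving at time 0."""
    time, total = 0, 0
    for p in order:
        total += time        # this process waited until `time`
        time += bursts[p]
    return total / len(order)

bursts = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}
print(avg_wait(["P1", "P2", "P3", "P4", "P5"], bursts))  # FCFS → 28.0
print(avg_wait(["P3", "P4", "P1", "P5", "P2"], bursts))  # SJF order → 13.0
```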
