Mod 2 Process Scheduling
Process State
• Process – a program in execution
• Process execution must progress in a sequential fashion
• As a process executes, it changes state:
– new: Process is being created.
– running: Instructions are being executed.
– waiting: Process is waiting for some event to occur
(I/O completion or interrupt).
– ready: Process is waiting to be assigned to a
processor.
– terminated: Process has finished execution.
Process State
[Figure: diagram of the process states and the transitions between them]
Process Control Block (PCB)
• Each process is represented in the OS by a PCB
• Also called task control block.
Process Control Block (PCB)
• Pointer: used to link this PCB with other PCBs
• Process state: may be new, ready, running, waiting, or terminated
• Program counter: indicates the address of the next instruction to be executed for this process
• CPU registers: vary in number & type depending on the architecture, e.g. accumulator, index registers, stack pointers, general-purpose registers (GPRs), condition codes (CC)
• Memory limits: values of the base & limit registers, page tables, etc.
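As a rough illustration of the fields above, a PCB can be modeled as a plain record. The sketch below is hypothetical (the field names are invented for illustration) and is not an actual OS structure:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    """Illustrative process control block; field names are hypothetical."""
    pid: int
    state: str = "new"                 # new, ready, running, waiting, terminated
    program_counter: int = 0           # address of the next instruction
    registers: dict = field(default_factory=dict)  # accumulator, index regs, SP, ...
    base_register: int = 0             # memory limits / memory-management info
    limit_register: int = 0
    next_pcb: Optional["PCB"] = None   # pointer used to link PCBs into queues

# e.g. a ready queue can simply be a list of PCBs
ready_queue = [PCB(pid=1, state="ready"), PCB(pid=2, state="ready")]
```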
Process Scheduling Queues
• Job queue
– set of all PCBs in the system.
• Ready queue
– set of all processes residing in main memory, ready
and waiting to execute.
Queuing Diagram Representation of Process Scheduling
[Figure: queueing diagram of process scheduling]
CPU Switch From Process to Process
[Figure: the CPU switching from one process to another, saving and reloading state via the PCBs]
Scheduler
• A process migrates between the scheduling queues throughout its lifetime
• The OS must select processes from these queues in some fashion for scheduling purposes
• 2 types of schedulers:
– Long term scheduler or job scheduler
• Selects processes from job pool & loads them into
memory for execution
– Short term scheduler or CPU scheduler
• Selects from the processes that are ready to execute & allocates the CPU to one of them
CPU Scheduler
• Also called the short-term scheduler
CPU Scheduler
• CPU scheduling decisions may take place when a
process:
1. Switches from running to waiting state
• Eg: I/O request
2. Switches from running to ready state
• Eg: Interrupt occurs
3. Switches from waiting to ready state
• Eg: Completion of I/O operation
4. Terminates
• Eg: Process completes
• Scheduling under 1 and 4 is non-preemptive, because there is no choice: a new process must be selected; scheduling under 2 and 3 is preemptive
CPU Scheduler
• Non-preemptive: a new process is scheduled only when the current one no longer wants the CPU
• Preemptive: a new process may be scheduled even when the current process does not intend to give up the CPU
Dispatcher
• The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler; the time it takes to stop one process and start another running is the dispatch latency
Example
[Figure slides: a timeline scheduling processes A, B, and C (A B C A B C A C A C), annotated to illustrate turnaround time, waiting time, and response time]
Scheduling - Optimization Criteria
• Aim of OS is to maximize
– CPU utilization &
– throughput
• And to minimize
– turnaround time
– waiting time
– response time
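To make these criteria concrete, here is a minimal sketch (a hypothetical helper, not from the slides) that computes turnaround, waiting, and response time for one process from its arrival, burst, first-run, and completion times:

```python
def metrics(arrival, burst, first_run, completion):
    """Per-process scheduling metrics (all times in ms)."""
    turnaround = completion - arrival      # total time spent in the system
    waiting = turnaround - burst           # time spent waiting in the ready queue
    response = first_run - arrival         # delay until the first CPU allocation
    return turnaround, waiting, response

# A process arriving at t=0 with a 3 ms burst that first gets the CPU at t=24
# and completes at t=27 has turnaround 27, waiting 24, and response 24.
print(metrics(0, 3, 24, 27))   # (27, 24, 24)
```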
Scheduling Algorithms
CPU scheduling deals with the problem of deciding which process in the ready queue is to be allocated the CPU
1. FCFS (First Come First Served):
assigns the CPU based on the order of requests
implementation is managed by a FIFO queue
Non-preemptive: A process keeps running on a CPU
until it is blocked or terminated
Adv: Simplest algorithm
Disadv: Short jobs can get stuck behind long jobs
First-Come, First-Served (FCFS)
• Example 1 (times in ms):
Process Burst Time
P1 24
P2 3
P3 3
• If the processes arrive in the order P1, P2, P3, the FCFS Gantt chart boundaries are 0, 24, 27, 30
– Waiting times: P1 = 0, P2 = 24, P3 = 27; average waiting time = (0 + 24 + 27) / 3 = 17 ms
• If they arrive in the order P2, P3, P1, the boundaries are 0, 3, 6, 30
– Waiting times: P2 = 0, P3 = 3, P1 = 6; average waiting time = (0 + 3 + 6) / 3 = 3 ms
• Convoy effect: short processes stuck behind a long process inflate the average waiting time
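A minimal FCFS sketch (hypothetical Python, assuming every process arrives at time 0) that reproduces the two averages above by walking the queue in arrival order:

```python
def fcfs_waiting_times(bursts):
    """bursts: list of (name, burst) in arrival order; all arrive at t = 0."""
    waiting, clock = {}, 0
    for name, burst in bursts:
        waiting[name] = clock        # a process waits until those ahead of it finish
        clock += burst
    return waiting

for order in ([("P1", 24), ("P2", 3), ("P3", 3)],
              [("P2", 3), ("P3", 3), ("P1", 24)]):
    w = fcfs_waiting_times(order)
    print(w, "average =", sum(w.values()) / len(w), "ms")
# order P1,P2,P3 -> {'P1': 0, 'P2': 24, 'P3': 27}, average 17.0 ms
# order P2,P3,P1 -> {'P2': 0, 'P3': 3, 'P1': 6},  average 3.0 ms
```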
Shortest-Job-First (SJF) Scheduling
• Each process is associated with the length of its next
CPU burst.
• The process with the shortest next CPU burst time is assigned the CPU
• If 2 processes have same length of next CPU burst,
FCFS scheduling is used.
• Also called shortest next CPU burst or Shortest
Process Next (SPN)
• SJF is optimal – gives minimum average waiting time
for a given set of processes
– But the difficulty is knowing the length of the next CPU request
Example 1 of SJF
Process Burst Time
P1 6
P2 8
P3 7
P4 3
• SJF scheduling Gantt chart:
P4 | P1 | P3 | P2
0 3 9 16 24
• Waiting times: P4 = 0, P1 = 3, P3 = 9, P2 = 16
• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms
• For comparison, FCFS on the same processes (order P1, P2, P3, P4) gives boundaries 0, 6, 14, 21, 24 and an average waiting time of 10.25 ms
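A similar non-preemptive SJF sketch (hypothetical, again assuming all processes arrive at time 0): sort by burst length, then serve in that order. Python's sort is stable, so equal bursts keep their FCFS order:

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF with all arrivals at t = 0: shortest burst first."""
    order = sorted(bursts, key=lambda p: p[1])   # stable sort: ties stay FCFS
    waiting, clock = {}, 0
    for name, burst in order:
        waiting[name] = clock
        clock += burst
    return waiting

w = sjf_waiting_times([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
print(w, "average =", sum(w.values()) / len(w), "ms")
# {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}, average 7.0 ms
```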
Determining the Length of the Next CPU Burst
• The length of the next CPU burst is estimated by exponential averaging of the measured lengths of previous bursts:
τ(n+1) = α · t(n) + (1 − α) · τ(n), where t(n) is the length of the nth CPU burst, τ(n) is the previous prediction, and 0 ≤ α ≤ 1
Examples of Exponential Averaging
• t(n), the most recently measured burst, contains the most recent information
• If α = 1, then τ(n+1) = t(n): only the actual last CPU burst counts (see the sketch below)
• Non-preemptive SJF
– Allow the currently running process to finish its CPU
burst.
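A small sketch of the exponential-averaging estimate τ(n+1) = α · t(n) + (1 − α) · τ(n); the α value, the initial guess tau0, and the burst lengths below are arbitrary choices for illustration:

```python
def predict_bursts(actual_bursts, alpha=0.5, tau0=10.0):
    """Yield the predicted length of each next CPU burst."""
    tau = tau0                                 # initial guess for the first burst
    for t in actual_bursts:
        yield tau                              # prediction made before the burst runs
        tau = alpha * t + (1 - alpha) * tau    # fold the measured burst into the estimate

print(list(predict_bursts([6, 4, 6, 4, 13, 13, 13])))
# [10.0, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0]
```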
Example of Preemptive SJF or SRTF
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
• SRTF scheduling Gantt chart:
P1 | P2 | P4 | P1 | P3
0 1 5 10 17 26
• Waiting time = completion time − arrival time − burst time:
P1 = 17 − 0 − 8 = 9, P2 = 5 − 1 − 4 = 0, P3 = 26 − 2 − 9 = 15, P4 = 10 − 3 − 5 = 2
• Average waiting time = (9 + 0 + 15 + 2) / 4 = 6.5 ms
• Non-preemptive SJF on the same processes gives the order P1, P2, P4, P3 with boundaries 0, 8, 12, 17, 26
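A sketch of a generic preemptive scheduler (hypothetical; it simulates in 1 ms steps and ignores context-switch cost). With the selection key set to the remaining time it behaves as SRTF and reproduces the Gantt chart and the 6.5 ms average above:

```python
def simulate_preemptive(procs, key):
    """procs: dict name -> {'arrival': ..., 'burst': ..., ...}.
    key(name, info, remaining): the ready process with the smallest key runs
    for the next millisecond. Returns (gantt, waiting)."""
    remaining = {n: p["burst"] for n, p in procs.items()}
    clock, gantt, waiting = 0, [], {}
    while any(r > 0 for r in remaining.values()):
        ready = [n for n, p in procs.items()
                 if p["arrival"] <= clock and remaining[n] > 0]
        if not ready:                            # CPU idle until the next arrival
            clock += 1
            continue
        run = min(ready, key=lambda n: key(n, procs[n], remaining[n]))
        if gantt and gantt[-1][0] == run:        # extend the current Gantt segment
            gantt[-1] = (run, gantt[-1][1], clock + 1)
        else:
            gantt.append((run, clock, clock + 1))
        remaining[run] -= 1
        clock += 1
    for n, p in procs.items():
        completion = max(end for name, _, end in gantt if name == n)
        waiting[n] = completion - p["arrival"] - p["burst"]
    return gantt, waiting

procs = {"P1": {"arrival": 0, "burst": 8}, "P2": {"arrival": 1, "burst": 4},
         "P3": {"arrival": 2, "burst": 9}, "P4": {"arrival": 3, "burst": 5}}
gantt, waiting = simulate_preemptive(procs, key=lambda n, p, rem: rem)  # SRTF
print(gantt)    # [('P1',0,1), ('P2',1,5), ('P4',5,10), ('P1',10,17), ('P3',17,26)]
print(waiting, sum(waiting.values()) / 4)   # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2} 6.5
```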
Preemptive Priority Scheduling
Process Arrival Time Burst Time Priority
P1 0 10 3
P2 1 1 1
P3 2 2 4
P4 3 1 5
P5 4 5 2
• Preemptive priority scheduling Gantt chart (a smaller priority number means a higher priority):
P1 | P2 | P1 | P5 | P1 | P3 | P4
0 1 2 4 9 16 18 19
• Waiting times: P1 = 6, P2 = 0, P3 = 14, P4 = 15, P5 = 0
• Average waiting time = (6 + 0 + 14 + 15 + 0) / 5 = 7 ms
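The simulate_preemptive sketch shown earlier for SRTF reproduces this Gantt chart when the selection key is the static priority instead of the remaining time (this assumes that earlier hypothetical definition is in scope; a smaller number means a higher priority):

```python
procs = {"P1": {"arrival": 0, "burst": 10, "priority": 3},
         "P2": {"arrival": 1, "burst": 1,  "priority": 1},
         "P3": {"arrival": 2, "burst": 2,  "priority": 4},
         "P4": {"arrival": 3, "burst": 1,  "priority": 5},
         "P5": {"arrival": 4, "burst": 5,  "priority": 2}}
gantt, waiting = simulate_preemptive(procs, key=lambda n, p, rem: p["priority"])
print(gantt)    # P1 0-1, P2 1-2, P1 2-4, P5 4-9, P1 9-16, P3 16-18, P4 18-19
print(waiting)  # {'P1': 6, 'P2': 0, 'P3': 14, 'P4': 15, 'P5': 0}
```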
Round Robin Scheduling: Example (time quantum = 4 ms)
• Same processes as the FCFS example: P1 = 24, P2 = 3, P3 = 3, all arriving at time 0
• RR Gantt chart: P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 with boundaries 0, 4, 7, 10, 14, 18, 22, 26, 30
• Waiting time of P1 = 26 − (4 × 5) = 6 ms (P1 is last dispatched at 26, having already run five 4 ms quanta)
• Waiting time of P2 = 4 ms
• Waiting time of P3 = 7 ms
• Average waiting time = (6 + 4 + 7) / 3 = 17/3 = 5.66 ms
• Average turnaround time = (30 + 7 + 10) / 3 = 15.66 ms
• Typically RR gives a higher average turnaround time than SJF, but better response time
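A minimal round-robin sketch (hypothetical; all processes are assumed to arrive at time 0) showing the quantum-expiry and requeue behaviour, and reproducing the example above with a 4 ms quantum:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: list of (name, burst) in arrival order; all arrive at t = 0."""
    queue = deque(bursts)
    clock, gantt, completion = 0, [], {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)              # run for at most one quantum
        gantt.append((name, clock, clock + run))
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back of the queue
        else:
            completion[name] = clock
    return gantt, completion

gantt, done = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
waiting = {n: done[n] - b for n, b in [("P1", 24), ("P2", 3), ("P3", 3)]}
print(gantt)      # P1 0-4, P2 4-7, P3 7-10, then P1 runs 10-30 in 4 ms slices
print(waiting)    # {'P1': 6, 'P2': 4, 'P3': 7} -> average 5.66 ms
```

Making the quantum very large collapses this to FCFS, while a very small quantum approximates processor sharing at the cost of more context switches.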
Round Robin Scheduling
• RR scheduling is preemptive
– If a process exceeds 1 time quantum, that process is
preempted & is put back in ready queue.
• Performance of RR depends heavily on the size of time
quantum
– If the time quantum is very large, RR is same as FCFS
– If the time quantum is very small, RR is called processor
sharing which appears to the user as though each of n
processes has its own processor running at 1/n the
speed of the real processor
• This approach was used in Control Data Corporation (CDC) hardware to implement 10 peripheral processors with one set of hardware & 10 sets of registers
Round Robin Scheduling
• Performance of RR also depends on the context-switching overhead
• [Figure: relation between time quantum and number of context switches]
Multilevel Queue
Ready queue is partitioned into separate queues:
-> foreground (interactive)
-> background (batch)
• Each queue has its own scheduling algorithm
– foreground – RR
– background – FCFS
• Possibilities of Scheduling between the queues
1. Fixed priority scheduling; (i.e., serve all from
foreground then from background). Possibility of
starvation.
-> an interactive process preempts batch processes in execution
Multilevel Queue
2. Time slice – each queue gets a certain amount of CPU
time which it can schedule amongst its processes;
– i.e., 80% to foreground in RR
– 20% to background in FCFS
• Disadvantage:
– Inflexible
Multilevel Queue Scheduling
[Figure: multilevel queue scheduling]
Multilevel Feedback Queue
• A process can move between the various queues
• Idea is to separate processes with different CPU-
burst characteristics
• If a process uses too much CPU time, it will be
moved to a low priority queue
• This scheme leaves I/O bound & interactive
processes in higher priority queues
• If a process waits too long in a lower priority
queue, it may be moved to a higher priority
queue.
– This form of aging prevents starvation
Multilevel Feedback Queues
[Figure: multilevel feedback queues]
Multilevel Feedback Queue
• Scheduler executes all processes in queue 0
• Only when Queue 0 is empty it will execute
processes in queue 1
• Processes in queue 2 will be executed only if
queue 0 & 1 are empty
• Process that arrives for queue 1 will preempt a
process in queue 2
• Process that arrives for queue 0 will preempt a
process in queue 1
• Process entering ready queue is put in queue 0
Example of Multilevel Feedback Queue
• Three queues:
– Q0 – RR with time quantum 8 milliseconds
– Q1 – RR time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling
– A new job enters queue Q0. When it gains CPU, job receives
8 ms. If it does not finish in 8 milliseconds, job is moved to
tail of queue Q1.
– When Q0 is empty, process at Q1 receives 16 ms. If it still
does not complete, it is preempted and moved to queue Q2.
• This algorithm gives the highest priority to any process with a CPU burst of 8 ms or less
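A rough sketch of this three-queue example (hypothetical simplification: a single CPU, all jobs arriving at time 0, no I/O, and demotion only when a job uses up its whole quantum):

```python
from collections import deque

QUANTA = [8, 16, None]   # Q0: RR 8 ms, Q1: RR 16 ms, Q2: FCFS (None = run to completion)

def mlfq(jobs):
    """jobs: list of (name, burst); all arrive at t = 0 and never block for I/O."""
    queues = [deque(jobs), deque(), deque()]
    clock, gantt = 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        name, remaining = queues[level].popleft()
        quantum = QUANTA[level]
        run = remaining if quantum is None else min(quantum, remaining)
        gantt.append((name, level, clock, clock + run))
        clock += run
        if remaining > run:                                 # used its whole quantum: demote
            queues[min(level + 1, 2)].append((name, remaining - run))
    return gantt

for entry in mlfq([("A", 30), ("B", 6), ("C", 20)]):
    print(entry)
# A gets 8 ms in Q0, B finishes inside its first quantum, C gets 8 ms in Q0;
# then in Q1, A gets 16 ms and C finishes (12 ms); A's last 6 ms run FCFS in Q2.
```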
Multilevel Feedback Queue
• Multilevel-feedback-queue scheduler is defined by
the following parameters:
– number of queues
– scheduling algorithms for each queue
– method used to determine when to upgrade a process
– method used to determine when to demote a process
– method used to determine which queue a process will
enter when that process needs service
Multilevel Feedback Queue
• Adv:
– Most general CPU scheduling algorithm
– i.e., it can be configured to match a specific system
under design
• Disadv:
– Requires some means of selecting values for all the
parameters to define the best scheduler.
– Most complex scheme.
Multiple-Processor Scheduling
• CPU scheduling is more complex when multiple CPUs are
available
• Homogeneous (identical) processors within a multiprocessor: any available processor can be used to run any process in the queue
• Heterogeneous (different) processors: only programs compiled for a given processor’s instruction set can run on that processor
Multiple-Processor Scheduling
• Even within homogeneous multiprocessor, there are
limitations:
1. Consider a system with an I/O device attached to a
private bus of one processor
– Processes wishing to use that device must be scheduled to run
on that processor; otherwise the device would not be available
2. If several identical processors are available, then load
sharing can occur
– Possible to provide separate queue for each processor
– But one processor could be idle with an empty queue, while
another processor was very busy.
Multiple-Processor Scheduling
• To prevent this situation, we can use a common ready
queue.
– All processes go into one queue & are scheduled onto any
available processor.
• In this scheme, 2 scheduling approaches may be used
• Approach 1:
– Each processor is self scheduling
– Each processor examines the common ready queue & selects a
process to execute.
Multiple-Processor Scheduling
• Approach 2:
– With approach 1 (self-scheduling), multiple processors access & update a common data structure, so each processor must be programmed very carefully
– We must ensure that 2 processors do not choose the same process & that processes are not lost from the queue
– Approach 2 avoids this problem by appointing 1 processor as scheduler for the other processors, thus creating a master-slave structure
Multiple-Processor Scheduling
• Some systems carry this structure one step further:
– 1 processor called master server handles all scheduling decisions,
I/O processing & other system activities
– Other processors execute only user code.
– Adv: this asymmetric multiprocessing is far simpler than symmetric multiprocessing (SMP)
• Because only 1 processor accesses the system data structures, reducing the need for data sharing
– Disadv: not very efficient, because I/O-bound processes may bottleneck on the one processor that is performing all the I/O & other system operations
• Typically, asymmetric multiprocessing is implemented first within an OS & then upgraded to SMP as the system evolves
Real-Time Scheduling
• Priority scheduling is used
• Dispatch latency must be kept small, which makes the design complex
• Process aging is not applied to real-time processes
• System calls must be made preemptible, but only a few preemption points can practically be added to the kernel
– The alternative is to make the entire kernel preemptible, protecting kernel data structures (e.g., with synchronization mechanisms) so that a preempting high-priority process cannot leave them inconsistent
– This is the most effective & most complex method in wide use
– Used in Solaris 2
Real-Time Scheduling
• If a higher-priority process needs to read or modify kernel data currently being accessed by a lower-priority process, the high-priority process ends up waiting for the lower-priority one to finish
• This is called priority inversion
• The solution is the priority-inheritance protocol:
– All processes accessing the resource inherit the high priority until they are done with the resource in use
– When they are finished, their priority reverts to its original value
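A conceptual sketch of the priority-inheritance idea (not an actual kernel mechanism; the class and method names are invented): while a lower-priority task holds a resource that a higher-priority task wants, the holder temporarily runs at the waiter's priority:

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority       # original priority (smaller = higher)
        self.priority = priority            # effective priority

class InheritanceLock:
    """Lock that boosts its holder to the highest priority among its waiters."""
    def __init__(self):
        self.holder = None

    def acquire(self, task):
        if self.holder is None:
            self.holder = task
            return True
        # A higher-priority waiter lends its priority to the current holder.
        self.holder.priority = min(self.holder.priority, task.priority)
        return False                        # the caller must block and retry later

    def release(self):
        self.holder.priority = self.holder.base_priority   # revert on release
        self.holder = None

low, high = Task("low", priority=9), Task("high", priority=1)
lock = InheritanceLock()
lock.acquire(low)          # the low-priority task gets the resource first
lock.acquire(high)         # the high-priority task blocks ...
print(low.priority)        # ... and 'low' now runs at priority 1 (inherited)
lock.release()
print(low.priority)        # back to 9 once the resource is released
```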
Algorithm Evaluation
• To select the best CPU Scheduling algorithm
• Analytic evaluation:
– Uses the given algorithm & the system workload to produce a
formula or number that evaluates the performance of the
algorithm for that workload
• One type of analytic evaluation is Deterministic modeling
– it takes a particular predetermined workload and defines the
performance of each algorithm for that workload
– But this method is too specific & requires too much exact knowledge (e.g., the exact CPU-burst times)
• Queueing models, simulations & implementation (code the algorithm, put it in the OS & see how it works) give progressively more accurate evaluations, in that order
• Implementation gives the most accurate evaluation
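Deterministic modeling can be illustrated by feeding one predetermined workload to the FCFS and SJF sketches shown earlier (assuming those hypothetical definitions are in scope); the workload numbers below are arbitrary:

```python
workload = [("P1", 10), ("P2", 29), ("P3", 3), ("P4", 7), ("P5", 12)]  # all arrive at t = 0

for label, algorithm in [("FCFS", fcfs_waiting_times), ("SJF", sjf_waiting_times)]:
    w = algorithm(workload)
    print(label, "average waiting time =", sum(w.values()) / len(w), "ms")
# FCFS: (0 + 10 + 39 + 42 + 49) / 5 = 28 ms
# SJF:  (0 + 3 + 10 + 20 + 32) / 5 = 13 ms
```

The comparison holds only for this exact workload, which is precisely the limitation of deterministic modeling noted above.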