Job Scheduling
CPU Scheduling
Agenda
Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Thread Scheduling
Multiple-Processor Scheduling
Real-Time CPU Scheduling
Operating Systems Examples
Algorithm Evaluation
Basic Concepts
CPU burst vs I/O burst
CPU burst vs. I/O burst – Diagram
(not required in the exam)
CPU Scheduler
Dispatcher in OS
Scheduling Criteria
Scheduling Criteria – in Time Axis
Scheduling Algorithm Optimization Criteria
Preemptive and Non-Preemptive Scheduling
1. In preemptive scheduling, the CPU is allocated to a process for a
limited time. In non-preemptive scheduling, the CPU is allocated to a
process until it terminates or switches to the waiting state (e.g.,
waiting for I/O).
2. Under preemptive scheduling, the running process can be interrupted
in the middle of its execution; under non-preemptive scheduling, the
running process is never interrupted mid-execution.
3. Preemptive scheduling is quite flexible because critical processes
can get the CPU as soon as they enter the ready queue. Under
non-preemptive scheduling, even if a critical process enters the ready
queue, the process currently running on the CPU is not disturbed.
CPU Scheduling Algorithms
1. First-Come, First-Served (FCFS)
2. Shortest-Job-First (SJF)
3. Priority Scheduling
4. Round Robin (RR)
5. Multilevel Queue
6. Multilevel Feedback Queue
First-Come, First-Served (FCFS) Scheduling
Suppose the processes arrive in the order P1, P2, P3, with CPU burst
times of 24, 3, and 3 ms. The Gantt chart is:
P1 (0–24) | P2 (24–27) | P3 (27–30)
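The FCFS chart above can be reproduced with a short simulation. This is a minimal sketch (the function name `fcfs` is mine, not from the slides); it assumes all processes arrive at time 0 in the listed order.

```python
def fcfs(bursts):
    """Simulate FCFS: run processes in insertion order.

    bursts: dict mapping process name -> CPU burst time (ms),
    in arrival order. Returns (finish_times, waiting_times).
    """
    clock = 0
    finish, waiting = {}, {}
    for name, burst in bursts.items():
        waiting[name] = clock      # time spent waiting in the ready queue
        clock += burst
        finish[name] = clock       # matches a Gantt-chart boundary
    return finish, waiting

finish, waiting = fcfs({"P1": 24, "P2": 3, "P3": 3})
print(finish)                                # {'P1': 24, 'P2': 27, 'P3': 30}
print(sum(waiting.values()) / len(waiting))  # average waiting time = 17.0
```

Note how the single long burst of P1 makes P2 and P3 wait, inflating the average waiting time; this is the "convoy effect" that motivates SJF.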
Shortest-Job-First (SJF) Scheduling
Example of SJF
Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3
SJF Gantt chart (treating all processes as available at time 0):
P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24)
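The SJF chart can be checked the same way. A minimal sketch (function name `sjf` is mine), assuming the slide's simplification that all four processes are available at time 0, so we simply run them in order of increasing burst time:

```python
def sjf(bursts):
    """Nonpreemptive SJF with all processes available at time 0.

    bursts: dict mapping process name -> burst time (ms).
    Returns waiting time per process.
    """
    clock = 0
    waiting = {}
    # Shortest burst first; ties broken by insertion order.
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waiting[name] = clock
        clock += burst
    return waiting

waiting = sjf({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
# Execution order: P4 (0-3), P1 (3-9), P3 (9-16), P2 (16-24)
print(sum(waiting.values()) / len(waiting))  # average waiting time = 7.0
```

Compare this average of 7 ms against FCFS on the same bursts (10.25 ms): SJF is provably optimal for average waiting time when all jobs are available up front.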
Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority
(smallest integer ≡ highest priority)
Priorities can be assigned either internally or externally.
Internal priorities are assigned by the OS using criteria such
as average burst time, ratio of CPU to I/O activity, system
resource use, and other factors available to the kernel.
External priorities are assigned by users, based on the
importance of the job.
Example of Priority Scheduling
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
Priority scheduling Gantt chart (smaller number = higher priority):
P2 (0–1) | P5 (1–6) | P1 (6–16) | P3 (16–18) | P4 (18–19)
Round Robin (RR)
Round robin scheduling is similar to FCFS scheduling, except that each
CPU burst is limited by a time quantum.
When a process is given the CPU, a timer is set for one time quantum.
If the process finishes its burst before the timer expires, it
releases the CPU voluntarily, just as in the normal FCFS algorithm.
If the timer goes off first, the process is preempted and moved to the
back of the ready queue.
Round Robin (RR)
The ready queue is maintained as a circular queue, so once all
processes have had a turn, the scheduler gives the first process
another turn, and so on.
RR scheduling can give the effect of all processes sharing
the CPU equally, although the average wait time can be
longer than with other scheduling algorithms.
Example of RR with Time Quantum = 4
Process   Burst Time
P1        24
P2        3
P3        3
The Gantt chart is:
P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–14) | P1 (14–18) | P1 (18–22) | P1 (22–26) | P1 (26–30)
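The RR Gantt chart follows mechanically from the rules above. A minimal sketch (function name `round_robin` is mine), assuming all processes are in the ready queue at time 0 and a preempted process rejoins at the back:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR scheduling.

    bursts: list of (name, burst_time) in arrival order.
    Returns the Gantt chart as a list of (name, start, end) segments.
    """
    queue = deque(bursts)          # circular ready queue
    clock, gantt = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        gantt.append((name, clock, clock + run))
        clock += run
        if remaining > run:        # quantum expired: back of the queue
            queue.append((name, remaining - run))
    return gantt

gantt = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print([seg[0] for seg in gantt])
# ['P1', 'P2', 'P3', 'P1', 'P1', 'P1', 'P1', 'P1']
print(gantt[-1][2])                # total time = 30
```

P2 and P3 finish inside their first quantum (bursts of 3 < 4) and release the CPU early, which is why those segments are only 3 ms wide.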
Multilevel Queue
Example time-slice split between queues:
80% of CPU time to the foreground queue, scheduled with RR
20% of CPU time to the background queue, scheduled with FCFS
Multilevel Queue Scheduling
Multilevel Feedback Queue
Multilevel feedback queue scheduling is similar to the ordinary
multilevel queue scheduling described above, except that jobs may be
moved from one queue to another for a variety of reasons:
If a job's characteristics change between CPU-intensive and
I/O-intensive, it may be moved to a more appropriate queue.
Aging: a job that has waited a long time can be bumped up into a
higher-priority queue.
Multilevel Feedback Queue
Multilevel feedback queue scheduling is the most flexible,
because it can be tuned for any situation. But it is also
the most complex to implement because of all the
adjustable parameters. Some of the parameters which
define one of these systems include:
The number of queues.
The scheduling algorithm for each queue.
The methods used to transfer processes from one
queue to another.
The method used to determine which queue a process
enters initially.
Example of Multilevel Feedback Queue
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0 which is served FCFS
When it gains CPU, job receives 8 milliseconds
If it does not finish in 8 milliseconds, job is moved
to queue Q1
At Q1 job is again served FCFS and receives 16
additional milliseconds
If it still does not complete, it is preempted and
moved to queue Q2
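The three-queue example can be sketched as a simulation. This is a simplified model (function name `mlfq` is mine): all jobs arrive at time 0, the scheduler always serves the highest non-empty queue, and a job that exhausts its quantum is demoted one level, matching the Q0/Q1/Q2 rules above.

```python
from collections import deque

def mlfq(jobs):
    """Simplified multilevel feedback queue: all jobs arrive at time 0.

    jobs: list of (name, burst). Q0 is RR with quantum 8, Q1 is RR
    with quantum 16, Q2 is FCFS (run to completion).
    Returns the Gantt chart as (name, queue_level, start, end) segments.
    """
    quanta = [8, 16, None]               # None = run to completion
    queues = [deque(), deque(), deque()]
    for job in jobs:
        queues[0].append(job)            # every new job enters Q0
    clock, gantt = 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name, remaining = queues[level].popleft()
        q = quanta[level]
        run = remaining if q is None else min(q, remaining)
        gantt.append((name, level, clock, clock + run))
        clock += run
        if remaining > run:              # did not finish: demote one level
            queues[level + 1].append((name, remaining - run))
    return gantt

# A 30 ms job gets 8 ms in Q0, 16 more in Q1, and finishes in Q2.
print(mlfq([("A", 30)]))
# [('A', 0, 0, 8), ('A', 1, 8, 24), ('A', 2, 24, 30)]
```

A real scheduler would also preempt a Q1 or Q2 job the moment a new job arrives in Q0; that arrival-time preemption is omitted here for brevity.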
Example of Multilevel Feedback Queue
(not required in the exam)
Multiple-Processor Scheduling
When multiple processors are available, then the
scheduling gets more complicated, because now there is
more than one CPU which must be kept busy and in
effective use at all times.
Load sharing revolves around balancing the load
between multiple processors.
Multi-processor systems may be heterogeneous (different kinds of
CPUs) or homogeneous (all the same kind of CPU).
Approaches to Multiple-Processor Scheduling
Asymmetric multiprocessing – only one processor
accesses the system data structures, alleviating the need
for data sharing
Symmetric multiprocessing (SMP) – the most common
approach, in which each processor is self-scheduling; all
processes may sit in a common ready queue, or each processor
may have its own private queue of ready processes
Processor Affinity
Processor affinity – the binding of a process or a thread
to a CPU, so that the process or thread will execute only
on the designated CPU. There are two types:
soft affinity: when the system attempts to keep
processes on the same processor but makes no
guarantees
hard affinity: in which a process specifies that it is not
to be moved between processors.
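Hard affinity can be demonstrated with the Linux-only `os.sched_getaffinity` / `os.sched_setaffinity` wrappers in Python's standard library (a hedged sketch; on non-Linux systems these functions simply do not exist, so the code guards for that):

```python
import os

# Hard affinity demo (Linux only): pin the calling process (pid 0 =
# "this process") to a single CPU, then restore the original mask.
if hasattr(os, "sched_setaffinity"):
    allowed = os.sched_getaffinity(0)        # CPUs we may currently run on
    os.sched_setaffinity(0, {min(allowed)})  # hard affinity: one CPU only
    pinned = os.sched_getaffinity(0)         # now a single-element set
    os.sched_setaffinity(0, allowed)         # undo the pinning
else:
    pinned = None                            # API unavailable on this OS
```

Soft affinity, by contrast, needs no API call: it is simply the scheduler's default preference to keep a process on the CPU whose caches it has already warmed.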
Multiple-Processor Scheduling – Load Balancing
Load balancing attempts to keep workload evenly
distributed, so that one processor won't be sitting idle
while another is overloaded.
Balancing Methods:
Push migration is where the operating system checks
the load on each processor periodically. If there’s an
imbalance, some processes will be moved from one
processor onto another.
Pull migration is where a scheduler finds that there are
no more processes in the run queue for the processor.
In this case, it transfers a process onto its own queue
so it will have something to run.
Multicore Processors
A recent trend is to place multiple processor cores on the same
physical chip, which is faster and consumes less power than
separate chips. It is also common for each core to run multiple
hardware threads concurrently.
Multi-threading on Multi-core Processors
(not required in the exam)