Job Scheduling

The document discusses CPU scheduling, covering basic concepts, scheduling criteria, and various algorithms such as FCFS, SJF, and Round Robin. It highlights the importance of maximizing CPU utilization and minimizing wait times through different scheduling strategies, including preemptive and non-preemptive methods. Additionally, it addresses multiple-processor scheduling and the complexities involved in load balancing and processor affinity.


Lecture 4

CPU Scheduling
Agenda

Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Thread Scheduling
Multiple-Processor Scheduling
Real-Time CPU Scheduling
Operating Systems Examples
Algorithm Evaluation

4.1 Basic Concepts

CPU scheduling allows one process to use the CPU while the execution of another process is on hold (e.g., waiting for I/O).
Maximum CPU utilization is obtained with multiprogramming.
CPU–I/O burst cycle: process execution consists of a cycle of CPU execution and I/O wait; a CPU burst is followed by an I/O burst.
The distribution of CPU-burst lengths is of main concern.

CPU burst vs I/O burst

An important role of the OS is to manage and schedule these activities so as to maximize resource use and minimize wait and idle time.
Process execution repeats the CPU burst / I/O burst cycle.
When a process begins an I/O burst, another process can use the CPU for a CPU burst.

CPU burst vs. I/O burst – Diagram
(not required in the exam)

CPU Scheduler

The short-term scheduler selects a process from among the processes in the ready queue and allocates the CPU to it.
The queue may be ordered in various ways.
CPU scheduling decisions may take place when a process:
1. switches from the running to the waiting state
2. switches from the running to the ready state
3. switches from the waiting to the ready state
4. terminates
Scheduling that occurs only under circumstances 1 and 4 is non-preemptive; all other scheduling is preemptive.

Dispatcher in OS

The dispatcher module gives control of the CPU to the process selected by the short-term scheduler.
Dispatch latency – the time it takes for the dispatcher to stop one process and start another running.

Scheduling Criteria

CPU utilization – keep the CPU as busy as possible.
Throughput – the number of processes that complete their execution per time unit.
Turnaround time – the amount of time it takes to execute a particular process, from submission to completion.
Waiting time – the total time a process has spent waiting in the ready queue.
Response time – the time from when a request is submitted until the first response is produced, not until output is complete (relevant in time-sharing environments).

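The criteria above are related by two simple identities: turnaround time is completion time minus arrival time, and waiting time is turnaround time minus total burst time. A minimal Python sketch (the function name is my own; the numbers reuse the FCFS example from later in this lecture):

```python
def metrics(arrival, burst, completion):
    """Per-process turnaround and waiting times.

    turnaround = completion - arrival
    waiting    = turnaround - burst
    """
    turnaround = [c - a for c, a in zip(completion, arrival)]
    waiting = [t - b for t, b in zip(turnaround, burst)]
    return turnaround, waiting

# P1, P2, P3 from the FCFS example: all arrive at 0, finish at 24, 27, 30
t, w = metrics([0, 0, 0], [24, 3, 3], [24, 27, 30])
print(t)  # [24, 27, 30]
print(w)  # [0, 24, 27]
```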
Scheduling Criteria – in Time Axis

Scheduling Algorithm Optimization Criteria

Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time

Preemptive and Non-Preemptive Scheduling
1. In preemptive scheduling, the CPU is allocated to a process for a limited time. In non-preemptive scheduling, the CPU is allocated to a process until it terminates or switches to the waiting state (e.g., waiting for I/O).
2. In preemptive scheduling, the executing process can be interrupted in the middle of its execution; in non-preemptive scheduling, it cannot.
3. Preemptive scheduling is more flexible because critical processes can get the CPU as soon as they arrive in the ready queue. In non-preemptive scheduling, even if a critical process enters the ready queue, the process currently running on the CPU is not disturbed.
CPU Scheduling Algorithms

1. First-Come First-Serve Scheduling, FCFS

2. Shortest-Job-First Scheduling, SJF

3. Priority Scheduling

4. Round Robin Scheduling

5. Multilevel Queue Scheduling

6. Multilevel Feedback-Queue Scheduling

First- Come, First-Served (FCFS) Scheduling

FCFS is very simple – like customers waiting in line at the bank or the post office.
However, FCFS can yield very long average wait times, particularly when the first process in line has a long burst (the convoy effect). Consider the following example.

First- Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3

Suppose the processes all arrive at time 0, in the order P1, P2, P3. The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17

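The waiting times above can be computed mechanically: under FCFS each process waits for the total burst time of everything ahead of it. A small Python sketch (the function name is my own), assuming all processes arrive at time 0:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, all arriving at time 0."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)  # a process waits for everything ahead of it
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])  # P1, P2, P3
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # 17.0
```

Reordering so the long job runs last, `fcfs_waiting_times([3, 3, 24])`, gives waits of [0, 3, 6] and an average of 3 – the convoy effect in miniature.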
Shortest-Job-First (SJF) Scheduling

The idea behind the SJF algorithm is to pick the shortest job that needs to be done, get it out of the way first, and then pick the next shortest job.
Technically, the algorithm picks a process based on its next CPU burst, not its overall execution time.
SJF is optimal – it gives the minimum average waiting time for a given set of processes.
The difficulty is knowing the length of the next CPU request; in practice it must be estimated, e.g., from the lengths of previous bursts.

Example of SJF
Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3

SJF scheduling chart (the chart treats all four processes as available at time 0):

| P4 | P1 | P3 | P2 |
0    3    9    16   24

Average waiting time = (3 + 16 + 9 + 0) / 4 = 7

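The chart above can be reproduced by sorting the processes by burst length and then accumulating waits exactly as in FCFS. A minimal Python sketch (the function name is my own), assuming all processes are available at time 0:

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF with every process available at time 0.
    Returns the waiting time of each process, in the original order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:       # run shortest remaining job first
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])  # P1..P4
print(waits)                    # [3, 16, 9, 0]
print(sum(waits) / len(waits))  # 7.0
```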
Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority
(smallest integer ≡ highest priority)
Priorities can be assigned either internally or externally.
Internal priorities are assigned by the OS using criteria such
as average burst time, ratio of CPU to I/O activity, system
resource use, and other factors available to the kernel.
External priorities are assigned by users, based on the
importance of the job.

Example of Priority Scheduling
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

Priority scheduling Gantt chart (smallest number = highest priority):

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 ms

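Non-preemptive priority scheduling with all jobs available at time 0 is the same accumulation as SJF, just sorted by priority number instead of burst length. A Python sketch (the function name is my own):

```python
def priority_waiting_times(bursts, priorities):
    """Non-preemptive priority scheduling, all processes available at time 0.
    A smaller priority number means higher priority."""
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:       # run highest-priority job first
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

waits = priority_waiting_times([10, 1, 2, 1, 5], [3, 1, 4, 5, 2])
print(waits)                    # [6, 0, 16, 18, 1]
print(sum(waits) / len(waits))  # 8.2
```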
Priority Scheduling - Problem
Priority scheduling can suffer from a major problem known
as indefinite blocking, or starvation, in which a low-
priority task can wait forever because there are always
some other jobs around that have higher priority.
Solution ≡ Aging – as time progresses increase the priority
of the process

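One way to implement aging is to periodically lower the priority number of every process still waiting in the ready queue, so that even a low-priority job eventually reaches the top. A hypothetical sketch (the function name and parameters are my own, not a real OS API):

```python
def age(priorities, waiting, step=1, floor=0):
    """Aging sketch: on each scheduling tick, boost (lower the number of)
    every process that is still waiting, so nothing starves forever."""
    return [max(floor, p - step) if w else p
            for p, w in zip(priorities, waiting)]

# Processes P1..P5; True marks those currently waiting in the ready queue
print(age([3, 1, 4, 5, 2], [True, False, True, True, False]))
# [2, 1, 3, 4, 2]
```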
Round Robin (RR)
Round-robin scheduling is similar to FCFS scheduling, except that each process's use of the CPU is limited by a time slice called the time quantum.
When a process is given the CPU, a timer is set for one time quantum.
If the process finishes its burst before the timer expires, it releases the CPU just as in the ordinary FCFS algorithm.
If the timer goes off first, the process is preempted and moved to the back of the ready queue.

6.21
Round Robin (RR)
The ready queue is maintained as a circular queue, so
when all processes have had a turn, then the scheduler
gives the first process another turn, and so on.
RR scheduling can give the effect of all processes sharing
the CPU equally, although the average wait time can be
longer than with other scheduling algorithms.

Example of RR with Time Quantum = 4
Process   Burst Time
P1        24
P2        3
P3        3

The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

In this example the average wait time is (6 + 4 + 7)/3 ≈ 5.66 ms.
RR typically gives higher average turnaround time than SJF, but better response time.
q should be large compared to the context-switch time: q is usually 10 ms to 100 ms, while a context switch takes < 10 μs.
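The quantum-by-quantum bookkeeping above is easy to get wrong by hand; a short round-robin simulation in Python (the function name is my own), assuming all processes arrive at time 0:

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Round-robin waiting times, all processes arriving at time 0."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    waits = [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        for j in queue:       # everyone still queued waits while i runs
            waits[j] += run
        remaining[i] -= run
        if remaining[i] > 0:  # quantum expired: back of the queue
            queue.append(i)
    return waits

waits = rr_waiting_times([24, 3, 3], 4)  # P1, P2, P3, q = 4
print(waits)                    # [6, 4, 7]
print(sum(waits) / len(waits))  # 5.666...
```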
Time Quantum and Context Switch Time

Multilevel Queue

When processes can be readily categorized, multiple separate queues can be established, each implementing whatever scheduling algorithm is most appropriate for that type of job, and/or with different parameter settings.
Under this algorithm, jobs cannot move from queue to queue: once a job is assigned a queue, it stays in that queue until it finishes.
Each queue has its own scheduling algorithm, e.g.:
foreground – RR
background – FCFS

Multilevel Queue

Scheduling must also be done between the queues:
Fixed-priority scheduling: serve everything in the foreground queue, then the background queue. This risks starvation of background jobs.
Time slicing: each queue gets a certain share of CPU time, which it schedules among its own processes, e.g.:
80% to the foreground queue, scheduled with RR
20% to the background queue, scheduled with FCFS

Multilevel Queue Scheduling

Multilevel Feedback Queue
Multilevel feedback queue scheduling is similar to the ordinary multilevel queue scheduling described above, but jobs may be moved from one queue to another for a variety of reasons:
If a job's behavior changes between CPU-intensive and I/O-intensive, it can be switched to a more appropriate queue.
Aging: a job that has waited a long time can be bumped up into a higher-priority queue.

Multilevel Feedback Queue
Multilevel feedback queue scheduling is the most flexible,
because it can be tuned for any situation. But it is also
the most complex to implement because of all the
adjustable parameters. Some of the parameters which
define one of these systems include:
The number of queues.
The scheduling algorithm for each queue.
The methods used to transfer processes from one
queue to another.
The method used to determine which queue a process
enters initially.

Example of Multilevel Feedback Queue
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0 which is served FCFS
When it gains CPU, job receives 8 milliseconds
If it does not finish in 8 milliseconds, job is moved
to queue Q1
At Q1 job is again served FCFS and receives 16
additional milliseconds
If it still does not complete, it is preempted and
moved to queue Q2
Example of Multilevel Feedback Queue
(not required in the exam)

Multiple-Processor Scheduling
When multiple processors are available, scheduling becomes more complicated, because there is now more than one CPU that must be kept busy and in effective use at all times.
Load sharing revolves around balancing the load between the processors.
Multiprocessor systems may be heterogeneous (different kinds of CPUs) or homogeneous (all the same kind of CPU).

Approaches to Multiple-Processor Scheduling
Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing.
Symmetric multiprocessing (SMP) – the most common approach, in which each processor is self-scheduling; ready processes may sit in a common ready queue, or each processor may have its own private queue.

Processor Affinity
Processor affinity – the binding of a process or thread to a CPU, so that it will execute only on the designated CPU. There are two types:
Soft affinity: the system attempts to keep a process on the same processor but makes no guarantees.
Hard affinity: a process specifies that it is not to be moved between processors.

Multiple-Processor Scheduling
– Load Balancing
Load balancing attempts to keep workload evenly
distributed, so that one processor won't be sitting idle
while another is overloaded.
Balancing Methods:
Push migration is where the operating system checks
the load on each processor periodically. If there’s an
imbalance, some processes will be moved from one
processor onto another.
Pull migration is where a scheduler finds that there are
no more processes in the run queue for the processor.
In this case, it transfers a process onto its own queue
so it will have something to run.

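The push-migration check described above can be sketched in a few lines. This is a toy model, not a real scheduler (the function name and threshold are my own; real schedulers weigh load, affinity, and cache warmth, not just queue length):

```python
def rebalance(queues, threshold=2):
    """Push-migration sketch: move tasks from the busiest run queue to the
    idlest one whenever the imbalance exceeds a threshold."""
    busiest = max(queues, key=len)
    idlest = min(queues, key=len)
    while len(busiest) - len(idlest) > threshold:
        idlest.append(busiest.pop())   # migrate one task
    return queues

# Three per-CPU run queues: one overloaded, one idle, one lightly loaded
qs = rebalance([[1, 2, 3, 4, 5], [], [6]])
print([len(q) for q in qs])  # [3, 2, 1]
```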
Multicore Processors
A recent trend is to place multiple processor cores on the same physical chip, which is faster and consumes less power than separate chips.
It is also common for each core to provide multiple hardware threads, so that a single chip can run multiple kernel threads concurrently.

Multi-threading on Multi-core Processors
(not required in the exam)