
Lecture 4: Process Scheduling


Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Thread Scheduling
Multiple-Processor Scheduling

2
Objectives
To introduce process scheduling, which is the basis for
multiprogrammed operating systems
To describe various process scheduling algorithms

3
Basic Concepts
Maximum CPU utilization obtained with multiprogramming
CPU–I/O Burst Cycle – Process execution consists of a cycle
of CPU execution and I/O wait
CPU burst distribution

4
Alternating Sequence of CPU And I/O Bursts

I/O bursts are typically short, but occur very frequently.

5
CPU Scheduler
Selects from among the processes in memory that are ready to
execute and allocates the CPU to one of them for execution.
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling under 1 and 4 is nonpreemptive
(uninterruptible – the running process keeps the CPU until it terminates or blocks)
All other scheduling is preemptive (the running process can be interrupted)
6
Dispatcher (Scheduler)

7
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – number of processes that complete their execution
per time unit
Turnaround time – amount of time to execute a particular process
(less is better)
Waiting time – amount of time a process has been waiting in the
ready queue (less is better)
Response time – amount of time it takes from when a request was
submitted until the first response is produced, not output (for time-
sharing environment) (less is better)

8
Scheduling Algorithm Optimization Criteria
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time

9
CPU Scheduling
General Formula
Formula
Turnaround Time (TAT) = (Finish Time – Arrival Time)
Average Turnaround Time = Sum of TAT / Number of processes
Waiting Time (WT) = (Turnaround Time – CPU Burst Time)
Average Waiting Time = Sum of WT / Number of processes

10
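The formulas above can be checked with a short worked example; the numbers here (FT = 13, AT = 1, BT = 5) are made up for illustration only.

```python
# Worked example of the slide's formulas for one hypothetical process
# (FT = 13 ns, AT = 1 ns, BT = 5 ns are illustrative values).
finish_time, arrival_time, burst_time = 13, 1, 5

tat = finish_time - arrival_time   # Turnaround Time = FT - AT
wt = tat - burst_time              # Waiting Time = TAT - BT

print(tat, wt)  # 12 7
```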
First-Come, First-Served (FCFS) Scheduling
Non-preemptive.
Handles jobs according to their arrival time --the earlier they
arrive, the sooner they’re served.
Simple algorithm to implement -- uses a FIFO queue.
Good for batch systems; not so good for interactive ones.
Turnaround time is unpredictable.

12
First-Come, First-Served (FCFS) Scheduling
Process   Arrival Time   Burst Time   Priority
A         0              8            5
B         1              5            3
C         2              3            1
D         3              1            4
E         4              7            2

Assume that the times given are in nanoseconds (ns).

The FCFS Gantt chart:
A (0–8) | B (8–13) | C (13–16) | D (16–17) | E (17–24)

13
First-Come, First-Served (FCFS) Scheduling

FT = finish time, AT = arrival time, TAT = turnaround time, BT = burst time, WT = waiting time

Process   FT   AT   TAT   BT   WT
A         8    0    8     8    0
B         13   1    12    5    7
C         16   2    14    3    11
D         17   3    14    1    13
E         24   4    20    7    13

Average TAT = (8+12+14+14+20) / 5 = 13.6 ns
Average WT  = (0+7+11+13+13) / 5 = 8.8 ns

14
Shortest Job Next (SJN) Scheduling
Non-preemptive.
Handles jobs based on the length of their CPU cycle time.
Uses these lengths to schedule the process with the shortest time first.
Optimal – gives minimum average waiting time for a given set of
processes.
Optimal only when all jobs are available at the same time and the
CPU estimates are available and accurate.
Doesn’t work in interactive systems because users don’t
estimate in advance the CPU time required to run their jobs.

15
Shortest Job Next (SJN) Scheduling
Process   Arrival Time   Burst Time   Priority
A         0              8            5
B         1              5            3
C         2              3            1
D         3              1            4
E         4              7            2

Assume that the times given are in nanoseconds (ns).

The SJN Gantt chart:
A (0–8) | D (8–9) | C (9–12) | B (12–17) | E (17–24)

16
Shortest Job Next (SJN) Scheduling

FT = finish time, AT = arrival time, TAT = turnaround time, BT = burst time, WT = waiting time

Process   FT   AT   TAT   BT   WT
A         8    0    8     8    0
B         17   1    16    5    11
C         12   2    10    3    7
D         9    3    6     1    5
E         24   4    20    7    13

Average TAT = (8+16+10+6+20) / 5 = 12 ns
Average WT  = (0+11+7+5+13) / 5 = 7.2 ns

17
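The SJN table can be reproduced the same way; `sjn` is an illustrative helper name, and the process set is the one from the slide.

```python
# A non-preemptive SJN sketch: always pick the shortest ready job.
def sjn(procs):
    """Shortest ready job first; return {name: (FT, TAT, WT)}."""
    pending = sorted(procs, key=lambda p: p[1])  # by arrival time
    time, done = 0, {}
    while pending:
        # jobs already arrived; if none, jump to the earliest arrival
        ready = [p for p in pending if p[1] <= time] or [pending[0]]
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst
        pending.remove((name, arrival, burst))
        time = max(time, arrival) + burst
        tat = time - arrival
        done[name] = (time, tat, tat - burst)
    return done

res = sjn([("A", 0, 8), ("B", 1, 5), ("C", 2, 3), ("D", 3, 1), ("E", 4, 7)])
```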
Shortest Remaining Time (SRT) Scheduling
Preemptive version of the SJN algorithm.
Processor allocated to job closest to completion.
This job can be preempted if a newer job in READY queue has a
“time to completion” that's shorter.
Can’t be implemented in interactive systems -- requires advance
knowledge of the CPU time required to finish each job.
SRT involves more overhead than SJN.
OS monitors CPU time for all jobs in READY queue and
performs “context switching”.

18
Shortest Remaining Time (SRT) Scheduling
Context Switching Is Required by All Preemptive
Algorithms
When Job A is preempted
All of its processing information must be saved in its PCB for later
(when Job A’s execution is continued).
Contents of Job B’s PCB are loaded into the appropriate registers so it can
start running (context switch).
Later when Job A is once again assigned to processor,
another context switch is performed.
Info from pre-empted job is stored in its PCB.
Contents of Job A’s PCB are loaded into appropriate registers.
19
Shortest Remaining Time (SRT) Scheduling
Process   Arrival Time   Burst Time   Priority
A         0              8            5
B         1              5            3
C         2              3            1
D         3              1            4
E         4              7            2

Assume that the times given are in nanoseconds (ns).

The SRT Gantt chart:
A (0–1) | B (1–2) | C (2–3) | D (3–4) | C (4–6) | B (6–10) | A (10–17) | E (17–24)

20
Shortest Remaining Time (SRT) Scheduling

FT = finish time, AT = arrival time, TAT = turnaround time, BT = burst time, WT = waiting time

Process   FT   AT   TAT   BT   WT
A         17   0    17    8    9
B         10   1    9     5    4
C         6    2    4     3    1
D         4    3    1     1    0
E         24   4    20    7    13

Average TAT = (17+9+4+1+20) / 5 = 10.2 ns
Average WT  = (9+4+1+0+13) / 5 = 5.4 ns

21
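Because SRT re-evaluates at every arrival, a simple way to sketch it is a 1-ns-step simulation; `srt` is an illustrative name, and ties go to the earlier-arriving job (dict insertion order).

```python
# A 1-ns-step SRT sketch matching the slide's table.
def srt(procs):
    """Preemptive shortest-remaining-time; return {name: finish time}."""
    arrival = {n: a for n, a, b in procs}
    remaining = {n: b for n, a, b in procs}
    time, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1                      # CPU idle until next arrival
            continue
        job = min(ready, key=lambda n: remaining[n])
        remaining[job] -= 1                # run the closest-to-done job 1 ns
        time += 1
        if remaining[job] == 0:
            del remaining[job]
            finish[job] = time
    return finish

fin = srt([("A", 0, 8), ("B", 1, 5), ("C", 2, 3), ("D", 3, 1), ("E", 4, 7)])
```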
Priority Scheduling
Can be preemptive or non preemptive
Non preemptive algorithm which is commonly used in batch
systems
Preemptive algorithm which is commonly used in time critical
systems
Gives preferential treatment to important jobs
Allows the program with the highest priority to be processed first and
these high priority jobs are not interrupted until their CPU cycles (run
times) are completed or a natural wait occurs
If two or more jobs have equal priority, then the FCFS policy is used within
the same priority group
22
Non- Preemptive Priority Scheduling
Process   Arrival Time   Burst Time   Priority
A         0              8            5
B         1              5            3
C         2              3            1
D         3              1            4
E         4              7            2

Assume that the times given are in nanoseconds (ns), and that a smaller
priority number means a higher priority.

The non-preemptive priority Gantt chart:
A (0–8) | C (8–11) | E (11–18) | B (18–23) | D (23–24)

23
Non- Preemptive Priority Scheduling

FT = finish time, AT = arrival time, TAT = turnaround time, BT = burst time, WT = waiting time

Process   FT   AT   TAT   BT   WT
A         8    0    8     8    0
B         23   1    22    5    17
C         11   2    9     3    6
D         24   3    21    1    20
E         18   4    14    7    7

Average TAT = (8+22+9+21+14) / 5 = 14.8 ns
Average WT  = (0+17+6+20+7) / 5 = 10 ns

24
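The non-preemptive variant can be sketched like SJN, but choosing by priority instead of burst length; tuples here are (name, arrival, burst, priority) with a smaller number meaning higher priority, and `priority_np` is an illustrative name.

```python
# A non-preemptive priority sketch matching the slide's table.
def priority_np(procs):
    """Highest-priority ready job runs to completion; {name: (FT, TAT, WT)}."""
    pending = sorted(procs, key=lambda p: p[1])   # by arrival
    time, done = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time] or [pending[0]]
        job = min(ready, key=lambda p: p[3])      # smallest number = highest priority
        pending.remove(job)
        name, arrival, burst, _ = job
        time = max(time, arrival) + burst         # runs to completion
        tat = time - arrival
        done[name] = (time, tat, tat - burst)
    return done

res = priority_np([("A", 0, 8, 5), ("B", 1, 5, 3), ("C", 2, 3, 1),
                   ("D", 3, 1, 4), ("E", 4, 7, 2)])
```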
Preemptive Priority Scheduling
Process   Arrival Time   Burst Time   Priority
A         0              8            5
B         1              5            3
C         2              3            1
D         3              1            4
E         4              7            2

Assume that the times given are in nanoseconds (ns).

The preemptive priority Gantt chart:
A (0–1) | B (1–2) | C (2–5) | E (5–12) | B (12–16) | D (16–17) | A (17–24)

25
Preemptive Priority Scheduling

FT = finish time, AT = arrival time, TAT = turnaround time, BT = burst time, WT = waiting time

Process   FT   AT   TAT   BT   WT
A         24   0    24    8    16
B         16   1    15    5    10
C         5    2    3     3    0
D         17   3    14    1    13
E         12   4    8     7    1

Average TAT = (24+15+3+14+8) / 5 = 12.8 ns
Average WT  = (16+10+0+13+1) / 5 = 8 ns

26
Round Robin (RR) Scheduling
Preemptive
Used extensively in interactive systems because it’s easy to
implement
Isn’t based on job characteristics but on a predetermined slice
of time that’s given to each job
Ensures CPU is equally shared among all active processes and
isn’t monopolized by any one job
Time slice is called a time quantum
Size is crucial to system performance (typically 100 ms to 1–2 secs)

27
Round Robin (RR) Scheduling
If Job’s CPU Cycle < Time Quantum
If it is the job’s last CPU cycle and the job is finished, then all resources
allocated to it are released and the completed job is returned to the user.
If the CPU cycle was interrupted by an I/O request, then info about the
job is saved in its PCB and it is linked at the end of the appropriate I/O
queue.
Later, when the I/O request has been satisfied, the job is returned to the end
of the READY queue to await allocation of the CPU.
Time Slices Should Be …
Long enough to allow 80% of CPU cycles to run to completion.
Flexible – depends on the system.

28
Round Robin (RR) Scheduling
Process   Arrival Time   Burst Time   Priority
A         0              8            5
B         1              5            3
C         2              3            1
D         3              1            4
E         4              7            2

Assume that the times given are in nanoseconds (ns).

The RR Gantt chart (given: time quantum = 3 ns):
A (0–3) | B (3–6) | C (6–9) | D (9–10) | A (10–13) | E (13–16) | B (16–18) | A (18–20) | E (20–23) | E (23–24)

29
Round Robin (RR) Scheduling

FT = finish time, AT = arrival time, TAT = turnaround time, BT = burst time, WT = waiting time

Process   FT   AT   TAT   BT   WT
A         20   0    20    8    12
B         18   1    17    5    12
C         9    2    7     3    4
D         10   3    7     1    6
E         24   4    20    7    13

Average TAT = (20+17+7+7+20) / 5 = 14.2 ns
Average WT  = (12+12+4+6+13) / 5 = 9.4 ns

30
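The RR table can be reproduced with a FIFO queue; this sketch assumes (as the table does) that new arrivals join the READY queue before a preempted job re-enters it, and `rr` is an illustrative name.

```python
from collections import deque

# A round-robin sketch with quantum = 3 ns, matching the slide's table.
def rr(procs, quantum=3):
    """Each job gets at most `quantum` ns per turn; return {name: finish time}."""
    procs = sorted(procs, key=lambda p: p[1])      # by arrival time
    remaining = {n: b for n, a, b in procs}
    queue, finish, i, time = deque(), {}, 0, 0
    while i < len(procs) or queue:
        if not queue:
            time = max(time, procs[i][1])          # CPU idle until next arrival
        while i < len(procs) and procs[i][1] <= time:
            queue.append(procs[i][0]); i += 1
        job = queue.popleft()
        run = min(quantum, remaining[job])
        time += run
        remaining[job] -= run
        while i < len(procs) and procs[i][1] <= time:
            queue.append(procs[i][0]); i += 1      # arrivals during the slice
        if remaining[job]:
            queue.append(job)                      # preempted: back of the queue
        else:
            finish[job] = time
    return finish

fin = rr([("A", 0, 8), ("B", 1, 5), ("C", 2, 3), ("D", 3, 1), ("E", 4, 7)])
```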
Time Quantum and Context Switch Time

31
Turnaround Time Varies With The Time
Quantum

32
Multilevel Queue
Ready queue is partitioned into separate queues:
foreground (interactive) and background (batch)
Each queue has its own scheduling algorithm
foreground – RR
background – FCFS
Scheduling must be done between the queues
Fixed priority scheduling; (i.e., serve all from foreground
then from background). Possibility of starvation.
Time slice – each queue gets a certain amount of CPU time
which it can schedule amongst its processes; i.e., 80% to
foreground in RR and 20% to background in FCFS
33
Multilevel Queue Scheduling

34
Multilevel Feedback Queue
A process can move between the various queues; aging can be
implemented this way
Multilevel-feedback-queue scheduler defined by the following
parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will enter
when that process needs service
35
Example of Multilevel Feedback Queue
Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR time quantum 16 milliseconds
 Q2 – FCFS

Scheduling
 A new job enters queue Q0 which is served RR. When it gains CPU, job
receives 8 milliseconds. If it does not finish in 8 milliseconds, job is
moved to queue Q1.
 At Q1 job is again served RR and receives 16 additional milliseconds. If
it still does not complete, it is preempted and moved to queue Q2.

36
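The demotion rule above can be sketched for a single job as follows; the helper name `mlfq_visits` and the single-job view are illustrative simplifications, not from the lecture.

```python
# Sketch of the slide's three-queue setup: 8 ms in Q0, 16 more in Q1,
# then run to completion in Q2 (FCFS).
def mlfq_visits(burst):
    """Return [(queue level, CPU time used there)] until the job finishes."""
    quanta = [8, 16, None]               # None = FCFS, run to completion
    visits, left = [], burst
    for level, q in enumerate(quanta):
        used = left if q is None else min(q, left)
        visits.append((level, used))
        left -= used
        if left == 0:                    # finished before being demoted further
            break
    return visits

print(mlfq_visits(30))  # [(0, 8), (1, 16), (2, 6)]
print(mlfq_visits(5))   # finishes within Q0: [(0, 5)]
```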
Multilevel Feedback Queues

37
Thread Scheduling
Distinction between user-level and kernel-level threads
Many-to-one and many-to-many models, thread library
schedules user-level threads to run on LWP
 Known as process-contention scope (PCS) since
scheduling competition is within the process

Kernel thread scheduled onto an available CPU is system-contention
scope (SCS) – competition among all threads in the system

38
Multiple-Processor Scheduling
 CPU scheduling more complex when multiple CPUs are available
 Homogeneous processors within a multiprocessor
 Asymmetric multiprocessing – only one processor accesses the system data
structures, alleviating the need for data sharing
 Symmetric multiprocessing (SMP) – each processor is self-scheduling, all
processes in common ready queue, or each has its own private queue of ready
processes
 Processor affinity – process has affinity for processor on which it is currently
running
 soft affinity - natural affinity, is the tendency of a scheduler to try to keep
processes on the same CPU as long as possible. It is merely an attempt; if it is
ever infeasible, the processes certainly will migrate to another processor.
 hard affinity - is what a CPU affinity system call provides. It is a requirement,
and processes must adhere to a specified hard affinity. If a processor is bound
to CPU zero, for example, then it can run only on CPU zero.
39
Multicore Processors
Recent trend to place multiple processor cores on same
physical chip
Faster and consume less power
Multiple threads per core also growing
Takes advantage of a memory stall to make progress on
another thread while the memory retrieval happens

40
Multithreaded Multicore System

41
End of Lecture 4
Slides adapted from the book:
Abraham Silberschatz, Peter Baer Galvin, Greg Gagne,
“Operating System Concepts”, 9/E, John Wiley & Sons.
