Task Categories
Invocation
Periodic (time-triggered)
Aperiodic (event-triggered)
Creation
Static
Dynamic
Multi-Tasking System
Preemptive: a higher-priority process can take control of the processor from a lower-priority one.
Non-preemptive: each task controls the CPU for as long as it needs it.
Why do we need scheduling?
Each computation (task) we want to execute needs resources.
Resources: processor, memory segments, communication, I/O devices, etc.
The computations must be executed in a particular order (relative to each other and/or relative to time).
The possible orderings are either completely or statistically known a priori (described).
Scheduling: assignment of the processor to computations.
Allocation: assignment of other resources to computations.
Real-time Scheduling Taxonomy
Job (J_ij): unit of work, scheduled and executed by the system.
Jobs repeated at regular or semi-regular intervals are modeled as periodic.
Task (T_i): a set of related jobs.
Jobs are scheduled and allocated resources based on a set of scheduling algorithms and access-control protocols.
Scheduler: the module implementing the scheduling algorithms.
Schedule: an assignment of all jobs to the available processors, produced by the scheduler.
Valid schedule: all jobs meet their deadlines.
Clock-driven scheduling vs. event(priority)-driven scheduling
Fixed-priority vs. dynamic-priority assignment
Scheduling Periodic Tasks
In hard real-time systems, the set of tasks is known a priori.
Task T_i is a series of periodic jobs J_ij. Each task has the following parameters:
t_i - period, the minimum inter-release interval between jobs in task T_i.
c_i - maximum execution time for jobs in task T_i.
r_ij - release time of the j-th job in task T_i (J_ij in T_i).
phi_i - phase of task T_i, equal to r_i1.
u_i - utilization of task T_i: u_i = c_i / t_i.
In addition, the following parameters apply to a set of tasks:
H - hyperperiod = least common multiple of the periods: H = lcm(t_i) for all i.
U - total utilization = sum of u_i over all tasks.
U_s - schedulable utilization of an algorithm.
If U < U_s, the set of tasks is guaranteed to be schedulable.
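As a small illustration of these definitions, the sketch below computes u_i, U, and H for a periodic task set; the task values and the bound U_s passed in are made up for the example, not taken from the slides.

```python
from math import lcm
from dataclasses import dataclass

@dataclass
class PeriodicTask:
    name: str
    c: float  # c_i: maximum (worst-case) execution time
    t: int    # t_i: period / minimum inter-release interval

    @property
    def u(self) -> float:
        # u_i = c_i / t_i
        return self.c / self.t

def total_utilization(tasks) -> float:
    # U = sum of u_i over all tasks
    return sum(task.u for task in tasks)

def hyperperiod(tasks) -> int:
    # H = lcm(t_i) over all tasks (integer periods assumed)
    return lcm(*(task.t for task in tasks))

def guaranteed_schedulable(tasks, u_s: float) -> bool:
    # If U < U_s, the task set is guaranteed schedulable by that algorithm.
    return total_utilization(tasks) < u_s

tasks = [PeriodicTask("T1", c=1, t=4), PeriodicTask("T2", c=2, t=6)]
print(total_utilization(tasks))              # 0.5833...
print(hyperperiod(tasks))                    # 12
print(guaranteed_schedulable(tasks, 0.693))  # True
```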
Real-Time Scheduling Algorithms
Fixed-priority algorithms: Rate Monotonic scheduling, Deadline Monotonic scheduling
Dynamic-priority algorithms: Earliest Deadline First, Least Laxity First
Hybrid algorithms: Maximum Urgency First
Scheduling Algorithm
Static vs. Dynamic
Static Scheduling:
All scheduling decisions at compile time.
Temporal task structure fixed.
Precedence and mutual exclusion satisfied by
the schedule (implicit synchronization).
One solution is sufficient.
Finding any feasible schedule is itself a sufficient schedulability test.
Benefits
Simplicity
Scheduling Algorithm
Static vs. Dynamic
Dynamic Scheduling:
All scheduling decisions at run time.
Based upon set of ready tasks.
Mutual exclusion and synchronization enforced
by explicit synchronization constructs.
Benefits
Flexibility.
Only the resources that are actually used are claimed.
Disadvantages
Guarantees are difficult to provide.
Computational resources are required for scheduling at run time.
Scheduling Algorithm
Preemptive vs. Nonpreemptive
Preemptive Scheduling:
Event driven.
Each event causes interruption of running tasks.
Choice of running tasks reconsidered after each
interruption.
Benefits:
Can minimize response time to events.
Disadvantages:
Requires considerable computational resources for
scheduling
Scheduling Algorithm
Preemptive vs. Nonpreemptive
Nonpreemptive Scheduling:
Tasks remain active until completion.
Scheduling decisions are only made after task completion.
Benefits:
Reasonable when
task execution times ~= task switching times.
Less computational resources needed for
scheduling
Disadvantages:
Can leads to starvation (not met the deadline)
especially for those real time tasks ( or high
priority tasks).
Rate Monotonic scheduling
Priority assignment based on the rates of tasks.
The higher the rate (the shorter the period), the higher the priority.
Schedulable utilization bound (Liu and Layland): U = sum of C_i / T_i <= n(2^(1/n) - 1), where C_i is the computation time and T_i is the release period; the bound approaches ln 2 ~ 0.693 as n grows.
If U < 0.693, schedulability is guaranteed.
Tasks may still be schedulable even if U > 0.693.
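The following sketch (with illustrative task values, not taken from the slides) applies the Liu and Layland test: a set of n periodic tasks is guaranteed schedulable under RM if U = sum(C_i / T_i) <= n(2^(1/n) - 1).

```python
def rm_bound(n: int) -> float:
    # Liu-Layland schedulable utilization for n tasks: n * (2^(1/n) - 1)
    return n * (2 ** (1.0 / n) - 1)

def rm_guaranteed(tasks) -> bool:
    # tasks: list of (C_i, T_i) pairs; this is a sufficient, not necessary, test
    u = sum(c / t for c, t in tasks)
    return u <= rm_bound(len(tasks))

# The bound approaches ln 2 ~ 0.693 as n grows:
for n in (1, 2, 3, 10, 100):
    print(n, round(rm_bound(n), 4))      # 1.0, 0.8284, 0.7798, 0.7177, 0.6956

print(rm_guaranteed([(1, 4), (2, 6)]))   # illustrative task set -> True
```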
RM example
Process   Execution Time   Period
P1        1                8
P2        2                5
P3        2                10
The utilization will be:
U = 1/8 + 2/5 + 2/10 = 0.125 + 0.4 + 0.2 = 0.725
The theoretical limit for three processes, under which we can conclude that the system is schedulable, is:
U_s = 3 * (2^(1/3) - 1) ~ 0.7798
Since 0.725 < 0.7798, the system is schedulable!
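To double-check the example, here is a hedged sketch of a unit-time preemptive Rate Monotonic simulation of P1, P2, and P3 over one hyperperiod, assuming deadlines equal to periods; no deadline should be missed.

```python
from math import lcm

tasks = {"P1": (1, 8), "P2": (2, 5), "P3": (2, 10)}      # name -> (C, T)
H = lcm(*(t for _, t in tasks.values()))                 # hyperperiod = 40
order = sorted(tasks, key=lambda name: tasks[name][1])   # RM: shorter period first
remaining = {name: 0 for name in tasks}                  # work left for the current job
missed = []

for now in range(H):
    for name, (c, t) in tasks.items():
        if now % t == 0:                  # a new job of this task is released
            if remaining[name] > 0:       # previous job did not finish by its deadline
                missed.append((name, now))
            remaining[name] = c
    for name in order:                    # run the highest-priority ready task
        if remaining[name] > 0:
            remaining[name] -= 1
            break

print("missed deadlines:", missed)        # expected: []
```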
Deadline Monotonic scheduling
Priority assignment based on the relative deadlines of tasks.
The shorter the relative deadline, the higher the priority.
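A minimal sketch of the difference in priority assignment (task values are hypothetical): Rate Monotonic orders tasks by period, Deadline Monotonic by relative deadline, so the two only differ when D_i != T_i.

```python
tasks = [
    {"name": "T1", "T": 50, "D": 20},   # hypothetical periods and relative deadlines
    {"name": "T2", "T": 40, "D": 40},
    {"name": "T3", "T": 30, "D": 25},
]
rm_order = sorted(tasks, key=lambda x: x["T"])  # shorter period = higher priority
dm_order = sorted(tasks, key=lambda x: x["D"])  # shorter relative deadline = higher priority
print([t["name"] for t in rm_order])            # ['T3', 'T2', 'T1']
print([t["name"] for t in dm_order])            # ['T1', 'T3', 'T2']
```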
Earliest Deadline First (EDF)
Dynamic Priority Scheduling
Priorities are assigned according to deadlines:
Earlier deadline, higher priority
Later deadline, lower priority
The first and most widely used dynamic priority-driven scheduling algorithm.
Effective for both preemptive and non-preemptive
scheduling.
Two Periodic Tasks
Execution profile of two periodic tasks
Process A: arrives at 0, 20, 40, ...; execution time 10; must end by 20, 40, 60, ...
Process B: arrives at 0, 50, 100, ...; execution time 25; must end by 50, 100, 150, ...
Question: Is there enough time for the execution of the two periodic tasks?
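One way to answer the question is to note that the total utilization is 10/20 + 25/50 = 1.0 and then simulate EDF over one hyperperiod (lcm(20, 50) = 100). The sketch below assumes unit-time preemptive execution and deadlines equal to the periods, as in the profile above.

```python
from math import lcm

profile = {"A": (10, 20), "B": (25, 50)}   # name -> (execution time, period = deadline)
H = lcm(*(p for _, p in profile.values()))
jobs = {}                                   # name -> [remaining work, absolute deadline]
missed = []

for now in range(H):
    for name, (c, p) in profile.items():
        if now % p == 0:                    # new job released
            if name in jobs and jobs[name][0] > 0:
                missed.append((name, now))  # previous job missed its deadline
            jobs[name] = [c, now + p]
    ready = [n for n, (rem, _) in jobs.items() if rem > 0]
    if ready:                               # EDF: run the earliest absolute deadline
        run = min(ready, key=lambda n: jobs[n][1])
        jobs[run][0] -= 1

print("missed deadlines:", missed)          # expected: [] (both tasks just fit)
```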
Five Periodic Tasks
Execution profile of five periodic tasks
Process   Arrival Time   Execution Time   Starting Deadline
A         10             20               110
B         20             20               20
C         40             20               50
D         50             20               90
E         60             20               70
Least Laxity First (LLF)
Dynamic preemptive scheduling with dynamic
priorities
Laxity: the difference between the time until a task's completion deadline and its remaining processing-time requirement.
A laxity is assigned to each task in the system, and minimum-laxity tasks are executed first.
Larger overhead than EDF, due to the higher number of context switches caused by laxity changes at run time.
Less studied than EDF for this reason.
Least Laxity First (Cont.)
LLF considers the execution time of a task, which
EDF does not.
LLF assigns higher priority to a task with the
least laxity.
A task with zero laxity must be scheduled right
away and executed without preemption or it will
fail to meet its deadline.
A negative laxity indicates that the task will miss its deadline, no matter when it is picked up for execution.
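A small sketch (with illustrative values) of the laxity computation and the LLF selection rule described above:

```python
def laxity(now: int, deadline: int, remaining: int) -> int:
    # laxity = time until the deadline minus the remaining execution time
    return (deadline - now) - remaining

def llf_pick(now, ready):
    # ready: list of (name, absolute deadline, remaining execution time)
    for name, d, rem in ready:
        l = laxity(now, d, rem)
        if l < 0:
            print(f"{name}: negative laxity, the deadline will be missed")
        elif l == 0:
            print(f"{name}: zero laxity, must run immediately without preemption")
    return min(ready, key=lambda t: laxity(now, t[1], t[2]))[0]

# Laxities at time 10: T1 = 15, T2 = 1, T3 = 10 -> T2 is selected
print(llf_pick(10, [("T1", 30, 5), ("T2", 25, 14), ("T3", 40, 20)]))
```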
Least Laxity First Example
Maximum Urgency First Algorithm
This algorithm is a combination of fixed and
dynamic priority scheduling, also called mixed
priority scheduling.
With this algorithm, each task is given an urgency
which is defined as a combination of two fixed
priorities (criticality and user priority) and a
dynamic priority that is inversely proportional to
the laxity.
The MUF algorithm assigns priorities in two phases
Phase One concerns the assignment of static
priorities to tasks
Phase Two deals with the run-time behavior of the
MUF scheduler
Maximum Urgency First Algorithm phase 1
The first phase consists of these steps:
1) It sorts the tasks from the shortest period to the
longest period. Then it defines the critical set as
the first N tasks such that the total CPU load
factor does not exceed 100%. These tasks are
guaranteed not to fail even during a transient
overload.
2) All tasks in the critical set are assigned high
criticality. The remaining tasks are considered to
have low criticality.
3) Every task in the system is assigned an optional unique user priority.
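A hedged sketch of phase one (task values are made up): sort the tasks by period, then mark as highly critical the prefix whose cumulative CPU load C_i/T_i stays within 100%.

```python
def assign_criticality(tasks):
    # tasks: list of dicts with "name", "C" (execution time) and "T" (period)
    load = 0.0
    for task in sorted(tasks, key=lambda x: x["T"]):   # shortest period first
        load += task["C"] / task["T"]
        task["critical"] = load <= 1.0                 # critical set: total load <= 100%
    return tasks

tasks = assign_criticality([
    {"name": "T1", "C": 2, "T": 5},
    {"name": "T2", "C": 3, "T": 10},
    {"name": "T3", "C": 6, "T": 20},
    {"name": "T4", "C": 4, "T": 25},
])
print([(t["name"], t["critical"]) for t in tasks])
# [('T1', True), ('T2', True), ('T3', True), ('T4', False)]
```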
Maximum Urgency First Algorithm phase 2
In the second phase, the MUF scheduler follows
an algorithm to select a task for execution.
This algorithm is executed whenever a new task arrives in the ready queue.
The algorithm is as follows:
1) If there is only one highly critical task, pick it up
and execute it.
2) If there is more than one highly critical task, select the one with the highest dynamic priority. Here, the task with the least laxity is considered to be the one with the highest priority.
3) If there is more than one task with the same
laxity, select the one with the highest user
priority.
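A minimal sketch of this selection rule (illustrative data; the fallback when no highly critical task is ready is an assumption, since the slides do not specify it):

```python
def muf_select(now, ready):
    # ready: dicts with "name", "critical", "deadline", "remaining", "user_priority"
    critical = [t for t in ready if t["critical"]]
    candidates = critical if critical else ready   # fallback behavior is an assumption
    def urgency_key(t):
        laxity = (t["deadline"] - now) - t["remaining"]
        # least laxity first; ties broken by higher user priority
        return (laxity, -t["user_priority"])
    return min(candidates, key=urgency_key)["name"]

ready = [
    {"name": "T1", "critical": True,  "deadline": 30, "remaining": 5, "user_priority": 1},
    {"name": "T2", "critical": True,  "deadline": 20, "remaining": 8, "user_priority": 3},
    {"name": "T3", "critical": False, "deadline": 15, "remaining": 2, "user_priority": 5},
]
print(muf_select(10, ready))   # T2: least laxity among the highly critical tasks
```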