
REAL TIME OPERATING SYSTEMS

MODULE 4
Task constraints, Task scheduling: Aperiodic task scheduling: EDD, EDF, LDF, EDF
with precedence constraints. Periodic task scheduling: Rate monotonic and Deadline monotonic,
Real-time kernel: Structure, State transition diagram, Kernel primitives.
Types of task constraints:
Typical constraints that can be specified on real-time tasks are of three classes:
1. Timing constraints
2. Precedence relations
3. Mutual exclusion constraints on shared resources.

Timing constraints
Real-time systems are characterized by computational activities with stringent timing
constraints that must be met in order to achieve the desired behavior. A typical timing constraint
on a task is the deadline, which represents the time before which a process should complete its
execution without causing any damage to the system.
Depending on the consequences of a missed deadline, real-time tasks are usually
distinguished in two classes:
Hard: A task is said to be hard if a completion after its deadline can cause
catastrophic consequences on the system. In this case, any instance of the task should a priori be
guaranteed in the worst-case scenario.
Soft: A task is said to be soft if missing its deadline decreases the performance of the system
but does not jeopardize its correct behavior.
Characteristics of a real-time task
A real-time task Ji can be characterized by the following parameters:
1. Arrival time (ai): the time at which a task becomes ready for execution; it is also
referred to as request time or release time and indicated by ri.
2. Computation time (Ci): the time the processor needs to execute the task without
interruption.
3. Deadline (di): the time before which a task should complete to avoid damage to the
system.
4. Start time (si): the time at which a task starts its execution.
5. Finishing time (fi): the time at which a task finishes its execution.
6. Criticalness: a parameter related to the consequences of missing the deadline.
7. Value (Vi): the relative importance of the task with respect to the other tasks in the
system.
8. Lateness (Li): Li = fi - di is the delay of a task's completion with respect to its
deadline; note that if a task completes before its deadline, its lateness is negative.
9. Tardiness or Exceeding time (Ei): Ei = max(0, Li) is the time a task stays active after
its deadline.
10. Laxity or Slack time (Xi): Xi = di - ai - Ci is the maximum time a task can be delayed
on its activation and still complete within its deadline.

Periodic or aperiodic tasks


Periodic tasks consist of an infinite sequence of identical activities, called instances or jobs, that
are regularly activated at a constant rate. A periodic task will be denoted by τi, and an aperiodic
job by Ji. The activation time of the first periodic instance is called the phase.
If φi is the phase of the periodic task τi, the activation time of the kth instance is given
by φi + (k - 1)Ti,
where Ti is called the period of the task.
The parameters Ci, Ti and Di are considered to be constant for each instance. Aperiodic
tasks also consist of an infinite sequence of identical activities, but their activations are not
regular.

Precedence relations
In certain applications, computational activities cannot be executed in arbitrary order but
have to respect some precedence relations defined at the design stage. Such precedence relations
are usually described through a directed acyclic graph G, where tasks are represented by nodes
and precedence relations by arrows.

Figure illustrates a directed acyclic graph that describes the precedence constraints among five
tasks. From the graph structure we observe that task J1 is the only one that can start executing
since it does not have predecessors. Tasks with no predecessors are called beginning tasks. As
J1 is completed, either J2 or J3 can start. Task J4 can start only when J2 is completed, whereas
J5 must wait for the completion of both J2 and J3. Tasks with no successors, such as J4 and J5, are called
ending tasks.
Resource constraints

From a process point of view, a resource is any software structure that can be used by the process
to advance its execution. Typically, a resource can be a data structure, a set of variables, a main
memory area, a file, a piece of program, or a set of registers of a peripheral device. A resource
dedicated to a particular process is said to be private, whereas a resource that can be used by more
tasks is called a shared resource. To maintain data consistency, many shared resources do not
allow simultaneous accesses but require mutual exclusion among competing tasks; such resources
are called exclusive resources.
Let R be an exclusive resource shared by tasks Ja and Jb. If A is the operation performed
on R by Ja, and B is the operation performed on R by Jb, then A and B must never be executed
at the same time. A piece of code executed under mutual exclusion constraints is called a critical
section. Synchronization mechanisms can be used by tasks to create critical sections of code.

Consider two tasks J1 and J2 that share an exclusive resource R on which two
operations (such as insert and remove) are defined. The
code implementing such operations is thus a critical section that must be executed in mutual
exclusion. If a binary semaphore s is used for this purpose, then each critical section must begin
with a wait(s) primitive and must end with a signal(s) primitive.
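
As a rough sketch of this pattern (not tied to any particular kernel API), the following Python
fragment uses threading.Semaphore to play the role of the binary semaphore s; the tasks J1 and
J2 and the shared list R are purely illustrative:

import threading

R = []                           # shared exclusive resource (here, a simple list)
s = threading.Semaphore(1)       # binary semaphore protecting R

def task(name, items):
    for item in items:
        s.acquire()              # wait(s): enter the critical section
        try:
            R.append((name, item))   # operation on the shared resource
        finally:
            s.release()          # signal(s): leave the critical section

t1 = threading.Thread(target=task, args=("J1", range(3)))
t2 = threading.Thread(target=task, args=("J2", range(3)))
t1.start(); t2.start()
t1.join(); t2.join()
print(R)                         # all accesses to R happened under mutual exclusion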

Blocking on an exclusive resource

If preemption is allowed and J1 has a higher priority than J2, then J1 can block in the
situation depicted in the figure. Here, task J2 is activated first, and, after a while, it enters the
critical section and locks the semaphore. While J2 is executing the critical section, task J1
arrives and, since it has a higher priority, it preempts J2 and starts executing. However, at time t1, when
attempting to enter its critical section, it is blocked on the semaphore and J2 is resumed. J1 is
blocked until time t2, when J2 releases the critical section by executing the signal(s) primitive,
which unlocks the semaphore. A task waiting for an exclusive resource is said to be blocked on
that resource. All tasks blocked on the same resource are kept in a queue associated with the
semaphore, which protects the resource. When a running task executes a wait primitive on a
locked semaphore, it enters a waiting state, until another task executes a signal primitive that
unlocks the semaphore. When a task leaves the waiting state, it does not go in the running state,
but in the ready state, so that the CPU can be assigned to the highest-priority task by the
scheduling algorithm.

State transition diagram

The state transition diagram relative to the situation described above is shown in
this Figure

Classification of scheduling algorithms


1. Preemptive : With preemptive algorithms, the running task can be interrupted at any time
to assign the processor to another active task, according to a predefined scheduling
policy.
2. Non-preemptive : With non-preemptive algorithms, a task, once started, is executed by the
processor until completion. In this case, all scheduling decisions are taken as a task
terminates its execution.

3. Static : Static algorithms are those in which scheduling decisions are based on fixed
parameters, assigned to tasks before their activation.
4. Dynamic : Dynamic algorithms are those in which scheduling decisions are based on
dynamic parameters that may change during system evolution.
5. Off-line : We say that a scheduling algorithm is used off-line if it is executed on the
entire task set before actual task activation. The schedule generated in this way is stored
in a table and later executed by a dispatcher.
6. On-line : We say that a scheduling algorithm is used on-line if scheduling decisions are
taken at runtime every time a new task enters the system or when a running task
terminates.
7. Optimal : An algorithm is said to be optimal if it minimizes some given cost function
defined over the task set. When no cost function is defined and the only concern is to
achieve a feasible schedule, then an algorithm is said to be optimal if it may fail to meet a
deadline only if no other algorithms of the same class can meet it.
8. Heuristic : An algorithm is said to be heuristic if it tends toward but does
not guarantee to find the optimal schedule.
APERIODIC TASK SCHEDULING

1) Jackson's algorithm /Earliest Due Date (EDD) algorithm

2) Horn's algorithm / Earliest Deadline First (EDF)



3) Latest Deadline First (LDF)

4) EDF with precedence constraints

NOTATIONS
To facilitate the description of the scheduling problems presented here, a systematic notation is
used that serves as a basis for a classification scheme; it consists of three fields α / β / γ with the
following meaning:
The first field α describes the machine environment on which the task set has to be scheduled
(uniprocessor, multiprocessor, distributed architecture, and so on).
The second field β describes task and resource characteristics (preemptive, independent versus
precedence constrained, synchronous activations, and so on).

The third field γ indicates the optimality criterion (performance measure) to be followed in the
schedule.
Jackson's algorithm /Earliest Due Date (EDD) algorithm

EDD Scheduling:

 Non-preemptive : Once a task starts executing, it runs to completion before the next task
begins.
 Priority-based: Tasks with earlier due dates are prioritized.
 Objective: The goal is to minimize the maximum lateness (and hence the maximum
tardiness) of the task set. Executing tasks in order of non-decreasing due dates achieves
this when all tasks arrive simultaneously.

Steps in EDD Scheduling for Real-Time Tasks:

1. Sort the tasks by due date : All tasks are ordered in ascending order of their due dates.
The task with the earliest due date will be scheduled first.
2. Schedule tasks in order: After sorting, the tasks are scheduled one by one in the order of
their due dates. Since EDD is typically used in single-processor systems, each task starts
as soon as the previous one finishes, assuming there are no other constraints like
preemption or parallel processing.
3. Calculate Completion Times : For each task, the completion time is calculated based on
when the task finishes execution.
4. Evaluate Lateness and Tardiness :
o Lateness = Completion time - Due date
o Tardiness = max(0, Lateness). If a task is completed before its due date, the
tardiness is 0. If it's late, the tardiness is the amount of time it's delayed beyond its
due date.

5. Feasibility Check: The algorithm may be considered feasible if the tasks can all be
completed before their respective deadlines. If any task has a positive tardiness, the
system might need adjustment (e.g., adding more resources or adjusting deadlines).

Example of EDD Scheduling with Real-Time Tasks:

Let's consider the following tasks:

Task Processing Time Due Date


T1 3 5
T2 2 6
T3 1 4
T4 4 7

Step 1: Sort tasks by due date

 T3 (Due Date: 4)
 T1 (Due Date: 5)
 T2 (Due Date: 6)
 T4 (Due Date: 7)

Step 2: Schedule tasks in sorted order

 T3 starts at time 0 and finishes at time 1 (processing time = 1).


 T1 starts at time 1 and finishes at time 4 (processing time = 3).
 T2 starts at time 4 and finishes at time 6 (processing time = 2).
 T4 starts at time 6 and finishes at time 10 (processing time = 4).

Step 3: Calculate Completion Times

 T3: Completed at time 1


 T1: Completed at time 4
 T2: Completed at time 6
 T4: Completed at time 10

Step 4: Calculate Lateness and Tardiness

 T3:
o Lateness = 1 - 4 = -3 (On time)
o Tardiness = max(0, -3) = 0 (No tardiness)
 T1:
o Lateness = 4 - 5 = -1 (On time)
o Tardiness = max(0, -1) = 0 (No tardiness)

 T2:
o Lateness = 6 - 6 = 0 (On time)
o Tardiness = max(0, 0) = 0 (No tardiness)
 T4:
o Lateness = 10 - 7 = 3 (Late)
o Tardiness = max(0, 3) = 3 (Tardy)

Step 5: Check Feasibility

 Feasibility: All tasks are completed, but T4 is tardy by 3 time units. This means the
system is not perfectly feasible since T4 missed its due date.
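
The whole EDD procedure can be summarized in a few lines of Python; this sketch simply
reproduces the hand computation above (the task tuples are the ones from the table):

# Tasks from the example: (name, processing time, due date).
tasks = [("T1", 3, 5), ("T2", 2, 6), ("T3", 1, 4), ("T4", 4, 7)]

tasks.sort(key=lambda t: t[2])       # Step 1: sort by due date (EDD order)

time = 0
for name, c, d in tasks:             # Steps 2-4: run the tasks back to back
    time += c                        # completion time of this task
    lateness = time - d
    tardiness = max(0, lateness)
    print(f"{name}: completes at {time}, lateness {lateness}, tardiness {tardiness}")
# T4 comes out with tardiness 3, matching the feasibility check above.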

Properties of EDD in Real-Time Systems:

 Optimal for Maximum Lateness: when all tasks arrive simultaneously, EDD is optimal
for minimizing the maximum lateness on a single processor: no other ordering of the
tasks can achieve a smaller maximum lateness.
 Simple to Implement: The algorithm is straightforward and easy to implement, as it only
requires sorting tasks by due date and then processing them sequentially.
 Non-preemptive : Since the algorithm is non-preemptive, once a task starts executing, it
cannot be interrupted. This is crucial in real-time systems that do not support preemption,
but it may also result in inefficiencies in certain scenarios.
 Infeasible for High Load: The algorithm assumes that all tasks are independent and
non-preemptive. If tasks are very large or have tight deadlines that cannot be met in
sequence, the system may fail to meet deadlines, as seen in the example above.

Use Cases of EDD Scheduling in Real-Time Systems:

1. Real-Time Operating Systems (RTOS): EDD is often used in RTOS for simple
scheduling scenarios where tasks have hard deadlines and the processor is limited to one
core.
2. Job Scheduling in Manufacturing: In environments where tasks (such as manufacturing
jobs) have deadlines, EDD can help schedule them in a way that minimizes tardiness and
ensures efficient use of resources.
3. Embedded Systems: EDD is suitable for scheduling real-time tasks in embedded
systems where deadlines are critical (e.g., sensor data processing, embedded control
systems).

Limitations of EDD:

1. No Preemption: Since EDD is a non-preemptive algorithm, it may not be effective in
scenarios where tasks need to be preempted or have time-sensitive critical sections.

2. Not Optimal for Total Tardiness : While EDD minimizes the maximum lateness, it
does not necessarily minimize the total tardiness across all tasks.
3. Single Processor: EDD is most effective in single-processor systems. For multi-
processor environments, more complex scheduling algorithms (like Earliest Deadline
First (EDF) or Rate-Monotonic Scheduling (RMS)) are often used.

Example

Schedule the following tasks using EDD algorithm

Task Execution Time Deadline

J1 1 3

J2 1 8

J3 2 6

J4 2 7

J5 1 4

Step 1: Sort the tasks by their deadlines (Earliest Due Date)

Using the Earliest Due Date (EDD) algorithm, we first sort the tasks by their deadlines in
ascending order:

 J1 (Deadline: 3)
 J5 (Deadline: 4)
 J3 (Deadline: 6)
 J4 (Deadline: 7)
 J2 (Deadline: 8)

Step 2: Schedule tasks in the sorted order

We will now schedule the tasks based on their deadlines, assuming a single processor and non-
preemptive scheduling.

1. J1 starts at time 0 and finishes at time 1 (Execution time = 1).


2. J5 starts at time 1 and finishes at time 2 (Execution time = 1).
3. J3 starts at time 2 and finishes at time 4 (Execution time = 2).
4. J4 starts at time 4 and finishes at time 6 (Execution time = 2).
5. J2 starts at time 6 and finishes at time 7 (Execution time = 1).

Step 3: Calculate Completion Times



Task Completion Time


J1 1
J5 2
J3 4
J4 6
J2 7

Step 4: Calculate Lateness and Tardiness

Now, calculate the lateness and tardiness for each task:

 Lateness = Completion Time - Deadline


 Tardiness = max(0, Lateness)

For each task:

 J1:
o Lateness = 1 - 3 = -2 → On time
o Tardiness = max(0, -2) = 0
 J5:
o Lateness = 2 - 4 = -2 → On time
o Tardiness = max(0, -2) = 0
 J3:
o Lateness = 4 - 6 = -2 → On time
o Tardiness = max(0, -2) = 0
 J4:
o Lateness = 6 - 7 = -1 → On time
o Tardiness = max(0, -1) = 0
 J2:
o Lateness = 7 - 8 = -1 → On time
o Tardiness = max(0, -1) = 0

Step 5: Final Result

Task Completion Time Lateness Tardiness


J1 1 -2 0
J5 2 -2 0
J3 4 -2 0
J4 6 -1 0
J2 7 -1 0

Step 6: Gantt Chart

We can now create the Gantt chart to visualize the scheduling of these tasks.

Time: 0    1    2    3    4    5    6    7
Task: | J1 | J5 | J3 | J3 | J4 | J4 | J2 |

Explanation:

 J1 runs from time 0 to time 1.


 J5 runs from time 1 to time 2.
 J3 runs from time 2 to time 4.
 J4 runs from time 4 to time 6.
 J2 runs from time 6 to time 7.

HORN'S ALGORITHM /EDF

If tasks are not synchronous but can have arbitrary arrival times, then preemption becomes an
important factor. If preemption is allowed, a task can be interrupted if a more important task
arrives. Horn found an elegant solution to the problem of scheduling a set of n independent tasks
on a uniprocessor system, when tasks may have dynamic arrivals and preemption is allowed
(1 / preem / Lmax). The algorithm is called Earliest Deadline First (EDF).

The Earliest Deadline First (EDF) scheduling algorithm is a dynamic priority scheduling
algorithm used in real-time systems.

1. EDF for Periodic Tasks:

Periodic tasks are those tasks that arrive at regular intervals (e.g., every 10ms, 20ms) and have
fixed periods. Each task has a defined period, execution time , and deadline.

EDF for Periodic Tasks:

 In EDF, each task is assigned a priority based on its deadline: the task with the earliest
deadline is given the highest priority.
 If multiple tasks have deadlines that occur at the same time, the tie can be broken
arbitrarily (for example, in order of arrival).
 Preemptive: EDF is preemptive, meaning that if a new task with an earlier deadline
arrives, it can preempt the currently running task.
 The task set is considered schedulable if the sum of the CPU utilization for all tasks
does not exceed 100% (for hard deadlines):
o For n periodic tasks, the total utilization is given by:

  U = Σ (i = 1 to n) Ci / Ti

where Ci is the execution time of task i, and Ti is the period of task i.

Schedulability condition for EDF on a uniprocessor system:

o If the total utilization U is less than or equal to 1 (i.e., U ≤ 1), the task set is
guaranteed to be schedulable.
o EDF is optimal for periodic tasks, meaning if a task set is schedulable by any
algorithm, it is also schedulable by EDF.
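
The utilization test is a one-line computation; the small Python helper below (the function name
edf_utilization_ok is illustrative) applies U = Σ Ci/Ti ≤ 1, assuming deadlines equal to periods,
and is shown here on the task set of the example that follows:

def edf_utilization_ok(tasks):
    # tasks: list of (C, T) pairs; returns total utilization and EDF schedulability (D = T).
    U = sum(C / T for C, T in tasks)
    return U, U <= 1.0

print(edf_utilization_ok([(1, 3), (2, 5), (1, 6)]))   # roughly (0.9, True)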

Example of Periodic EDF Scheduling:

Let’s consider the following periodic tasks:

Task Execution Time (C) Period (T) Deadline (D)


T1 1 3 3
T2 2 5 5
T3 1 6 6

EDF Scheduling Process:

 At time 0, T1 arrives (Deadline: 3), T2 arrives (Deadline: 5), and T3 arrives (Deadline:
6).
o T1 is the first to run as it has the earliest deadline (3).
 After T1 completes at time 1, T2 and T3 are pending.
o T2 has a deadline of 5, and T3 has a deadline of 6. T2 will run first as it has the
next earliest deadline.
 Once T2 completes at time 3, T3 will run as it has the next earliest deadline.

This process continues for each task, and they are executed in order of their deadlines.
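
A minimal discrete-time simulation makes the selection rule explicit. The sketch below (the
function name and data layout are illustrative, and deadlines are assumed equal to periods) runs
the ready job with the earliest absolute deadline at every time unit; on the three tasks above it
reproduces the order T1, T2, T3:

def edf_schedule(tasks, horizon):
    # tasks: list of (name, C, T); simulate preemptive EDF over [0, horizon).
    jobs = []
    timeline = []
    for t in range(horizon):
        for name, C, T in tasks:
            if t % T == 0:                                  # periodic release
                jobs.append({"name": name, "deadline": t + T, "remaining": C})
        ready = [j for j in jobs if j["remaining"] > 0]
        if ready:
            job = min(ready, key=lambda j: j["deadline"])   # earliest absolute deadline
            job["remaining"] -= 1
            timeline.append(job["name"])
        else:
            timeline.append("idle")
    return timeline

print(edf_schedule([("T1", 1, 3), ("T2", 2, 5), ("T3", 1, 6)], 6))
# ['T1', 'T2', 'T2', 'T3', 'T1', 'T2']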

2. EDF for Aperiodic Tasks:

Aperiodic tasks are tasks that do not have regular intervals and are activated by external events.
Aperiodic tasks may have soft deadlines, meaning they are not always required to finish by a
specific time, or they may have hard deadlines.

EDF for Aperiodic Tasks:



 Aperiodic tasks are also scheduled using EDF in real-time systems. The primary
difference between scheduling aperiodic and periodic tasks is that aperiodic tasks may
arrive at any time, and their deadlines can vary.
 Queueing: Aperiodic tasks are typically placed in a queue, and when an aperiodic task
arrives, it is assigned a priority based on its deadline. The task with the earliest deadline
will be selected for execution first.
 EDF can preempt ongoing tasks if a new aperiodic task arrives with an earlier deadline.
 For aperiodic tasks, deadlines can either be hard or soft. In the case of hard deadlines, a
task must be completed before its deadline. For soft deadlines, a task's tardiness (if
missed) is penalized but not catastrophic.

Scheduling aperiodic tasks with EDF:

1. Arrival: When an aperiodic task arrives, it is immediately considered for scheduling


based on its deadline.
2. Preemption: If an aperiodic task has a higher priority (earlier deadline) than the currently
running task, it will preempt the ongoing task.
3. Execution: If the aperiodic task is executed before its deadline, it is considered to have
been successfully completed.
4. Missed Deadline : If the task is not completed before its deadline, it may be considered
"missed." In the case of soft deadlines, this would incur a penalty (e.g., a higher priority
for future execution or delayed execution).

Example of Aperiodic EDF Scheduling:

Let’s consider the following periodic tasks (same as in the previous example) and an aperiodic
task:

Task Execution Time (C) Period (T) Deadline (D)


T1 1 3 3
T2 2 5 5
T3 1 6 6
A1 2 - 4

Here, A1 is an aperiodic task that arrives at time 2 with a deadline of 4.

 At time 0, T1 starts executing (it has the earliest deadline, 3).
 At time 1, T1 completes and T2 starts executing.
 At time 2, A1 arrives with a deadline of 4. Since A1's deadline is earlier than that of T2
(deadline 5) and T3 (deadline 6), A1 preempts T2.
 A1 executes from time 2 to 4; T2 then resumes and completes at time 5, after which T3
executes.

EDF for Periodic and Aperiodic Tasks

Type of Task   Characteristics                               Preemption   Scheduling Behavior
Periodic       Regular intervals, fixed deadlines            Yes          Tasks are executed based on their deadlines.
Aperiodic      Irregular intervals, flexible deadlines       Yes          Aperiodic tasks are scheduled based on their deadlines as they arrive.
Mixed          Combination of periodic and aperiodic tasks   Yes          EDF handles both types, with aperiodic tasks preempting periodic ones if necessary.

Advantages of EDF:

 Optimal for periodic tasks : EDF is optimal in terms of scheduling periodic tasks,
meaning it will always schedule a set of periodic tasks if the set is feasible.
 Handles mixed workloads : EDF can handle both periodic and aperiodic tasks
effectively.
 Preemption: Allows for preemption, making it suitable for dynamic task sets.

Disadvantages of EDF:

 Missed deadlines for aperiodic tasks : If multiple aperiodic tasks arrive with very tight
deadlines, there may be an increased likelihood of missed deadlines.
 Overhead: EDF requires constant recalculations of deadlines as tasks arrive, adding
scheduling overhead.

Question

Schedule the given tasks using the earliest deadline first (EDF) scheduling algorithm and calculate
the average response time, total completion time, weighted sum of responses (assume weights
logically), lateness and number of late tasks.

Task Details:

Task Arrival Time (ai) Execution Time (ci) Deadline (di)


J1 0 4 16
J2 4 2 6
J3 2 4 7
J4 6 2 8

Solution

Step 1: Sort Tasks by Their Deadlines (EDF Scheduling)

In the Earliest Deadline First (EDF) scheduling algorithm, the task with the earliest deadline
gets the highest priority. We need to schedule the tasks based on their deadlines, but also
consider their arrival times.

Task Order (Sorted by Deadline):

 J2: Deadline =6
 J3: Deadline =7
 J4: Deadline =8
 J1: Deadline = 16

Step 2: Schedule the Tasks

We will schedule the tasks in the order of their deadlines, taking into account their arrival times.
(In this walkthrough, execution is treated as non-preemptive: once a task starts, it runs to
completion. Under fully preemptive EDF, J3 would preempt J1 at time 2.)

Initial state:

 Time = 0
 J1 is the only task that has arrived (deadline 16), so it executes first.

1. J1 starts executing at time 0 and runs until time 4 (since its execution time is 4).
o Completion time of J1 = 4.
2. At time 4, both J3 (arrived at time 2, deadline 7) and J2 (arrived at time 4, deadline 6) are
ready. J2 has the earlier deadline, so J2 is scheduled next.
o J2 runs from time 4 to time 6 (execution time 2).
o Completion time of J2 = 6.
3. At time 6, J3 (deadline 7) and the newly arrived J4 (deadline 8) are ready. J3 has the
earlier deadline, so J3 runs next.
o J3 runs from time 6 to time 10 (execution time 4).
o Completion time of J3 = 10.
4. Finally, J4 runs from time 10 to time 12.
o Completion time of J4 = 12.

Step 3: Calculate the Required Metrics

Completion Times:

 J1: Completion time = 4
 J2: Completion time = 6
 J3: Completion time = 10
 J4: Completion time = 12

Lateness:

Lateness = Completion Time - Deadline

 J1: Lateness = 4 - 16 = -12 → On time (the task finishes before its deadline)
 J2: Lateness = 6 - 6 = 0 → On time
 J3: Lateness = 10 - 7 = 3 → Late
 J4: Lateness = 12 - 8 = 4 → Late

Tardiness:

Tardiness is the amount of time a task is late. If the lateness is negative or zero, the tardiness is
zero.

 J1: Tardiness = max(0, -12) = 0
 J2: Tardiness = max(0, 0) = 0
 J3: Tardiness = max(0, 3) = 3
 J4: Tardiness = max(0, 4) = 4

Number of Late Tasks:

A task is considered late if its completion time exceeds its deadline.

 J1: On time
 J2: On time
 J3: Late
 J4: Late

Number of Late Tasks = 2 (J3, J4)

Step 4: Calculate Average Response Time

The response time of a task is the time from its arrival until it starts executing.

 J1: Response time = Start time - Arrival time = 0 - 0 = 0
 J2: Response time = Start time - Arrival time = 4 - 4 = 0
 J3: Response time = Start time - Arrival time = 6 - 2 = 4
 J4: Response time = Start time - Arrival time = 10 - 6 = 4

Average Response Time:

Average Response Time = (0 + 0 + 4 + 4) / 4 = 8 / 4 = 2

Step 5: Calculate Total Completion Time

Total completion time is the sum of the completion times for all tasks.

Total Completion Time = 4 + 6 + 10 + 12 = 32

Step 6: Calculate Weighted Sum of Responses

Assume weights based on execution times (larger execution time implies higher weight):

 J1: Weight = 4, Response time = 0 → Weighted response = 4 * 0 = 0
 J2: Weight = 2, Response time = 0 → Weighted response = 2 * 0 = 0
 J3: Weight = 4, Response time = 4 → Weighted response = 4 * 4 = 16
 J4: Weight = 2, Response time = 4 → Weighted response = 2 * 4 = 8

Weighted Sum of Responses = 0 + 0 + 16 + 8 = 24

Final Results:

Metric Value

Average Response Time 2

Total Completion Time 32

Weighted Sum of Responses 24

Lateness of Tasks J1: -12, J2: 0, J3: 3, J4: 4

Number of Late Tasks 2
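
These metrics follow mechanically from the schedule; the short Python fragment below recomputes
them from the (arrival, start, completion, deadline, weight) values derived above, taking the
weights equal to the execution times as assumed in Step 6:

# task -> (arrival, start, completion, deadline, weight)
schedule = {
    "J1": (0, 0, 4, 16, 4),
    "J2": (4, 4, 6, 6, 2),
    "J3": (2, 6, 10, 7, 4),
    "J4": (6, 10, 12, 8, 2),
}

responses = {k: s - a for k, (a, s, c, d, w) in schedule.items()}
lateness  = {k: c - d for k, (a, s, c, d, w) in schedule.items()}

print(sum(responses.values()) / len(responses))                          # average response time: 2.0
print(sum(c for (_, _, c, _, _) in schedule.values()))                   # total completion time: 32
print(sum(w * responses[k] for k, (_, _, _, _, w) in schedule.items()))  # weighted sum: 24
print([k for k, L in lateness.items() if L > 0])                         # late tasks: ['J3', 'J4']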

Comparison of EDF and EDD Scheduling algorithms

STUDY BY YOURSELF

SCHEDULING WITH PRECEDENCE CONSTRAINTS

Two algorithms are used that minimize the maximum lateness by assuming
synchronous activations and preemptive scheduling, respectively.
1. Latest Deadline First (1 / prec, sync / Lmax)

2. Earliest Deadline First (1 / prec, preem / Lmax)



LATEST DEADLINE FIRST (LDF)

Lawler presented an optimal algorithm that minimizes the maximum lateness of a set of tasks
with precedence relations and simultaneous arrival times. The algorithm is called Latest
Deadline First (LDF) and can be executed in polynomial time with respect to the number of tasks
in the set.

LDF (Latest Deadline First) Scheduling is a real-time task scheduling algorithm that prioritizes
tasks with the latest deadlines. This is the opposite of the Earliest Deadline First (EDF)
algorithm, where tasks with the earliest deadline are given the highest priority.

Working of LDF Scheduling:

 Tasks are sorted in descending order of their deadlines. The task with the latest deadline
is given the highest priority to execute first.
 After scheduling a task, the remaining tasks are re-evaluated at every time unit.
 Under this formulation, at every scheduling point the ready task with the latest deadline is
selected for execution; this is the reverse of the EDF selection rule.

Key Steps in LDF Scheduling:

1. Sort Tasks by Deadline : At each scheduling point, sort tasks based on their deadlines in
descending order.
2. Execute the Task with the Latest Deadline : Always execute the task with the latest
deadline that is ready to be scheduled.
3. Preemption: In a preemptive variant, a newly arrived task with a later deadline may
preempt the running task; the example below, however, is worked non-preemptively (each
task runs to completion).
4. Repeat the Process: Continue re-evaluating the remaining tasks and execute based on
the new latest deadlines.

Example

Let's walk through an example using a set of tasks with their arrival times, execution times, and
deadlines. The tasks are as follows:

Task Arrival Time (ai) Execution Time (ci) Deadline (di)


J1 0 4 16

J2 4 2 6

J3 2 4 7

J4 6 2 8

Step-by-Step Scheduling (LDF)

Time 0:

 Task J1 arrives at time 0 with a deadline of 16, and it has the latest deadline, so it starts
executing first.
 J1 runs from time 0 to time 4.

Time 4:

 J2 arrives at time 4 with a deadline of 6.


 J3 arrives at time 2 with a deadline of 7.
 J4 arrives at time 6 with a deadline of 8.

By the LDF principle, we prioritize the task with the latest deadline:

 J3 has the latest deadline of 7, so it gets executed.


 J3 runs from time 4 to time 8.

Time 8:

 Now, J2 (deadline 6) has missed its deadline, but J4 (deadline 8) has the latest deadline.
 J4 runs from time 8 to time 10.

Time 10:

 J2 (deadline 6) is the only remaining task.


 J2 runs from time 10 to time 12.

Final Completion Times:

 J1: Completion time =4


 J3: Completion time =8
 J4: Completion time = 10
 J2: Completion time = 12

Calculating the Metrics:

1. Lateness:

Lateness = Completion Time - Deadline

 J1: Lateness = 4 - 16 = -12 (on time)


 J2: Lateness = 12 - 6 = 6 (late)
 J3: Lateness = 8 - 7 = 1 (late)
 J4: Lateness = 10 - 8 = 2 (late)

2. Tardiness:

Tardiness = max(0, Lateness)

 J1: Tardiness = max(0, -12) = 0


 J2: Tardiness = max(0, 6) = 6
 J3: Tardiness = max(0, 1) = 1
 J4: Tardiness = max(0, 2) = 2

3. Number of Late Tasks:

 J1: On time
 J2: Late
 J3: Late
 J4: Late

Number of Late Tasks = 3.

4. Average Response Time :

Response Time = Start Time - Arrival Time

 J1: Response time =0-0=0


 J2: Response time = 10 - 4 = 6
 J3: Response time =4-2=2
 J4: Response time =8-6=2

Average Response Time = (0 + 6 + 2 + 2) / 4 = 10 / 4 = 2.5

5. Total Completion Time :

Total Completion Time = Sum of all task completion times

Total Completion Time = 4 + 8 + 10 + 12 = 34

6. Weighted Sum of Responses (assuming weights based on execution times):

 J1: Weight = 4, Response time = 0 → Weighted response = 4 * 0 = 0


 J2: Weight = 2, Response time = 6 → Weighted response = 2 * 6 = 12
 J3: Weight = 4, Response time = 2 → Weighted response = 4 * 2 = 8
 J4: Weight = 2, Response time = 2 → Weighted response = 2 * 2 = 4

Weighted Sum of Responses = 0 + 12 + 8 + 4 = 24



EDF with precedence constraints

The problem of scheduling a set of n tasks with precedence constraints and dynamic
activations can be solved in polynomial time only if tasks are preemptable. The basic idea
is to transform a set J of dependent tasks into a set J* of independent tasks by an adequate
modification of timing parameters.
Then, tasks are scheduled by the Earliest Deadline First (EDF) algorithm. The
transformation algorithm ensures that J is schedulable and the precedence constraints are obeyed
if and only if J* is schedulable. Basically, all release times and deadlines are modified so that
each task cannot start before its predecessors and cannot preempt their successors.
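
A sketch of this transformation, assuming the usual forward and backward passes over the
precedence graph (release times are pushed after predecessors, deadlines are pulled before
successors); the function name edf_star and the data layout are illustrative:

def edf_star(tasks, preds):
    # tasks: {name: (r, C, d)}; preds: {name: set of predecessor names} forming a DAG.
    succs = {i: {j for j in tasks if i in preds[j]} for i in tasks}

    order, seen = [], set()        # topological order of the DAG
    def visit(i):
        if i in seen:
            return
        for p in preds[i]:
            visit(p)
        seen.add(i)
        order.append(i)
    for i in tasks:
        visit(i)

    r = {i: tasks[i][0] for i in tasks}
    C = {i: tasks[i][1] for i in tasks}
    d = {i: tasks[i][2] for i in tasks}

    for i in order:                # a task cannot be released before its predecessors can finish
        for p in preds[i]:
            r[i] = max(r[i], r[p] + C[p])

    for i in reversed(order):      # a task must leave room for its successors before their deadlines
        for s_ in succs[i]:
            d[i] = min(d[i], d[s_] - C[s_])

    return r, d

# Hypothetical two-task chain: J2 can only start after J1.
print(edf_star({"J1": (0, 2, 10), "J2": (0, 3, 8)}, {"J1": set(), "J2": {"J1"}}))
# ({'J1': 0, 'J2': 2}, {'J1': 5, 'J2': 8})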

PERIODIC TASK SCHEDULING

When a control application consists of several concurrent periodic tasks with individual timing
constraints, the operating system has to guarantee that each periodic instance is regularly
activated at its proper rate and is completed within its deadline.
Basic algorithms for handling periodic tasks are :

1. Rate Monotonic
2. Deadline Monotonic

The Rate Monotonic (RM) scheduling algorithm is a simple rule that assigns priorities to
tasks according to their request rates. Specifically, tasks with higher
request rates (that is, with shorter periods) will have higher priorities. Since periods are
constant, RM is a fixed-priority assignment. Priorities are assigned to tasks before execution
and do not change over time.
RM is intrinsically preemptive: the currently executing task is preempted by
a newly arrived task with shorter period. RM is optimal among all fixed priority assignments in
the sense that no other fixed-priority algorithms can schedule a task set that cannot be scheduled
by RM.

The Rate Monotonic Scheduling (RMS) algorithm is one of the most commonly used
scheduling algorithms for real-time tasks. It is a preemptive scheduling algorithm based on task
priority: tasks with shorter periods (or deadlines) are given higher priority. The key idea is to
assign priorities to tasks such that a task with a shorter period will always preempt a task with a
longer period.

Key Concepts:

1. Period: The time interval after which a periodic task repeats. It is usually the same as the
task's deadline.
2. Execution Time (or Computation Time): The amount of time a task needs to complete.
3. Priority: A task with a shorter period has a higher priority in Rate Monotonic
Scheduling.

Basic Logic of RMS:

 Each task is assigned a priority based on its period: the shorter the period, the higher
the priority.
 The scheduler will execute tasks based on their priorities, with higher priority tasks
preempting lower priority ones.
 Preemption can occur if a higher-priority task arrives during the execution of a lower-
priority task.
 If the system can execute all tasks without missing deadlines, the system is said to be
schedulable.

Rate Monotonic Scheduling (RMS) Algorithm:

1. Assign Priorities: Tasks are assigned priorities based on their periods. A task with the
smallest period gets the highest priority.
2. Execute Tasks: At each time unit, the scheduler executes the task with the highest
priority that is ready to execute (i.e., has arrived and hasn't yet finished).
3. Preemption: If a higher-priority task becomes ready while a lower-priority task is
running, the running task is preempted.
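
A minimal discrete-time sketch of this fixed-priority rule (the function name and data layout are
illustrative, and deadlines are assumed equal to periods); it can be used to reproduce the Gantt
chart in the example that follows:

def rm_schedule(tasks, horizon):
    # tasks: list of (name, C, T); simulate preemptive rate monotonic scheduling over [0, horizon).
    tasks = sorted(tasks, key=lambda t: t[2])            # shorter period = higher priority
    prio = {name: i for i, (name, _, _) in enumerate(tasks)}
    jobs = []                                            # active jobs as [name, remaining units]
    timeline = []
    for t in range(horizon):
        for name, C, T in tasks:
            if t % T == 0:
                jobs.append([name, C])                   # periodic release
        ready = [j for j in jobs if j[1] > 0]
        if ready:
            job = min(ready, key=lambda j: prio[j[0]])   # highest-priority ready job
            job[1] -= 1
            timeline.append(job[0])
        else:
            timeline.append("idle")
    return timeline

print(rm_schedule([("T1", 1, 4), ("T2", 1, 5), ("T3", 2, 8)], 12))
# ['T1', 'T2', 'T3', 'T3', 'T1', 'T2', 'idle', 'idle', 'T1', 'T3', 'T2', 'T3']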

Example:

Consider the following set of tasks:

Task Execution Time (Ci) Period (Pi) Deadline (Di)


T1 1 4 4

T2 1 5 5

T3 2 8 8

Step 1: Assign Priorities

In RMS, tasks with shorter periods get higher priorities. Therefore, the priority order based on
their periods is:

 T1 (Period = 4) has the highest priority.


 T2 (Period = 5) has the second-highest priority.

 T3 (Period = 8) has the lowest priority.

Step 2: Create the Gantt Chart

We will now schedule the tasks based on their priorities. We assume the time starts at t = 0 and
will schedule tasks over one cycle (the least common multiple of the periods of the tasks).

The least common multiple (LCM) of the periods (4, 5, and 8) is 40, so we will consider the
time span from t = 0 to t = 40.

Task Execution Timeline :

 t = 0 to t = 1: T1 executes (it has the highest priority).
 t = 1 to t = 2: T1 finishes, and T2 executes (the next highest priority; all three tasks were
released at t = 0).
 t = 2 to t = 4: T3 executes its 2 units and finishes just as T1 is released again at t = 4.
 t = 4 to t = 5: the second instance of T1 executes.
 t = 5 to t = 6: the second instance of T2 (released at t = 5) executes.
 t = 6 to t = 8: the processor is idle; no new instance is released until t = 8.
 t = 8 to t = 9: the third instance of T1 executes.
 t = 9 to t = 10: the second instance of T3 (released at t = 8) executes its first unit.
 t = 10 to t = 11: the third instance of T2 (released at t = 10) preempts T3 and executes.
 t = 11 to t = 12: T3 resumes and completes its second unit.
 This pattern continues over the full 40-unit hyperperiod.

Gantt Chart (first 12 time units, t = 0 to t = 12):

Time 0-1 1-2 2-3 3-4 4-5 5-6 6-7  7-8  8-9 9-10 10-11 11-12

Task T1  T2  T3  T3  T1  T2  idle idle T1  T3   T2    T3

Step 3: Task Execution Analysis

Task Completion:

 T1 executes at times 0-1, 4-5, 8-9, 12-13, ..., starting immediately at each release since it
has the highest priority.
 T2 executes at times 1-2, 5-6, 10-11, ..., delayed only by T1.
 T3 executes at times 2-4, 9-10 and 11-12, ...

Preemption:

 T1 can preempt both T2 and T3, as it has the highest priority due to its shortest period (4).
 T2 can preempt T3, since its period (5) is shorter than T3's (8); in this trace this happens
at t = 10.
 T3 executes only when neither T1 nor T2 is ready to execute.

Step 4: Schedulability Checking

Using the Liu and Layland utilization bound, U = 1/4 + 1/5 + 2/8 = 0.70, which does not exceed
the bound for three tasks, 3(2^(1/3) - 1) ≈ 0.78, so the task set is schedulable under RM.
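
The same check can be written as a small helper (this is a sufficient test only, assuming deadlines
equal to periods; the function name is illustrative):

def rm_utilization_test(tasks):
    # tasks: list of (C, T) pairs; Liu and Layland bound for rate monotonic scheduling.
    n = len(tasks)
    U = sum(C / T for C, T in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return U, bound, U <= bound

print(rm_utilization_test([(1, 4), (1, 5), (2, 8)]))
# U = 0.70, bound ≈ 0.7798 -> schedulable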



Disadvantages:

1. It is difficult to support aperiodic tasks under RM, since the algorithm assumes a purely
periodic workload.

2. RM is not optimal when task periods and deadlines differ.

DEADLINE MONOTONIC

Deadline Monotonic Scheduling (DMS) is a real-time scheduling algorithm similar to Rate
Monotonic Scheduling (RMS), but instead of assigning priorities based on task periods, it
assigns priorities based on task relative deadlines. In Deadline Monotonic Scheduling, tasks with
earlier (shorter) deadlines are given higher priorities.

Key Concepts of Deadline Monotonic Scheduling (DMS):

1. Priority Assignment: A task with an earlier deadline is assigned a higher priority. This is
the opposite of Rate Monotonic Scheduling, where tasks with shorter periods are
assigned higher priorities.
2. Preemptive: Like RMS, DMS is a preemptive scheduling algorithm. A higher-priority
task can preempt a lower-priority task if the higher-priority task arrives while the lower-
priority task is executing.
3. Schedulability: A system is schedulable under DMS if the set of tasks can be scheduled
without missing any deadlines. Schedulability is typically checked with a response time
analysis or by simulating the schedule over the hyperperiod.

How DMS Works:

1. Assign Priorities: Tasks with earlier deadlines receive higher priorities. If two tasks have
the same deadline, their priorities can be assigned based on their arrival times or periods.
2. Execute Tasks: The scheduler will always execute the task with the highest priority (i.e.,
the task with the earliest deadline).
3. Preemption: If a higher-priority task arrives while a lower-priority task is running, the
lower-priority task is preempted.

Example:

Consider the following set of tasks:

Task Execution Time (Ci) Period (Pi) Deadline (Di)


T1 2 6 4

T2 1 8 6

T3 3 12 9

Step 1: Assign Priorities Based on Deadlines

 T1: Deadline = 4 (highest priority)


 T2: Deadline = 6 (second highest priority)
 T3: Deadline = 9 (lowest priority)

Step 2: Create the Gantt Chart

The tasks are scheduled in a time frame from t = 0 to t = 12. (The hyperperiod, the least common
multiple of the periods 6, 8 and 12, is 24; only the first 12 time units are shown here.)

Task Execution Order:



 T1 (deadline 4) has the highest priority and will execute first.


 T2 (deadline 6) has the second-highest priority.
 T3 (deadline 9) has the lowest priority and will only execute when no higher-priority
tasks are ready.

Gantt Chart Construction:

Time 0-1 1-2 2-3 3-4 4-5 5-6 6-7 7-8 8-9 9-10 10-11 11-12

Task T1  T1  T2  T3  T3  T3  T1  T1  T2  idle idle  idle

Explanation of the Gantt chart:

1. t = 0 to t = 2: T1 runs for its 2 units of execution time and meets its deadline of 4.
2. t = 2 to t = 3: T2 runs (execution time 1) as it has the next highest priority, meeting its
deadline of 6.
3. t = 3 to t = 6: T3 runs for its 3 units of execution time, meeting its deadline of 9.
4. t = 6 to t = 8: the second instance of T1 (released at t = 6, absolute deadline 10) runs.
5. t = 8 to t = 9: the second instance of T2 (released at t = 8, absolute deadline 14) runs.
6. t = 9 to t = 12: the processor is idle until the next releases at t = 12.

Step 3: Task Execution Analysis

Task Completion:

 T1: Completion time = 2 (first instance)
 T2: Completion time = 3 (first instance)
 T3: Completion time = 6

Step 4: Schedulability Check
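
One way to carry out this check under DM is a response-time analysis: for each task,
Ri = Ci + Σ over higher-priority tasks j of ceil(Ri / Tj) * Cj, iterated to a fixed point, must not
exceed Di. A small iterative sketch (names are illustrative) applied to the example gives
R1 = 2, R2 = 3 and R3 = 6, all within their deadlines, so the task set is schedulable:

import math

def response_time_analysis(tasks):
    # tasks: list of (name, C, T, D) ordered from highest to lowest priority (for DM: by deadline).
    results = {}
    for i, (name, C, T, D) in enumerate(tasks):
        R = C
        while True:
            interference = sum(math.ceil(R / Tj) * Cj for (_, Cj, Tj, _) in tasks[:i])
            R_next = C + interference
            if R_next == R or R_next > D:        # converged, or already misses the deadline
                R = R_next
                break
            R = R_next
        results[name] = (R, R <= D)
    return results

print(response_time_analysis([("T1", 2, 6, 4), ("T2", 1, 8, 6), ("T3", 3, 12, 9)]))
# {'T1': (2, True), 'T2': (3, True), 'T3': (6, True)}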



STRUCTURE OF A REAL-TIME KERNEL

A kernel represents the innermost part of any operating system that is in direct
connection with the hardware of the physical machine. A kernel usually provides the following
basic activities:
1. Process management,
2. Interrupt handling, and
3. Process synchronization

Process management

Process management is the primary service that an operating system has to provide. It includes
various supporting functions, such as process creation and termination, job scheduling,
dispatching, context switching, and other related activities.

Interrupt handling

The objective of the interrupt handling mechanism is to provide service to the interrupt requests
that may be generated by any peripheral device, such as the keyboard, serial ports, analog-to-
digital converters, or any specific sensor interface. The service provided by the kernel to an
interrupt request consists of the execution of a dedicated routine (driver) that will transfer data
from the device to the main memory (or vice versa). In classical operating systems, application
tasks can always be preempted by drivers, at any time. In real-time systems, however, this
approach may introduce unpredictable delays in the execution of critical tasks, causing some
hard deadline to be missed. For this reason, in a real-time system, the interrupt handling
mechanism has to be integrated with the scheduling mechanism, so that a driver can be scheduled
as any other task in the system and a guarantee of feasibility can be achieved even in the presence
of interrupt requests.

Process synchronization

Another important role of the kernel is to provide a basic mechanism for supporting
process synchronization and communication. In classical operating systems this is done by
semaphores, which represent an efficient solution to the problem of synchronization, as well as
to the one of mutual exclusion.
Semaphores are prone to priority inversion, which introduces unbounded blocking
on a task's execution and prevents any guarantee for hard real-time tasks. As a consequence, in order
to achieve predictability, a real-time kernel has to provide special types of semaphores that
support a resource access protocol (such as Priority Inheritance, Priority Ceiling, or Stack
Resource Policy) for avoiding unbounded priority inversion.
Other kernel activities involve the initialization of internal data structures (such as
queues, tables, task control blocks, global variables, semaphores, and so on) and specific services
to higher levels of the operating system.

Real-Time Kernel Structure

A real-time kernel generally consists of several key components:

 Task Manager: Manages task creation, scheduling, and execution. It typically includes
support for multiple tasks, with real-time scheduling algorithms like Rate-Monotonic
Scheduling (RMS), Earliest Deadline First (EDF), etc.
 Scheduler: Responsible for determining which task should be executed based on their
priorities, deadlines, and periodicity.
 Interrupt Handler: Manages the interrupts from hardware. Interrupt handling in real-
time systems must be deterministic to ensure timely responses.
 Memory Manager: Allocates and manages memory in a real-time system, often with
fixed-size buffers or memory pools to avoid delays associated with dynamic memory
allocation.
 Timer: Provides timing services like task deadlines, time slicing, or system clock, to
manage periodic task execution and system synchronization.
 Synchronization Primitives: Includes tools like semaphores, mutexes, and event flags to
manage shared resources and prevent race conditions.
 I/O Manager: Handles communication with external devices, ensuring that I/O
operations are completed within time constraints.
 Communication Manager: Deals with message-passing, inter-process communication
(IPC), or data exchange between tasks or external devices in a deterministic way.

State Transition Diagram of a Real-Time Task

In a real-time kernel, tasks go through various states during their lifecycle. The task states and
transitions are usually modeled as a state transition diagram.

Task States:

1. Ready: The task is ready to execute but is waiting for the CPU to be allocated to it. This
is typically due to the task being preempted or the system waiting for its turn to execute.
2. Running: The task is currently being executed by the CPU.
3. Blocked (or Waiting): The task is blocked, usually waiting for an event (e.g., I/O
completion, resource availability, or synchronization).
4. Suspended: The task is not scheduled for execution, either due to a manual intervention
or system state (e.g., sleeping, or awaiting a higher-priority task).
5. Terminated: The task has completed its execution and is no longer active

STUDY THE DIAGRAM
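
One plausible way to encode these states and the legal transitions between them in code (the event
names here are illustrative, not a specific kernel's API):

from enum import Enum, auto

class TaskState(Enum):
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    SUSPENDED = auto()
    TERMINATED = auto()

# Allowed transitions, mirroring the state transition behaviour described above.
TRANSITIONS = {
    (TaskState.READY, "dispatch"): TaskState.RUNNING,
    (TaskState.RUNNING, "preempt"): TaskState.READY,
    (TaskState.RUNNING, "wait"): TaskState.BLOCKED,        # wait on a locked semaphore
    (TaskState.BLOCKED, "signal"): TaskState.READY,        # resource released: ready, not running
    (TaskState.RUNNING, "suspend"): TaskState.SUSPENDED,
    (TaskState.SUSPENDED, "resume"): TaskState.READY,
    (TaskState.RUNNING, "terminate"): TaskState.TERMINATED,
}

def next_state(state, event):
    return TRANSITIONS.get((state, event), state)          # ignore events that do not apply

print(next_state(TaskState.BLOCKED, "signal"))             # TaskState.READY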

Kernel Primitives

Kernel primitives are the basic building blocks of a real-time kernel. They are used to manage
task execution, synchronization, and communication. Common kernel primitives include:

1. Task Management Primitives :

 CreateTask(): Creates a new task and initializes its properties (priority, execution time,
etc.).
 DeleteTask(): Deletes a task that is no longer needed.
 ActivateTask(): Activates a task to move it to the Ready state.
 TerminateTask(): Terminates a running task and frees up its resources.
 SuspendTask(): Suspends a task, saving its state and preventing it from running until
resumed.
 ResumeTask(): Resumes a suspended task from where it left off.

2. Synchronization Primitives :

 Semaphore:
o Wait(): Decrements the semaphore, and blocks the task if the semaphore value is
0 (i.e., the resource is unavailable).
o Signal(): Increments the semaphore, signaling that the resource is now available.
 Mutex (Mutual Exclusion):
o Lock(): Acquires a mutex. If the mutex is already locked by another task, the task
will be blocked until it can acquire the mutex.
o Unlock(): Releases the mutex, allowing other tasks to acquire it.
 Event Flags:
o SetEvent(): Sets a flag that can trigger a task to wake up or execute when it is in
the Blocked state.
o ClearEvent(): Clears the flag, preventing it from triggering any tasks.

3. Message Passing/Inter-task Communication:

 SendMessage(): Sends a message to another task or process.


 ReceiveMessage(): Receives a message from a specific task or process.

4. Time Management Primitives :

 Delay(): Causes a task to delay its execution for a specified amount of time.
 TimeSlice(): Specifies the maximum amount of time a task can run before it is
preempted.
 GetTime(): Returns the current system time or the elapsed time since the system booted.

5. Interrupt Handling Primitives :

 DisableInterrupts(): Disables all interrupts to avoid interrupt handling during critical
sections.
 EnableInterrupts(): Enables interrupts again after critical sections are executed.

6. Resource Management Primitives :

 AllocateMemory(): Allocates a fixed block of memory for tasks or buffers.


 FreeMemory(): Frees previously allocated memory.
 CreateTimer(): Sets up a timer to trigger events at specific intervals.
