MODULE 4 RTOS
Task constraints; Task scheduling; Aperiodic task scheduling: EDD, EDF, LDF, EDF
with precedence constraints; Periodic task scheduling: Rate Monotonic and Deadline Monotonic;
Real-time kernel: structure, state transition diagram, kernel primitives.
Types of task constraints:
Typical constraints that can be specified on real-time tasks fall into three classes:
1. Timing constraints
2. Precedence relations
3. Mutual exclusion constraints on shared resources.
Timing constraints
Real-time systems are characterized by computational activities with stringent timing
constraints that must be met in order to achieve the desired behavior. A typical timing constraint
on a task is the deadline, which represents the time before which a process should complete its
execution without causing any damage to the system.
Depending on the consequences of a missed deadline, real-time tasks are usually
distinguished in two classes:
Hard: A task is said to be hard if a completion after its deadline can cause
catastrophic consequences on the system. In this case, any instance of the task should a priori be
guaranteed in the worst-case scenario.
Soft: A task is said to be soft if missing its deadline decreases the performance of the system
but does not jeopardize its correct behavior.
Characteristics of a real-time task
A real-time task Ji can be characterized by the following parameters:
1. Arrival time (ai): the time at which a task becomes ready for execution; it is also
referred to as request time or release time and indicated by ri.
2. Computation time (Ci): the time the processor needs to execute the task without
interruption.
3. Deadline (di): the time before which a task should complete to avoid damage to
the system.
4. Start time (si): the time at which a task starts its execution.
5. Finishing time (fi): the time at which a task finishes its execution.
6. Criticalness: a parameter related to the consequences of missing the deadline.
7. Value (Vi): the relative importance of the task with respect to the other tasks in
the system.
8. Lateness (Li): Li = fi - di is the delay of a task's completion with respect to its
deadline; note that if a task completes before its deadline, its lateness is negative.
9. Tardiness or Exceeding time (Ei): Ei = max(0, Li) is the time a task stays active
after its deadline.
10. Laxity or Slack time (Xi): Xi = di - ai - Ci is the maximum time a task can be
delayed on its activation and still complete within its deadline.
REAL TIME OPERATING SYSTEMS
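The timing parameters above can be sketched as a small data class (a minimal sketch; the names Task, lateness, tardiness, and laxity are illustrative, not from any real kernel API, and integer time units are assumed):

```python
# Illustrative sketch of the task timing parameters defined above.
from dataclasses import dataclass

@dataclass
class Task:
    a: int  # arrival time ai
    C: int  # computation time Ci
    d: int  # absolute deadline di
    f: int  # finishing time fi (known after execution)

    def lateness(self):   # Li = fi - di (negative if the task finishes early)
        return self.f - self.d

    def tardiness(self):  # Ei = max(0, Li)
        return max(0, self.lateness())

    def laxity(self):     # Xi = di - ai - Ci
        return self.d - self.a - self.C

t = Task(a=0, C=3, d=10, f=8)
print(t.lateness(), t.tardiness(), t.laxity())  # -2 0 7
```

A task finishing two units early has negative lateness but zero tardiness, matching definitions 8 and 9.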
Precedence relations
In certain applications, computational activities cannot be executed in arbitrary order but
have to respect some precedence relations defined at the design stage. Such precedence relations
are usually described through a directed acyclic graph G, where tasks are represented by nodes
and precedence relations by arrows.
The figure illustrates a directed acyclic graph that describes the precedence constraints among
five tasks. From the graph structure we observe that task J1 is the only one that can start executing
since it does not have predecessors. Tasks with no predecessors are called beginning tasks. As
J1 is completed, either J2 or J3 can start. Task J4 can start only when J2 is completed, whereas
J5 must wait the completion of J2 and J3. Tasks with no successors, as J4 and J5, are called
ending tasks.
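The five-task precedence graph described above can be modelled as an adjacency list of predecessors; a simple topological sort (an illustrative sketch, with hypothetical function names) yields one valid execution order:

```python
# The precedence graph of the figure: each task maps to its predecessors.
predecessors = {
    "J1": [],
    "J2": ["J1"],
    "J3": ["J1"],
    "J4": ["J2"],
    "J5": ["J2", "J3"],
}

def topological_order(preds):
    """Return one execution order that respects all precedence relations."""
    done, order = set(), []
    while len(order) < len(preds):
        # A task is ready only when all of its predecessors have completed.
        ready = [t for t in preds
                 if t not in done and all(p in done for p in preds[t])]
        task = sorted(ready)[0]   # pick deterministically among ready tasks
        order.append(task)
        done.add(task)
    return order

print(topological_order(predecessors))  # ['J1', 'J2', 'J3', 'J4', 'J5']
```

J1 (the beginning task) comes first and J4, J5 (the ending tasks) come last, as the graph requires.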
Resource constraints
From a process point of view, a resource is any software structure that can be used by the process
to advance its execution. Typically, a resource can be a data structure, a set of variables, a main
memory area, a file, a piece of program, or a set of registers of a peripheral device. A resource
dedicated to a particular process is said to be private, whereas a resource that can be used by
several tasks is called a shared resource. To maintain data consistency, many shared resources do
not allow simultaneous accesses but require mutual exclusion among competing tasks; such
resources are called exclusive resources.
Let R be an exclusive resource shared by tasks Ja and Jb. If A is the operation performed
on R by Ja , and B is the operation performed on R by Jb , then A and B must never be executed
at the same time. A piece of code executed under mutual exclusion constraints is called a critical
section. Synchronization mechanisms can be used by tasks to create critical sections of code.
If preemption is allowed and J1 has a higher priority than J2, then J1 can block in the
situation depicted in the figure. Here, task J2 is activated first and, after a while, enters the
critical section and locks the semaphore. While J2 is executing the critical section, task J1
arrives and, since it has a higher priority, preempts J2 and starts executing. However, at time t1,
when attempting to enter its critical section, it is blocked on the semaphore and J2 is resumed.
J1 remains blocked until time t2, when J2 releases the critical section by executing the signal(s)
primitive, which unlocks the semaphore. A task waiting for an exclusive resource is said to be blocked on
that resource. All tasks blocked on the same resource are kept in a queue associated with the
semaphore, which protects the resource. When a running task executes a wait primitive on a
locked semaphore, it enters a waiting state, until another task executes a signal primitive that
unlocks the semaphore. When a task leaves the waiting state, it does not go in the running state,
but in the ready state, so that the CPU can be assigned to the highest-priority task by the
scheduling algorithm.
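The wait/signal behaviour described above can be sketched with Python's threading.Semaphore, where acquire() plays the role of wait(s) and release() of signal(s). This is an illustrative sketch of mutual exclusion only, not a real-time implementation:

```python
import threading

s = threading.Semaphore(1)   # binary semaphore protecting the shared resource
shared = []                  # the shared resource (a plain list here)

def task(name, n):
    for _ in range(n):
        s.acquire()          # wait(s): blocks if another task is in its critical section
        shared.append(name)  # critical section: exclusive access to the resource
        s.release()          # signal(s): unlocks the semaphore, waking a blocked task

t1 = threading.Thread(target=task, args=("J1", 100))
t2 = threading.Thread(target=task, args=("J2", 100))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(shared))  # 200: every append happened under mutual exclusion
```

Note that a thread released by release() goes back to the ready state, not directly to running, exactly as described above.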
The state transition diagram for the situation described above is shown in the figure.
Classification of scheduling algorithms
1. Preemptive : With preemptive algorithms, the running task can be interrupted at any
time to assign the processor to another active task.
2. Non-preemptive : With non-preemptive algorithms, a task, once started, is executed by
the processor until completion.
3. Static : Static algorithms are those in which scheduling decisions are based on fixed
parameters, assigned to tasks before their activation.
4. Dynamic : Dynamic algorithms are those in which scheduling decisions are based on
dynamic parameters that may change during system evolution.
5. Off-line : We say that a scheduling algorithm is used off-line if it is executed on the
entire task set before actual task activation. The schedule generated in this way is stored
in a table and later executed by a dispatcher.
6. On-line : We say that a scheduling algorithm is used on-line if scheduling decisions are
taken at runtime every time a new task enters the system or when a running task
terminates.
7. Optimal : An algorithm is said to be optimal if it minimizes some given cost function
defined over the task set. When no cost function is defined and the only concern is to
achieve a feasible schedule, then an algorithm is said to be optimal if it may fail to meet a
deadline only if no other algorithms of the same class can meet it.
8. Heuristic : An algorithm is said to be heuristic if it tends toward but does
not guarantee to find the optimal schedule.
APERIODIC TASK SCHEDULING
NOTATIONS
To facilitate the description of scheduling problems, a systematic notation is used that serves
as a basis for a classification scheme. It consists of three fields α | β | γ with the following
meaning:
The first field α describes the machine environment on which the task set has to be scheduled
(uniprocessor, multiprocessor, distributed architecture, and so on).
The second field β describes task and resource characteristics (preemptive, independent versus
precedence constrained, synchronous activations, and so on).
The third field γ indicates the optimality criterion (performance measure) to be followed in the
schedule.
Jackson's algorithm /Earliest Due Date (EDD) algorithm
EDD Scheduling:
Non-preemptive : Once a task starts executing, it runs to completion before the next task
begins.
Priority-based: Tasks with earlier due dates are prioritized.
Objective: The goal is to minimize the maximum lateness or tardiness of tasks. By
scheduling tasks in order of their due dates, the algorithm aims to complete tasks as close
to their deadlines as possible.
1. Sort the tasks by due date : All tasks are ordered in ascending order of their due dates.
The task with the earliest due date will be scheduled first.
2. Schedule tasks in order: After sorting, the tasks are scheduled one by one in the order of
their due dates. Since EDD is typically used in single-processor systems, each task starts
as soon as the previous one finishes, assuming there are no other constraints like
preemption or parallel processing.
3. Calculate Completion Times : For each task, the completion time is calculated based on
when the task finishes execution.
4. Evaluate Lateness and Tardiness :
o Lateness = Completion time - Due date
o Tardiness = max(0, Lateness). If a task is completed before its due date, the
tardiness is 0. If it's late, the tardiness is the amount of time it's delayed beyond its
due date.
5. Feasibility Check: The algorithm may be considered feasible if the tasks can all be
completed before their respective deadlines. If any task has a positive tardiness, the
system might need adjustment (e.g., adding more resources or adjusting deadlines).
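The steps above can be sketched in a few lines (an illustrative sketch; `edd` is a hypothetical helper name, and each task is given as a (name, computation time, due date) triple):

```python
# Minimal EDD sketch for synchronous, non-preemptive tasks on one processor.
def edd(tasks):
    t, result = 0, []
    for name, C, d in sorted(tasks, key=lambda x: x[2]):  # earliest due date first
        t += C                                            # runs to completion
        result.append((name, t, t - d, max(0, t - d)))    # (task, finish, lateness, tardiness)
    return result

# Four tasks with due dates 4, 5, 6, 7, as in the example below.
tasks = [("T1", 3, 5), ("T2", 2, 6), ("T3", 1, 4), ("T4", 4, 7)]
for row in edd(tasks):
    print(row)
# T3 finishes at 1, T1 at 4, T2 at 6, and T4 at 10 (tardy by 3)
```

The computation times here are chosen so the finish times match the lateness calculations in the example that follows.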
Example: consider four tasks, sorted by due date:
T3 (Ci: 1, Due Date: 4)
T1 (Ci: 3, Due Date: 5)
T2 (Ci: 2, Due Date: 6)
T4 (Ci: 4, Due Date: 7)
T3:
o Lateness = 1 - 4 = -3 (On time)
o Tardiness = max(0, -3) = 0 (No tardiness)
T1:
o Lateness = 4 - 5 = -1 (On time)
o Tardiness = max(0, -1) = 0 (No tardiness)
T2:
o Lateness = 6 - 6 = 0 (On time)
o Tardiness = max(0, 0) = 0 (No tardiness)
T4:
o Lateness = 10 - 7 = 3 (Late)
o Tardiness = max(0, 3) = 3 (Tardy)
Feasibility: All tasks are completed, but T4 is tardy by 3 time units. This means the
system is not perfectly feasible since T4 missed its due date.
Optimal for Maximum Lateness: EDD scheduling is optimal for minimizing the
maximum lateness in a single processor system. This means it provides the best
performance in terms of lateness when only a single processor is available.
Simple to Implement: The algorithm is straightforward and easy to implement, as it only
requires sorting tasks by due date and then processing them sequentially.
Non-preemptive : Since the algorithm is non-preemptive, once a task starts executing, it
cannot be interrupted. This is crucial in real-time systems that do not support preemption,
but it may also result in inefficiencies in certain scenarios.
Infeasible for High Load: The algorithm assumes that all tasks are independent and
non-preemptive. If tasks are very large or have tight deadlines that cannot be met in
sequence, the system may fail to meet deadlines, as seen in the example above.
1. Real-Time Operating Systems (RTOS): EDD is often used in RTOS for simple
scheduling scenarios where tasks have hard deadlines and the processor is limited to one
core.
2. Job Scheduling in Manufacturing: In environments where tasks (such as manufacturing
jobs) have deadlines, EDD can help schedule them in a way that minimizes tardiness and
ensures efficient use of resources.
3. Embedded Systems: EDD is suitable for scheduling real-time tasks in embedded
systems where deadlines are critical (e.g., sensor data processing, embedded control
systems).
Limitations of EDD:
1. Not Optimal for Total Tardiness : While EDD minimizes the maximum lateness, it
does not necessarily minimize the total tardiness across all tasks.
2. Single Processor : EDD is most effective in single-processor systems. For
multiprocessor environments, more complex scheduling algorithms (like Earliest
Deadline First (EDF) or Rate-Monotonic Scheduling (RMS)) are often used.
Example
Task  Ci  di
J1    1   3
J2    1   8
J3    2   6
J4    2   7
J5    1   4
Using the Earliest Due Date (EDD) algorithm, we first sort the tasks by their deadlines in
ascending order:
J1 (Deadline: 3)
J5 (Deadline: 4)
J3 (Deadline: 6)
J4 (Deadline: 7)
J2 (Deadline: 8)
We will now schedule the tasks based on their deadlines, assuming a single processor and non-
preemptive scheduling.
J1:
o Lateness = 1 - 3 = -2 → On time
o Tardiness = max(0, -2) = 0
J5:
o Lateness = 2 - 4 = -2 → On time
o Tardiness = max(0, -2) = 0
J3:
o Lateness = 4 - 6 = -2 → On time
o Tardiness = max(0, -2) = 0
J4:
o Lateness = 6 - 7 = -1 → On time
o Tardiness = max(0, -1) = 0
J2:
o Lateness = 7 - 8 = -1 → On time
o Tardiness = max(0, -1) = 0
We can now create the Gantt chart to visualize the scheduling of these tasks.
Time: 0    1    2    3    4    5    6    7
      | J1 | J5 | J3 | J3 | J4 | J4 | J2 |
Horn's algorithm / Earliest Deadline First (EDF)
If tasks are not synchronous but can have arbitrary arrival times, then preemption becomes an
important factor: if preemption is allowed, a task can be interrupted when a more important task
arrives. Horn found an elegant solution to the problem of scheduling a set of n independent tasks
on a uniprocessor system when tasks may have dynamic arrivals and preemption is allowed (1
| preem | Lmax). The algorithm is called Earliest Deadline First (EDF).
The Earliest Deadline First (EDF) scheduling algorithm is a dynamic priority scheduling
algorithm used in real-time systems.
Periodic tasks are those that arrive at regular intervals (e.g., every 10 ms or 20 ms) and have
fixed periods. Each task has a defined period, execution time, and deadline.
In EDF, each task is assigned a priority based on its deadline: the task with the earliest
deadline is given the highest priority.
If multiple tasks have deadlines that occur at the same time, EDF selects the task with the
earliest deadline for execution.
Preemptive: EDF is preemptive, meaning that if a new task with an earlier deadline
arrives, it can preempt the currently running task.
The task set is considered schedulable if the sum of the CPU utilization for all tasks
does not exceed 100% (for hard deadlines):
o If the total utilization U is less than or equal to 1 (i.e., U ≤ 1), the task set is
guaranteed to be schedulable.
o EDF is optimal for periodic tasks, meaning if a task set is schedulable by any
algorithm, it is also schedulable by EDF.
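The utilization test above can be sketched as follows (an illustrative sketch for tasks whose deadlines equal their periods; the task values are invented for the example):

```python
# EDF schedulability check for periodic tasks with deadlines equal to periods:
# the task set is feasible if and only if U = sum(Ci / Ti) <= 1.
def edf_schedulable(tasks):
    U = sum(C / T for C, T in tasks)
    return U, U <= 1.0

tasks = [(1, 4), (2, 8), (2, 10)]   # (Ci, Ti) pairs
U, ok = edf_schedulable(tasks)
print(round(U, 2), ok)
```

Here U = 1/4 + 2/8 + 2/10 = 0.7 ≤ 1, so the set is schedulable under EDF.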
At time 0, T1 arrives (Deadline: 3), T2 arrives (Deadline: 5), and T3 arrives (Deadline:
6).
o T1 is the first to run as it has the earliest deadline (3).
After T1 completes at time 1, T2 and T3 are pending.
o T2 has a deadline of 5, and T3 has a deadline of 6. T2 will run first as it has the
next earliest deadline.
Once T2 completes at time 3, T3 will run as it has the next earliest deadline.
This process continues for each task, and they are executed in order of their deadlines.
Aperiodic tasks are tasks that do not have regular intervals and are activated by external events.
Aperiodic tasks may have soft deadlines, meaning they are not always required to finish by a
specific time, or they may have hard deadlines.
Aperiodic tasks are also scheduled using EDF in real-time systems. The primary
difference between scheduling aperiodic and periodic tasks is that aperiodic tasks may
arrive at any time, and their deadlines can vary.
Queueing: Aperiodic tasks are typically placed in a queue, and when an aperiodic task
arrives, it is assigned a priority based on its deadline. The task with the earliest deadline
will be selected for execution first.
EDF can preempt ongoing tasks if a new aperiodic task arrives with an earlier deadline.
For aperiodic tasks, deadlines can either be hard or soft. In the case of hard deadlines, a
task must be completed before its deadline. For soft deadlines, a task's tardiness (if
missed) is penalized but not catastrophic.
The table below summarizes how EDF handles periodic, aperiodic, and mixed task sets:

Type of Task | Characteristics                            | Preemption | Scheduling Behavior
Periodic     | Regular intervals, fixed deadlines         | Yes        | Tasks are executed based on their deadlines.
Aperiodic    | Irregular intervals, flexible deadlines    | Yes        | Aperiodic tasks are scheduled based on their deadlines.
Mixed        | Combination of periodic and aperiodic tasks| Yes        | EDF can handle both types, with aperiodic tasks preempting periodic ones if necessary.
Advantages of EDF:
Optimal for periodic tasks : EDF is optimal in terms of scheduling periodic tasks,
meaning it will always schedule a set of periodic tasks if the set is feasible.
Handles mixed workloads : EDF can handle both periodic and aperiodic tasks
effectively.
Preemption: Allows for preemption, making it suitable for dynamic task sets.
Disadvantages of EDF:
Missed deadlines for aperiodic tasks : If multiple aperiodic tasks arrive with very tight
deadlines, there may be an increased likelihood of missed deadlines.
Overhead: EDF requires constant recalculations of deadlines as tasks arrive, adding
scheduling overhead.
Question
Schedule the given tasks using the Earliest Deadline First (EDF) scheduling algorithm and
calculate the average response time, total completion time, weighted sum of response times
(assume weights logically), lateness, and number of late tasks.
Task Details:
Task  Arrival time (ai)  Computation time (Ci)  Deadline (di)
J1    0                  4                      16
J2    4                  2                      6
J3    2                  4                      7
J4    6                  2                      8
Solution
In the Earliest Deadline First (EDF) scheduling algorithm, the task with the earliest deadline
gets the highest priority. We need to schedule the tasks based on their deadlines, but also
consider their arrival times.
J2: Deadline =6
J3: Deadline =7
J4: Deadline =8
J1: Deadline = 16
We schedule the tasks in order of their deadlines, taking arrival times and preemption into
account.
Initial state:
Time = 0
J1 arrives at time 0 with a deadline of 16 and, being the only ready task, starts executing.
1. J1 executes from time 0 to time 2.
2. At time 2, J3 arrives (deadline 7). Since 7 < 16, J3 preempts J1 and starts executing.
3. At time 4, J2 arrives (deadline 6). Since 6 < 7, J2 preempts J3 and runs from time 4 to
time 6.
o Completion time of J2 = 6.
4. At time 6, J4 arrives (deadline 8), but J3 (deadline 7) now has the earliest deadline
among the ready tasks, so J3 resumes and runs from time 6 to time 8.
o Completion time of J3 = 8.
5. J4 runs from time 8 to time 10.
o Completion time of J4 = 10.
6. Finally, J1 resumes and runs from time 10 to time 12.
o Completion time of J1 = 12.
Completion Times:
J1 = 12, J2 = 6, J3 = 8, J4 = 10.
Lateness:
J1: Lateness = 12 - 16 = -4 → On time (the task finishes before its deadline)
J2: Lateness = 6 - 6 = 0 → On time
J3: Lateness = 8 - 7 = 1 → Late
J4: Lateness = 10 - 8 = 2 → Late
Tardiness:
Tardiness is the amount of time a task is late; if the lateness is negative or zero, the tardiness is
zero.
J1: Tardiness = 0
J2: Tardiness = 0
J3: Tardiness = 1
J4: Tardiness = 2
The response time of a task is the time from its arrival to its completion: Ri = fi - ai.
R(J1) = 12 - 0 = 12, R(J2) = 6 - 4 = 2, R(J3) = 8 - 2 = 6, R(J4) = 10 - 6 = 4.
Total completion time is the sum of the completion times of all tasks: 12 + 6 + 8 + 10 = 36.
Assume weights proportional to the computation times (a larger computation time implies a
higher weight): w(J1) = 4, w(J2) = 2, w(J3) = 4, w(J4) = 2.
Weighted sum of response times = 4(12) + 2(2) + 4(6) + 2(4) = 84.
Final Results:
Metric                           Value
Average response time            (12 + 2 + 6 + 4) / 4 = 6
Total completion time            36
Weighted sum of response times   84
Maximum lateness                 2 (J4)
Number of late tasks             2 (J3 and J4)
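A worked example like this can be checked with a minimal preemptive EDF simulator that, at each integer time unit, runs the ready task with the earliest absolute deadline. This is an illustrative sketch; the task parameters are taken from the question above (J1 = (0, 4, 16), J2 = (4, 2, 6), J3 = (2, 4, 7), J4 = (6, 2, 8)):

```python
# Minimal preemptive EDF simulator over integer time units.
# tasks: name -> (arrival, computation, deadline)
def edf_sim(tasks):
    remaining = {n: C for n, (a, C, d) in tasks.items()}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if tasks[n][0] <= t]
        if not ready:
            t += 1                                  # processor idles until an arrival
            continue
        n = min(ready, key=lambda n: tasks[n][2])   # earliest absolute deadline wins
        remaining[n] -= 1                           # run it for one time unit
        t += 1
        if remaining[n] == 0:
            del remaining[n]
            finish[n] = t
    return finish

tasks = {"J1": (0, 4, 16), "J2": (4, 2, 6), "J3": (2, 4, 7), "J4": (6, 2, 8)}
print(edf_sim(tasks))  # {'J2': 6, 'J3': 8, 'J4': 10, 'J1': 12}
```

Because the simulator re-evaluates deadlines every time unit, it produces the fully preemptive EDF schedule, with J3 preempting J1 at t = 2 and J2 preempting J3 at t = 4.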
STUDY BY YOURSELF
Two algorithms minimize the maximum lateness in the presence of precedence constraints: the
first assumes synchronous activations, the second allows preemptive scheduling with dynamic
activations.
1. Latest Deadline First (1 | prec, sync | Lmax)
Lawler presented an optimal algorithm that minimizes the maximum lateness of a set of tasks
with precedence relations and simultaneous arrival times. The algorithm is called Latest
Deadline First (LDF) and can be executed in polynomial time with respect to the number of tasks
in the set.
LDF (Latest Deadline First) scheduling is an optimal algorithm for tasks with precedence
constraints and simultaneous arrivals. Despite its name, it does not give the highest runtime
priority to the task with the latest deadline; instead, it builds the schedule backwards, from the
tail:
1. Select the Last Task : Among the tasks whose successors (in the precedence graph) have
all already been selected, pick the task with the latest deadline and place it last in the
schedule.
2. Repeat : Remove the selected task from consideration and repeat the selection on the
remaining tasks, filling the schedule from the end toward the beginning.
3. Execute in Order : At runtime, the tasks are executed in the resulting order, from first to
last. No preemption is needed, since all tasks arrive simultaneously.
Intuitively, a task with a late deadline can afford to be placed at the end of the schedule, so
choosing it last leaves room for more urgent tasks earlier. When the tasks are independent (no
precedence constraints), LDF produces the same order as EDD: increasing deadlines.
Example
Consider four tasks with simultaneous arrivals and the following computation times and
deadlines:
Task  Ci  di
J1    4   16
J2    2   6
J3    4   7
J4    2   8
Building the schedule from the tail: among all tasks, J1 has the latest deadline (16), so it is
placed last; among the remaining tasks, J4 (deadline 8) is placed second to last; then J3
(deadline 7); finally, J2 (deadline 6) is placed first.
Execution order: J2, J3, J4, J1.
J2 runs from time 0 to 2 (deadline 6, lateness -4).
J3 runs from time 2 to 6 (deadline 7, lateness -1).
J4 runs from time 6 to 8 (deadline 8, lateness 0).
J1 runs from time 8 to 12 (deadline 16, lateness -4).
All tasks complete by their deadlines; the maximum lateness is 0 and no task is tardy.
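Lawler's tail-first construction can be sketched as follows (an illustrative sketch with hypothetical names; tasks are assumed to arrive simultaneously, and successors encodes the precedence graph):

```python
# Sketch of Lawler's LDF: build the schedule from the tail. Among tasks whose
# successors are all already placed, the one with the LATEST deadline goes last.
def ldf(deadlines, successors):
    placed, tail = set(), []
    while len(tail) < len(deadlines):
        eligible = [t for t in deadlines
                    if t not in placed
                    and all(s in placed for s in successors.get(t, []))]
        t = max(eligible, key=lambda x: deadlines[x])  # latest deadline scheduled last
        tail.append(t)
        placed.add(t)
    return list(reversed(tail))  # execution order, first to last

# Independent tasks (no precedence): LDF reduces to increasing-deadline order.
deadlines = {"J1": 16, "J2": 6, "J3": 7, "J4": 8}
print(ldf(deadlines, {}))  # ['J2', 'J3', 'J4', 'J1']
```

With precedence constraints, the eligibility test keeps a task out of the tail until all of its successors have been placed, so the final order never violates the graph.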
When a control application consists of several concurrent periodic tasks with individual timing
constraints, the operating system has to guarantee that each periodic instance is regularly
activated at its proper rate and is completed within its deadline.
Basic algorithms for handling periodic tasks are :
1. Rate Monotonic
2. Deadline Monotonic
The Rate Monotonic Scheduling (RMS) algorithm is one of the most commonly used
scheduling algorithms for real-time tasks. It is a preemptive scheduling algorithm based on task
priority: tasks with shorter periods (or deadlines) are given higher priority. The key idea is to
assign priorities to tasks such that a task with a shorter period will always preempt a task with a
longer period.
Key Concepts:
1. Period: The time interval after which a periodic task repeats. It is usually the same as the
task's deadline.
2. Execution Time (or Computation Time): The amount of time a task needs to complete.
3. Priority: A task with a shorter period has a higher priority in Rate Monotonic
Scheduling.
Each task is assigned a priority based on its period: the shorter the period, the higher
the priority.
The scheduler will execute tasks based on their priorities, with higher priority tasks
preempting lower priority ones.
Preemption can occur if a higher-priority task arrives during the execution of a lower-
priority task.
If the system can execute all tasks without missing deadlines, the system is said to be
schedulable.
1. Assign Priorities: Tasks are assigned priorities based on their periods. A task with the
smallest period gets the highest priority.
2. Execute Tasks: At each time unit, the scheduler executes the task with the highest
priority that is ready to execute (i.e., has arrived and hasn't yet finished).
3. Preemption: If a higher-priority task becomes ready while a lower-priority task is
running, the running task is preempted.
Example:
Task  Ci  Ti  Di
T1    1   4   4
T2    1   5   5
T3    2   8   8
In RMS, tasks with shorter periods get higher priorities. Therefore, the priority order based on
their periods is: T1 > T2 > T3.
We will now schedule the tasks based on their priorities, starting at t = 0. A full cycle is the
least common multiple (LCM) of the periods (4, 5, and 8), which is 40; the first ten time units
are shown below.

Time 0-1 1-2 2-3 3-4 4-5 5-6 6-7 7-8 8-9 9-10
Task T1  T2  T3  T3  T1  T2  -   -   T1  T3

Task Completion:
T1 executes at times 0-1, 4-5, 8-9, 12-13, ..., at the start of each of its periods, preempting
lower-priority tasks when necessary.
T2 executes at times 1-2, 5-6, 10-11, ..., and must wait whenever a new instance of T1 is
pending.
T3 executes at times 2-4, then 9-10 and 11-12, ...; its second instance (released at t = 8) is
interrupted at t = 10 by the new instance of T2.
Preemption:
T1 can preempt both T2 and T3, as it has the highest priority due to its shortest period (4).
T2 can preempt T3, because it has a shorter period (5).
T3 executes only when neither T1 nor T2 is ready to execute.
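A schedule like this can be replayed with a minimal fixed-priority simulator in which the shortest period wins (an illustrative sketch over integer time units; the task values match the example above, and rm_sim is a hypothetical helper name):

```python
# Minimal rate-monotonic simulator over integer time units.
# tasks: name -> (C, T); priority is implicit: the shortest period wins.
def rm_sim(tasks, horizon):
    remaining = {n: 0 for n in tasks}
    timeline = []
    for t in range(horizon):
        for n, (C, T) in tasks.items():
            if t % T == 0:
                remaining[n] = C                       # a new job is released
        ready = [n for n in tasks if remaining[n] > 0]
        if ready:
            n = min(ready, key=lambda n: tasks[n][1])  # shortest period first
            remaining[n] -= 1
            timeline.append(n)
        else:
            timeline.append("-")                       # idle slot
    return timeline

tasks = {"T1": (1, 4), "T2": (1, 5), "T3": (2, 8)}
print(rm_sim(tasks, 10))
# ['T1', 'T2', 'T3', 'T3', 'T1', 'T2', '-', '-', 'T1', 'T3']
```

Since new jobs are released at every multiple of the period and the highest-priority ready job runs each time unit, preemption happens automatically at release instants.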
Disadvantages:
1. RMA may leave the processor underutilized: its utilization-based guarantee only reaches
about 69% (ln 2) for large task sets, whereas EDF can guarantee up to 100%.
2. RMA is not optimal when the task period and deadline differ.
DEADLINE MONOTONIC
1. Priority Assignment: A task with an earlier deadline is assigned a higher priority. This is
the opposite of Rate Monotonic Scheduling, where tasks with shorter periods are
assigned higher priorities.
2. Preemptive: Like RMS, DMS is a preemptive scheduling algorithm. A higher-priority
task can preempt a lower-priority task if the higher-priority task arrives while the lower-
priority task is executing.
3. Schedulability: A system is schedulable under DMS if the set of tasks can be scheduled
without missing any deadlines. Schedulability is typically verified with utilization-based
tests or response-time analysis.
1. Assign Priorities: Tasks with earlier deadlines receive higher priorities. If two tasks have
the same deadline, their priorities can be assigned based on their arrival times or periods.
2. Execute Tasks: The scheduler will always execute the task with the highest priority (i.e.,
the task with the earliest deadline).
3. Preemption: If a higher-priority task arrives while a lower-priority task is running, the
lower-priority task is preempted.
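The schedulability check mentioned above can be sketched with standard response-time analysis for fixed-priority tasks: the worst-case response time Ri = Ci + sum over higher-priority tasks j of ceil(Ri/Tj)·Cj is iterated to a fixed point. This is an illustrative sketch; the task values are invented and `response_time` is a hypothetical helper name:

```python
import math

# tasks: list of (C, T, D), sorted by priority, highest first
# (for deadline-monotonic priorities: earliest deadline = highest priority).
def response_time(tasks, i):
    C, T, D = tasks[i]
    R = C
    while True:
        # Interference from all higher-priority tasks released within R.
        R_new = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj, _ in tasks[:i])
        if R_new == R:
            return R          # fixed point reached: worst-case response time
        if R_new > D:
            return None       # response time exceeds the deadline: not schedulable
        R = R_new

tasks = [(1, 4, 4), (1, 5, 5), (2, 8, 8)]  # DM order: earliest deadline first
print([response_time(tasks, i) for i in range(len(tasks))])  # [1, 2, 4]
```

Every response time is at or below the corresponding deadline, so this task set is schedulable under fixed priorities.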
Example:
Task  Ci  Ti  Di
T1    2   6   4
T2    1   8   6
T3    3   12  9
Under DMS the priority order is T1 > T2 > T3 (earliest deadline first: 4 < 6 < 9). The first 12
time units of the schedule are shown below.

Time 0-1 1-2 2-3 3-4 4-5 5-6 6-7 7-8 8-9 9-10 10-11 11-12
Task T1  T1  T2  T3  T3  T3  T1  T1  T2  -    -     -

Task Completion:
T1 completes its first instance at t = 2 and its second (released at t = 6) at t = 8, both within
their deadlines.
T2 completes at t = 3 and t = 9.
T3 completes its first instance at t = 6, within its deadline of 9.
REAL TIME KERNEL
A kernel represents the innermost part of any operating system that is in direct
connection with the hardware of the physical machine. A kernel usually provides the following
basic activities:
1. Process management,
2. Interrupt handling, and
3. Process synchronization
Process management
Process management is the primary service that an operating system has to provide. It includes
various supporting functions, such as process creation and termination, job scheduling,
dispatching, context switching, and other related activities.
Interrupt handling
The objective of the interrupt handling mechanism is to provide service to the interrupt requests
that may be generated by any peripheral device, such as the keyboard, serial ports, analog-to-
digital converters, or any specific sensor interface. The service provided by the kernel to an
interrupt request consists of the execution of a dedicated routine (driver) that will transfer data
from the device to the main memory (or vice versa). In classical operating systems, application
tasks can always be preempted by drivers, at any time. In real-time systems, however, this
approach may introduce unpredictable delays in the execution of critical tasks, causing some
hard deadline to be missed. For this reason, in a real-time system, the interrupt handling
mechanism has to be integrated with the scheduling mechanism, so that a driver can be scheduled
as any other task in the system and a guarantee of feasibility can be achieved even in the presence
of interrupt requests.
Process synchronization
Another important role of the kernel is to provide a basic mechanism for supporting
process synchronization and communication. In classical operating systems this is done by
semaphores, which represent an efficient solution to the problem of synchronization, as well as
to the one of mutual exclusion.
Semaphores are prone to priority inversion, which can introduce unbounded blocking
in a task's execution and prevents any guarantee for hard real-time tasks. As a consequence, in order
to achieve predictability, a real-time kernel has to provide special types of semaphores that
support a resource access protocol (such as Priority Inheritance, Priority Ceiling, or Stack
Resource Policy) for avoiding unbounded priority inversion.
Other kernel activities involve the initialization of internal data structures (such as
queues, tables, task control blocks, global variables, semaphores, and so on) and specific services
to higher levels of the operating system.
Structure of a Real-Time Kernel
Task Manager: Manages task creation, scheduling, and execution. It typically includes
support for multiple tasks, with real-time scheduling algorithms like Rate-Monotonic
Scheduling (RMS), Earliest Deadline First (EDF), etc.
Scheduler: Responsible for determining which task should be executed based on their
priorities, deadlines, and periodicity.
Interrupt Handler: Manages the interrupts from hardware. Interrupt handling in real-
time systems must be deterministic to ensure timely responses.
Memory Manager: Allocates and manages memory in a real-time system, often with
fixed-size buffers or memory pools to avoid delays associated with dynamic memory
allocation.
Timer: Provides timing services like task deadlines, time slicing, or system clock, to
manage periodic task execution and system synchronization.
Synchronization Primitives: Includes tools like semaphores, mutexes, and event flags to
manage shared resources and prevent race conditions.
I/O Manager: Handles communication with external devices, ensuring that I/O
operations are completed within time constraints.
Communication Manager: Deals with message-passing, inter-process communication
(IPC), or data exchange between tasks or external devices in a deterministic way.
In a real-time kernel, tasks go through various states during their lifecycle. The task states and
transitions are usually modeled as a state transition diagram.
Task States:
1. Ready: The task is ready to execute but is waiting for the CPU to be allocated to it. This
is typically due to the task being preempted or the system waiting for its turn to execute.
2. Running: The task is currently being executed by the CPU.
3. Blocked (or Waiting): The task is blocked, usually waiting for an event (e.g., I/O
completion, resource availability, or synchronization).
4. Suspended: The task is not scheduled for execution, either due to a manual intervention
or system state (e.g., sleeping, or awaiting a higher-priority task).
5. Terminated: The task has completed its execution and is no longer active
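The states and transitions above can be sketched as a small table-driven state machine (an illustrative sketch; the exact transition set varies between kernels, and the names are assumptions):

```python
READY, RUNNING, BLOCKED, SUSPENDED, TERMINATED = (
    "ready", "running", "blocked", "suspended", "terminated")

# Legal transitions, as in the state transition diagram described above.
TRANSITIONS = {
    READY:      {RUNNING, SUSPENDED},                     # dispatch, suspend
    RUNNING:    {READY, BLOCKED, SUSPENDED, TERMINATED},  # preempt, wait, suspend, exit
    BLOCKED:    {READY},                                  # signal: back to ready, not running
    SUSPENDED:  {READY},                                  # resume
    TERMINATED: set(),
}

def transition(state, new_state):
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = READY
s = transition(s, RUNNING)   # dispatched by the scheduler
s = transition(s, BLOCKED)   # wait() on a locked semaphore
s = transition(s, READY)     # signal() unblocks it: ready, not running
print(s)  # ready
```

Note that a blocked task can only move to the ready state, never directly to running, matching the semaphore discussion earlier in the module.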
Kernel Primitives
Kernel primitives are the basic building blocks of a real-time kernel. They are used to manage
task execution, synchronization, and communication. Common kernel primitives include:
1. Task Management Primitives :
CreateTask(): Creates a new task and initializes its properties (priority, execution time,
etc.).
DeleteTask(): Deletes a task that is no longer needed.
ActivateTask(): Activates a task to move it to the Ready state.
TerminateTask(): Terminates a running task and frees up its resources.
SuspendTask(): Suspends a task, saving its state and preventing it from running until
resumed.
ResumeTask(): Resumes a suspended task from where it left off.
2. Synchronization Primitives :
Semaphore:
o Wait(): Decrements the semaphore, and blocks the task if the semaphore value is
0 (i.e., the resource is unavailable).
o Signal(): Increments the semaphore, signaling that the resource is now available.
Mutex (Mutual Exclusion):
o Lock(): Acquires a mutex. If the mutex is already locked by another task, the task
will be blocked until it can acquire the mutex.
o Unlock(): Releases the mutex, allowing other tasks to acquire it.
Event Flags:
o SetEvent(): Sets a flag that can trigger a task to wake up or execute when it is in
the Blocked state.
o ClearEvent(): Clears the flag, preventing it from triggering any tasks.
3. Time Management Primitives :
Delay(): Causes a task to delay its execution for a specified amount of time.
TimeSlice(): Specifies the maximum amount of time a task can run before it is
preempted.
GetTime(): Returns the current system time or the elapsed time since the system booted.