CHAPTER 5
REAL-TIME OS
5.1 Introduction to Real-Time Systems
A real-time system is a data processing system in which the time interval required to process and respond to inputs is so small that the system can control its environment. The time taken by the system to respond to an input and produce the required updated output is termed the response time; in real-time processing this response time is much shorter than in online processing.
Real-time systems are used when there are rigid time requirements on the operation of a processor or on the flow of data, and they often serve as control devices in dedicated applications. A real-time operating system must have well-defined, fixed time constraints; otherwise the system will fail. Examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, and air traffic control systems.
There are two types of real-time operating systems: hard real-time systems, which guarantee that critical tasks always complete within their deadlines, and soft real-time systems, in which a critical task gets priority over other tasks but an occasional missed deadline is tolerable.
Features of RTOS:
1. Better reliability: an RTOS can run for long periods without any human interference.
2. Predictable results: every action is executed within a predefined time frame.
3. Better performance: it can perform complex tasks without taking on extra workload.
4. Small footprint: the software and hardware used in an RTOS are small, so a technician has less trouble locating errors in the system.
5. Good stability: because of this, an RTOS can be upgraded or downgraded easily.
6. Low memory consumption, affordable cost, and minimal resource occupation.
A process keeps changing its state while in execution. The states of a process can be categorized as:
a) New: The initial state of each process. At this stage, the process is being created.
b) Running: When the processor is executing the process's instructions, the process is in the running state.
c) Waiting: The process is waiting for an event to occur, such as completion of an I/O operation or receipt of a signal to proceed.
d) Ready: The process is ready to be assigned to the processor.
e) Terminated: A process that has completed its execution is in the terminated state.
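As a minimal sketch, the five states can be modeled with a C enum and one typical lifecycle walked through in order; the names below are illustrative, not taken from any particular OS:

#include <stdio.h>

/* Hypothetical process-state model, for illustration only. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

static const char *state_name(proc_state_t s) {
    switch (s) {
    case NEW:        return "New";
    case READY:      return "Ready";
    case RUNNING:    return "Running";
    case WAITING:    return "Waiting";
    case TERMINATED: return "Terminated";
    }
    return "?";
}

int main(void) {
    /* One typical lifecycle: created, scheduled, blocked on I/O, rescheduled, done. */
    proc_state_t path[] = { NEW, READY, RUNNING, WAITING, READY, RUNNING, TERMINATED };
    size_t n = sizeof path / sizeof path[0];
    for (size_t i = 0; i < n; i++)
        printf("%s%s", state_name(path[i]), i + 1 < n ? " -> " : "\n");
    return 0;
}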
The operating system records each process's information (its state, program counter, registers, and so on) in a Process Control Block (PCB). The architecture of a PCB is entirely dependent on the operating system and may contain different information in different operating systems. The PCB is maintained for a process throughout its lifetime and is deleted once the process terminates.
Threads:
• A thread is the smallest unit of processing that can be performed in an OS.
• A thread is an execution unit that has its own program counter, stack, and set of registers.
• In most modern operating systems, a thread exists within a process; that is, a single process may contain multiple threads.
• A thread is a single sequence stream within a process.
• Because threads have some of the properties of processes, they are sometimes called lightweight processes.
• Threads are a popular way to improve application performance through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel.
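As a short sketch of multithreading, the following C program uses the POSIX pthreads API (an RTOS would expose its own thread/task API instead); two threads exist within one process and share its address space:

#include <pthread.h>
#include <stdio.h>

/* Each thread runs this function; the argument identifies the thread. */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int ids[2] = { 0, 1 };

    /* Create two threads inside this single process. */
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);

    /* Wait for both to finish before the process terminates. */
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}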
Types of threads:
Threads are implemented in the following two ways: user-level threads, managed by a thread library in user space without kernel involvement, and kernel-level threads, managed directly by the operating system kernel.
Task:
The basic building block of software written under an RTOS is the task. While a job is a unit of work that is scheduled and executed by a system, a task is a group of different processes and functions combined to achieve some specified result.
5.3 Kernel
The kernel is the central component of an operating system: it manages the operations of the computer and its hardware, principally memory and CPU time. The kernel acts as a bridge between applications and the data processing performed at the hardware level, using inter-process communication and system calls.
The kernel is the first program loaded after the boot loader whenever a system starts, and it remains in memory until the operating system shuts down. It provides an interface between the user and the hardware components of the system, and it is responsible for tasks such as disk management, task management, and memory management.
Types of Kernel
1. Monolithic Kernels
In a monolithic kernel, user services and kernel services are implemented in the same memory space; no separate memory is used for each. Because everything shares one memory space, the kernel grows large, increasing the overall size of the OS. Process execution, however, is faster than in other kernel types, since there is no switch between separate user and kernel spaces.
Examples of monolithic kernels: Unix, Linux, OpenVMS, XTS-400, etc.
2. Microkernel
A microkernel, also written μK, differs from a traditional monolithic kernel: user services and kernel services are implemented in two different address spaces, user space and kernel space. Because the services live in separate spaces, the microkernel itself is small, which also reduces the size of the OS.
Examples of microkernels: L4, AmigaOS, Minix, K42, etc.
3. Hybrid Kernel
Hybrid kernels, also known as modular kernels, combine monolithic kernels and microkernels: they take the speed of a monolithic kernel and the modularity of a microkernel. A hybrid kernel can be understood as an extended microkernel with additional monolithic properties. These kernels are widely used in commercial OSes, such as the various versions of MS Windows.
Examples of hybrid kernels: Windows NT, NetWare, BeOS, etc.
Real-Time Kernel:
In most cases the RTOS is an operating system kernel. An embedded system is designed for a single purpose, so user-shell and file/disk-access features are unnecessary. The kernel of an RTOS is therefore called a real-time kernel: it is highly specialized and contains only the minimal set of services required for running the user application/tasks.
The kernel is the part of an OS responsible for managing threads (i.e., managing the CPU's time) and for communication between threads. The fundamental service provided by the kernel is context switching.
An RTOS kernel provides the following functions:
• Task management
• Task scheduling
• Task synchronization
• Time management
• Memory management
• Interrupt handling
• Exception handling
1. Task management: Deals with setting up the memory space for a task, loading the task's code into that memory space, allocating system resources, setting up a Task Control Block (TCB) for the task, and handling task termination or deletion.
2. Task scheduling: Deals with sharing the CPU among the various tasks/processes. A kernel application called the 'scheduler' handles task scheduling.
3. Task synchronization: Deals with synchronizing concurrent access to a resource that is shared among multiple tasks. The real-time kernel also handles communication between the various tasks.
4. Time management: Accurate time management is essential for providing a precise time reference to all applications. The time reference to the kernel is provided by a high-resolution Real-Time Clock (RTC) hardware chip. The hardware timer is programmed to interrupt the processor/controller at a fixed rate; this periodic timer interrupt serves as the kernel's time reference.
Task/Process Scheduling:
Determining which process in the ready state should be moved to the running state is known as process scheduling.
In other words, process scheduling is the activity of the process manager that handles removal of the running process from the CPU and selection of another process on the basis of a particular strategy. The main aim of a process scheduling system is to keep the CPU busy at all times and to deliver the minimum response time for all programs.
Scheduling Criteria
There are many criteria to consider when judging the "best" scheduling algorithm:
i) CPU utilization
To make the best use of the CPU and not waste any CPU cycles, the CPU should be working most of the time (ideally 100% of the time). In a real system, CPU utilization should range from 40% (lightly loaded) to 90% (heavily loaded).
ii) Throughput
It is the total number of processes completed per unit time, or equivalently the total amount of work done in a unit of time. This may range from 10 per second to 1 per hour, depending on the specific processes.
iii) Turnaround time
It is the amount of time taken to execute a particular process, i.e., the interval from the time of submission of the process to the time of its completion (wall-clock time).
TAT = Time of completion of job − Time of submission of job
iv) Waiting time
The sum of the periods spent waiting in the ready queue.
v) Load average
It is the average number of processes residing in the ready queue waiting for their turn to
get into the CPU.
vi) Response time
Amount of time it takes from when a request was submitted until the first response is
produced. Remember, it is the time till the first response and not the completion of process
execution (final response).
Thus, a good scheduler should deliver high CPU utilization, maximum throughput, low waiting time, and low response time.
Types of Scheduling:
1. Preemptive scheduling:
A scheduling discipline is preemptive if, once a process has been given the CPU, the CPU can be taken away from that process. Every process executes for some amount of CPU time: CPU time is divided into slices that are distributed among the processes according to some rule. If the time is divided into equal intervals, each interval is called a time quantum.
2. Non-preemptive scheduling:
A scheduling discipline is non-preemptive if, once a process has been given the CPU, the CPU cannot be taken away from that process. No time-slicing is done.
Scheduling Techniques:
1. First Come First Served (FCFS)
• A non-preemptive scheduling technique.
• Jobs are executed on a first come, first served basis.
• Easy to understand and implement.
• Poor in performance, as the average wait time is high.
2. Priority Scheduling
• A priority is assigned to each process.
• The process with the highest priority is executed first, and so on.
• Processes with the same priority are executed in FCFS order.
• Priority can be decided based on memory requirements, time requirements, or any other resource requirement.
3. Round Robin
• A fixed time, called the quantum, is allotted to each process for execution.
• Once a process has executed for the given time period, it is preempted and another process executes for its time period.
• Context switching is used to save the states of preempted processes.
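To make the round-robin mechanics concrete, here is a small simulation sketch in C; the burst times and quantum are made-up values, not from the text, and all jobs are assumed to arrive at time 0. It also reports each job's turnaround time using the TAT formula given earlier:

#include <stdio.h>

#define NJOBS   3
#define QUANTUM 2   /* hypothetical time slice */

int main(void) {
    int burst[NJOBS] = { 5, 3, 8 };   /* made-up CPU bursts; all jobs submitted at t=0 */
    int remaining[NJOBS], finish[NJOBS];
    int t = 0, done = 0;

    for (int i = 0; i < NJOBS; i++)
        remaining[i] = burst[i];

    /* Cycle through the ready jobs, giving each at most one quantum per turn. */
    while (done < NJOBS) {
        for (int i = 0; i < NJOBS; i++) {
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            t += slice;               /* job i runs for 'slice' time units */
            remaining[i] -= slice;
            if (remaining[i] == 0) {  /* job i completed at time t */
                finish[i] = t;
                done++;
            }
        }
    }

    /* TAT = completion time - submission time (submission is 0 here). */
    for (int i = 0; i < NJOBS; i++)
        printf("job %d: burst=%d finish=%d TAT=%d waiting=%d\n",
               i, burst[i], finish[i], finish[i], finish[i] - burst[i]);
    return 0;
}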
A Task Control Block (TCB) is a data structure holding the information the OS uses to control a task's state; a task uses its TCB to remember its context. The TCB resides in RAM and is accessible only by the RTOS.
Task information in the TCB:
• Task ID: a unique identifier for the task. For example, with an 8-bit ID, a number between 0 and 255 is used as the Task ID.
• Task context: the current values of the program counter, stack pointer, CPU registers, and status register.
• Task priority: the priority level of the parent task as well as any child tasks in the task list. The priority is a number used as an identifier.
• Task Context_init: a pointer to the processor memory that stores the following information:
– allocated program-memory address blocks, in physical memory and in secondary (virtual) memory, for the task's code;
– allocated task-specific data address blocks;
– allocated task-stack addresses for the functions called while the task runs;
– allocated addresses of the CPU register-save area, since a task's context is represented by the CPU registers, including the program counter and stack pointer.
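A minimal TCB might be sketched in C as follows; the field names and sizes are illustrative assumptions, not the layout of any particular RTOS:

#include <stdint.h>

#define MAX_REGS 16  /* assumed number of saved general-purpose registers */

/* Illustrative Task Control Block; real RTOSes differ in layout and detail. */
typedef struct tcb {
    uint8_t    task_id;         /* unique 8-bit identifier (0..255)        */
    uint8_t    priority;        /* numeric priority level                  */
    uint32_t   pc;              /* saved program counter                   */
    uint32_t  *sp;              /* saved stack pointer                     */
    uint32_t   regs[MAX_REGS];  /* saved CPU register context              */
    uint32_t   status;          /* saved status register                   */
    void      *context_init;    /* pointer to code/data/stack address info */
    struct tcb *next;           /* link in the kernel's task list          */
} tcb_t;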
Context Switching:
When the multithreading kernel decides to run a different thread, it simply saves the current
thread’s context (CPU registers) in the current thread’s context storage area (the thread control
block, or TCB). Once this operation is performed, the new thread’s context is restored from its TCB
and the CPU resumes execution of the new thread’s code. This process is called a context switch.
Context switching adds overhead to the application.
In short, the act of switching the CPU among processes, i.e. changing the current execution context, is known as context switching.
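To make this concrete, here is a user-space sketch using the POSIX ucontext API (available on Linux); swapcontext saves the current register context and restores another, which mimics what an RTOS kernel does on a context switch:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;
static char stack[16384];  /* private stack for the second context */

static void thread_func(void) {
    puts("in thread context");
    /* Returning here resumes uc_link, i.e. the saved main context. */
}

int main(void) {
    getcontext(&thread_ctx);                 /* initialize the context       */
    thread_ctx.uc_stack.ss_sp   = stack;     /* give it its own stack        */
    thread_ctx.uc_stack.ss_size = sizeof stack;
    thread_ctx.uc_link          = &main_ctx; /* where to go when it returns  */
    makecontext(&thread_ctx, thread_func, 0);

    puts("switching context");
    /* Save the current (main) context and restore thread_ctx: a context switch. */
    swapcontext(&main_ctx, &thread_ctx);
    puts("back in main context");
    return 0;
}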
Interrupt Handling:
Interrupts allow a microprocessor to process events as they occur, saving it from continuously polling to check whether an event has happened. Microprocessors allow interrupts to be ignored or recognized through two special instructions: disable interrupts and enable interrupts, respectively.
An interrupt handler services an interrupt generated by an external device as follows:
• The current context of the task is saved on the stack.
• The task is blocked and program control branches to the beginning address of the ISR, which executes to serve the interrupt.
• The handler returns from the interrupt routine, and the context of the blocked task is restored.
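Real ISRs are hardware- and toolchain-specific. As a portable stand-in, the following C program uses a POSIX signal handler, which is delivered asynchronously much like an interrupt; the alarm timer plays the role of the external device, and the program sleeps instead of busy-polling:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t event_flag = 0;

/* The "ISR": keep it short; just record that the event happened. */
static void handler(int signo) {
    (void)signo;
    event_flag = 1;
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = handler;        /* the routine to run on the event */
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGALRM, &sa, NULL);  /* "enable" handling of this event */

    alarm(1);                       /* hardware-timer stand-in: fires in 1 s */
    while (!event_flag)
        pause();                    /* sleep until a signal arrives; no polling */
    puts("event handled asynchronously");
    return 0;
}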
Task Communication
In a multitasking system, multiple tasks/processes run concurrently, and each process may or may not interact with the others. Based on the degree of interaction, processes are classified as cooperating processes and competing processes. A cooperating process requires input from other processes to complete its execution; competing processes share nothing except system resources.
The mechanism through which processes/tasks communicate with each other is known as Inter-Process Communication (IPC). IPC is kernel dependent. Some important IPC mechanisms are discussed below:
1. Shared Memory
In the shared-data model, processes communicate via access to shared areas of memory, in which variables modified by one process are visible to all processes.
While accessing shared data is a simple way to communicate, it raises the major issue of race conditions. A race condition occurs when a process accessing shared variables is preempted before completing a modification, compromising the integrity of the shared variables. To counter this, the portions of a process that access shared data, called critical sections, can be earmarked for mutual exclusion (mutex for short). Mutex mechanisms allow shared memory to be locked by the process accessing it, giving that process exclusive access to the shared data.
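The race condition is easy to reproduce with two threads incrementing a shared counter. This sketch uses pthreads; the final count usually falls short of the expected 2000000, because the unprotected read-modify-write sequences interleave:

#include <pthread.h>
#include <stdio.h>

#define ITERS 1000000

static long counter = 0;  /* shared data, intentionally unprotected */

static void *inc(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++)
        counter++;        /* read-modify-write: NOT atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, inc, NULL);
    pthread_create(&b, NULL, inc, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 2000000; typically prints less because of the race. */
    printf("counter = %ld (expected %d)\n", counter, 2 * ITERS);
    return 0;
}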
2. Message Passing
Message passing is an asynchronous information-exchange mechanism for IPC. The major difference between shared memory and message passing is that shared memory can share large amounts of data, whereas message passing carries only a limited amount. Message passing is relatively fast and free from synchronization overheads.
Message passing exchanges data directly between two tasks; when no data or signals are buffered, it represents the tightest coupling between tasks, because the tasks involved must synchronize for the exchange. Since memory is not shared, it is less susceptible to bugs; however, it requires a more elaborate protocol. Message passing is classified into the following mechanisms:
a) Message queue:
A process that wants to talk to another process posts its message to a FIFO queue called a 'message queue', which stores the message temporarily in a system-defined memory object and passes it on to the desired destination process.
b) Mailbox
A mailbox is an alternative usually used for one-way messaging. The task/thread that wants to send a message to other tasks/threads creates a mailbox for posting the messages.
c) Remote Procedure Call (RPC)
RPC lets a task invoke a procedure that executes in another environment, possibly across a network:
1. The calling environment is suspended, procedure parameters are transferred across the network to the environment where the procedure is to execute, and the procedure is executed there.
2. When the procedure finishes and produces its results, the results are transferred back to the calling environment, where execution resumes as if returning from a regular procedure call.
Task Synchronization
Process synchronization is the task of coordinating the execution of processes so that no two processes access the same shared data and resources at the same time. In a multiprocess system, when multiple processes run simultaneously, they may attempt to access the same shared data or resource at once, which can lead to inconsistency of the shared data: changes made by one process may not be reflected when another process accesses the same data. To avoid this kind of inconsistency, the processes must be synchronized with each other.
Mutex:
A mutex (MUTual EXclusion) is a locking mechanism. It is used for protecting critical sections of the
code. In the context of a task, we can define a critical section as a piece of code that accesses shared
resources of the embedded system.
A situation can arise where critical sections from different RTOS tasks try to access the same shared
resource at the same time (enabled by the preemptive scheduling algorithm). In simple applications
that do not employ multitasking behavior, we can guard critical sections by simply disabling the
interrupts. This approach, however, is not suitable for real-time applications, because allowing tasks
to disable the interrupts will severely deteriorate the response time to events. As an example, if a
low priority task disables the interrupts while executing a critical section code, other higher priority
tasks that do not need to use the same shared resource will not be able to execute. A solution to this
situation is a mutex. It can be used to protect critical sections while maintaining the multitasking
behavior of the program.
Imagine a company office that has three employees and one company car. The shared resource, in
this case, is the car and the key to the car is the mutex. If an employee wants to use the car, he has
to obtain the key (mutex). If an employee has already taken the key and is using the car, all other
employees have to wait for the key to be returned to the office.
The mutex behaves like a token (key) and it restricts access to a resource. If a task wants to access
the protected resource it must first acquire the token. If it is already taken, the task could wait for it
to become available. Once obtained, the token is owned by the task and is released once the task is
finished using the protected resource.
These are the common operations that an RTOS task can perform with a mutex:
• Create/Delete a mutex
• Get Ownership (acquire a lock on a shared resource)
• Release Ownership (release a lock on a shared resource)
As embedded system designers, we need to identify the critical sections of the program and use
mutexes to protect them.
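A minimal sketch with the POSIX pthread mutex API (an RTOS would expose equivalent create/acquire/release calls), fixing the kind of race shown in the shared-memory example above:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* create the mutex */

static void *inc(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* get ownership: enter the critical section */
        counter++;                    /* protected shared resource */
        pthread_mutex_unlock(&lock);  /* release ownership */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, inc, NULL);
    pthread_create(&b, NULL, inc, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);  /* now reliably 2000000 */
    pthread_mutex_destroy(&lock);        /* delete the mutex */
    return 0;
}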
Semaphore
A semaphore, proposed by Edsger Dijkstra, is a technique for managing concurrent processes by using a simple integer value, known as a semaphore.
Semaphore is simply a variable that is non-negative and shared between threads. A semaphore is a
signaling mechanism. This variable is used to solve the critical section problem and to achieve
process synchronization in the multiprocessing environment.
Semaphores are used for synchronization (between tasks or between tasks and interrupts) and
managing allocation and access to shared resources.
A semaphore is a technique for synchronizing two or more tasks competing for the same resources. When a task wants to use a resource, it requests the semaphore, and the semaphore is allocated to it if available. If the semaphore is not available, the requesting task goes to the blocked state until the semaphore becomes free.
A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal(). wait() (Dijkstra's P operation) means "to test" and signal() (the V operation) means "to increment".
P (Semaphore S) {
    while (S <= 0)
        ;          // no operation (busy-wait until S becomes positive)
    S--;
}

V (Semaphore S) {
    S++;
}
Based on the implementation of the sharing limitation of the shared resource, semaphores are
classified into two: Binary and Counting
a) Binary Semaphore
A binary semaphore is a simple signaling mechanism and it can take only two values: 0 and 1. It
is most commonly used as a flag for synchronization between tasks or interrupts and tasks.
b) Counting Semaphore
A counting semaphore maintains a count between zero and a maximum value. It limits usage of the resource to the maximum count it supports, and it is used to control access to a resource that has multiple instances.
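As a runnable sketch, POSIX counting semaphores (sem_init/sem_wait/sem_post, here on Linux) can model a resource with multiple instances; a semaphore initialized to 2 lets at most two of the three threads hold the "resource" at once:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t slots;  /* counting semaphore: number of free resource instances */

static void *user(void *arg) {
    int id = *(int *)arg;
    sem_wait(&slots);                 /* P: block until an instance is free */
    printf("task %d acquired a slot\n", id);
    sleep(1);                         /* pretend to use the resource */
    printf("task %d released a slot\n", id);
    sem_post(&slots);                 /* V: return the instance */
    return NULL;
}

int main(void) {
    pthread_t t[3];
    int ids[3] = { 1, 2, 3 };
    sem_init(&slots, 0, 2);           /* the resource has 2 instances */
    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, user, &ids[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}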
Reference: https://www.youtube.com/watch?v=XDIOC2EY5JE
Heap Management:
Upon completion of program initialization, the physical memory of the MCU or MPU is usually occupied by program code, program data, and the system stack. The remaining physical memory is called the heap. Heap memory is typically used by the kernel for dynamic allocation of data space for tasks. The memory is divided into fixed-size memory blocks that tasks can request; when a task finishes using a memory block, it must return the block to the pool. This process of managing the heap memory is known as heap management.
In general, a memory-management facility maintains internal information for a heap in a reserved memory area called the control block. Typical information includes:
• the starting address of the physical memory block used for dynamic memory allocation,
• the overall size of this physical memory block, and
• the allocation table, which indicates which memory areas are in use, which are free, and the size of each free region.
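A minimal fixed-size block allocator along these lines might look as follows; the pool size, block size, and the use of a free list in place of a full allocation table are illustrative simplifications:

#include <stdio.h>

#define BLOCK_SIZE 32   /* assumed fixed block size in bytes */
#define NUM_BLOCKS 16   /* assumed pool capacity */

/* Control-block information: the pool's start address and a free list. */
static unsigned char pool[NUM_BLOCKS][BLOCK_SIZE];
static void *free_list = NULL;

/* Chain every block into the free list; each free block stores the next pointer. */
static void pool_init(void) {
    for (int i = 0; i < NUM_BLOCKS; i++) {
        *(void **)pool[i] = free_list;
        free_list = pool[i];
    }
}

/* Request a block: pop the head of the free list (NULL if the pool is exhausted). */
static void *pool_alloc(void) {
    void *blk = free_list;
    if (blk)
        free_list = *(void **)blk;
    return blk;
}

/* Return a block to the pool: push it back onto the free list. */
static void pool_free(void *blk) {
    *(void **)blk = free_list;
    free_list = blk;
}

int main(void) {
    pool_init();
    void *a = pool_alloc();
    void *b = pool_alloc();
    printf("allocated %p and %p\n", a, b);
    pool_free(a);
    pool_free(b);
    return 0;
}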
***