Real-Time Operating System Overview

This document provides an overview of Real-Time Operating Systems (RTOS), defining real-time systems and their types: hard and soft real-time systems. It discusses key concepts such as processes, tasks, threads, and the kernel, along with their respective roles and characteristics. Additionally, it covers task scheduling and the criteria for effective scheduling in RTOS environments.



CHAPTER 5
REAL TIME OS
5.1 Introduction to Real time Systems
A real-time system is a data processing system in which the time interval required to
process and respond to inputs is so small that the system can control its environment. The time
taken by the system to respond to an input and present the required updated information is termed
the response time; it is much shorter than in ordinary online processing.
Real-time systems are used when there are rigid timing requirements on the operation of a processor
or on the flow of data, and they often serve as control devices in dedicated applications.
A real-time operating system must have well-defined, fixed time constraints, otherwise the system
will fail. Examples include scientific experiments, medical imaging systems, industrial control systems,
weapon systems, robots, air traffic control systems, etc.
There are two types of real-time systems.

1. Hard real-time systems


Hard real-time systems guarantee that critical tasks complete on time. In hard real-time
systems, secondary storage is limited or missing and the data is stored in ROM. In these
systems, virtual memory is almost never found.
2. Soft real-time systems
Soft real-time systems are less restrictive: a critical real-time task gets priority over other
tasks and retains that priority until it completes. Soft real-time systems have more limited
utility than hard real-time systems. Examples include multimedia, virtual reality, and advanced
scientific projects such as undersea exploration and planetary rovers.

Features of RTOS:
1. Reliability: an RTOS can run for long periods without human intervention.
2. Predictability: every action is executed within a predefined time frame, so the results
are deterministic.
3. Performance: it can carry out complex tasks without imposing much overhead.
4. Compactness: the software and hardware used with an RTOS are small, which makes
errors easier for a technician to locate.
5. Stability: the system can be upgraded or downgraded reliably.
6. Efficiency: it consumes little memory, has affordable cost and occupies few resources.

5.2 Definitions of Process, Task and Threads


Process:
A process is an instance of a program in execution. A program becomes a process when an
executable file is loaded into memory. A program is a passive entity, whereas a process is an active
entity: we write our programs in a text file, and when we execute such a program, it becomes a
process that performs all the tasks specified in the program.
• While not running, a program resides on the hard drive of our system; when it is brought
into main memory, it becomes a process.

Embedded System © Er. Shiva Ramdam, 2022 1



• The process can be present on a hard drive, in memory or on the CPU.

• Example: in Windows, each running process is visible in the Windows Task Manager.
All the processes running in the background appear under the Processes tab in Task
Manager.
• Every process has its own address space.
• It can be divided into four sections ─ stack, heap, text and
data.
• The following image shows a simplified layout of a process
inside main memory:
o Stack: contains temporary data such as
function parameters, return addresses and
local variables.
o Heap: memory dynamically allocated to the
process during its run time.
o Text: contains the compiled program code; the
program counter holds the address of the
instruction currently being executed within it.
o Data: contains the global and static
variables.
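As a sketch of this layout, the following C fragment (illustrative only; the names are made up) marks which section each kind of variable lives in:

```c
#include <stdlib.h>

int global_counter = 42;   /* data section: initialized global variable      */
static int zeroed_flag;    /* data (BSS) section: zero-initialized static    */

/* The machine code of this function itself resides in the text section. */
int section_demo(void)
{
    int local = 7;                          /* stack: local variable         */
    int *dynamic = malloc(sizeof(int));    /* heap: run-time allocation     */
    if (dynamic == NULL)
        return -1;
    *dynamic = global_counter + local;
    int result = *dynamic;
    free(dynamic);         /* heap memory must be released explicitly        */
    return result + zeroed_flag;
}
```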

Process Life Cycle:

A process keeps changing state while in execution. The states of a process can be categorized as:
a) New: the initial state of every process; at this stage, the process is being created.
b) Ready: the process is ready to be assigned to the processor.
c) Running: the processor is executing the process's instructions.
d) Waiting: the process is waiting for an event to occur, such as completion of an I/O
operation or receipt of a signal to proceed.
e) Terminated: the process has completed its execution.
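These states and their legal transitions can be sketched as a small C enum with a hypothetical helper (not from the text; the classic five-state model is assumed):

```c
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* Returns 1 if the transition is allowed in the five-state model above. */
int transition_ok(proc_state from, proc_state to)
{
    switch (from) {
    case NEW:        return to == READY;                   /* admitted       */
    case READY:      return to == RUNNING;                 /* dispatched     */
    case RUNNING:    return to == READY                    /* preempted      */
                         || to == WAITING                  /* waits for I/O  */
                         || to == TERMINATED;              /* exits          */
    case WAITING:    return to == READY;                   /* event occurred */
    case TERMINATED: return 0;   /* no transition out of the final state     */
    }
    return 0;
}
```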

Process Control Block:


A PCB is a data structure maintained by the OS for every process. The PCB is identified by an integer
process ID (PID). A PCB keeps all the information needed to keep track of a process as listed below in
the table:


S.No. Information — Description
1 Process State — The current state of the process, i.e., whether it is new, ready, running,
waiting or terminated.
2 Process privileges — Required to allow or disallow access to system resources.
3 Process ID — Unique identification for each process in the operating system.
4 Pointer — A pointer to the parent process.
5 Program Counter — A pointer to the address of the next instruction to be executed for
this process.
6 CPU registers — The CPU registers whose contents must be saved when the process
leaves the running state, so that it can resume execution.
7 CPU Scheduling Information — Process priority and other scheduling information
required to schedule the process.
8 Memory management information — Page table, memory limits and segment table,
depending on the memory scheme used by the operating system.
9 Accounting information — The amount of CPU time used for process execution, time
limits, execution ID, etc.
10 I/O status information — The list of I/O devices allocated to the process.

Fig: PCB

The architecture of a PCB is completely dependent on Operating System and may contain different
information in different operating systems. Figure above shows a simplified diagram of a PCB. The
PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
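A minimal PCB might be sketched in C as below; the field names and sizes are illustrative, not those of any particular operating system:

```c
typedef enum { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED } pstate;

/* Hypothetical PCB mirroring rows 1-7 of the table above. */
struct pcb {
    int           pid;         /* row 3: unique process ID        */
    pstate        state;       /* row 1: current process state    */
    int           parent_pid;  /* row 4: pointer to the parent    */
    unsigned long pc;          /* row 5: saved program counter    */
    unsigned long regs[8];     /* row 6: saved CPU registers      */
    int           priority;    /* row 7: scheduling information   */
};

/* Create and initialize a PCB for a newly admitted process. */
struct pcb pcb_create(int pid, int parent_pid)
{
    struct pcb p = {0};
    p.pid        = pid;
    p.parent_pid = parent_pid;
    p.state      = P_NEW;      /* every process starts in the New state */
    return p;
}
```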

Threads:
• A thread is the smallest unit of processing that can be performed in an OS.
• A thread is an execution unit with its own program counter, stack and set of registers.
• In most modern operating systems, a thread exists within a process; a single process
may contain multiple threads.
• A thread is a single sequence stream within a process.
• Because threads have some of the properties of processes, they are sometimes called
lightweight processes.
• Threads are a popular way to improve application performance through parallelism. The
CPU switches rapidly back and forth among the threads, giving the illusion that the
threads are running in parallel.
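A short POSIX-threads sketch (assuming a POSIX platform; the function names and counts are illustrative) shows several threads of one process updating shared data under a mutex, while each thread keeps its own stack:

```c
#include <pthread.h>

static long shared_counter = 0;                /* shared: lives in the data section */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    int n = *(int *)arg;          /* private copy: lives on this thread's stack */
    for (int i = 0; i < n; i++) {
        pthread_mutex_lock(&lock);
        shared_counter++;         /* critical section: protected increment      */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Spawn nthreads, each adding per_thread to the counter; return the total. */
long run_threads(int nthreads, int per_thread)
{
    pthread_t tid[16];
    if (nthreads > 16)
        nthreads = 16;
    shared_counter = 0;
    for (int i = 0; i < nthreads; i++)
        pthread_create(&tid[i], NULL, worker, &per_thread);
    for (int i = 0; i < nthreads; i++)
        pthread_join(tid[i], NULL);            /* wait for all threads to finish */
    return shared_counter;
}
```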


Types of threads:
Threads are implemented in following two ways:

a) User Level Threads:

• Threads managed in user space.
• User threads sit above the kernel and are
managed without kernel support.
• These are the threads that application
programmers use in their programs.

b) Kernel Level Threads:

• Threads managed by the operating system,
acting within the kernel, the operating
system core.
• Kernel threads are supported within the kernel
of the OS itself.
• All modern OSs support kernel level threads,
allowing the kernel to perform multiple
simultaneous tasks and/or to service multiple
kernel system calls simultaneously.

Differences between Process and Thread:

S.No. Process — Thread
1. A process is heavyweight and resource-intensive. — A thread is lightweight, taking
fewer resources than a process.
2. Process switching requires interaction with the OS. — Thread switching does not
require interaction with the OS.
3. In multiprocessing environments, each process executes the same code but has its
own memory and file resources. — All threads of a process can share the same set of
open files and child processes.
4. If one process is blocked, no other process can execute until the first process is
unblocked. — While one thread is blocked and waiting, a second thread in the same
task can run.
5. Multiple processes without threads use more resources. — Multithreaded processes
use fewer resources.
6. Each process operates independently of the others. — One thread can read, write or
change another thread's data.


Task:
The basic building block of software written under an RTOS is the task. Whereas a job is a unit
of work that is scheduled and executed by a system, a task is a group of related processes and
functions that together achieve a specified result.


5.3 Kernel
Kernel:
The kernel is the central component of an operating system: it manages the operations of the
computer and its hardware, chiefly memory and CPU time. The kernel acts as a bridge between
applications and the data processing performed at the hardware level, using inter-process
communication and system calls.
The kernel is the first program loaded after the boot loader whenever we start a system, and it
remains in memory until the operating system is shut down.
The kernel provides an interface between the user and the hardware components of the system,
and is responsible for tasks such as disk management, task management and memory
management.


Kernel Space and User space:


System memory in an operating system can be divided into two distinct regions: kernel space and
user space.
• The program code of the kernel is kept in a contiguous area of primary memory and is
protected from unauthorized access by other applications. The memory region where the
kernel code is located is known as 'kernel space'. Kernel space is where the kernel (i.e. the
core of the operating system) executes and provides its services.
• All user applications are loaded into a specific area of primary memory called 'user space':
the set of memory locations in which user processes run. A process is an executing instance
of a program. One of the roles of the kernel is to manage individual user processes within
this space and to prevent them from interfering with each other.

Types of Kernel

1. Monolithic Kernels
In a monolithic kernel, the same memory space is used to implement
both user services and kernel services; no separate memory regions
are used for the two. Because everything shares one memory space,
the kernel is large, increasing the overall size of the OS. Execution
is faster than in other kernel types, however, since no boundary
between user space and kernel space has to be crossed.
Examples of monolithic kernels are Unix, Linux, OpenVMS, XTS-400,
etc.

2. Microkernel
A microkernel, also written μK, differs from a traditional
monolithic kernel: user services and kernel services are
implemented in two different address spaces, user space
and kernel space. Because only minimal services run in
kernel space, the microkernel itself is small, which also
reduces the size of the OS.
Examples of microkernels are L4, AmigaOS, Minix, K42, etc.

3. Hybrid Kernel
Hybrid kernels, also known as modular kernels, combine
monolithic and microkernel designs: they take advantage
of the speed of monolithic kernels and the modularity of
microkernels. A hybrid kernel can be understood as an
extended microkernel with additional properties of a
monolithic kernel. Such kernels are widely used in
commercial OSs, including the various versions of
MS Windows.
Examples of hybrid kernels are Windows NT, NetWare, BeOS, etc.


Real-time Kernel:
In most cases the RTOS is just an operating system kernel: an embedded system is designed for a
single purpose, so user shells and file/disk access features are unnecessary.
The kernel of an RTOS is called a real-time kernel. It is highly specialized and contains only the
minimal set of services required to run the user application tasks.

The kernel is the part of an OS responsible for managing threads (i.e., managing the CPU's time)
and for communication between threads. The fundamental service provided by the kernel is
context switching.
RTOS Kernel has following functions:
• Task management
• Task scheduling
• Task synchronization
• Time management
• Memory management
• Interrupt handling
• Exception handling

S.No. Tasks — Description
1. Task management — Sets up the memory space for tasks, loads a task's code into its
memory space, allocates system resources, sets up a Task Control Block (TCB) for the
task, and handles task termination or deletion.
2. Task scheduling — Shares the CPU among the various tasks/processes. A kernel
application called the 'scheduler' handles task scheduling.
3. Task synchronization — Synchronizes concurrent access to a resource shared among
multiple tasks. The real-time kernel also handles communication between tasks.
4. Time management — Accurate time management is essential for providing a precise
time reference to all applications. The time reference is provided to the kernel by a
high-resolution Real-Time Clock (RTC) hardware chip. The hardware timer is programmed
to interrupt the processor/controller at a fixed rate; this timer interrupt is referred to
as the 'timer tick' and is taken as the timing reference by the kernel.
5. Memory management — An RTOS uses 'block'-based memory allocation instead of the
usual dynamic memory allocation techniques used by a GPOS.
6. Exception handling — Registers and handles the errors and exceptions generated
during the execution of tasks. Insufficient memory, timeouts, deadlocks, missed
deadlines, bus errors, divide-by-zero and unknown-instruction execution are typical
error/exception conditions.
7. Interrupt handling — Handles the various interrupts.

5.4 OS tasks, task states and task scheduling


Task/Process Scheduling:
The act of determining which process in the ready state should be moved to the running state is
known as Process Scheduling.
In other words, the process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process on the basis of a
particular strategy. The main aim of the process scheduling system is to keep the CPU busy all the
time and to deliver minimum response time for all programs.

Scheduling Criteria
There are many different criteria to check when considering the "best" scheduling algorithm:
i) CPU utilization
To make the best use of the CPU and not waste any CPU cycle, the CPU should be working
most of the time (ideally 100% of the time). In a real system, CPU utilization should
range from about 40% (lightly loaded) to 90% (heavily loaded).
ii) Throughput
The total number of processes completed per unit time, i.e., the total amount of work
done per unit of time. This may range from 10/second to 1/hour depending on the
specific processes.
iii) Turnaround time
The amount of time taken to execute a particular process, i.e., the interval from the time
of submission of the process to the time of its completion (wall-clock time).
TAT = Time of completion of job – Time of submission of job
iv) Waiting time
The sum of the periods spent waiting in the ready queue.
v) Load average
It is the average number of processes residing in the ready queue waiting for their turn to
get into the CPU.
vi) Response time
Amount of time it takes from when a request was submitted until the first response is
produced. Remember, it is the time till the first response and not the completion of process
execution (final response).
Thus, a good scheduler should provide high CPU utilization, maximum throughput, low waiting time
and low response time.
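The turnaround- and waiting-time formulas above can be checked with a small worked example (all time values are hypothetical):

```c
/* Turnaround time = completion time - submission time (TAT formula above). */
int turnaround_time(int completion, int submission)
{
    return completion - submission;
}

/* Waiting time = turnaround time - burst (CPU) time: the part of the
 * turnaround spent in the ready queue rather than executing. */
int waiting_time(int completion, int submission, int burst)
{
    return turnaround_time(completion, submission) - burst;
}
```

For a job submitted at t=5, completed at t=30, with a 10-unit burst, the turnaround time is 25 and the waiting time is 15.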

Types of Scheduling:


1. Preemptive scheduling:

A scheduling discipline is preemptive if, once a process has been given the CPU, the CPU
can be taken away from that process.
Here every process executes for some amount of CPU time: CPU time is divided into slices
that are allocated to the processes according to some rule. If the time is divided into
equal intervals, the interval is called the time quantum.

2. Non-preemptive scheduling:

A scheduling discipline is non-preemptive if, once a process has been given the CPU, the
CPU cannot be taken away from that process. No time slicing is done.

Scheduling Techniques:
1. First Come First Served
• A non-preemptive
scheduling technique.
• Jobs are executed on a
first-come, first-served basis.
• Easy to understand and
implement.
• Poor in performance, as the
average waiting time is high.
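A sketch of the FCFS waiting-time arithmetic, assuming all jobs arrive at time 0 and using made-up burst times:

```c
/* Under FCFS, each job waits for the sum of the burst times of the jobs
 * ahead of it in the queue. Returns the average waiting time. */
double fcfs_avg_wait(const int burst[], int n)
{
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* job i waits for all earlier jobs  */
        elapsed += burst[i];     /* then runs to completion           */
    }
    return (double)total_wait / n;
}
```

With bursts {24, 3, 3}, the waits are 0, 24 and 27, for an average of 17 — high because the long job arrived first, which is exactly the performance weakness noted above.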

2. Shortest Job First

• The shortest job is
executed first.
• The best approach to
minimize waiting time.
• Requires the actual time taken
by each process to be known
to the processor in advance.
• Difficult to implement in practice,
since burst times are generally
not known beforehand.


Preemptive SJF Scheduling:

In preemptive Shortest Job First scheduling, jobs are put into the ready queue as they
arrive; when a process arrives whose burst time is shorter than the remaining time of
the executing process, the executing process is preempted.

3. Priority scheduling
• A priority is assigned to each
process.
• The process with the highest
priority is executed first, and so on.
• Processes with the same priority
are executed in FCFS order.
• Priority can be decided based
on memory requirements, time
requirements or any other
resource requirement.

4. Round Robin
• A fixed time, called the
quantum, is allotted to each
process for execution.
• Once a process has executed for
the given time period, it is
preempted and another process
executes for its time period.
• Context switching is used to
save the states of preempted
processes.
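The round-robin mechanics above can be sketched as a small simulation (burst times and quantum are hypothetical; all processes are assumed ready at time 0):

```c
/* Rotate through the processes, giving each at most 'quantum' units per
 * turn; record each process's completion time in completion[]. */
void rr_schedule(int burst[], int completion[], int n, int quantum)
{
    int time = 0, remaining = n;
    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0)
                continue;                           /* already finished    */
            int slice = burst[i] < quantum ? burst[i] : quantum;
            time += slice;                          /* run for one quantum */
            burst[i] -= slice;                      /* preempt, save state */
            if (burst[i] == 0) {
                completion[i] = time;               /* process completed   */
                remaining--;
            }
        }
    }
}
```

For bursts {5, 3} and quantum 2, process 1 (burst 3) finishes at t=7 and process 0 (burst 5) at t=8, showing how the CPU alternates between them.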


Task/Process Transition through various queues:

5.5 Control Blocks


Task Control Block:


A Task Control Block (TCB) is a data structure holding the information the OS uses to control the
process state; a task uses its TCB to remember its context. The TCB is a data structure residing
in RAM, accessible only by the RTOS.
Task information in the TCB includes:
• TaskID: a unique identifier used to define a task. For example, with an 8-bit ID, a
number between 0 and 255 can be used as the TaskID.
• Task Context: the current status of the program counter, stack pointer, CPU registers
and status register.
• Task priority: the priority level of the parent as well as any child tasks in the task list.
The priority is a number used as an identifier.
• Task Context_init: a pointer to processor memory that stores the following information:
– allocated program memory address blocks, in physical memory and in secondary
(virtual) memory, for the task's code;
– allocated task-specific data address blocks;
– allocated task-stack addresses for the functions called while the process runs;
– allocated addresses of the CPU register-save area, since a task's context is
represented by the CPU registers, which include the program counter and stack pointer.

Context Switching:
When the multithreading kernel decides to run a different thread, it simply saves the current
thread’s context (CPU registers) in the current thread’s context storage area (the thread control
block, or TCB). Once this operation is performed, the new thread’s context is restored from its TCB
and the CPU resumes execution of the new thread’s code. This process is called a context switch.
Context switching adds overhead to the application.
The act of switching CPU among the processes or changing the current execution context is known
as Context Switching.
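In miniature, a context switch can be sketched by treating the CPU's register set as a struct that is copied between TCBs; real kernels do this in assembly, and all names below are illustrative:

```c
#include <string.h>

struct cpu_regs { unsigned long pc, sp, r[4]; };   /* a toy register set   */

struct tcb {
    int id;
    struct cpu_regs context;   /* context storage area in the TCB          */
};

struct cpu_regs cpu;           /* the one physical register set            */

/* Save the outgoing thread's registers into its TCB, then restore the
 * incoming thread's saved registers into the CPU. */
void context_switch(struct tcb *old_t, struct tcb *new_t)
{
    memcpy(&old_t->context, &cpu, sizeof cpu);    /* save current context  */
    memcpy(&cpu, &new_t->context, sizeof cpu);    /* restore new context   */
}
```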


5.6 Interrupt Processing


An interrupt is a hardware mechanism used to inform the CPU that an asynchronous event has
occurred. When an interrupt is recognized, the CPU saves all of its context (i.e., registers) and jumps
to a special subroutine called an Interrupt Service Routine, or ISR. The ISR processes the event, and
upon completion of the ISR, the program returns to:
• the background for a foreground / background system,
• the interrupted thread for a non-preemptive kernel, or
• The highest priority thread ready to run for a preemptive kernel.


Interrupts allow a microprocessor to process events when they occur. This prevents the
microprocessor from continuously polling an event to see if it has occurred. Microprocessors allow
interrupts to be ignored and recognized through the use of two special instructions: disable
interrupts and enable interrupts, respectively.

The interrupt handler services an interrupt generated by an external device as follows:
• The current context of the running task is saved on the stack.
• The task is blocked and program control branches to the starting address of the ISR,
which executes to serve the interrupt.
• The ISR returns, and the context of the blocked task is restored.
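The dispatch of interrupts to ISRs can be sketched with a software vector table; this is a pure simulation with illustrative names — on real hardware the vector table lives at CPU-defined addresses and context saving happens in hardware or assembly:

```c
#define NUM_IRQS 8

typedef void (*isr_t)(void);
isr_t vector_table[NUM_IRQS];      /* one handler slot per interrupt number */
int timer_ticks = 0;

void timer_isr(void) { timer_ticks++; }   /* example ISR body */

/* Install a handler for interrupt number 'irq'. */
int register_isr(int irq, isr_t handler)
{
    if (irq < 0 || irq >= NUM_IRQS)
        return -1;
    vector_table[irq] = handler;
    return 0;
}

/* Called when interrupt 'irq' is recognized: the CPU context would be
 * saved here, the ISR runs, and the saved context is then restored. */
void dispatch_irq(int irq)
{
    if (irq >= 0 && irq < NUM_IRQS && vector_table[irq])
        vector_table[irq]();       /* execute the registered ISR */
}
```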

In a real-time environment, interrupts should be disabled as little as possible. Disabling interrupts


affects interrupt latency and may cause interrupts to be missed. Processors generally allow
interrupts to be nested. This means that while servicing an interrupt, the processor will recognize
and service other (more important) interrupts, as shown in Figure below.

Figure – Interrupt nesting



5.7 Task Communication and Task Synchronization


Task synchronization and inter-process communication serve to pass information among the tasks.

Task Communication
In a multitasking system, multiple tasks/processes run concurrently, and the processes may or
may not interact with one another. Based on the degree of interaction, processes are classified
as cooperating processes and competing processes.
A cooperating process requires inputs from other processes to complete its execution; competing
processes share nothing except system resources.
The mechanism through which processes/tasks communicate with each other is known as Inter-Process
Communication (IPC). IPC is kernel dependent. Some important IPC mechanisms are discussed
below:
1. Shared Memory
With the shared data model shown in Figure below, processes communicate via access to shared
areas of memory in which variables modified by one process are accessible to all processes.

Fig: Memory sharing.

While accessing shared data as a means to communicate is a simple approach, the major issue of
race conditions can arise. A race condition occurs when a process that is accessing shared
variables is pre-empted before completing a modification access, thus affecting the integrity of
shared variables. To counter this issue, portions of processes that access shared data, called
critical sections, can be earmarked for mutual exclusion (or Mutex for short). Mutex mechanisms
allow shared memory to be locked up by the process accessing it, giving that process exclusive
access to shared data.

2. Message Passing
Message passing is an asynchronous information-exchange mechanism for IPC. The major difference
from shared memory is that shared memory can expose large amounts of data, whereas message
passing transfers only a limited amount of data per message. Message passing is comparatively
simple and free from the synchronization overheads of shared memory.

Message passing exchanges data directly between two tasks; in the direct form, no data or
signals are buffered. This represents the tightest coupling between tasks, because the tasks
involved must synchronize for the data exchange. Since memory is not shared, this approach is
less susceptible to bugs; however, it requires a more elaborate protocol. Message passing is
classified into the following mechanisms:
a) Message queue:
A process that wants to talk to another process posts its message to a FIFO queue called a
'message queue', which stores the message temporarily in a system-defined memory object and
passes it on to the desired destination process.
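A fixed-size FIFO message queue of this kind can be sketched as a ring buffer; this is illustrative only — a real RTOS queue would also block a task posting to a full queue and wake a task waiting on an empty one:

```c
#define QSIZE 4

struct msg_queue {
    int buf[QSIZE];         /* system-defined storage for pending messages */
    int head, tail, count;
};

/* Post a message; returns -1 if the queue is full. */
int mq_post(struct msg_queue *q, int msg)
{
    if (q->count == QSIZE)
        return -1;
    q->buf[q->tail] = msg;
    q->tail = (q->tail + 1) % QSIZE;   /* ring-buffer wrap-around */
    q->count++;
    return 0;
}

/* Receive the oldest message (FIFO); returns -1 if the queue is empty. */
int mq_receive(struct msg_queue *q, int *msg)
{
    if (q->count == 0)
        return -1;
    *msg = q->buf[q->head];
    q->head = (q->head + 1) % QSIZE;
    q->count--;
    return 0;
}
```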

Fig: Message queue

b) Mailbox
A mailbox is an alternative usually used for one-way messaging. The task/thread that wants
to send a message to other tasks/threads creates a mailbox for posting the messages.

Fig: Mailbox bases Indirect messaging for IPC

3. Remote Procedure Call (RPC) and Sockets


Remote Procedure Call (RPC) is the IPC mechanism by which a process calls a procedure of
another process running on the same CPU, or on a different CPU connected over a network. It is
usually used for distributed applications such as client-server systems. With RPC, it is
possible to communicate over a heterogeneous network. The CPU/process containing the procedure
that is invoked remotely is called the server; the CPU/process that initiates the RPC request is
called the client.
Sockets are used for RPC communication and establish full-duplex communication between tasks.


1. The calling environment is suspended, procedure parameters are transferred across the
network to the environment where the procedure is to execute, and the procedure is
executed there.
2. When the procedure finishes and produces its results, its results are transferred back to the
calling environment, where execution resumes as if returning from a regular procedure call.

The following steps take place during an RPC:


1. A client invokes a client stub procedure, passing parameters in the usual way. The client
stub resides within the client’s own address space.
2. The client stub marshalls (pack) the parameters into a message. Marshalling includes
converting the representation of the parameters into a standard format, and copying each
parameter into the message.
3. The client stub passes the message to the transport layer, which sends it to the remote
server machine.
4. On the server, the transport layer passes the message to a server stub, which
demarshalls (unpacks) the parameters and calls the desired server routine using the
regular procedure call mechanism.
5. When the server procedure completes, it returns to the server stub (e.g., via a normal
procedure call return), which marshalls the return values into a message. The server stub
then hands the message to the transport layer.
6. The transport layer sends the result message back to the client transport layer, which hands
the message back to the client stub.
7. The client stub demarshalls the return parameters and execution returns to the caller.
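Steps 2 and 4 can be illustrated with a toy marshalling sketch; the "wire format" here (two native ints in a byte buffer) is made up for illustration and not portable across machines, and the procedure name is hypothetical:

```c
#include <string.h>

/* The "remote" procedure the server stub will invoke. */
int add_procedure(int a, int b) { return a + b; }

/* Client stub, step 2: marshall (pack) two int parameters into a message
 * buffer; returns the message length. */
size_t marshall(char *buf, int a, int b)
{
    memcpy(buf, &a, sizeof a);               /* copy each parameter ... */
    memcpy(buf + sizeof a, &b, sizeof b);    /* ... into the message    */
    return sizeof a + sizeof b;
}

/* Server stub, step 4: demarshall (unpack) the parameters and call the
 * desired routine with a regular procedure call. */
int server_stub(const char *buf)
{
    int a, b;
    memcpy(&a, buf, sizeof a);
    memcpy(&b, buf + sizeof a, sizeof b);
    return add_procedure(a, b);
}
```

A real RPC system would additionally convert the parameters to a standard network representation, as noted in step 2.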

Task Synchronization
Process synchronization is the task of coordinating the execution of processes so that no two
processes access the same shared data or resource at the same time. In a multiprocess system
where multiple processes run simultaneously, several of them may attempt to gain access to the
same shared data or resource at once. This can lead to inconsistency of the shared data: the
changes made by one process may not be reflected when another process accesses the same shared
data. To avoid such inconsistency, the processes must be synchronized with each other.

Task Synchronization Issues:


• Racing: the situation in which multiple tasks/processes compete with each other to
access and manipulate shared data concurrently.
• Deadlock: the situation in which none of the tasks/processes can make any progress in
their execution, resulting in a set of deadlocked tasks/processes. Each process waits
for a resource held by another process, which in turn is waiting for a resource held by
the former.
• Livelock: similar to deadlock, except that a task/process in a livelock condition keeps
changing its state over time; it is always doing something but is unable to make
progress toward completing its execution.
• Starvation: the condition in which a task/process does not get the resources required
to continue its execution for a long time.

Task Synchronization Techniques:


The code memory area holding the program instructions that access a shared resource is known as
the 'critical section'. To synchronize access to shared resources, access to the critical
section must be exclusive; task synchronization is therefore provided through a mutual exclusion
mechanism. Two common techniques implement mutual exclusion.
a) Mutual Exclusion through Busy Waiting/Spin Lock:
This technique uses a lock variable to implement mutual exclusion. Each process/thread
checks the lock variable before entering the critical section. The lock is set to 1 by a
process/thread on entering its critical section and set back to 0 on leaving; a
process/thread finding the lock at 1 must wait.
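A sketch of this lock variable, using a C11 atomic flag so that the test-and-set is indivisible — with a plain int, the check-then-set sequence would itself be a race:

```c
#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = 0 = unlocked */

/* Busy-wait: keep testing-and-setting until the flag is found clear. */
void spin_lock(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;   /* spin: this is the CPU-wasting busy wait described above */
}

/* Non-blocking variant: returns 1 if the lock was acquired, 0 if held. */
int spin_trylock(void)
{
    return !atomic_flag_test_and_set(&lock);
}

void spin_unlock(void)
{
    atomic_flag_clear(&lock);          /* set the lock back to 0 */
}
```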

b) Mutual Exclusion through Sleep and Wakeup:


The ‘Busy waiting’ method makes the CPU always busy by checking the lock to see whether
they can proceed or not. This results in wastage of CPU time and leads to high power
consumption. An alternative to this is ‘Sleep and Wakeup’ mechanism.
When a process is not allowed to access the critical section, which is currently being locked
by another process, the process undergoes ‘sleep’ and enters the ‘blocked’ state. When the
process leaves the critical section, it sends a wakeup message to the process which is
sleeping as a result of waiting for the access to critical section.

Mutex and Semaphore

Mutex:
A mutex (MUTual EXclusion) is a locking mechanism. It is used for protecting critical sections of the
code. In the context of a task, we can define a critical section as a piece of code that accesses shared
resources of the embedded system.
A situation can arise where critical sections from different RTOS tasks try to access the same shared
resource at the same time (enabled by the preemptive scheduling algorithm). In simple applications
that do not employ multitasking behavior, we can guard critical sections by simply disabling the
interrupts. This approach, however, is not suitable for real-time applications, because allowing tasks
to disable the interrupts will severely deteriorate the response time to events. For example, if a
low-priority task disables the interrupts while executing critical-section code, other higher-priority
tasks that do not need to use the same shared resource will not be able to execute. A solution to this

situation is a mutex. It can be used to protect critical sections while maintaining the multitasking
behavior of the program.
Imagine a company office that has three employees and one company car. The shared resource, in
this case, is the car and the key to the car is the mutex. If an employee wants to use the car, he has
to obtain the key (mutex). If an employee has already taken the key and is using the car, all other
employees have to wait for the key to be returned to the office.

The mutex behaves like a token (key) and it restricts access to a resource. If a task wants to access
the protected resource it must first acquire the token. If it is already taken, the task could wait for it
to become available. Once obtained, the token is owned by the task and is released once the task is
finished using the protected resource.
These are the common operations that an RTOS task can perform with a mutex:
• Create/Delete a mutex
• Get Ownership (acquire a lock on a shared resource)
• Release Ownership (release a lock on a shared resource)

Fig: State diagram of a mutex

As embedded system designers, we need to identify the critical sections of the program and use
mutexes to protect them.

Semaphore
A semaphore, proposed by Edsger Dijkstra, is a technique for managing concurrent processes by using a
simple integer value, which is known as a semaphore.
A semaphore is simply a non-negative variable shared between threads; it is a
signaling mechanism. This variable is used to solve the critical-section problem and to achieve
process synchronization in a multiprocessing environment.
Semaphores are used for synchronization (between tasks or between tasks and interrupts) and
managing allocation and access to shared resources.
A semaphore is a technique for synchronizing two or more tasks competing for the same resource. When
a task wants to use a resource, it requests the semaphore and is allocated the resource if the semaphore
is available. If the semaphore is not available, the requesting task goes to the blocked state until the
semaphore becomes free.
A semaphore S is an integer variable that, apart from initialization, is accessed only through two
standard atomic operations: wait() (also written P, from the Dutch proberen, “to test”) and signal()
(also written V, from verhogen, “to increment”).

Definition of wait():

    P(Semaphore S) {
        while (S <= 0)
            ;        // no operation (busy wait)
        S--;
    }

Definition of signal():

    V(Semaphore S) {
        S++;
    }

20 Embedded System © Er. Shiva Ramdam, 2022



Based on the sharing limitation of the shared resource, semaphores are classified into two
types: binary and counting.

a) Binary Semaphore
A binary semaphore is a simple signaling mechanism and it can take only two values: 0 and 1. It
is most commonly used as a flag for synchronization between tasks or interrupts and tasks.

Fig. State diagram of a binary semaphore

b) Counting Semaphore
A counting semaphore maintains a count between zero and a maximum value. It limits usage of the
resource to the maximum count supported by it, and is used to control access to a resource
that has multiple instances.

Fig. State diagram of a counting semaphore

Reference: https://www.youtube.com/watch?v=XDIOC2EY5JE

5.8 Memory Requirements and Control Kernel Services


An embedded RTOS usually tries to use as little memory as possible by including only the
functionality needed for the user’s application. There are two types of memory management in an
RTOS: stack management and heap management.
In a multitasking RTOS, each task must be allocated an amount of memory for storing its
context (i.e. volatile information such as register contents, program counter, etc.) for context
switching. This allocation is done using the task-control block model. This set of memory is
commonly known as the kernel stack, and the management process is termed stack management.

Upon the completion of a program initialization, physical memory of the MCU or MPU will usually be
occupied with program code, program data and system stack. The remaining physical memory is
called heap. This heap memory is typically used by the kernel for dynamic memory allocation of data
space for tasks. The memory is divided into fixed size memory blocks, which can be requested by
tasks. When a task finishes using a memory block, it must return it to the pool. This process of
managing the heap memory is known as Heap management.


In general, a memory management facility maintains internal information for a heap in a reserved
memory called the control block. Typical information includes:
• The starting address of the physical memory block used for dynamic memory allocation
• The overall size of this physical memory block, and
• The allocation table that indicates which memory areas are in use, which memory areas are
free, and the size of each free region.

5.9 Exam Questions:


1. What is binary semaphore? Explain the usage of semaphore and mutex with proper
example. [2021 Fall]
2. Why RTOS are preferred in ES? Differentiate between clocking communication and task
synchronization. [2021 Fall]
3. List out the difference between Process and Thread. [2020 Fall]
4. What do you understand by TCB in RTOS? What are the information contents of TCB? [2020
Fall]
5. What do you understand by interrupt handler? Explain three method by which RTOS handles
interrupt. [2019 Spring]
6. What do you mean by Kernel? Describe the types of RT Kernel. [2019 Spring]
7. Define RTOS. Explain round Robin and preemptive Scheduling Policies. [2019 Fall]
8. Define scheduling. Explain various types of task scheduling techniques in RTOS. [2018 spring]
9. In an RTOS environment different tasks may share same variables and functions. Explain the
problems faced due to this type of sharing and also suggest the solutions. [2018 fall, 2017
spring]
10. Define RTOS. Explain various stages of task. [2017 spring]
11. What are task states? Describe task scheduling. [2017 Fall]
12. What is semaphore? How semaphore can be used for global resource sharing? [2017 Fall]
13. Describe the major functions of Real-time kernel. [2016 Fall]
14. Explain Vectored Interrupt with a neat diagram. [2016 fall]
15. List out the differences between process and thread. Explain various state of process. [2016
spring]
16. Define RTOS. Differentiate between clocking communication and task synchronization. [2016
spring]
17. Write down the features of Real-time kernel. What are the differences between stack
memory management and heap memory management? [2015 spring]
18. List three ways in which an RTOS handles the ISRs in a multitasking environment. [2015
spring, 2015 Fall]
19. What is a process and process control block? Explain various states of a process. [2014
spring]
20. Explain various task scheduling techniques in RTOS. [2014 spring]
21. What are task states? Describe task scheduling. [2014 Fall]
22. Differentiate between clock communication and task synchronization. Also explain Interrupt
processing. [2014 Fall]
23. Write short notes on:
a) Task and its states [2020 Fall]
b) Fan Out [2019 Fall]
c) Clocking communication and Task synchronization [2016 fall]
d) Task, task states and task scheduling [2015 Fall]

***
