RTOS Lab Manual
Laboratory Manual
Evaluation Scheme
Semester VI. Course codes: UITL307 (theory), UITP307 (practical). Teaching mode: TH.
[The teaching and evaluation scheme table (hours, credits, and marks distribution) did not survive extraction and is omitted here.]
Theory:
Principle of Operation:
The Windows port layer creates a low priority Windows thread for each FreeRTOS task created
by the FreeRTOS application. All the low priority Windows threads are then kept in the
suspended state, other than the Windows thread that is running the FreeRTOS task selected by
the FreeRTOS scheduler to be in the Running state. In this way, the FreeRTOS scheduler
chooses which low priority Windows thread to run in accordance with its scheduling policy.
All the other low priority Windows threads cannot run because they are suspended.
FreeRTOS ports that run on microcontrollers have to perform complex context switching to
save and restore the microcontroller context (registers, etc.) as tasks enter and leave the
Running state. In contrast, the Windows simulator layer simply has to suspend and resume
Windows threads as the tasks they represent enter and leave the Running state. The real context
switching is left to Windows.
The tick interrupt generation is simulated by a high priority Windows thread that will
periodically pre-empt the low priority threads that are running tasks. The tick rate achievable
is limited by the Windows system clock, which in normal FreeRTOS terms is slow and has a
very low precision. It is therefore not possible to obtain true real time behaviour.
Simulated interrupt processing is performed by a second higher priority Windows thread that,
because of its priority, can also pre-empt the low priority threads that are running FreeRTOS
tasks. The thread that simulates interrupt processing waits until it is informed by another thread
in the system that there is an interrupt pending. For example, the thread that simulates the
generation of tick interrupts sets an interrupt pending bit, and then informs the Windows thread
that simulates interrupts being processed that an interrupt is pending. The simulated interrupt
processing thread will then execute and look at all the possible interrupt pending bits -
performing any simulated interrupt processing and clearing interrupt pending bits as necessary.
Functionality
main_blinky() creates a very simple demo that includes two tasks and one queue. One task
repeatedly sends the value 100 to the other task through the queue. The receiving task prints
out a message each time it receives the value on the queue.
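The pattern main_blinky() uses can be sketched with the standard FreeRTOS task and queue APIs. This is a minimal illustration, not the demo's actual source: the task names, queue length, priorities, and delay value are assumptions.

```c
#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

static QueueHandle_t xQueue;

static void prvSendTask( void *pvParameters )
{
    const unsigned long ulValueToSend = 100UL;
    for( ;; )
    {
        vTaskDelay( pdMS_TO_TICKS( 200 ) );       /* pace the sends */
        xQueueSend( xQueue, &ulValueToSend, 0 );  /* post 100 to the queue */
    }
}

static void prvReceiveTask( void *pvParameters )
{
    unsigned long ulReceived;
    for( ;; )
    {
        /* Block until something arrives on the queue, then report it. */
        if( xQueueReceive( xQueue, &ulReceived, portMAX_DELAY ) == pdPASS )
        {
            printf( "Received %lu\r\n", ulReceived );
        }
    }
}

void main_blinky_sketch( void )
{
    xQueue = xQueueCreate( 1, sizeof( unsigned long ) );
    xTaskCreate( prvSendTask, "TX", configMINIMAL_STACK_SIZE, NULL, 1, NULL );
    xTaskCreate( prvReceiveTask, "RX", configMINIMAL_STACK_SIZE, NULL, 2, NULL );
    vTaskStartScheduler();  /* does not return while tasks are running */
}
```

Giving the receiving task the higher priority ensures it runs, and prints, as soon as a value arrives on the queue.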
The demo created by main_full() is very comprehensive. The tasks it creates consist mainly of
the standard demo tasks - which don't perform any particular functionality other than testing
the port and demonstrating how the FreeRTOS API can be used.
The full demo creates a 'check' task in addition to the standard demo tasks. This only executes every
(simulated) five seconds, but has the highest priority to ensure it gets processing time. Its main function
is to check that all the standard demo tasks are still operational.
The check task maintains a status string that is output to the console each time it executes. If
all the standard demo tasks are running without error then the string will print out "OK" and
the current tick count. If an error has been detected then the string will print out a message that
indicates in which task the error was reported.
The Eclipse project will output strings to an integrated console. To view these strings, the "RTOSDemo.exe" console must be selected using the drop-down list accessible from the little computer monitor icon speed button, as shown in the image below.
The Visual Studio console output will appear in a command prompt window.
If executing the routine should result in a context switch then the interrupt function must return
pdTRUE. Otherwise the interrupt function should return pdFALSE.
pvHandler should point to the handler function for the interrupt number being installed.
ulInterruptNumber is the number of the interrupt that is to be set pending, and corresponds to
the ulInterruptNumber parameter of vPortSetInterruptHandler().
The simulator itself uses three interrupts, one for a task yield, one for the simulated tick, and one for
terminating a Windows thread that was executing a FreeRTOS task that has since been deleted. As a
simple example, shown below is the code for the yield interrupt.
The interrupt function does nothing other than request a context switch, so just returns
pdTRUE. It is defined using the following code:
static unsigned long prvProcessYieldInterrupt( void )
{
    /* There is no processing to do here; this interrupt is just used to
       cause a context switch, so it simply returns pdTRUE. */
    return pdTRUE;
}
The simulated interrupt handler function is then installed using the following call, where
portINTERRUPT_YIELD is defined as 2:
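In the Win32 port sources the installing call takes the following form (reproduced here from the FreeRTOS Windows port; verify against port.c in your FreeRTOS version):

```c
vPortSetInterruptHandler( portINTERRUPT_YIELD, prvProcessYieldInterrupt );
```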
This is the interrupt that should execute whenever taskYIELD()/portYIELD() is called, so the
Win32 port version of portYIELD() is defined as:
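In the Win32 port's portmacro.h this definition takes the following form (again reproduced from the FreeRTOS Windows port; verify against your version):

```c
#define portYIELD() vPortGenerateSimulatedInterrupt( portINTERRUPT_YIELD )
```

Calling portYIELD() therefore simply pends the simulated yield interrupt, and the interrupt processing thread performs the actual context switch.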
Theory:
RTLinux Overview:
The basic premise underlying the design of RTLinux is that it is not feasible to
identify and eliminate all aspects of kernel operation that lead to unpredictability.
These sources of unpredictability include the Linux scheduling algorithm (which is
optimized to maximize throughput), device drivers, uninterruptible system calls, the
use of interrupt disabling and virtual memory operations. The best way to avoid these
problems is to construct a small, predictable kernel separate from the Linux kernel,
and to make it simple enough that operations can be measured and shown to have
predictable execution. This has been the course taken by the developers of RTLinux.
This approach has the added benefit of maintainability - prior to the development of
RTLinux, every time new device drivers or other enhancements to Linux were
needed, a study would have to be performed to determine that the change would not
introduce unpredictability.
Figure 1.1 shows the basic Linux kernel without hard realtime support. You will see
that the Linux kernel separates the hardware from user-level tasks. The kernel has
the ability to suspend any user-level task, once that task has outrun the ``slice of
time'' allotted to it by the CPU. Assume, for example, that a user task controls a
robotic arm. The standard Linux kernel could potentially preempt the task and give
the CPU to one which is less critical (e.g. one that boots up Netscape). Consequently,
the arm will not meet strict timing requirements. Thus, in trying to be ``fair'' to all
tasks, the kernel can prevent critical events from occurring.
Figure 1.2 shows a Linux kernel modified to support hard realtime. An additional
layer of abstraction - termed a ``virtual machine'' in the literature - has been added
between the standard Linux kernel and the computer hardware. As far as the standard
Linux kernel is concerned, this new layer appears to be actual hardware. More
importantly, this new layer introduces its own fixed-priority scheduler. This
scheduler assigns the lowest priority to the standard Linux kernel, which then runs
as an independent task. Then it allows the user to both introduce and set priorities
for any number of realtime tasks.
Figure 1.1: Detail of the bare Linux kernel
As its name implies, the init_module () function is called when the module is first loaded
into the kernel. It should return 0 on success and a negative value on failure.
Similarly, the cleanup_module() function is called when the module is unloaded.
For example, if we assume that a user has created a C file named my_module.c, the code
can be converted into a module by typing the following:
gcc -c {SOME-FLAGS} my_module.c
This command creates a module file named my_module.o, which can now be inserted
into the kernel. To insert the module into the kernel, we use the insmod command. To
remove it, the rmmod command is used.
Now that we understand the general structure of modules, and how to load and
unload them, we are ready to look at the RTLinux API.
To create a new realtime thread, we use the pthread_create(3) function. This function
must only be called from the Linux kernel thread (i.e., using init_module()):
#include <pthread.h>
int pthread_create(pthread_t *thread,
                   pthread_attr_t *attr,
                   void *(*start_routine)(void *),
                   void *arg);
The thread is created using the attributes specified in the `` attr'' thread attributes
object. If attr is NULL, default attributes are used. For more detailed information, refer
to the POSIX functions:
• pthread_attr_init(3),
• pthread_attr_setschedparam(3), and
• pthread_attr_getschedparam(3)
• pthread_attr_getcpu_np(3) , and
• pthread_attr_setcpu_np(3)
which are used to get and set general attributes for the scheduling parameters and
the CPUs in which the thread is intended to run.
The ID of the newly created thread is stored in the location pointed to by `` thread''.
The function pointed to by start_routine is taken to be the thread code. It is passed the
``arg'' argument.
You should join the thread in cleanup_module with pthread_join() for its resources
to be deallocated.
Time Facilities
RTLinux provides several clocks that can be used for timing functionality, such as
referencing for thread scheduling and obtaining timestamps. Here is the general
timing API:
#include <rtl_time.h>
struct timespec {
time_t tv_sec; /* seconds */
long tv_nsec; /* nanoseconds */
};
To obtain the current clock reading, use the clock_gettime(3) function where clock_id is
the clock to be read and ts is a structure which stores the value obtained.
The hrtime_t value is expressed as a single 64-bit number of nanoseconds.
Thus, clock_gethrtime(3) is the same as clock_gettime, but returns the time as
an hrtime_t rather than as a timespec structure.
Conversion Routines
Several routines exist for converting from one form of time reporting to the other:
#include <rtl_time.h>
The following clocks are architecture-dependent. They are not normally found in
user programs.
Scheduling Threads
RTLinux provides scheduling, which allows thread code to run at specific times.
RTLinux uses a pure priority-driven scheduler, in which the highest priority (ready)
thread is always chosen to run. If two threads have the same priority, which one is
chosen is undefined. RTLinux uses the following scheduling API:
int pthread_setschedparam(pthread_t thread,
int policy,
const struct sched_param *param);
int pthread_make_periodic_np(pthread_t thread,
const struct itimerspec *its);
int pthread_wait_np(void);
int sched_get_priority_max(int policy);
int sched_get_priority_min(int policy);
struct itimerspec {
struct timespec it_interval; /* timer period */
struct timespec it_value; /* timer expiration */
};
The priority of a realtime thread can be set when the thread is created, through the
thread attributes object, or afterwards by using
pthread_setschedparam(3) .
The policy argument is currently not used in RTLinux, but should be specified
as SCHED_FIFO for compatibility with future versions. The
structure sched_param contains the sched_priority member. Higher values correspond to
higher priorities. Use:
• sched_get_priority_max(3) , and
• sched_get_priority_min(3)
to obtain the maximum and minimum priority values that are valid for a given
scheduling policy.
To make a realtime thread execute periodically, users may use the non-portable
function:
pthread_make_periodic_np(3)
which marks the thread as periodic. Timing is specified by the itimer structure its.
The it_value member of the passed struct itimerspec specifies the time of the first
invocation; the it_interval is the thread period. Note that when setting up the period for
task T, the period specified in the itimer structure can be 0. This means that task T will
execute only once.
The pthread_wait_np(3) function suspends the execution of the calling thread until
the start of the next period, as set up by:
pthread_make_periodic_np(3)
Code Listing
pthread_t thread;

void *start_routine(void *arg)
{
    struct sched_param p;
    p.sched_priority = 1;
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &p);
    pthread_make_periodic_np(pthread_self(), gethrtime(), 500000000);
    while (1) {
        pthread_wait_np();
        rtl_printf("I'm here; my arg is %x\n", (unsigned) arg);
    }
    return 0;
}

int init_module(void)
{
    return pthread_create(&thread, NULL, start_routine, 0);
}

void cleanup_module(void)
{
    pthread_cancel(thread);
    pthread_join(thread, NULL);
}
Upon the first call to the newly-created thread function start_routine(), the
initialization section assigns the thread a scheduling priority of 1 (one) by filling
in p.sched_priority and passing it, together with the SCHED_FIFO policy, to
pthread_setschedparam(). Finally, by calling the function:
pthread_make_periodic_np()
the thread tells the scheduler to execute this thread periodically at a frequency of
2 Hz (a period of 500 milliseconds, i.e., 500000000 nanoseconds). This marks the
end of the initialization section for the thread.
The thread then enters its while loop and calls pthread_wait_np(), which blocks all
further execution of the thread until the scheduler releases it at the start of the next
period. Once the thread runs again, it executes the rest of the contents inside the
while loop, until it encounters another call to:
pthread_wait_np()
Because we haven't included any way to exit the loop, this thread will continue to
execute forever at a rate of 2Hz. The only way to stop the program is by removing
it from the kernel with the rmmod(8) command.
1. Compile the source code and create a module. We can normally accomplish
this by using the Linux GCC compiler directly from the command line. To
simplify things, however, we'll create a Makefile. Then we'll only need to type
``make'' to compile our code.
2. Locate and copy the rtl.mk file. The rtl.mk file is an include file which contains
all the flags needed to compile our code. For simplicity, we'll copy it from the
RTLinux source tree and place it alongside our hello.c file.
3. Insert the module into the running RTLinux kernel. The resulting object binary
must be ``plugged in'' to the kernel, where it will be executed by RTLinux.
If you haven't already done so, locate the file rtl.mk and copy it into the same directory
as your hello.c and Makefile files. The rtl.mk file can usually be found
at /usr/include/rtlinux/rtl.mk.
cp /usr/include/rtlinux/rtl.mk .
Typing ``make'' then compiles the hello.c program and produces an object file named hello.o.
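A minimal Makefile of the kind step 1 describes might look like the following. The use of rtl.mk's $(CC), $(INCLUDE), and $(CFLAGS) variables is an assumption; consult the rtl.mk shipped with your RTLinux tree for the exact variable names (note that the command lines must be indented with a tab):

```makefile
include rtl.mk

all: hello.o

hello.o: hello.c
	$(CC) $(INCLUDE) $(CFLAGS) -c hello.c

clean:
	rm -f *.o
```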
We now need to load the RTLinux modules. There are several ways to do this. The
easiest is to use the rtlinux(1) command (as root):
rtlinux start hello
You can check the status of your modules by typing the command:
rtlinux status hello
For more information about the usage of the rtlinux(1) command, refer to its man page,
or type:
rtlinux help
You should now be able to see your hello.o program printing its message twice per
second. Depending on the configuration of your machine, you should either be able
to see it directly in your console, or by typing:
dmesg
To stop the program, we need to remove it from the kernel. To do so, type:
rtlinux stop hello
Congratulations, you have now successfully created and run your very first RTLinux
program!
The Advanced API: Getting More Out of Your RTLinux Modules
RTLinux has a rich assortment of functions which can be used to solve most realtime
application problems. This chapter describes some of the more advanced concepts.
The examples/fp directory contains several examples of tasks which use floating point
and the math library.
Realtime FIFOs are First-In-First-Out queues that can be read from and written to
by Linux processes and RTLinux threads. FIFOs are uni-directional - you can use a
pair of FIFOs for bi-directional data exchange. To use the FIFOs,
the system/rtl_posixio.o and fifos/rtl_fifo.o Linux modules must be loaded in the kernel.
RT-FIFOs are Linux character devices with the major number of 150. Device entries
in /dev are created during system installation. The device file names
are /dev/rtf0, /dev/rtf1, etc., through /dev/rtf63 (the maximum number of RT-FIFOs in the
system is configurable during system compilation).
rtf_create allocates the buffer of the specified size for the fifo buffer. The fifo argument
corresponds to the minor number of the device. rtf_destroy deallocates the FIFO.
These functions must only be called from the Linux kernel thread (i.e.,
from init_module()).
After the FIFO is created, the following calls can be used to access it from RTLinux
threads: open(2) , read(2) , write(2) and close(2) . Support for other STDIO functions is planned for future
releases.
You can also use the RTLinux-specific functions rtf_put (3) and rtf_get (3) .
Linux processes can use UNIX file IO functions without restriction. See
the examples/measurement/rt_process.c example program for a practical application of RT-FIFOs.
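Because an RT-FIFO appears to a Linux process as an ordinary character device, reading from one needs nothing beyond standard UNIX file IO. The helper below takes the device path as a parameter; the function name is illustrative, and since it uses only open/read/close it works on any readable file, not just /dev/rtf0:

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Read up to `count` bytes from an RT-FIFO device such as /dev/rtf0.
   Returns the number of bytes read, or -1 on error.  Ordinary UNIX
   file IO is all a Linux process needs to talk to an RT-FIFO. */
ssize_t fifo_read(const char *path, void *buf, size_t count)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = read(fd, buf, count);
    close(fd);
    return n;
}
```

A process would typically call this in a loop, treating each successful read as one message from the realtime side.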
Shared Memory
To use shared memory, the mbuff.o module must first be loaded in the kernel. Two functions are used to allocate blocks of shared
memory, connect to them, and eventually deallocate them.
#include <mbuff.h>
void * mbuff_alloc(const char *name, int size);
void mbuff_free(const char *name, void * mbuf);
The first time mbuff_alloc is called with a given name, a shared memory block of the specified size is allocated.
The reference count for this block is set to 1. On success, the pointer to the newly allocated block is
returned. NULL is returned on failure. If the block with the specified name already exists, this function returns a
pointer that can be used to access this block and increases the reference count.
mbuff_free deassociates mbuff from the specified buffer. The reference count is decreased by 1. When it reaches
0, the buffer is deallocated.
These functions are available for use in both Linux processes and the Linux kernel threads.
mbuff_alloc and mbuff_free cannot be used from realtime threads. You should call
them from init_module and cleanup_module only.
The general idea is that a threaded task can be either awakened or suspended from within an interrupt service
routine.
An interrupt-driven thread calls pthread_suspend_np(pthread_self()) and blocks. Later, the interrupt handler
calls pthread_wakeup_np(3) for this thread. The thread will run until the next call to pthread_suspend_np(3) . An
example can be found in examples/sound/irqthread.c.
Mutual Exclusion
Mutual exclusion refers to the concept of allowing only one task at a time (out of many) to read from or write to
a shared resource. Without mutual exclusion, the integrity of the data found in that shared resource could become
compromised. Refer to the appendix for further information on mutual exclusion.
RTLinux supports the POSIX pthread_mutex_ family of functions (include/rtl_mutex.h). Currently the following
functions are available:
• pthread_mutexattr_getpshared(3)
• pthread_mutexattr_setpshared(3)
• pthread_mutexattr_init(3)
• pthread_mutexattr_destroy(3)
• pthread_mutexattr_settype(3)
• pthread_mutexattr_gettype(3)
• pthread_mutex_init(3)
• pthread_mutex_destroy(3)
• pthread_mutex_lock(3)
• pthread_mutex_trylock(3)
• pthread_mutex_unlock(3)
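The core lock/unlock pair can be demonstrated with ordinary user-space pthreads, since RTLinux deliberately mirrors the POSIX API. This sketch protects a shared counter incremented by two threads; the function names and iteration count are illustrative:

```c
#include <assert.h>
#include <pthread.h>

/* A counter shared by two threads; the mutex makes each
   read-modify-write increment atomic with respect to both threads. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                 /* critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Run two workers to completion and return the final count. */
long run_counter_demo(void)
{
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

Without the mutex, the two increments can interleave and updates are lost; with it, the final count is always exactly the sum of both loops.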
In a module, you can call mmap from Linux mode only (i.e., from init_module()).
Calling mmap from RT-threads will fail.
Note, the order of arguments is value, port in the output functions. This may cause confusion when porting old
code from other systems.
Functions with the ``_p'' suffix (e.g., outb_p) provide a small delay after reading or writing to the port. This delay
is needed for some slow ISA devices on fast machines. (See also the Linux I/O port programming mini-HOWTO).
Check out examples/sound to see how some of these functions are used to program the PC realtime clock and the
speaker.
Soft interrupts are normal Linux kernel interrupts. They have the advantage that some Linux kernel functions can
be called from them safely. However, for many tasks they do not provide hard realtime performance; they may be
delayed for considerable periods of time.
Hard interrupts (or realtime interrupts), on the other hand, have much lower latency. However, just as with realtime
threads, only a very limited set of kernel functions may be called from the hard interrupt handlers.
Hard Interrupts
The two functions:
• rtl_request_irq(3) and
• rtl_free_irq(3)
are used for installing and uninstalling hard interrupt handlers for specific interrupts. The manual pages describe
their operation in detail.
#include <rtl_core.h>
int rtl_request_irq(unsigned int irq,
                    unsigned int (*handler)(unsigned int,
                                            struct pt_regs *));
int rtl_free_irq(unsigned int irq);
Soft interrupts
int rtl_get_soft_irq(void (*handler)(int, void *, struct pt_regs *),
                     const char *devname);
void rtl_global_pend_irq(int ix);
void rtl_free_soft_irq(unsigned int irq);
The rtl_get_soft_irq(3) function allocates a virtual irq number and installs the handler function for it. This virtual
interrupt can later be triggered using rtl_global_pend_irq(3) . rtl_global_pend_irq is safe to use from realtime
threads and realtime interrupts. rtl_free_soft_irq(3) frees the allocated virtual interrupt.
Note that soft interrupts are used in the RTLinux FIFO implementation (fifos/rtl_fifo.c).
Special Topics
You may never find yourself needing to know any of the following. Then again, you might.
By default, a thread is created to run on the current CPU. To assign a thread to a particular CPU, use
the pthread_attr_setcpu_np(3) function to set the CPU pthread attribute. See examples/mutex/mutex.c.
#include <rt_com.h>
#include <rt_comP.h>
void rt_com_write(unsigned int com, char *pointer, int cnt);
int rt_com_read(unsigned int com, char *pointer, int cnt);
int rt_com_setup(unsigned int com, unsigned int baud,
unsigned int parity, unsigned int stopbits,
unsigned int wordlength);
#define RT_COM_CNT n
struct rt_com_struct
{
    int magic;                  // unused
    int baud_base;              // base rate (BASE_BAUD in rt_comP.h);
                                // 115200 for standard ports
    int port;                   // port number
    int irq;                    // interrupt number (IRQ) for the port
    int flag;                   // flags set for this port
    void (*isr)(void);          // address of the interrupt service routine
    int type;                   //
    int ier;                    // a copy of the IER register
    struct rt_buf_struct ibuf;  // port input buffer
    struct rt_buf_struct obuf;  // port output buffer
} rt_com_table[RT_COM_CNT];
where
• rt_com_write(3) - writes cnt characters from the buffer pointer to the realtime
serial port com.
• rt_com_read(3) - attempts to read cnt characters into the buffer pointer from the
realtime serial port com.
• rt_com_setup(3) - is used to dynamically change the parameters of each realtime
serial port.
rt_com is a Linux module. The user configures each port at run time via rt_com_setup. In
addition, the user must specify - via entries in the rt_com_table (located in rt_com.h) - the
per-port parameters shown in the structure above.
RTLinux provides two delayed execution mechanisms to overcome this limitation: soft interrupts and task queues.
General
Before you will be able to run any RTLinux programs, you must first insert the RTLinux scheduler and support
modules into the Linux kernel. Use any of the following:
Examples
Using rtlinux
Beginning with RTLinux 3.0-pre9, users can load and remove user modules by using the rtlinux(1) command. To
insert, remove, and obtain status information about RTLinux modules, use the following commands:
man 1 rtlinux
or
rtlinux help.
Using modprobe
Suppose we have the appropriately named my_program.o. Assuming that all the appropriate RTLinux modules
have already been loaded, all that's left to do is to load this module into the kernel:
insmod my_program.o
To remove it later, use:
rmmod my_program
The following sections provide a listing of the various utilities and APIs available in RTLinux.
Getting Around
There are several manual pages which give overviews on the technology and the APIs.
The following utilities are designed to make your programming job easier.
Here is the main RTLinux API. You are encouraged to use this API for all new projects.
The v1 API is exclusively for older RTLinux projects. It is NOT recommended for use with new projects. This
listing is for backward compatibility only:
Theory:
VxWorks supports the AMD/Intel architecture, the ARM architecture, the POWER architecture, and
the RISC-V architecture. On 32 and 64-bit processors, the real-time operating system may be
utilized in multicore mixed modes, symmetric multiprocessing, multi-OS architectures, and
asymmetric multiprocessing.
The VxWorks development environment contains the kernel, board support packages, the
Wind River Workbench development suite, and third-party software and hardware
technologies. The real-time operating system in VxWorks 7 version has been redesigned for
modularity and upgradeability, with the operating system kernel separated from middleware,
applications, and other packages. Scalability, security, safety, connection, and graphics have
all been enhanced to meet the demands of the Internet of Things (IoT).
VxWorks started in the 1980s as a set of upgrades to VRTX, a simple RTOS sold by Ready
Systems. Wind River obtained the distribution rights to VRTX and significantly
upgraded it by integrating a file system and an integrated development environment, among
other things. In 1987, anticipating the loss of its reseller contract with Ready Systems,
Wind River designed and developed its own kernel to replace VRTX within VxWorks.
The wind microkernel is at the core of the VxWorks real-time OS. The kernel is the software
component that links the shell and applications to the hardware: it executes application
programs while providing secure access to the machine's hardware.
There are various capabilities of the VxWorks OS. Some capabilities of the VxWorks OS are
as follows:
1. Reliability
VxWorks is used on Earth and on Mars in applications where reliability is required. It
delivers the highest levels of performance when it is needed the most.
2. Safety
VxWorks was developed with safety in mind. It has undergone extensive testing and
certification to meet specified safety requirements.
3. Security
It provides a set of capabilities designed to protect devices, data, and intellectual property in
the linked world. VxWorks Security Services meet strict security standards across industries
when combined with the development processes.
There are various functions of the VxWorks OS. Some functions of the VxWorks OS are as
follows:
1. Task Management
A task is an instance of a program that is being run. A task comprises several
components, such as a memory address space, an identifier, a program counter, and context
data. Task management is responsible for creating these tasks and carrying out their
instructions.
There are mainly two types of operating system tasks: single-tasking and multitasking. The
single-task approach only works with one process at a time. The multitasking method allows
numerous processes to run at the same time. Because the VxWorks kernel supports
multitasking, we can run numerous jobs simultaneously.
2. Scheduling
The scheduling system is the backbone of the RTOS and is used to keep the processor's
workload consistent and balanced, so that each process is completed within a certain
amount of time. Priority scheduling and round-robin scheduling are the two key techniques
in the VxWorks OS.
3. Memory Management
Memory management is a key part of the OS, which handles the computer's memory. A system
has two kinds of memory: physical memory (the RAM installed in the machine) and virtual
memory (an abstraction built on top of it). The OS manages the address spaces, mapping
each virtual memory address onto a real memory address.
All application jobs in the VxWorks embedded RTOS share the same address space, implying
that faulty apps could mistakenly access system resources and compromise the overall system's
stability. The VxWorks system includes one optional tool called VxVMI, which can be used
to give each task its own address space. VxWorks does not provide privilege protection.
VxWorks privilege level is always 0.
4. Interrupts
VxWorks OS interrupt service routines execute in a distinct context outside of any process
context to give the quickest possible response to external interrupts. There are no process
context switches involved. The interrupt vector table stores the ISR address, which is invoked
directly from the hardware. The ISR first does some work (for example, saving registers and
setting up the stack) before calling the C functions that the user has connected.
5. Round-Robin Scheduling
This scheduling algorithm is utilized by the processor to share CPU time among processes.
It is specially intended for time-sharing systems. Round-robin scheduling allots a specific
time slice to each process; once a process has run for its slice, the next ready process is
permitted to run for a slice of the same length. In a real-time operating system,
round-robin scheduling is typically applied among tasks of equal priority.
6. Priority Scheduling
Priority scheduling assigns a priority to each process (thread). The highest priority thread
is executed first. Processes of equal priority are scheduled on a first-come, first-served basis.
Priority can be determined based on time, memory, or any other resource demand.
VxWorks has been adapted to a variety of platforms and can currently run on almost any latest
CPU used in the embedded industry. It includes the Intel x86 processor family (including the
Intel Quark SoC), MIPS, PowerPC (including BAE RAD), Intel i960, SPARC, Fujitsu FR-V,
Freescale ColdFire, SH-4, and the ARM, StrongARM, and xScale CPU families. It provides
an interface between all supported hardware and the operating system via a standard board
support package (BSP). Its developer kit offers a standardized API and a dependable
environment for designing RTOS. Popular SSL/TLS libraries, such as wolfSSL, support
VxWorks.
VxWorks operating system is a collection of runtime components and development tools. The
run time components are an OS (UP and SMP), which are the software for app support and
hardware support. VxWorks' primary development tools include compilers such as Diab, GNU,
and Intel C++ Compiler (ICC) and build and setup tools. Additionally, the system provides
productivity tools, including development support tools, Workbench development suite, and
Intel tools for asset tracking and host support.
The VxWorks OS platform is a modular, vendor-agnostic, open system that may run on various
third-party applications and hardware. The OS kernel is isolated from middleware, programs,
and other packages, making problem fixes and testing new features easier. A layered source
development system solution enables the simultaneous installation of several versions of any
stack, allowing developers to choose which version of any feature set must be included in the
VxWorks kernel libraries.
There are various features of the VxWorks OS, including the scalability, security, safety,
connectivity, and graphics capabilities enhanced in VxWorks 7.
Theory:
Suggestion to read
• RTOS Basics – Part 1
• RTOS Basics – PART 2
• FreeRTOS Porting for LPC2148
• LPC2148 UART Tutorial
Introduction
FreeRTOS contains many APIs, but the most basic operation is creating a task, because
tasks are what run concurrently once the system boots up. So today we will look at simple
task creation.
API Used
• xTaskCreate
• vTaskDelay
• vTaskStartScheduler
xTaskCreate
This FreeRTOS API is used to create a task. It can be called repeatedly to create as many tasks as needed.
portBASE_TYPE xTaskCreate( pdTASK_CODE pvTaskCode,
                           const signed portCHAR * const pcName,
                           unsigned portSHORT usStackDepth,
                           void *pvParameters,
                           unsigned portBASE_TYPE uxPriority,
                           xTaskHandle *pxCreatedTask );
• pvTaskCode: a pointer to the function where the task is implemented. (Address of the
function)
• pcName: a name given to the task. FreeRTOS itself makes no use of it; it is intended
for debugging purposes only.
• usStackDepth: length of the stack for this task in words. The actual size of the stack
depends on the microcontroller.
• pvParameters: a pointer to arguments given to the task.
• uxPriority: priority given to the task, a number between 0 and
(configMAX_PRIORITIES – 1).
• pxCreatedTask: a pointer to an identifier that allows handling the task. If the task
does not have to be handled in the future, this can be left NULL.
vTaskDelay
This API blocks the calling task for the given number of tick periods; we use it here for delay purposes.
vTaskStartScheduler
This API starts the FreeRTOS scheduler. Once it is called, the created tasks begin executing under the scheduler's control, and the call does not return while the scheduler is running.
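Putting the three APIs together, a minimal task-creation sketch might look as follows. This is an illustrative sketch only, assuming a configured FreeRTOS port for the target board; the task names are invented, and the board-specific LED-toggling code is omitted:

```c
#include "FreeRTOS.h"
#include "task.h"

/* Illustrative task: would toggle an LED, then block for 100 ticks */
void vTaskLED1(void *pvParameters)
{
    for (;;) {
        /* toggle an LED here (board-specific, omitted) */
        vTaskDelay(100);          /* block this task for 100 ticks */
    }
}

void vTaskLED2(void *pvParameters)
{
    for (;;) {
        vTaskDelay(200);
    }
}

int main(void)
{
    /* create two tasks at equal priority; stack depth is given in words */
    xTaskCreate(vTaskLED1, (const signed portCHAR *)"LED1",
                configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(vTaskLED2, (const signed portCHAR *)"LED2",
                configMINIMAL_STACK_SIZE, NULL, 1, NULL);

    vTaskStartScheduler();        /* hand control to the scheduler */

    return 0;                     /* never reached while the scheduler runs */
}
```

Note that `main()` only registers the tasks; nothing runs until `vTaskStartScheduler()` is called.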
Output:
Theory:
The algorithm schedules processes on the basis of priority: the process with the highest priority is given the CPU first, and this continues until all the processes are completed. A job queue and a ready queue are required to perform the algorithm, where the processes are placed in the ready queue in order of their priority and selected from there for execution.
The Priority Scheduling Algorithm is responsible for moving the process from the ready queue into the job queue on the basis of that process having the highest priority.
The priority of a process can be decided by keeping certain factors in mind, such as the burst time of the process, its memory requirements, etc.
• Arrival Time:- The time of the arrival of the process in the ready queue
• Waiting Time:- The interval for which the process needs to wait in the queue for its
execution
• Turnaround Time:- The summation of waiting time in the queue and burst time.
The Priority Scheduling Algorithm can be divided into two types: preemptive and non-preemptive. In non-preemptive priority scheduling, if a process with a higher priority arrives in the queue during the execution of another process, the execution of the ongoing process is not stopped; the arriving process, despite its higher priority, has to wait until the ongoing process concludes.
Note:- On the occasion of a special case, when two processes have the same priority then the
tie-breaker for the earliest process to be executed is decided on the basis of a first come first
serve (FCFS) which is a scheduling algorithm in itself.
Let us take an example with three processes A, B and C with the priority, arrival time and burst time mentioned in the table below (in this example a larger priority value means a higher priority). We trace, one by one, which process gets the CPU using the non-preemptive priority scheduling algorithm.

Process   Burst Time   Arrival Time   Priority
A         5            0              3
B         3            3              1
C         2            5              2
As mentioned, Process A will get the CPU, being the only process at that point in time. Being non-preemptive, it will finish its execution before another process gets the CPU.

Process   Burst Time   Arrival Time   Priority
C         2            5              2
B         3            3              1

On the 5th second, Process B and Process C are in the waiting state, with B having waited longer, but C will get the CPU because it has a higher priority.
Process   Burst Time   Arrival Time   Priority
B         3            3              1

On the 7th second, C will finish execution and the CPU will be passed to the only process left, i.e., B, to complete its execution.
The algorithm will finish executing all the processes at the 10th second.
Now that we are done discussing the theoretical concept and a working example of the Priority Scheduling Algorithm, we can come up with an algorithm for the priority scheduling program in C and implement it:
• Input the number of processes, and the burst time and priority for each process to be executed
• Sort the processes in order of priority
• Print the ordered execution of the processes and the average waiting and turnaround times
#include <stdio.h>

void swap(int *a, int *b)
{
    int temp = *a;
    *a = *b;
    *b = temp;
}

int main()
{
    int n;
    printf("Enter Number of Processes: ");
    scanf("%d", &n);

    int burst[n], priority[n], index[n];
    for (int i = 0; i < n; i++) {
        printf("Enter Burst Time and Priority Value for Process %d: ", i + 1);
        scanf("%d %d", &burst[i], &priority[i]);
        index[i] = i + 1;
    }

    /* selection sort: a larger priority value means a higher priority */
    for (int i = 0; i < n; i++) {
        int temp = priority[i], m = i;
        for (int j = i; j < n; j++) {
            if (priority[j] > temp) {
                temp = priority[j];
                m = j;
            }
        }
        swap(&priority[i], &priority[m]);
        swap(&burst[i], &burst[m]);
        swap(&index[i], &index[m]);
    }

    int t = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d is executed from %d to %d\n", index[i], t, t + burst[i]);
        t += burst[i];
    }
    printf("\n");

    printf("Process\t\tBurst Time\tWait Time\n");
    int wait_time = 0;
    int total_wait_time = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d\t\t%d\t\t%d\n", index[i], burst[i], wait_time);
        total_wait_time += wait_time;
        wait_time += burst[i];
    }

    /* turnaround time of each process = its waiting time + its burst time */
    int total_Turn_Around = total_wait_time;
    for (int i = 0; i < n; i++)
        total_Turn_Around += burst[i];

    printf("\nAverage Waiting Time: %.2f\n", (float)total_wait_time / n);
    printf("Average Turnaround Time: %.2f\n", (float)total_Turn_Around / n);
    return 0;
}
Explanation: the number of processes is taken from the user, followed by the priority and burst time of each process. Selection sort is used to order the processes by their priority values, with the swap function used to swap their positions in the arrays for burst time, priority values and execution order. Further on, the total waiting time and the total turnaround time are calculated and divided by the number of processes to find the average waiting time and average turnaround time.
Output
Enter Number of Processes: 2
P1 is executed from 0 to 5
P2 is executed from 5 to 9

Process   Burst Time   Wait Time
P1        5            0
P2        4            5

The space complexity is O(1), as no extra auxiliary space is required for the operations, given that all the necessary values such as burst time and priority are provided beforehand.
Theory:
Fixed priority pre-emptive scheduling algorithm is mostly used in real time systems.
In this scheduling algorithm the processor makes sure that the highest priority task is to be
performed first ignoring the other task to be executed.
Decision Mode:
Pre-emptive: When a process arrives, its priority is compared with the current process’s
priority. If the new job has higher priority than the current process, the current process is
suspended and new process is started.
Implementation:
A sorted FIFO queue is used for this strategy. As a new process arrives, it is placed in the queue according to its priority. Hence the process with the highest priority is considered first, as it is placed at the front of the queue.
Example:
Let us take the following example having a set of 4 processes along with their arrival times and the time taken to complete each process. The priority of each process is also mentioned. Consider all time values in milliseconds; a smaller priority value means a higher priority.

Process   Arrival Time   Burst Time   Priority
P0        0              10           5
P1        1              6            4
P2        3              2            2
P3        5              4            0
Gantt chart:
Initially only P0 is present, and it is allowed to run. But when P1 arrives, it has a higher priority, so P0 is pre-empted and P1 is allowed to run. This is repeated until all processes complete their execution.
Statistics:

Process   Arrival Time   Burst Time   Completion Time   Turnaround Time   Waiting Time
P0        0              10           22                22                12
P1        1              6            13                12                6
P2        3              2            5                 2                 0
P3        5              4            9                 4                 0
Disadvantage:
Starvation is possible for low-priority processes. It can be overcome by using a technique called "aging", which gradually increases the priority of processes that wait in the system for a long time. There is also context-switch overhead.
Implementation using C:
#include <stdio.h>

int main()
{
    int x, n, p[10], pp[10], pt[10], w[10], t[10], awt, atat, i;

    printf("Enter the number of processes : ");
    scanf("%d", &n);
    for (i = 0; i < n; i++) {
        printf("\nProcess no %d : ", i + 1);
        scanf("%d %d", &pt[i], &pp[i]);   /* burst time, priority */
        p[i] = i + 1;
    }

    /* sort by priority value, largest value first */
    for (i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            if (pp[i] < pp[j]) {
                x = pp[i]; pp[i] = pp[j]; pp[j] = x;
                x = pt[i]; pt[i] = pt[j]; pt[j] = x;
                x = p[i];  p[i]  = p[j];  p[j]  = x;
            }

    w[0] = 0;
    awt = 0;
    t[0] = pt[0];
    atat = t[0];
    for (i = 1; i < n; i++) {
        w[i] = t[i - 1];          /* waiting time = previous completion time */
        awt += w[i];
        t[i] = w[i] + pt[i];      /* turnaround time */
        atat += t[i];
    }

    printf("\n\n Job \t Burst Time \t Wait Time \t Turn Around Time \t Priority \n");
    for (i = 0; i < n; i++)
        printf(" %d \t %d \t\t %d \t\t %d \t\t\t %d \n", p[i], pt[i], w[i], t[i], pp[i]);

    awt /= n;
    atat /= n;
    printf("\n Average waiting time : %d", awt);
    printf("\n Average turnaround time : %d\n", atat);
    return 0;
}
Output:
Enter the number of processes : 4

Process no 1 : 3 1
Process no 2 : 4 2
Process no 3 : 5 3
Process no 4 : 6 4

 Job   Burst Time   Wait Time   Turn Around Time   Priority
 4     6            0           6                  4
 3     5            6           11                 3
 2     4            11          15                 2
 1     3            15          18                 1
Theory:
How does Preemptive Priority CPU Scheduling Algorithm decide the Priority of a Process?
The Preemptive Priority CPU Scheduling Algorithm uses a rank-based system, where lower-rank processes have higher priority and higher-rank processes have lower priority. For instance, if there are 10 processes to be executed using this preemptive algorithm, then the process with rank 1 has the highest priority, the process with rank 2 has a comparatively lower priority, and the process with rank 10 has the lowest priority.
• Step-1: Select the process whose arrival time is 0; we need to select that process
because it is the only process executing at time t = 0.
• Step-2: Check the priority of the next available process. Here we need to check for 3
conditions.
• Step-4: When it reaches the final process, choose the process having the highest
priority and execute it. Repeat the same step until all processes complete their
execution.
The preemptive priority algorithm can be implemented using a Min Heap data structure; a detailed implementation of preemptive priority CPU scheduling is given later in this experiment.
Example-1: Consider the following table of arrival time, Priority, and burst time for five
processes P1, P2, P3, P4, and P5.
Process   Arrival Time   Priority   Burst Time
P1        0 ms           3          3 ms
P2        1 ms           2          4 ms
P3        2 ms           4          6 ms
P4        3 ms           6          4 ms
P5        5 ms           10         2 ms
The Preemptive Priority CPU Scheduling Algorithm will work on the basis of the steps
mentioned below:
• At time t = 0,
• Process P1 is the only process available in the ready queue, as its arrival time is
0ms.
• Hence Process P1 is executed first for 1ms, from 0ms to 1ms, irrespective of its
priority.
Time Instance   Process   Arrival Time   Priority   Execution Time   Initial Burst Time   Final Burst Time
0 ms – 1 ms     P1        0 ms           3          1 ms             3 ms                 2 ms
• At time t = 1 ms,
• Since the priority of process P2 is higher than the priority of process P1, process P2
will get executed first.

Time Instance   Process   Arrival Time   Priority   Execution Time   Initial Burst Time   Final Burst Time
                P1        0 ms           3          0 ms             2 ms                 2 ms
1 ms – 2 ms     P2        1 ms           2          1 ms             4 ms                 3 ms
• At time t = 2 ms,
• There are 3 processes available in the ready queue: P1, P2, and P3.
• Since the priority of process P2 is the highest among them, process P2 will get
executed first.

Time Instance   Process   Arrival Time   Priority   Execution Time   Initial Burst Time   Final Burst Time
                P1        0 ms           3          0 ms             2 ms                 2 ms
                P2        1 ms           2          1 ms             3 ms                 2 ms
2 ms – 3 ms     P3        2 ms           4          0 ms             6 ms                 6 ms
• At time t = 3 ms,
• There are 4 processes available in the ready queue: P1, P2, P3, and P4.
• Since the priority of process P2 is the highest among them, process P2 will get
executed first.

Time Instance   Process   Arrival Time   Priority   Execution Time   Initial Burst Time   Final Burst Time
                P1        0 ms           3          0 ms             2 ms                 2 ms
                P2        1 ms           2          1 ms             2 ms                 1 ms
                P3        2 ms           4          0 ms             6 ms                 6 ms
3 ms – 4 ms     P4        3 ms           6          0 ms             4 ms                 4 ms
• At time t = 4 ms,
• There are 5 processes available in the ready queue: P1, P2, P3, P4, and P5.
• Since the priority of process P2 is highest among the priority of processes P1, P2,
P3, P4, and P5, therefore Process P2 will get executed first.
Since Process P2’s burst time has become 0, therefore it is complete and will be removed
from the process queue.
Time Instance   Process   Arrival Time   Priority   Execution Time   Initial Burst Time   Final Burst Time
                P1        0 ms           3          0 ms             2 ms                 2 ms
                P2        1 ms           2          1 ms             1 ms                 0 ms
                P3        2 ms           4          0 ms             6 ms                 6 ms
                P4        3 ms           6          0 ms             4 ms                 4 ms
4 ms – 5 ms     P5        5 ms           10         0 ms             2 ms                 2 ms
• At time t = 5 ms,
• There are 4 processes available in the ready queue: P1, P3, P4, and P5.
• Since the priority of process P1 is the highest among the priority of processes P1,
P3, P4 and P5, therefore Process P1 will get executed first.
Since Process P1’s burst time has become 0, therefore it is complete and will be removed
from the process queue.
Time Instance   Process   Arrival Time   Priority   Execution Time   Initial Burst Time   Final Burst Time
                P1        0 ms           3          2 ms             2 ms                 0 ms
                P3        2 ms           4          0 ms             6 ms                 6 ms
                P4        3 ms           6          0 ms             4 ms                 4 ms
5 ms – 7 ms     P5        5 ms           10         0 ms             2 ms                 2 ms
• At time t = 7 ms,
• There are 3 processes available in the ready queue: P3, P4, and P5.
• Since the priority of process P3 is the highest among the priority of processes P3,
P4, and P5, therefore Process P3 will get executed first.
Time Instance   Process   Arrival Time   Priority   Execution Time   Initial Burst Time   Final Burst Time
                P3        2 ms           4          6 ms             6 ms                 0 ms
                P4        3 ms           6          0 ms             4 ms                 4 ms
7 ms – 13 ms    P5        5 ms           10         0 ms             2 ms                 2 ms
• At time t = 13 ms,
• Since the priority of process P4 is highest among the priority of process P4 and P5,
therefore Process P4 will get executed first.
Since Process P4’s burst time has become 0, therefore it is complete and will be removed
from the process queue.
Time Instance   Process   Arrival Time   Priority   Execution Time   Initial Burst Time   Final Burst Time
                P4        3 ms           6          4 ms             4 ms                 0 ms
13 ms – 17 ms   P5        5 ms           10         0 ms             2 ms                 2 ms
• At time t = 17 ms,
Since Process P5’s burst time has become 0, therefore it is complete and will be removed
from the process queue.
Time Instance   Process   Arrival Time   Priority   Execution Time   Initial Burst Time   Final Burst Time
17 ms – 19 ms   P5        5 ms           10         2 ms             2 ms                 0 ms
• At time t = 19 ms,
• There is no more process available in the ready queue. Hence the processing will
now stop.
Gantt Chart:
Now we need to calculate the completion time (C.T.) and the turnaround time (T.A.T.) of each
process from the Gantt chart.
After calculating the above fields, the final table looks like:
• Average Turn Around Time = (Total Turn Around Time)/(no. of processes) = 50/5
= 10.00 ms
Example 2:
Consider the following table of arrival time, priority and burst time for seven processes
P1, P2, P3, P4, P5, P6 and P7.

Process   Arrival Time   Priority   Burst Time
P1        0 ms           3          8 ms
P2        1 ms           4          2 ms
P3        3 ms           4          4 ms
P4        4 ms           5          1 ms
P5        5 ms           2          6 ms
P6        6 ms           6          5 ms
P7        10 ms          1          1 ms
• At time t = 0,
• At time t = 1,
• At time t = 3,
• At time t = 4,
• The priority of P1 is higher than that of P4, so we execute P1 for 1 ms.
• At time t = 5,
• At time t = 6,
• At time t = 10,
• At time t = 11,
• At time t = 12,
• At time t = 15,
• At time t = 17,
• Now we take the process which is having the highest priority.
• At time t = 21,
• At time t = 22,
Gantt chart:
Here, H – Higher Priority, L – Least Priority
One of the most common drawbacks of the preemptive priority CPU scheduling algorithm is the starvation problem: a process may have to wait a long time before it gets scheduled onto the CPU.
Example: In Example 2, we can see that process P1 has priority 3, and we pre-empted process P1 and allocated the CPU to P5. Here we have only 7 processes; but if there were many processes with a priority higher than P1's, then P1 would have to wait a long time for those processes to be scheduled and completed. This condition is called the starvation problem.
Solution: The solution to this starvation problem is ageing. Ageing raises the priority of a process that has been waiting for a long period by decrementing its priority number after every certain interval of time.
For example, suppose that after every 3 units of time the priority number of every waiting process is decreased by 2. Then a process P1 with priority 5, after waiting for 3 units of time, has its priority number decreased from 5 to 3, so that if there is a process P2 with priority 4, P2 will wait and P1 will be scheduled and executed.
#include<iostream>
#include<algorithm>
using namespace std;
struct node{
char pname;
int btime;
int atime;
int priority;
int restime=0;
int ctime=0;
int wtime=0;
}a[1000],b[1000],c[1000];
j=f;
if(b[j].btime>qt){
c[k]=b[j];
c[k].btime=qt;
k++;
b[j].btime=b[j].btime-qt;
ttime+=qt;
moveLast=true;
for(q=0;q<n;q++){
if(b[j].pname!=a[q].pname){
a[q].wtime+=qt;
}
}
}
else{
c[k]=b[j];
k++;
f++;
ttime+=b[j].btime;
moveLast=false;
for(q=0;q<n;q++){
if(b[j].pname!=a[q].pname){
a[q].wtime+=b[j].btime;
}
}
}
if(f==r&&i>=n)
break;
}
tArray[i]=ttime;
ttime+=a[i].btime;
for(i=0;i<k-1;i++){
if(c[i].pname==c[i+1].pname){
c[i].btime+=c[i+1].btime;
for(j=i+1;j<k-1;j++)
c[j]=c[j+1];
k--;
i--;
}
}
int rtime=0;
for(j=0;j<n;j++){
rtime=0;
for(i=0;i<k;i++){
if(c[i].pname==a[j].pname){
a[j].restime=rtime;
break;
}
rtime+=c[i].btime;
}
}
float averageWaitingTime=0;
float averageResponseTime=0;
float averageTAT=0;
cout<<"\nGantt Chart\n";
rtime=0;
for (i=0; i<k; i++){
if(i!=k)
cout<<"| "<<'P'<< c[i].pname << " ";
rtime+=c[i].btime;
for(j=0;j<n;j++){
if(a[j].pname==c[i].pname)
a[j].ctime=rtime;
}
}
cout<<"\n";
rtime=0;
for (i=0; i<k+1; i++){
cout << rtime << "\t";
tArray[i]=rtime;
rtime+=c[i].btime;
}
cout<<"\n";
cout<<"\n";
cout<<"P.Name Priority AT\tBT\tCT\tTAT\tWT\tRT\n";
for (i=0; i<nop&&a[i].pname!='i'; i++){
if(a[i].pname=='\0')
break;
cout <<'P'<< a[i].pname << "\t";
cout << a[i].priority << "\t";
cout << a[i].atime << "\t";
cout << a[i].btime << "\t";
cout << a[i].ctime << "\t";
cout << a[i].wtime+a[i].ctime-rtime+a[i].btime << "\t";
averageTAT+=a[i].wtime+a[i].ctime-rtime+a[i].btime;
cout << a[i].wtime+a[i].ctime-rtime << "\t";
averageWaitingTime+=a[i].wtime+a[i].ctime-rtime;
cout << a[i].restime-a[i].atime << "\t";
averageResponseTime+=a[i].restime-a[i].atime;
cout <<"\n";
}
cout<<"Average Response time: "<<(float)averageResponseTime/(float)n<<endl;
cout<<"Average Waiting time: "<<(float)averageWaitingTime/(float)n<<endl;
cout<<"Average TA time: "<<(float)averageTAT/(float)n<<endl;
int main(){
int nop,choice,i,qt;
cout<<"Enter number of processes\n";
cin>>nop;
cout<<"Enter process, priority, AT, BT\n";
insert(nop);
disp(nop,1);
return 0;
}
Output:
Theory:
If a resource is allocated to a process under non-preemptive scheduling, that resource is not released until the process is completed. The other tasks in the ready queue have to wait their turn; they cannot take the CPU forcefully. Once a process has been assigned the CPU, it holds it until it completes its execution or until an I/O action is required.
When a non-preemptive process with a long CPU burst time runs, the other process must wait
for an extended period, increasing the average waiting time in the ready queue. Non-preemptive
scheduling, on the other hand, contains no overhead when switching processes from the ready
queue to the CPU. The execution process isn't even interrupted by a higher-priority task,
implying that the scheduling is rigorous.
Example:
Let's look at the preemptive scheduling problem solved above and see how it can be handled in a non-preemptive manner. The same four processes P0, P1, P2 and P3 are used, with the same arrival times and CPU burst times.

Process   Arrival Time   Burst Time
P0        0              5
P1        1              2
P2        3              4
P3        4              3

So, by using a non-preemptive scheduler, the Gantt chart would look like:

Gantt chart:

Process   Arrival Time   Burst Time   Completion Time   Turnaround Time   Waiting Time
P0        0              5            5                 5                 0
P1        1              2            7                 6                 4
P2        3              4            14                11                7
P3        4              3            10                6                 3
So, the average waiting time for the preemptive method is less than that of the non-preemptive method, from which we can conclude that the preemptive method is more efficient in terms of CPU time.
#include<iostream>
#include<algorithm>
using namespace std;
struct node{
char pname[50];
int btime;
int atime;
}a[50];
float averageWaitingTime=0;
float averageResponseTime=0;
float averageTAT=0;
cout<<"\n";
cout<<"P.Name AT\tBT\tCT\tTAT\tWT\tRT\n";
for (i=0; i<n; i++){
cout << a[i].pname << "\t";
cout << a[i].atime << "\t";
cout << a[i].btime << "\t";
cout << tArray[i+1] << "\t";
cout << tArray[i]-a[i].atime+a[i].btime << "\t";
averageTAT+=tArray[i]-a[i].atime+a[i].btime;
cout << tArray[i]-a[i].atime << "\t";
averageWaitingTime+=tArray[i]-a[i].atime;
cout << tArray[i]-a[i].atime << "\t";
averageResponseTime+=tArray[i]-a[i].atime;
cout <<"\n";
}
cout<<"\n";
cout<<"\nGantt Chart\n";
for (i=0; i<n; i++){
cout <<"| "<< a[i].pname << " ";
}
cout<<"\n";
for (i=0; i<n+1; i++){
cout << tArray[i] << "\t";
}
cout<<"\n";
cout<<"Average Response time: "<<(float)averageResponseTime/(float)n<<endl;
cout<<"Average Waiting time: "<<(float)averageWaitingTime/(float)n<<endl;
cout<<"Average TA time: "<<(float)averageTAT/(float)n<<endl;
}
int main(){
int nop, choice, i;
cout<<"Enter number of processes\n";
cin>>nop;
insert(nop);
disp(nop);
return 0;
}
Output:
Theory:
Scheduling in which the scheduling points are determined by interrupts received from a clock is known as clock-driven scheduling.
When the workload is mostly periodic and the schedule is cyclic, timing constraints can be checked and enforced at each frame boundary.
That is, the scheduler pre-determines which task will run when; therefore, these schedulers incur very little run-time overhead.
However, a prominent shortcoming of this class of schedulers is that they cannot satisfactorily handle aperiodic and sporadic tasks, since the exact time of occurrence of these tasks cannot be predicted.
Advantages:
A time-triggered system based on clock-driven scheduling is easy to validate, test and certify. Clock-driven scheduling paradigms are time-triggered; in these systems, interrupts due to external events are queued and polled periodically.
Disadvantages:
The pure clock-driven approach is not suitable for many systems that contain both hard and soft real-time applications. Constructing the schedule also requires considering all combinations of periodic tasks that might execute at the same time.
Implementation:
#include <stdio.h>

int main()
{
    int NOP, i, y, quant, count = 0;
    int at[10], bt[10], temp[10];
    int sum = 0, wt = 0, tat = 0;
    float avg_wt, avg_tat;

    printf("Enter the total number of processes : ");
    scanf("%d", &NOP);
    y = NOP;
    for (i = 0; i < NOP; i++) {
        printf("\n Enter the Arrival and Burst time of the Process[%d]\n", i + 1);
        scanf("%d", &at[i]);
        scanf("%d", &bt[i]);
        temp[i] = bt[i];          /* remaining burst time */
    }
    printf("\n Enter the Time Quantum for the process : ");
    scanf("%d", &quant);

    printf("\n Process No \t\t Burst Time \t\t TAT \t\t Waiting Time ");
    for (sum = 0, i = 0; y != 0; ) {
        if (temp[i] <= quant && temp[i] > 0) {   /* finishes within this slice */
            sum += temp[i];
            temp[i] = 0;
            count = 1;
        } else if (temp[i] > 0) {                /* consume one full quantum */
            temp[i] -= quant;
            sum += quant;
        }
        if (temp[i] == 0 && count == 1) {        /* process just completed */
            y--;
            printf("\nProcess No[%d] \t\t %d\t\t\t\t %d\t\t\t %d",
                   i + 1, bt[i], sum - at[i], sum - at[i] - bt[i]);
            wt += sum - at[i] - bt[i];
            tat += sum - at[i];
            count = 0;
        }
        if (i == NOP - 1)
            i = 0;
        else if (at[i + 1] <= sum)
            i++;
        else
            i = 0;
    }
    avg_wt = wt * 1.0 / NOP;
    avg_tat = tat * 1.0 / NOP;
    printf("\n Average Turn Around Time : \t%f", avg_tat);
    printf("\n Average Waiting Time : \t%f\n", avg_wt);
    return 0;
}
Output:
Theory:
Earliest deadline first (EDF) is a dynamic-priority scheduling algorithm for real-time embedded systems. EDF selects tasks according to their deadlines, such that the task with the earliest deadline has the highest priority; that is, the priority of a task is inversely proportional to its absolute deadline. Since the absolute deadline of a task depends on the current instant of time, every instant is a scheduling event in EDF, as task deadlines change with time. A task that has a higher priority due to an earlier deadline at one instant may have a lower priority at the next instant, due to the earlier deadline of another task. EDF typically executes in preemptive mode, i.e., the currently executing task is preempted whenever another task with an earlier deadline becomes active.
EDF is an optimal algorithm, which means that if a task set is feasible then it is surely schedulable by EDF. Moreover, EDF makes no specific assumption about the periodicity of tasks, so it is independent of task periods and can therefore be used to schedule aperiodic tasks as well. If two tasks have the same absolute deadline, one of them is chosen randomly.
Task   Release time (ri)   Execution Time (Ci)   Deadline (Di)   Period (Ti)
T1     0                   1                     4               4
T2     0                   2                     6               6
T3     0                   3                     8               8
1. At t=0 all the tasks are released, and priorities are decided according to their
absolute deadlines, so T1 has the highest priority, as its deadline (4) is earlier
than T2's (6) and T3's (8); that is why it executes first.
2. At t=1 the absolute deadlines are compared again; T2 has the shorter deadline, so
it executes, and after that T3 starts execution. But at t=4, T1 arrives in the
system and the deadlines are compared; at this instant both T1 and T3 have the
same deadline, so the tie is broken randomly and we continue to execute T3.
3. At t=6, T2 is released; now the deadline of T1 is earlier than T2's, so T1 starts
execution, and after that T2 begins to execute. At t=8, T1 and T2 again have the
same deadline, i.e. t=16, so the tie is broken randomly; T2 continues its
execution and then T1 completes. At t=12, T1 and T2 arrive in the system
simultaneously; comparing absolute deadlines, T1 and T2 have the same
deadline, therefore the tie is broken randomly and we continue to execute T3.
4. At t=13, T1 begins its execution and ends at t=14. Now T2 is the only task in the
system, so it completes its execution.
5. At t=16, T1 and T3 are released together; priorities are decided according to the
absolute deadlines, so T1 executes first, as its deadline is t=20 while T3's
deadline is t=24. After T1's completion T3 starts, and at t=17 T2 arrives in the
system; by deadline comparison both have the same deadline, t=24, so the tie is
broken randomly and we continue to execute T3.
6. At t=20, both T1 and T2 are in the system and both have the same deadline,
t=24, so again the tie is broken randomly and T2 executes. After that, T1
completes its execution. In the same way, the system continues to run without
any problem by following the EDF algorithm.
Transient Overload Condition & Domino Effect in Earliest Deadline First
A transient overload is a short-time overload on the processor. A transient overload condition occurs when the computation-time demand of a task set at an instant exceeds the processor capacity available at that instant. Due to transient overload, tasks miss their deadlines. Transient overload may occur for many reasons, such as changes in the environment, the simultaneous arrival of asynchronous jobs, or system exceptions. In real-time operating systems under EDF, when a task in a transient overload condition misses its deadline, each of the other tasks may start missing their deadlines one after the other in sequence; such an effect is called the domino effect. It jeopardizes the behaviour of the whole system. An example of such a condition is given below.
Task   Release time (ri)   Execution Time (Ci)   Deadline (Di)   Period (Ti)
T1     0                   2                     5               5
T2     0                   2                     6               6
T3     0                   2                     7               7
T4     0                   2                     8               8
As shown in the figure above, at t=15 T1 misses its deadline; after that, at t=16, T4 misses its deadline, then T2, and finally T3, so the whole system collapses. This clearly shows that EDF has a shortcoming due to the domino effect, and as a result critical tasks may miss their deadlines. One solution to this problem is another scheduling algorithm, least laxity first (LLF), which is also an optimal scheduling algorithm. The demand bound function and demand bound analysis are also used for schedulability analysis of a given set of tasks.
Implementation in C:
#include <stdio.h>
#include <string.h>

int gcd(int a, int b)
{
    if (b == 0)
        return a;
    else
        return gcd(b, a % b);
}

int lcm(int a, int b)
{
    return (a * b) / gcd(a, b);
}

/* hyperperiod = LCM of all the task periods */
int hyperperiod(float period[], int n)
{
    int k = (int)period[0];
    n--;
    while (n >= 1)
        k = lcm(k, (int)period[n--]);
    return k;
}

/* return the ready task with the earliest absolute deadline, or -1 if none */
int edf(float period[], int n, int t, float deadline[])
{
    int smallindex = 0;
    float small = 10000.0f;
    for (int i = 0; i < n; i++) {
        if (period[i] < small && (period[i] - t) <= deadline[i]) {
            small = period[i];
            smallindex = i;
        }
    }
    if (small == 10000.0f)
        return -1;
    return smallindex;
}

int main()
{
    int i, n, c, d, k, j, nexttime = 0, time = 0, task, preemption_count;
    float exec[20], period[20], individual_util[20], flag[20], release[20],
          deadline[20], instance[20], responsemax[20], responsemin[20], tempmax;
    int ex[1000];
    float util = 0;
    FILE *read;

    /* Sampledata.txt: plain text holding n, then release time, period,
       execution time and deadline for each task */
    read = fopen("Sampledata.txt", "r");
    fscanf(read, "%d ", &n);
    for (i = 0; i < n; i++) {
        fscanf(read, "%f ", &release[i]);
        fscanf(read, "%f ", &period[i]);
        fscanf(read, "%f ", &exec[i]);
        fscanf(read, "%f ", &deadline[i]);
    }
    fclose(read);

    for (i = 0; i < n; i++) {
        individual_util[i] = exec[i] / period[i];
        util += individual_util[i];
        responsemax[i] = exec[i];
        deadline[i] = period[i];   /* implicit deadlines: Di = Ti */
        instance[i] = 0.0f;
    }
    util = util * 100;
    if (util > 100)
        printf("\n Utilisation factor = %0.2f \n\nScheduling is not possible as Utilisation factor is above 100 \n", util);
    else {
        k = hyperperiod(period, n);
        c = 0;
        while (time < k) {
            nexttime = time + 1;
            task = edf(period, n, time, deadline);
            if (task == -1) {            /* no task ready: processor idles */
                printf("- ");
                time++;
                continue;
            }
            instance[task]++;
            printf("T%d ", task);
            ex[c++] = task;
            if (instance[task] == exec[task]) {   /* current job finished */
                tempmax = nexttime - (period[task] - deadline[task]);
                if (instance[task] < tempmax)
                    responsemax[task] = tempmax;
                else
                    responsemin[task] = instance[task];
                if (deadline[task] == k)
                    responsemin[task] = responsemax[task];
                period[task] += deadline[task];   /* next absolute deadline */
                instance[task] = 0.0f;
            }
            time++;
        }
        /* count preemptions from the execution trace */
        for (i = 0; i < n; i++)
            flag[i] = 1;
        preemption_count = 0;
        for (i = 0; i < c; i = j) {
            d = ex[i];
            for (j = i + 1; j < c && d == ex[j]; j++)
                flag[d]++;
            if (flag[d] == exec[d])
                flag[d] = 1;              /* job ran to completion unbroken */
            else {
                flag[d]++;
                preemption_count++;       /* job was split: one preemption */
            }
        }
        printf("\nNumber of preemptions = %d\n", preemption_count);
    }
    return 0;
}
Output: