UNIT-II
Process
A process is a program that is currently running. For example, when we write a program in C or
C++ and compile it, the compiler creates a binary file. Both the original code and the binary file
are just programs. But when we run the binary file, it becomes a process.
A process is "active" because it is running, while a program is "passive" because it just exists as a
file. One program can create multiple processes if it is run multiple times. For example, if we open
a .exe or binary file several times, each instance creates a new process.
Process vs Program
A process runs instructions in machine code, while a program contains instructions written in
a programming language.
A process is dynamic (changes as it runs), while a program is static (remains the same as a
file).
A process stays in the main memory (RAM) while running, but a program is stored in
secondary memory (like a hard drive).
A process exists only while it is running, but a program remains stored in secondary memory until it is deleted.
A process is active (currently executing), while a program is passive (just stored as a file).
A process in memory is divided into the following sections:
Text Section: Contains the compiled program code, which is loaded from storage when the program starts.
Data Section: Stores global and static variables that are allocated and initialized before the
main function runs.
Heap: Used for dynamic memory allocation, managed using functions like new, delete,
malloc, and free.
Stack: Holds local variables. Space is allocated on the stack when a local variable is declared.
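As a rough illustration (the variable names are arbitrary, not from the text), the following C program marks which section each object lives in:
#include <stdio.h>
#include <stdlib.h>
int global_count = 0; // Data section: global variable, initialized before main() runs
int main() { // The compiled code of main() lives in the text section
static int calls = 0; // Data section: static variable
int local = 5; // Stack: local variable
int *dyn = malloc(sizeof(int)); // Heap: dynamic allocation
*dyn = local + global_count + calls;
printf("%d\n", *dyn);
free(dyn); // Heap memory must be released explicitly
return 0;
}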
Process States
Process states in an operating system help manage resources efficiently by tracking each process's
current condition. The main process states are:
New State: When a process is first created or initialized, it is in the new state. It is being prepared
to move to the ready state.
Ready State: The process is ready to run but is waiting for the CPU to be assigned. It stays in this
state until the CPU becomes available.
Running State: When the CPU is assigned to a process, it enters the running state, where it executes instructions and uses system resources. Only one process can run on a given CPU core at a time, and the operating system decides which process runs next.
Waiting/Blocked State: If a process is waiting for something, like user input or data from a disk,
it enters the blocked state. It remains here until the required event happens.
Terminated State: When a process finishes or is stopped by the operating system, it moves to the
terminated state. At this point, it no longer uses system resources, and its memory is freed.
Process Control Block (PCB)
The operating system keeps track of each process using a data structure called the Process Control Block (PCB). Its key fields include:
Process State: A process goes through different phases during its execution. Its current phase is called the process state.
Program Counter: This holds the address of the next instruction to be executed. It starts with the
address of the first instruction and updates automatically after each step until the program
completes.
Registers: These vary based on the computer architecture and include accumulators, index
registers, stack pointers, and general-purpose registers. When an interrupt occurs, the system saves
these registers, along with the program counter, to ensure the process resumes correctly.
List of Open Files: Stores details of files used by the process. This helps the operating system
close all open files when the process terminates.
Context Switching
Context switching is the process of switching the CPU from one task to another. During this switch,
the operating system saves the state of the current process so it can resume later and loads the state
of the next process. This allows a single CPU to manage multiple processes efficiently without
needing extra processors.
Why Context Switching is Needed:
Priority-Based Switching: If a higher-priority process enters the ready queue, the currently
running process is paused so the high-priority task can execute first.
I/O Handling: If a process needs I/O resources, it is switched out so another process can use the
CPU. Once the I/O operation is complete, the original process moves back to the ready state and
resumes execution.
Handling Interrupts: If an interrupt occurs during execution, the system saves the process state
using context switching. Once the interrupt is handled, the process moves from the waiting state
back to the ready state and resumes execution from where it left off.
Disadvantages of Context Switching:
It takes extra time to save one process's state and load another, known as context switching time.
During switching, the CPU does no useful work, making it an overhead from the user's perspective.
Process Creation
Each process is assigned a unique Process Identifier (PID). A process creates a child process using
a system call.
In the Solaris operating system, the process tree represents processes, including background
processes (daemons), along with their unique Process Identifiers (PIDs).
In Linux and other Unix-like systems, a daemon is a background process that runs independently
of any user session. These processes typically start when the system boots and continue running
until shutdown. Daemons handle various system-level tasks, for example:
Daemon | Operation
init | The first process started by the Linux/Unix kernel; it holds process ID (PID) 1.
In Solaris, the sched process (PID 0) is at the top of the process tree. It creates several child
processes, including pageout and fsflush. It also creates the init process, which acts as the root
parent for all user processes.
When a user logs in, dtlogin starts an Xsession, which then creates sdt_shel. This tool, part of the
Common Desktop Environment (CDE), provides a graphical interface for accessing system
functions. Below sdt_shel, a C-shell (csh) is created, allowing users to execute commands.
For example, in the command-line shell, users can run commands like ls and cat, which are created
as child processes.
Another csh process (PID 7778) represents a user connected via telnet, along with the commands that user has started.
A process requires resources such as CPU time, memory, files, and I/O devices to complete its tasks.
When a process creates a subprocess (child process), the child may obtain its resources directly from the operating system or share a subset of its parent's resources.
Resource Allocation
The parent process may either divide its resources among its child processes or share certain
resources (like memory and files) among them.
Restricting a child to a subset of the parent’s resources prevents the system from overloading due
to excessive subprocess creation.
When a process is created, initialization data (input) can be passed from the parent to the child.
For example:
A process that displays an image file (e.g., img.jpg) on a screen may receive the file name as input
from its parent.
The child process then opens the file, reads its contents, and displays them on the screen.
Some operating systems pass open files to child processes, allowing them to directly transfer
data between input (file) and output (screen) without reopening resources.
When a process creates a new process, there are two possible execution scenarios:
1. Concurrent Execution: The parent continues executing alongside its child process.
2. Sequential Execution: The parent waits until some or all of its child processes have completed.
In terms of its address space, the new child process may be:
A duplicate of the parent process, including its program and data (e.g., fork() system call).
A new program loaded into it, replacing the parent's program (e.g., exec() system call).
Process Termination
A process terminates when it completes execution and uses the exit() system call to notify the OS for
deletion. At termination:
The process may return a status value to the parent process via wait().
The OS deallocates resources such as memory, open files, and I/O buffers.
A process can terminate another process via a system call (e.g., TerminateProcess() in Win32).
Typically, only a parent process can terminate its child process.
Reasons for termination:
o The child exceeds resource limits set by the parent.
o The child’s assigned task is no longer required.
o Cascading Termination: If a parent process exits, its children may also be terminated if the OS
enforces it.
fork()
Used to create a new process (child process) from an existing process (parent process).
The OS duplicates the parent’s memory, file descriptors, and other attributes for the child.
Return values:
o Parent process receives child’s PID.
o Child process receives 0.
o If unsuccessful, -1 is returned, and no child process is created.
Used for process spawning, enabling multiple child processes to run concurrently.
wait()
Suspends the parent process until the child process finishes execution.
If wait() is called:
o The parent halts until the child completes.
o Returns the PID of the terminated child process or -1 on failure.
o Takes a parameter to store child’s exit status, or NULL if not needed.
Together, fork(), exec(), wait(), and exit() facilitate process creation, execution, and
termination in Linux, enabling efficient multitasking and resource management.
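As a rough sketch of how the four calls cooperate (the choice of /bin/date here is arbitrary, not from the text):
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>
int main() {
pid_t pid = fork(); // Create a child process
if (pid == 0) { // Child
execl("/bin/date", "date", (char *)NULL); // Replace the child's program
perror("execl failed"); // Reached only if execl() fails
exit(1);
} else if (pid > 0) { // Parent
wait(NULL); // Block until the child terminates
printf("Child %d finished\n", pid);
}
return 0;
}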
Let’s depict all the system calls in the form of a process transition diagram.
A process begins its execution when it is created and transitions through multiple states before
termination.
1. Process Creation
A process is created using the fork() system call, which duplicates an existing process (parent)
to create a new process (child).
System calls act as an interface between the operating system and processes, allowing user
programs to request services.
2. Process Execution
Often, a child process needs to execute a different program than its parent.
The exec() system call replaces the process's address space with a new program, effectively
loading and running a different executable.
3. Process Termination
A process exits using the exit() system call, which releases all its allocated resources except
for its Process Control Block (PCB).
A parent process can check the status of a terminated child process using the wait() system
call.
When wait() is used, the parent process remains blocked until the child process it is waiting for
completes execution.
System calls fall into the following major categories:
Process Control
File Management
Device Management
Information Maintenance
Communication
wait – Makes a parent process stop its execution till the termination of the child process.
waitpid – Makes a parent process stop its execution till the termination of a specified child process (useful when there are multiple child processes).
The fork() system call is the primary method for process creation in Unix-like operating systems. It creates a new process, known as the child process, from an existing one, called the parent process. If the parent process terminates before the child, the child becomes an orphan process and is typically adopted by the init process.
Syntax:
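pid_t fork(void); // Declared in <unistd.h>; takes no arguments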
Return Values: In the parent, fork() returns the child's PID; in the child, it returns 0; on failure it returns -1 and no child process is created.
Example Program:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
int main() {
pid_t p;
p = fork();
if (p == 0)
printf("Child process. PID: %d\n", getpid());
else if (p > 0)
printf("Parent process. Child PID: %d\n", p);
else
printf("Error: fork() failed\n");
return 0;
}
Expected Output:
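One possible run (PIDs are illustrative, and the two lines may appear in either order):
Parent process. Child PID: 4231
Child process. PID: 4231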
The vfork() system call is another method for creating a new process. However, unlike fork(), the
child process shares the same address space as the parent process.
Key Characteristics:
The child process does not get a separate address space; instead, it runs in the same memory
space as the parent.
The parent process is paused until the child process completes execution.
Any modifications made by the child process to the code or data are reflected in the parent
process.
This behavior makes vfork() more efficient than fork() in scenarios where the child process
immediately calls exec() to replace itself with a new program. However, improper use can lead
to unintended behavior due to shared memory space.
fork() | vfork()
Child and parent processes have separate address spaces. | Child and parent processes share the same address space.
Parent and child processes execute simultaneously. | The parent process remains suspended till the child process completes its execution.
The vfork() system call creates a new process, known as the child process, from an existing one,
called the parent process. Unlike fork(), the child process shares the same address space as
the parent and suspends the parent process until it terminates.
Syntax:
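pid_t vfork(void); // Declared in <unistd.h>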
Example Program:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>
int main() {
pid_t p;
p = vfork();
if (p == 0) {
// Child process
printf("This is the child process. PID: %d\n", getpid());
printf("Child process is exiting with exit()\n");
exit(0);
} else if (p > 0) {
// Parent process resumes after child terminates
printf("This is the parent process. PID: %d\n", getpid());
} else {
printf("Error: vfork() failed\n");
}
return 0;
}
Expected Output:
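One possible run (PIDs are illustrative; the child's lines always appear first because the parent is suspended):
This is the child process. PID: 4232
Child process is exiting with exit()
This is the parent process. PID: 4231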
The wait() system call is used by a parent process to pause its execution until one of its child
processes terminates.
Syntax:
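pid_t wait(int *status); // Declared in <sys/wait.h>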
status: A pointer to an integer where the exit status of the terminated child is stored. It can be
replaced with NULL if the exit status is not needed.
Returns:
o Process ID: The PID of the terminated child process.
o -1: If an error occurs during execution.
Example Program:
#include <stdio.h>
#include <sys/wait.h>
#include <stdlib.h>
#include <unistd.h>
int main() {
pid_t p, childpid;
p = fork();
if (p == 0) {
// Child process
printf("Child: I am running!\n");
printf("Child: I have PID: %d\n", getpid());
exit(0);
} else {
// Parent process
printf("Parent: I am running and waiting for child to finish!\n");
childpid = wait(NULL); // Parent waits for child to terminate
printf("Parent: Child finished execution! It had the PID: %d\n", childpid);
}
return 0;
}
Expected Output:
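One possible run (PIDs are illustrative; the parent's final line always appears after the child exits):
Parent: I am running and waiting for child to finish!
Child: I am running!
Child: I have PID: 4233
Parent: Child finished execution! It had the PID: 4233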
The waitpid() system call allows a parent process to wait for a specific child process to terminate,
providing more control than wait(). It is particularly useful when dealing with multiple child
processes.
Syntax:
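pid_t waitpid(pid_t pid, int *status, int options); // Declared in <sys/wait.h>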
Example Program:
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
int main() {
pid_t pids[2], wpid;
pids[0] = fork();
if (pids[0] == 0) {
printf("First child: PID = %d\n", getpid());
exit(0);
}
pids[1] = fork();
if (pids[1] == 0) {
printf("Second child: PID = %d\n", getpid());
exit(0);
}
wpid = waitpid(pids[1], NULL, 0); // Wait specifically for the second child
printf("Parent: Second child (PID = %d) has terminated.\n", wpid);
return 0;
}
The exec() system call replaces the current process with a new program.
Syntax:
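int execl(const char *path, const char *arg, ...); // One member of the exec() family, declared in <unistd.h>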
Example Program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main() {
pid_t pid = fork();
if (pid == 0) {
printf("Child executing '/bin/ls'\n");
execl("/bin/ls", "ls", NULL);
perror("execl failed"); // Reached only if execl() fails
}
return 0;
}
The exit() system call terminates the calling process and returns an exit status to its parent.
Syntax:
void exit(int status);
Orphan Process: A child process that continues running even after its parent terminates.
Zombie Process: A process that has finished execution but still exists in the process table.
Process Scheduling
To increase CPU utilization, multiple processes are loaded into memory. The act of selecting
which process should run is called Process Scheduling.
Processes are placed in scheduling queues as they move through the system:
Job Queue – Holds all processes in the system.
Ready Queue – Holds processes that are in main memory, ready and waiting to execute.
Device Queue – Holds processes waiting for a particular I/O device.
A process can transition between different states based on various system conditions:
1. Normal Termination: The process completes execution and exits the system.
2. I/O Request: The process requests an I/O operation and moves to the device queue. Once the I/O
operation is completed, it returns to the ready queue.
3. Time Quantum Expiry: In time-sharing systems, each process is allocated a fixed CPU time.
When this time expires, the process is moved back to the ready queue.
4. Child Process Creation: When a process creates a child process and waits for its termination, it moves to the waiting state; once the child terminates, it returns to the ready queue.
5. Higher Priority Process: If a higher-priority process arrives, it preempts the currently running
process, forcing it back into the ready state.
Process Schedulers
Schedulers are system components responsible for managing process execution. They determine which
processes should run, move between states, or be removed from execution.
A scheduling decision may be required in the following situations:
When a running process requests an I/O operation and moves to the waiting state.
When a process terminates.
When a process's allocated CPU time is exhausted, requiring another process to take over.
When an I/O operation completes, making a previously waiting process ready to execute.
Long-Term Scheduler (Job Scheduler)
Controls the degree of multiprogramming by selecting which processes enter the system for
execution.
Chooses a balanced mix of CPU-bound and I/O-bound processes from secondary memory (new
state).
o CPU-bound processes require extensive CPU processing time.
o I/O-bound processes perform many input/output operations and rely less on CPU time.
Loads selected processes into main memory, moving them to the ready queue for execution.
Short-Term Scheduler (CPU Scheduler)
Improves system performance by selecting which process to execute next from the ready queue.
Works closely with the dispatcher, which assigns the selected process to the CPU.
Plays a crucial role in determining the next process to run based on scheduling policies.
Medium-Term Scheduler
Manages process swapping to optimize memory usage and ensure efficient execution.
If memory is needed, it swaps out processes from main memory to secondary storage to free up
space.
When memory becomes available again, it swaps in suspended processes, allowing them to
resume execution from where they left off.
Helps in reducing the degree of multiprogramming by temporarily removing inactive processes.
The major differences between the long-term, medium-term and short-term schedulers are as follows:
Long-Term Scheduler – Selects processes from the job pool and loads them into memory; runs infrequently; controls the degree of multiprogramming.
Short-Term Scheduler – Selects the next process from the ready queue; runs very frequently; controls CPU allocation.
Medium-Term Scheduler – Swaps processes between main memory and secondary storage; runs as needed; reduces the degree of multiprogramming by suspending processes.
CPU Scheduling
CPU scheduling is the process of managing the execution of multiple processes by allowing one process
to use the CPU while others wait for necessary resources (such as I/O operations). This ensures optimal
CPU utilization. Whenever the CPU becomes idle, the short-term scheduler (CPU scheduler) selects a
process from the ready queue for execution.
1. Preemptive Scheduling
In preemptive scheduling, the CPU can be taken away from a running process, for example when a higher-priority process arrives or the allotted time quantum expires.
Advantages:
More responsive, since no single process can monopolize the CPU; urgent processes are served quickly.
Disadvantages:
Frequent context switches add overhead, and low-priority processes may starve.
2. Non-Preemptive Scheduling
Once a process starts execution, it cannot be interrupted until it completes its CPU cycle.
New processes must wait in the queue until the current process finishes.
If an executing process requires I/O, it moves to the waiting state, and upon completion, it returns
to the top of the queue.
Advantages:
Simple to implement, with minimal context-switching overhead.
Disadvantages:
A long-running process can delay all others, so response times are poor and urgent processes must wait.
Scheduling Criteria
Different CPU scheduling algorithms have unique characteristics. The selection of an appropriate algorithm depends on various performance factors:
1. CPU Utilization:
o Ensures that the CPU remains as busy as possible.
o Typically ranges from 40% (lightly loaded system) to 90% (heavily loaded system).
2. Throughput:
o Measures the number of processes completed per unit of time.
3. Turnaround Time:
o The total time taken by a process from arrival to completion.
o Formula: Turnaround Time = Completion Time – Arrival Time.
4. Waiting Time:
o Time spent by a process waiting in the ready queue.
o Formula: Waiting Time = Turnaround Time – Burst Time.
5. Response Time:
o The time from process submission to the first CPU allocation.
o Formula: Response Time = First CPU Allocation Time – Arrival Time.
Maximize: CPU utilization and throughput.
Minimize: turnaround time, waiting time, and response time.
First-Come, First-Served (FCFS) Scheduling
FCFS is the simplest CPU scheduling algorithm, where the process that arrives first gets executed first. It follows the FIFO (First-In, First-Out) queue method.
A real-world example of FCFS is a cash counter queue, where customers are served in the order they
arrive. Similarly, in CPU scheduling, processes are executed in the sequence they request CPU access.
Advantages of FCFS:
First-come, first-served – Ensures fairness as processes are executed in the order they arrive.
Simple to implement – Uses a straightforward queue structure (FIFO).
Easy to program – Requires minimal complexity for scheduling.
Disadvantages of FCFS:
No priority handling – A critical process must wait its turn behind processes that arrived earlier.
Example: If a low-priority process (such as a routine backup) is running and a critical process
(e.g., system crash handler) arrives, the critical process must wait, potentially causing a system
failure.
High average waiting time – Processes with longer execution times delay all subsequent tasks,
leading to inefficiency.
Convoy Effect – A long-running process blocks shorter processes, reducing CPU and resource
utilization.
Example: Multiple small processes needing quick CPU access may be stuck waiting behind a
single long process, leading to poor system performance.
Gantt Chart for FCFS
A Gantt chart is a horizontal bar chart used to represent a CPU schedule graphically; it helps plan, coordinate and track metrics such as throughput, waiting time and turnaround time.
Example 1: Five processes arrive at time 0 with burst times P1 = 5, P2 = 24, P3 = 16, P4 = 10 and P5 = 3 ms, executed in order of arrival.
Gantt Chart: | P1 | P2 | P3 | P4 | P5 |
0 5 29 45 55 58
Turnaround Time = Completion Time - Arrival Time
Turnaround time for P1 = 5 - 0 = 5, for P2 = 29 - 0 = 29, for P3 = 45 - 0 = 45, for P4 = 55 - 0 = 55, for P5 = 58 - 0 = 58
Average turnaround time = (5 + 29 + 45 + 55 + 58)/5 = 192/5 = 38.4 ms
Waiting Time = Turnaround Time - Burst Time
Waiting time for P1 = 5 - 5 = 0, for P2 = 29 - 24 = 5, for P3 = 45 - 16 = 29, for P4 = 55 - 10 = 45, for P5 = 58 - 3 = 55
Average waiting time = (0 + 5 + 29 + 45 + 55)/5 = 125/5 = 25 ms
Example 2: Consider five processes with the following burst and arrival times. The formulas used are:
Turnaround Time (TAT) = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
Response Time = First Response Time - Arrival Time
Process | Burst Time | Arrival Time
P1 | 3 | 0
P2 | 6 | 2
P3 | 4 | 4
P4 | 5 | 6
P5 | 2 | 8
Gantt Chart: | P1 | P2 | P3 | P4 | P5 |
0 3 9 13 18 20
TAT=Completion Time-Arrival Time Turn Around Time for P1 => 3-0= 3 Turn Around Time for P2 =>
9-2 = 7 Turn Around Time for P3 => 13-4=9 Turn Around Time for P4 => 18-6= 12 Turn Around Time
for P5 => 20-8=12
Average Turn Around Time => (3+7+9+12+12)/5 =>43/5 = 8.50 ms.
Response Time of P1 = 0 Response Time of P2 => 3-2 = 1 Response Time of P3 => 9-4 = 5 Response
Time of P4 => 13-6 = 7
Response Time of P5 => 18-8 =10
Process | Burst Time | Arrival Time | Completion Time | Turnaround Time | Waiting Time | Response Time
P1 | 3 | 0 | 3 | 3 | 0 | 0
P2 | 6 | 2 | 9 | 7 | 1 | 1
P3 | 4 | 4 | 13 | 9 | 5 | 5
P4 | 5 | 6 | 18 | 12 | 7 | 7
P5 | 2 | 8 | 20 | 12 | 10 | 10
Shortest Job First (SJF) Scheduling
Unlike FCFS, where processes are scheduled based on arrival time, SJF prioritizes execution based on
burst time. The process with the shortest burst time among the available processes in the ready queue
is scheduled next.
If two processes have the same burst time, the tie is resolved using FCFS (First Come, First Serve)
Scheduling.
Example: Consider five processes with the following arrival and burst times.
Process | Arrival Time | Burst Time
P1 | 3 | 1
P2 | 1 | 4
P3 | 4 | 2
P4 | 0 | 6
P5 | 2 | 3
Gantt Chart: | P4 | P1 | P3 | P5 | P2 |
0 6 7 9 12 16
Process | Completion Time | Turnaround Time | Waiting Time
P1 | 7 | 7 - 3 = 4 | 4 - 1 = 3
P2 | 16 | 16 - 1 = 15 | 15 - 4 = 11
P3 | 9 | 9 - 4 = 5 | 5 - 2 = 3
P4 | 6 | 6 - 0 = 6 | 6 - 6 = 0
P5 | 12 | 12 - 2 = 10 | 10 - 3 = 7
Now,
Average Turnaround Time = (4 + 15 + 5 + 6 + 10)/5 = 40/5 = 8 ms
Average Waiting Time = (3 + 11 + 3 + 0 + 7)/5 = 24/5 = 4.8 ms
Advantages of SJF:
Faster execution for shorter processes: Short processes are prioritized, leading to quicker
turnaround times.
Increased throughput: Since shorter processes are completed first, more processes can be
executed in less time.
Disadvantages of SJF:
Requires prior knowledge of burst time: The CPU must know how long each process will take
before execution, which is often impractical.
Starvation of longer processes: If shorter processes keep arriving, longer processes may
experience indefinite delays.
Shortest Remaining Time First (SRTF) Scheduling
SRTF is the preemptive version of Shortest Job First (SJF) scheduling. In this approach, the CPU
always selects the process that has the smallest remaining burst time for execution. If a new process
arrives with a shorter remaining time than the currently running process, the CPU preempts the running
process and schedules the new one.
Consider a set of six processes with their arrival time and burst time as follows:
Process | Arrival Time | Burst Time
P1 | 0 | 7
P2 | 1 | 5
P3 | 2 | 3
P4 | 3 | 1
P5 | 4 | 2
P6 | 5 | 1
Gantt Chart: | P1 | P2 | P3 | P4 | P3 | P6 | P5 | P2 | P1 |
0 1 2 3 4 6 7 9 13 19
Now, we know:
Process | Completion Time | Turnaround Time | Waiting Time
P1 | 19 | 19 - 0 = 19 | 19 - 7 = 12
P2 | 13 | 13 - 1 = 12 | 12 - 5 = 7
P3 | 6 | 6 - 2 = 4 | 4 - 3 = 1
P4 | 4 | 4 - 3 = 1 | 1 - 1 = 0
P5 | 9 | 9 - 4 = 5 | 5 - 2 = 3
P6 | 7 | 7 - 5 = 2 | 2 - 1 = 1
Average Turnaround Time = (19 + 12 + 4 + 1 + 5 + 2)/6 = 43/6 ≈ 7.17 ms
Average Waiting Time = (12 + 7 + 1 + 0 + 3 + 1)/6 = 24/6 = 4 ms
Advantages of SRTF:
Faster execution compared to SJF: Since it is pre-emptive, processes are executed more
efficiently based on their remaining burst time.
Disadvantages of SRTF:
Frequent context switching: More interruptions lead to increased overhead, affecting system
performance.
Starvation risk: Like SJF, longer processes may suffer indefinite delays if shorter processes keep
arriving.
Not suitable for interactive systems: The exact CPU time required for each process is often
unknown, making it difficult to implement in real-world applications.
Priority Scheduling in OS
Priority scheduling is a CPU scheduling algorithm that assigns a priority level to each process. Processes
with higher priority are executed before those with lower priority.
Priority Assignment
Priorities can be either static (fixed during execution) or dynamic (changing based on system conditions).
Example (non-preemptive priority scheduling): Consider seven processes; in this example a lower priority number indicates a higher priority.
Process | Priority | Arrival Time | Burst Time
1 | 2 | 0 | 3
2 | 6 | 2 | 5
3 | 3 | 1 | 4
4 | 5 | 4 | 2
5 | 7 | 6 | 9
6 | 4 | 5 | 4
7 | 10 | 7 | 10
Gantt Chart: | P1 | P3 | P6 | P4 | P2 | P5 | P7 |
0 3 7 11 13 18 27 37
Process | Priority | Arrival Time | Burst Time | Completion Time | Turnaround Time | Waiting Time | Response Time
1 | 2 | 0 | 3 | 3 | 3 | 0 | 0
2 | 6 | 2 | 5 | 18 | 16 | 11 | 13
3 | 3 | 1 | 4 | 7 | 6 | 2 | 3
4 | 5 | 4 | 2 | 13 | 9 | 7 | 11
5 | 7 | 6 | 9 | 27 | 21 | 12 | 18
6 | 4 | 5 | 4 | 11 | 6 | 2 | 7
7 | 10 | 7 | 10 | 37 | 30 | 18 | 27
Example (preemptive priority scheduling): Consider seven processes; here too a lower priority number indicates a higher priority (2 is the highest priority in this set, 10 the lowest).
Process | Priority | Arrival Time | Burst Time
1 | 2 | 0 | 1
2 | 6 | 1 | 7
3 | 3 | 2 | 3
4 | 5 | 3 | 6
5 | 4 | 4 | 5
6 | 10 | 5 | 15
7 | 9 | 6 | 8
At time 0, P1 arrives with a burst time of 1 unit and priority 2. Since no other process is available, it is scheduled until the next job arrives or it completes, whichever comes first.
At time 1, P2 arrives. P1 has completed its execution, and no other process is available at this time, so the operating system schedules P2 regardless of its priority.
The next process, P3, arrives at time 2. The priority of P3 is higher than that of P2, so the execution of P2 is stopped and P3 is scheduled on the CPU.
During the execution of P3, three more processes (P4, P5 and P6) become available. Since all three have a lower priority than the process in execution, none of them can preempt it. P3 completes its execution, and P5 is then scheduled, having the highest priority among the available processes.
During the execution of P5, all the remaining processes arrive in the ready queue. From this point onward, the algorithm behaves like non-preemptive priority scheduling: the OS simply takes the process with the highest priority and executes it to completion. Accordingly, P4 is scheduled next and runs to completion.
Once P4 has completed, the process with the highest priority in the ready queue is P2, so P2 is scheduled next.
P2 is given the CPU until completion; its remaining burst time is 6 units. P7 is scheduled after it.
The only remaining process is P6, with the least priority; the operating system has no choice but to execute it, so it runs last.
Gantt Chart: | P1 | P2 | P3 | P5 | P4 | P2 | P7 | P6 |
0 1 2 5 10 16 22 30 45
The completion time of each process is determined with the help of the Gantt chart. The turnaround time and the waiting time are calculated with the following formulas:
1. Turnaround Time = Completion Time - Arrival Time
2. Waiting Time = Turnaround Time - Burst Time
Process | Priority | Arrival Time | Burst Time | Completion Time | Turnaround Time | Waiting Time
1 | 2 | 0 | 1 | 1 | 1 | 0
2 | 6 | 1 | 7 | 22 | 21 | 14
3 | 3 | 2 | 3 | 5 | 3 | 0
4 | 5 | 3 | 6 | 16 | 13 | 7
5 | 4 | 4 | 5 | 10 | 6 | 1
6 | 10 | 5 | 15 | 45 | 40 | 25
7 | 9 | 6 | 8 | 30 | 24 | 16
Aging Technique
To prevent starvation, an aging technique is used. It gradually increases the priority of waiting processes
over time, ensuring that no process is left waiting indefinitely.
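A minimal sketch of the idea in C (the structure, field names and thresholds are illustrative assumptions, not from the text; here a larger number is taken to mean a higher priority):
#define AGING_STEP 1
#define MAX_PRIORITY 10
struct pcb { int priority; int waiting_ticks; };
/* Called periodically by the scheduler for every process in the ready queue. */
void age_ready_queue(struct pcb *ready[], int n) {
for (int i = 0; i < n; i++) {
ready[i]->waiting_ticks++;
if (ready[i]->waiting_ticks % 10 == 0 && ready[i]->priority < MAX_PRIORITY)
ready[i]->priority += AGING_STEP; // Boost priority after every 10 ticks of waiting
}
}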
Round Robin (RR) Scheduling
Processes are assigned CPU time in a cyclic order based on First-Come, First-Served (FCFS).
A fixed time quantum (time slice) is allocated to each process.
When the time quantum expires, the running process is preempted and moved back to the ready
queue.
The CPU is then assigned to the next process in line.
Always preemptive, ensuring fair allocation of CPU time among all processes.
Let's see how the round-robin scheduling algorithm works with an example, using a time quantum of 2 units (each process runs for at most 2 units before being preempted); we will also do a dry run to understand it better.
In this example, we have four processes P1, P2, P3, and P4 with arrival times of 0, 1, 2, and 4 respectively, and burst times of 5, 4, 2, and 1 respectively. We need to create two queues: the ready queue and the running queue, which is also known as the Gantt chart.
Step 1: first, we will push all the processes in the ready queue with an arrival time of 0. In this
example, we have only P1 with an arrival time of 0.
This is how queues will look after the completion of the first step.
Step 2: Now, we check the ready queue; if any process is available, we remove the first process from the queue and push it into the running queue. Let's see how the queues will look after this step.
In the above image, we can see that we have pushed process P1 from the ready queue to the running queue.
We have also decreased the burst time of process P1 by 2 units as we already executed 2 units of P1.
Step 3: Now we push all the processes that have arrived by time 2 and whose remaining burst time is not 0.
In the above image, we can see that two processes (P2 and P3) have arrived by time 2, so we push both into the ready queue. Process P1 also has remaining burst time, so we push P1 into the ready queue again.
Step 4: Now we will see if there are any processes in the ready queue waiting for execution. If there is any
process then we will add it to the running queue.
In the above image, we can see that we have pushed process P2 from the ready queue to the running queue.
We also decreased the burst time of the process P2 as it already executed 2 units.
Step 5: Now we push all the processes that have arrived by time 4 and whose remaining burst time is not 0.
In the above image, we can see that one process (P4) has arrived by time 4, so we push it into the ready queue. Process P2 also has remaining burst time, so we push P2 into the ready queue again.
Step 6: Now we will see if there are any processes in the ready queue waiting for execution. If there is any
process then we will add it to the running queue.
In the above image, we can see that we have pushed process P3 from the ready queue to the running queue.
We also decreased the burst time of the process P3 as it already executed 2 units. Now, process P3’s burst
time becomes 0 so we will not consider it further.
Step 7: Now we will see if there are any processes in the ready queue waiting for execution. If there is any
process then we will add it to the running queue.
In the above image, we can see that we have pushed process P1 from the ready queue to the running queue.
We also decreased the burst time of the process P1 as it already executed 2 units.
Step 8: Now we push all the processes that have arrived by time 8 and whose remaining burst time is not 0. Process P1 still has remaining burst time, so we push P1 into the ready queue again.
Step 9: Next, we take the first process from the ready queue and add it to the running queue. We push process P4 from the ready queue to the running queue and decrease its burst time by 1 unit, as it executed for 1 unit. Process P4's burst time becomes 0, so we do not consider it further.
Step 10: Now we will see if there are any processes in the ready queue waiting for execution. If there is
any process then we will add it to the running queue.
In the above image, we can see that we have pushed process P2 from the ready queue to the running queue.
We also decreased the burst time of the process P2 as it already executed 2 units. Now, process P2’s burst
time becomes 0 so we will not consider it further.
Step 11: Now we will see if there are any processes in the ready queue waiting for execution. If there is
any process then we will add it to the running queue.
In the above image, we can see that we have pushed process P1 from the ready queue to the running queue.
We also decreased the burst time of the process P1 as it already executed 1 unit. Now, process P1’s burst
time becomes 0 so we will not consider it further. Now our ready queue is empty so we will not perform
any task now.
After performing all the operations, our running queue, also known as the Gantt chart, looks like this:
| P1 | P2 | P3 | P1 | P4 | P2 | P1 |
0 2 4 6 8 9 11 12
Let's calculate the other terms: Completion Time, Turnaround Time (TAT), Waiting Time (WT), and Response Time (RT), using the following equations:
Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
Response Time = First CPU Allocation Time - Arrival Time
Disadvantages of Round Robin Scheduling:
Processes with longer burst times may experience delays due to repeated cycles.
The performance is highly dependent on the time quantum.
Small time quantum increases context switching overhead.
Large time quantum makes it behave like FCFS scheduling.
Finding the optimal time quantum is challenging.
Priority-based execution is not possible.
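Despite these trade-offs, the mechanics are easy to simulate. Below is a small sketch in C (assuming the example's data and a time quantum of 2; the queue never goes empty for this data set) that reproduces the completion, turnaround, and waiting times computed above:
#include <stdio.h>

#define N 4
#define QUANTUM 2

int main(void) {
    int arrival[N] = {0, 1, 2, 4};     // Arrival times from the example
    int burst[N]   = {5, 4, 2, 1};     // Burst times from the example
    int remaining[N], completion[N];
    int queue[64], head = 0, tail = 0; // Simple FIFO ready queue
    int queued[N] = {0};
    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    int time = 0, done = 0;
    queue[tail++] = 0; queued[0] = 1;  // P1 is in the ready queue at t = 0
    while (done < N) {
        int p = queue[head++];         // Take the next process from the ready queue
        int slice = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
        time += slice;                 // Run p for one time slice
        remaining[p] -= slice;
        for (int i = 0; i < N; i++)    // Admit processes that arrived meanwhile
            if (!queued[i] && arrival[i] <= time) {
                queue[tail++] = i;
                queued[i] = 1;
            }
        if (remaining[p] > 0)
            queue[tail++] = p;         // Preempted: back to the ready queue
        else {
            completion[p] = time;      // Finished
            done++;
        }
    }
    for (int i = 0; i < N; i++)
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n", i + 1, completion[i],
               completion[i] - arrival[i],
               completion[i] - arrival[i] - burst[i]);
    return 0;
}
Running it prints CT = 12, 11, 6, 9 for P1 to P4, matching the dry run above.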
Introduction to Threads
A thread is the smallest unit of execution within a process. It is often called a lightweight process because
it runs independently while sharing the same memory space and resources as other threads within the same
process. A single process can have multiple threads, each performing a different task while following its
own execution path.
Threads enhance application performance by enabling parallelism. On a single-core CPU, however, only one thread executes at a time; the CPU switches rapidly between threads to create the illusion of parallel execution.
Components of a Thread in an OS
1. Stack Space – Stores local variables and function calls specific to the thread.
2. Register Set – Holds temporary data and keeps track of thread execution.
3. Program Counter – Keeps track of the next instruction to be executed.
Single-threaded Process – A process with only one thread executing its tasks sequentially.
Multi-threaded Process – A process containing multiple threads, where each thread has its own
registers, stack, and counter but shares the code and data segments with other threads.
Multithreading improves efficiency by allowing multiple tasks to run concurrently within the same process.
Process simply means any program in execution while the thread is a segment of a process.
The main differences between process and thread are mentioned below:
Process | Thread
A process is any program in execution. | A thread is a segment of a process.
A process takes more time to terminate. | A thread takes less time to terminate.
Example: opening two different browsers. | Example: opening two tabs in the same browser.
Advantages of Threads
1. Resource Sharing
In traditional processes, resource sharing requires explicit techniques such as message passing or
shared memory.
In contrast, threads share memory and resources by default within the same process.
This enables efficient resource management, allowing multiple threads of an application to operate
within the same address space.
2. Improved Responsiveness
If one thread blocks or performs a lengthy operation, other threads can continue running, keeping the application responsive.
3. Scalability
Threads can run in parallel on multiple processors or cores, allowing an application to benefit from multiprocessor architectures.
4. Cost Efficiency
Threads are more economical than processes since they share resources such as memory and
system files.
Creating and managing new processes requires significant memory and CPU overhead, whereas
threads are lightweight and easier to manage.
Types of Threads
1. User-Level Threads
These are managed without kernel involvement and do not require system calls for creation or
management.
They are lightweight, allowing efficient switching between threads without kernel intervention.
Example: Java Threads, POSIX Threads (Pthreads)
2. Kernel-Level Threads
These are managed directly by the operating system kernel, which creates and schedules them through system calls.
They provide better system integration and take advantage of multiprocessor architectures but have a higher overhead compared to user-level threads.
Advantages of User-Level Threads:
Easier to Implement: User-level threads are simpler to create and manage compared to kernel-level
threads.
Lower Context Switching Overhead: Switching between user-level threads is faster as it does not
require kernel intervention.
More Efficient Execution: Since they do not require kernel-mode privileges, user-level threads
execute with minimal overhead.
Lightweight Representation: They consist of only essential components such as the Program
Counter, Register Set, and Stack Space, making them efficient.
Disadvantages of User-Level Threads:
Lack of Coordination with the Kernel: Since the kernel is unaware of user-level threads, it cannot
schedule them efficiently.
Process Blocking Issue: If one thread encounters a page fault, the entire process, including all its
threads, may get blocked.
Kernel-Level Threads
A Kernel-Level Thread (KLT) is managed directly by the Operating System Kernel. The kernel
maintains a thread table to track and schedule these threads efficiently. However, context switching time
is higher for kernel threads due to additional overhead.
Advantages of Kernel-Level Threads:
Better Thread Management: The kernel maintains an updated record of all threads, allowing for
efficient scheduling and execution.
Handles Blocking Efficiently: Kernel-level threads can manage processes that frequently block
without affecting the entire process.
Dynamic Resource Allocation: If a process requires more execution time, the kernel can allocate
additional processing time to its threads.
Disadvantages of Kernel-Level Threads:
Slower Execution: Context switching for kernel-level threads involves system calls, making it
slower than user-level threads.
Complex Implementation: Managing kernel threads requires more resources and is more complex
compared to user-level threads.
User-Level Threads | Kernel-Level Threads
If one user-level thread performs a blocking operation, the entire process is blocked. | If one kernel-level thread performs a blocking operation, another thread can continue execution.
Multithreading Models
User threads are mapped to kernel threads using different strategies, known as threading models. The three
primary multithreading models are:
Many-to-Many Model
In this model, multiple user-level threads are mapped to the same or a lesser number of kernel-level threads.
The system dynamically schedules user threads onto available kernel threads, ensuring that blocked user
threads do not halt the entire process. This approach provides efficient resource utilization and prevents
system-wide blocking, making it the most effective multithreading model.
Many-to-One Model
In the Many-to-One multithreading model, multiple user-level threads are mapped to a single kernel-level
thread. This means that thread management occurs entirely at the user level, without kernel involvement
in scheduling.
Since only one kernel thread is available, multiple user threads cannot execute in parallel on
multiprocessor systems.
If a user thread makes a blocking system call, the entire process is blocked, affecting all other
threads.
Due to user-level thread management, this model is more efficient in terms of context switching
and overhead.
This model is simple to implement but lacks parallelism and is not suitable for systems requiring high
concurrency.
One-to-One Model
The One-to-One multithreading model establishes a direct one-to-one relationship between user threads
and kernel threads. This means that each user thread is mapped to a separate kernel thread, allowing for true
parallel execution on multiprocessor systems.
Multiple threads can run simultaneously across multiple processors, enhancing system performance.
Since each user thread has a dedicated kernel thread, a blocking system call in one thread does not
block other threads, ensuring better responsiveness.
However, creating a new user thread requires creating a corresponding kernel thread, which
increases resource consumption and limits scalability.
This model provides better concurrency and responsiveness, but the overhead of managing a large
number of kernel threads can impact system performance.
In a multithreading environment, several challenges and complexities arise. Some of the key threading issues
include:
When a thread makes a blocking system call, it may cause the entire process to be blocked,
depending on the threading model used.
1. The fork() and exec() System Calls
Some operating systems offer variations of fork(), where either all threads of a parent process are
duplicated in the child process, or only the invoking thread is copied.
The exec() system call replaces the current process, including all its threads, with a new program.
2. Thread Cancellation
Thread cancellation refers to terminating a thread before it has completed execution. There are two main
types:
Cancelling threads improperly can lead to resource leaks or inconsistent shared data.
3. Signal Handling
A signal notifies a process of an event. In a multithreaded process, a signal may be delivered to the thread to which it applies, to every thread, to certain threads, or to a designated thread, depending on the type of signal.
4. Thread-Specific Data
While threads share process memory, some data needs to be thread-specific (e.g., unique transaction IDs
in a banking system). Thread-local storage ensures that each thread maintains its own copy of such data.
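A minimal sketch using C11's _Thread_local keyword with POSIX threads (the identifier txn_id is illustrative, not from the text):
#include <pthread.h>
#include <stdio.h>
_Thread_local int txn_id = 0; // Each thread gets its own private copy
void *worker(void *arg) {
txn_id = *(int *)arg; // Set this thread's transaction ID
printf("This thread sees txn_id = %d\n", txn_id);
return NULL;
}
int main() {
pthread_t t1, t2;
int a = 101, b = 202; // Illustrative IDs
pthread_create(&t1, NULL, worker, &a);
pthread_create(&t2, NULL, worker, &b);
pthread_join(t1, NULL);
pthread_join(t2, NULL);
return 0;
}
Each thread prints its own value (101 and 202), showing that the shared-looking variable is in fact per-thread.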
5. Thread Pool
Instead of creating new threads for every request, a thread pool maintains a set of pre-created
threads.
When a request arrives, an idle thread is assigned the task, reducing the overhead of thread creation
and destruction.
This improves efficiency and prevents resource exhaustion.
6. Scheduler Activation
Many multithreading models (e.g., many-to-many and two-level) require coordination between the
kernel and user-level thread library.
Scheduler activation allows the kernel to notify the thread library about events like blocked
threads, ensuring better scheduling and resource allocation.
By addressing these challenges, operating systems can optimize multithreading performance while
maintaining system stability and efficiency.
Lightweight Processes (LWPs)
An LWP (Lightweight Process) acts as a virtual processor for the user-thread library, allowing
applications to schedule user threads for execution. Each LWP is linked to a kernel thread, and the operating
system schedules kernel threads to run on physical processors.
If a kernel thread blocks (e.g., while waiting for an I/O operation), the associated LWP also blocks,
causing the user thread linked to it to be suspended.
The number of LWPs required depends on the application’s nature:
o A CPU-bound application running on a single processor needs only one LWP since only
one thread can execute at a time.
o An I/O-intensive application may require multiple LWPs, especially if multiple blocking
system calls occur simultaneously.
For example, if five simultaneous file-read requests occur, five LWPs are needed. If only four LWPs exist,
the fifth request must wait for an LWP to become available.
Upcall Mechanism
The kernel informs an application about specific events using a mechanism called upcalls.
Upcall handlers, managed by the thread library, process these notifications.
A common event triggering an upcall is when a thread is about to block.
o The kernel makes an upcall informing the application that a thread will block.
o The kernel assigns a new virtual processor to run an upcall handler.
o The upcall handler saves the state of the blocking thread and schedules another thread to
run.
When the blocked thread becomes ready again, the kernel sends another upcall to notify the thread
library. The upcall handler then either allocates a new virtual processor or preempts another
thread to resume execution.
Process Synchronization
Process synchronization ensures that multiple processes execute in a controlled manner while accessing
shared resources, preventing race conditions and data inconsistencies.
Producer-Consumer Problem
In this classic synchronization problem, a producer process generates data items and places them into a shared buffer, while a consumer process removes and uses them.
Buffering Mechanisms
1. Unbounded Buffer:
o The buffer has no size limit.
o The producer can always generate data, but the consumer may have to wait for items.
2. Bounded Buffer:
o The buffer has a fixed size.
o The producer must wait if the buffer is full, and the consumer must wait if it is empty.
Shared Variables:
#define BUFFER_SIZE 10
typedef struct { ... } item;
item buffer[BUFFER_SIZE];
int in = 0, out = 0;
int counter = 0;
item next_produced, next_consumed;
Producer Process:
while (true) {
/* produce an item in next_produced */
while (counter == BUFFER_SIZE); /* Wait if buffer is full */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}
Consumer Process:
while (true) {
while (counter == 0); /* Wait if buffer is empty */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;
/* consume the item */
}
The buffer is a circular array, with in and out pointers indicating the next available slot and the
next item to be consumed, respectively.
The counter tracks the number of items in the buffer.
The buffer is full when counter == BUFFER_SIZE, and empty when counter == 0.
Race Condition
A race condition occurs when multiple processes or threads access shared data simultaneously, leading to
unpredictable outcomes.
Consider a shared counter variable (initial value = 5) used by both producer and consumer processes.
Producer Process:
register1 = counter;
register1 = register1 + 1;
counter = register1;
Consumer Process:
register2 = counter;
register2 = register2 - 1;
counter = register2;
If executed sequentially, the output is consistent. However, if executed concurrently, the following
interleaving could occur:
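One possible interleaving (the classic textbook trace, with counter initially 5):
T0: producer executes register1 = counter {register1 = 5}
T1: producer executes register1 = register1 + 1 {register1 = 6}
T2: consumer executes register2 = counter {register2 = 5}
T3: consumer executes register2 = register2 - 1 {register2 = 4}
T4: producer executes counter = register1 {counter = 6}
T5: consumer executes counter = register2 {counter = 4}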
This inconsistency arises because both processes read the same initial counter value (5) before updating
it. Proper synchronization mechanisms (e.g., mutex locks, semaphores) are needed to prevent such
conflicts.
By handling these issues efficiently, operating systems ensure reliable process execution and
synchronization in concurrent environments.
A race condition occurs when multiple processes access and modify shared resources concurrently without
proper synchronization, leading to unpredictable results.
In the Producer-Consumer Problem, incorrect values of counter may occur due to unsynchronized
access.
If the order of execution changes, we may end up with an incorrect count of items in the buffer.
Example:
o We reached an incorrect state where counter == 4, while the correct value should be 5.
o If the execution order was different, we could end up with counter == 6, another incorrect
state.
This happens because both the producer and consumer modify counter simultaneously, leading
to inconsistencies.
The correct final value (counter == 5) is only achieved when the producer and consumer execute
separately or are properly synchronized.
The Critical Section is the part of a program where shared resources are accessed. If multiple processes
execute their critical sections concurrently, race conditions may occur.
To avoid race conditions, we must ensure that only one process at a time can execute its critical section.
Process Synchronization
Process synchronization ensures that multiple processes execute in a controlled manner, preventing
conflicts when accessing shared resources.
Since cooperative processes share resources, they require proper synchronization mechanisms to avoid
conflicts.
A process that needs access to a shared resource follows a standard structure to avoid conflicts:
1. Entry Section – The process requests permission to enter the critical section.
2. Critical Section – The process accesses shared resources safely.
3. Exit Section – The process releases the critical section, allowing other processes to enter.
4. Remainder Section – The remaining code outside of the critical section.
while (true) {
// Entry Section
request_permission();
// Critical Section
access_shared_resource();
// Exit Section
release_permission();
// Remainder Section
execute_other_tasks();
}
To ensure proper synchronization in concurrent processes, any solution to the Critical Section Problem
must satisfy three key requirements:
1. Mutual Exclusion – Only one process can execute in the critical section at a time.
2. Progress – If no process is in the critical section and other processes wish to enter, one must be
allowed to proceed without indefinite delay.
3. Bounded Waiting – There must be a limit on how long a process waits before entering its critical
section, preventing starvation.
Peterson’s Solution
Peterson’s Algorithm is a software-based synchronization solution designed for two processes that take
turns accessing the critical section.
Implementation: Two shared variables are used.
int turn; // Indicates whose turn it is to enter the critical section
boolean flag[2]; // flag[i] = true means process Pi wants to enter
Algorithm (for process Pi, where Pj is the other process):
do {
flag[i] = true;
turn = j;
while (flag[j] && turn == j); // Wait if the other process wants to enter
// Critical Section
flag[i] = false; // Exit section: withdraw the intent to enter
// Remainder Section
} while (true);
How It Works:
Mutual Exclusion – Only one process can enter at a time since turn allows only one process to
proceed.
Progress – If no process is in the critical section, the turn variable ensures one of the waiting
processes proceeds.
Bounded Waiting – Each process waits for at most one execution of the other process before it gets
a turn.
Synchronization Hardware
1. Test-and-Set Mechanism
The test_and_set() instruction executes atomically, ensuring that multiple processes do not modify a shared variable simultaneously.
Function Definition:
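The standard textbook definition (the hardware guarantees it executes atomically):
boolean test_and_set(boolean *target) {
boolean rv = *target; // Save the old value
*target = true; // Set the lock
return rv; // Return the old value
}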
do {
waiting[i] = true;
key = true;
while (waiting[i] && key)
key = test_and_set(&lock); // Atomic operation: try to acquire the lock
waiting[i] = false;
// Critical Section
j = (i + 1) % n; // Scan for the next waiting process
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = false; // No waiting process, unlock
else
waiting[j] = false; // Pass turn to next process
// Remainder Section
} while (true);
How It Works:
A process enters the critical section when either its waiting[i] flag is cleared by an exiting process or test_and_set() returns false (the lock was free). On exit, the process scans the waiting array in cyclic order and hands the critical section to the next waiting process, which guarantees bounded waiting; if no process is waiting, it simply releases the lock.
2. Swap Mechanism
Another atomic hardware operation is swap(), which ensures synchronization by modifying a shared
variable only if its expected value matches the current value.
Function Definition:
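A definition consistent with the three-argument call used below (this matches the textbook compare-and-swap; the hardware executes it atomically):
int swap(int *value, int expected, int new_value) {
int temp = *value; // Save the old value
if (*value == expected)
*value = new_value; // Update only if the expected value matches
return temp; // Return the old value
}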
do {
while (swap(&lock, 0, 1) != 0); // Wait until the lock is free (old value 0)
// Critical Section
lock = 0; // Exit section: release the lock
// Remainder Section
} while (true);
How It Works:
The shared variable lock is 0 when the critical section is free. The atomic swap() sets lock to 1 only if it is currently 0 and returns the old value, so exactly one process observes 0 and enters the critical section; all others spin until the owner resets lock to 0 on exit.