📚 ReferMe: Your Academic Companion
A Student-Centric Platform by Pixen
🔹 About ReferMe
ReferMe, by Pixen, offers curated academic resources to help students study
smarter and succeed faster.
✅ Class Notes
✅ Previous Year Question Papers (PYQs)
✅ Updated Syllabus
✅ Quick Revision Material
🔹 About Pixen
Pixen is a tech company helping students and startups turn ideas into reality.
Alongside ReferMe, we also offer:
✅ Custom Websites
✅ Machine Learning
✅ Web Applications
✅ E‑Commerce Stores
✅ Landing Pages
Savitribai Phule Pune University
Information Technology - Third Year
OPERATING SYSTEM - UNIT 2
PROCESS MANAGEMENT
Process: Concept of a Process, Process States, Process Description,
Process Control
Threads: Processes and Threads, Concept of Multithreading, Types
of Threads, Thread Programming Using Pthreads.
Scheduling: Types of Scheduling, Scheduling Algorithms, First Come
First Served, Shortest Job First, Priority, Round Robin
Q.1 Process State Transition Diagram
Definition: A process state transition diagram shows the different states a
process can be in during its lifecycle and the transitions between these
states. It represents how a process moves from one state to another based
on various events and system calls.
[Figure: Process State Transition Diagram]
Process States Explained:
New State:
Process is being created
Memory allocation and initialization occur
Process not yet ready to execute
Ready State:
Process is ready to run
Waiting for CPU allocation
Has all required resources except CPU
Running State:
Process is currently executing
CPU is allocated to this process
Only one process can be running at a time (single CPU)
Waiting/Blocked State:
Process cannot continue execution
Waiting for I/O completion or event
CPU is not useful during this state
Terminated State:
Process has finished execution
Resources are being deallocated
Process will be removed from system
State Transitions:
New → Ready:
Process creation completed
System admits process to ready queue
Ready → Running:
Scheduler selects process for execution
CPU dispatch occurs
Running → Ready:
Time quantum expires (preemption)
Higher priority process arrives
Running → Waiting:
Process requests I/O operation
Process waits for system resource
Waiting → Ready:
I/O operation completes
Required resource becomes available
Running → Terminated:
Process completes execution
Process terminates abnormally
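These states and transitions can also be sketched in code. Below is a minimal C sketch (the enum and function names are illustrative, not taken from any particular OS) showing the five states and the transitions listed above:

```c
#include <stdio.h>

/* Illustrative sketch only: state names follow the diagram above,
   not any particular kernel's implementation. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct {
    int pid;
    proc_state_t state;
} process_t;

/* New -> Ready: creation finished, process admitted to the ready queue */
void admit(process_t *p)    { if (p->state == NEW)     p->state = READY; }
/* Ready -> Running: scheduler dispatches the process onto the CPU */
void dispatch(process_t *p) { if (p->state == READY)   p->state = RUNNING; }
/* Running -> Ready: time quantum expires (preemption) */
void preempt(process_t *p)  { if (p->state == RUNNING) p->state = READY; }
/* Running -> Waiting: process issues an I/O request */
void block(process_t *p)    { if (p->state == RUNNING) p->state = WAITING; }
/* Waiting -> Ready: the awaited I/O or event completes */
void wake(process_t *p)     { if (p->state == WAITING) p->state = READY; }
/* Running -> Terminated: process finishes (or aborts) */
void finish(process_t *p)   { if (p->state == RUNNING) p->state = TERMINATED; }

int main(void) {
    process_t p = { 1, NEW };
    admit(&p); dispatch(&p); block(&p); wake(&p);   /* one trip through the diagram */
    dispatch(&p); finish(&p);
    printf("P%d final state = %d (TERMINATED)\n", p.pid, p.state);
    return 0;
}
```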
Q.2 Process State Transition Diagram with Two Suspend States
Definition: A process state transition diagram with suspend states includes
additional states where processes can be temporarily removed from main
memory to secondary storage. This helps in better memory management
and system performance.
Extended Process States
Basic States:
New, Ready, Running, Waiting, Terminated
Additional Suspend States:
Suspend Ready: Process is in secondary storage but ready to run
Suspend Waiting: Process is in secondary storage and waiting for event
Why Suspend States are Used
Memory Management:
Free up main memory for active processes
Allow more processes to be loaded in system
Handle memory shortage situations
Performance Benefits:
Reduce memory thrashing
Improve overall system throughput
Better multiprogramming degree
System Stability:
Prevent system crashes due to memory overflow
Maintain responsive system performance
Handle emergency situations
Suspend State Transitions
Ready → Suspend Ready
Memory shortage occurs
Higher priority process needs memory
Administrative decision to suspend
Suspend Ready → Ready
Memory becomes available
Process priority increases
User requests process activation
Waiting → Suspend Waiting
Process waiting for long-term event
Memory needed for other processes
System-initiated suspension
Suspend Waiting → Waiting
Memory becomes available
Process needs to be activated
Suspend Waiting → Suspend Ready
Event that process was waiting for occurs
Process ready but still in secondary storage
Applications:
Virtual memory systems
Swapping mechanisms
Memory management algorithms
System load balancing
Q.3. SJF (Preemptive) and Round Robin Scheduling
Sample Processes:
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
SJF (Preemptive) - Shortest Remaining Time First
Algorithm Steps:
1. Select process with shortest remaining burst time
2. Preempt if shorter process arrives
3. Continue until all processes complete
Gantt Chart for SJF (Preemptive):
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
Calculations:
P1: Waiting Time = (0-0) + (10-1) = 9, Turnaround Time = 17-0 = 17
P2: Waiting Time = (1-1) = 0, Turnaround Time = 5-1 = 4
P3: Waiting Time = (17-2) = 15, Turnaround Time = 26-2 = 24
P4: Waiting Time = (5-3) = 2, Turnaround Time = 10-3 = 7
Average Waiting Time = (9+0+15+2)/4 = 6.5
Average Turnaround Time = (17+4+24+7)/4 = 13
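A small C sketch (process data hard-coded from the table above; names are illustrative) that replays this SRTF schedule and recomputes the waiting and turnaround averages:

```c
#include <stdio.h>

/* Minimal SRTF (preemptive SJF) simulation for the four processes above.
   This is a sketch to check the chart and averages, not production code. */
int main(void) {
    int n = 4;
    int at[]  = {0, 1, 2, 3};         /* arrival times   */
    int bt[]  = {8, 4, 9, 5};         /* burst times     */
    int rem[] = {8, 4, 9, 5};         /* remaining time  */
    int ct[4] = {0};                  /* completion times */
    int done = 0, t = 0;

    while (done < n) {
        int idx = -1;
        /* pick the arrived, unfinished process with the shortest remaining time */
        for (int i = 0; i < n; i++)
            if (at[i] <= t && rem[i] > 0 && (idx == -1 || rem[i] < rem[idx]))
                idx = i;
        if (idx == -1) { t++; continue; }   /* CPU idle */
        rem[idx]--;                         /* run for one time unit */
        t++;
        if (rem[idx] == 0) { ct[idx] = t; done++; }
    }

    double wt_sum = 0, tat_sum = 0;
    for (int i = 0; i < n; i++) {
        int tat = ct[i] - at[i];            /* turnaround = completion - arrival */
        int wt  = tat - bt[i];              /* waiting    = turnaround - burst   */
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, ct[i], tat, wt);
        wt_sum += wt; tat_sum += tat;
    }
    printf("Avg WT=%.2f Avg TAT=%.2f\n", wt_sum / n, tat_sum / n);
    return 0;
}
```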
Round Robin (Time Quantum = 2)
Algorithm Steps:
1. Each process gets 2 time units
2. After quantum expires, move to ready queue end
3. Continue until all processes complete
Gantt Chart for Round Robin:
| P1 | P2 | P3 | P4 | P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 |
0    2    4    6    8    10   12   14   16   18   20   21   23   26
Calculations:
P1: Waiting Time = (0-0) + (8-2) + (16-10) + (21-18) = 15, Turnaround Time = 23-0 = 23
P2: Waiting Time = (2-1) + (10-4) = 7, Turnaround Time = 12-1 = 11
P3: Waiting Time = (4-2) + (12-6) + (18-14) + (23-20) = 15, Turnaround Time = 26-2 = 24
P4: Waiting Time = (6-3) + (14-8) + (20-16) = 13, Turnaround Time = 21-3 = 18
Average Waiting Time = (15+7+15+13)/4 = 12.5
Average Turnaround Time = (23+11+24+18)/4 = 19
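A similar C sketch for Round Robin with quantum 2. Note that the exact interleaving depends on the tie-breaking convention used when a new arrival and a just-preempted process reach the ready queue at the same instant; this sketch enqueues arrivals first, so its trace and averages may differ slightly from the hand-drawn chart above, although both are valid Round Robin schedules.

```c
#include <stdio.h>

/* Minimal Round Robin sketch (quantum = 2) for the four processes above.
   Convention: a process arriving at time t is enqueued before a process
   preempted at time t, which is one common textbook choice. */
#define N 4
int main(void) {
    int at[N]  = {0, 1, 2, 3};
    int bt[N]  = {8, 4, 9, 5};
    int rem[N] = {8, 4, 9, 5};
    int ct[N]  = {0};
    int q[64], head = 0, tail = 0;          /* simple FIFO ready queue */
    int in_q[N] = {0};
    int quantum = 2, t = 0, done = 0;

    while (done < N) {
        /* enqueue everything that has arrived by time t */
        for (int i = 0; i < N; i++)
            if (at[i] <= t && rem[i] > 0 && !in_q[i]) { q[tail++] = i; in_q[i] = 1; }
        if (head == tail) { t++; continue; }            /* CPU idle */
        int i = q[head++]; in_q[i] = 0;
        int run = rem[i] < quantum ? rem[i] : quantum;  /* one time slice */
        printf("t=%2d..%2d : P%d\n", t, t + run, i + 1);
        t += run; rem[i] -= run;
        /* arrivals during/at the end of this slice go in before the preempted one */
        for (int j = 0; j < N; j++)
            if (at[j] <= t && rem[j] > 0 && !in_q[j] && j != i) { q[tail++] = j; in_q[j] = 1; }
        if (rem[i] > 0) { q[tail++] = i; in_q[i] = 1; }
        else { ct[i] = t; done++; }
    }

    double wt_sum = 0, tat_sum = 0;
    for (int i = 0; i < N; i++) {
        int tat = ct[i] - at[i], wt = tat - bt[i];      /* formulas from the next subsection */
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, ct[i], tat, wt);
        wt_sum += wt; tat_sum += tat;
    }
    printf("Avg WT=%.2f Avg TAT=%.2f\n", wt_sum / N, tat_sum / N);
    return 0;
}
```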
Formulas Used
Waiting Time = Turnaround Time - Burst Time
Turnaround Time = Completion Time - Arrival Time
Average Waiting Time = Σ(Waiting Time)/n
Average Turnaround Time = Σ(Turnaround Time)/n
4. SJF (Non-preemptive) and Priority (Preemptive) Scheduling
Problem Setup
Sample Processes:
Process   Arrival Time   Burst Time   Priority
P1        0              6            3
P2        1              8            1
P3        2              7            2
P4        3              3            4
Note: Lower number = Higher priority
SJF (Non-preemptive)
Algorithm Steps:
1. Select shortest job from ready queue
2. No preemption once started
3. Continue until completion
Gantt Chart for SJF (Non-preemptive):
| P1 | P4 | P3 | P2 |
0    6    9    16   24
Calculations:
P1: Waiting Time = 0-0 = 0, Turnaround Time = 6-0 = 6
P2: Waiting Time = 16-1 = 15, Turnaround Time = 24-1 = 23
P3: Waiting Time = 9-2 = 7, Turnaround Time = 16-2 = 14
P4: Waiting Time = 6-3 = 3, Turnaround Time = 9-3 = 6
Average Waiting Time = (0+15+7+3)/4 = 6.25
Average Turnaround Time = (6+23+14+6)/4 = 12.25
Priority (Preemptive)
Algorithm Steps:
1. Select the highest priority process
2. Preempt if a higher priority process arrives
3. Continue until all processes complete
Gantt Chart for Priority (Preemptive):
| P1 | P2 | P3 | P1 | P4 |
0    1    9    16   21   24
Calculations:
P1: Waiting Time = (0-0) + (16-1) = 15, Turnaround Time = 21-0 = 21
P2: Waiting Time = (1-1) = 0, Turnaround Time = 9-1 = 8
P3: Waiting Time = (9-2) = 7, Turnaround Time = 16-2 = 14
P4: Waiting Time = (21-3) = 18, Turnaround Time = 24-3 = 21
Average Waiting Time = (15+0+7+18)/4 = 10
Average Turnaround Time = (21+8+14+21)/4 = 16
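A minimal C sketch (data hard-coded from the table above; illustration only) of preemptive priority scheduling that reproduces these completion times and averages:

```c
#include <stdio.h>

/* Preemptive priority scheduling for the table above (lower number = higher priority). */
int main(void) {
    int n = 4;
    int at[]  = {0, 1, 2, 3};
    int bt[]  = {6, 8, 7, 3};
    int pr[]  = {3, 1, 2, 4};
    int rem[] = {6, 8, 7, 3};
    int ct[4] = {0};
    int t = 0, done = 0;

    while (done < n) {
        int idx = -1;
        /* choose the arrived, unfinished process with the best (lowest) priority number */
        for (int i = 0; i < n; i++)
            if (at[i] <= t && rem[i] > 0 && (idx == -1 || pr[i] < pr[idx]))
                idx = i;
        if (idx == -1) { t++; continue; }   /* CPU idle */
        rem[idx]--; t++;                    /* run one unit, then re-evaluate (preemption) */
        if (rem[idx] == 0) { ct[idx] = t; done++; }
    }

    double wt_sum = 0, tat_sum = 0;
    for (int i = 0; i < n; i++) {
        int tat = ct[i] - at[i], wt = tat - bt[i];
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, ct[i], tat, wt);
        wt_sum += wt; tat_sum += tat;
    }
    printf("Avg WT=%.2f Avg TAT=%.2f\n", wt_sum / n, tat_sum / n);  /* 10.00 and 16.00 */
    return 0;
}
```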
5. Process vs Thread and PCB vs TCB
Aspect          Process                                        Thread
Definition      Independent program in execution               Lightweight unit of a process
Memory          Separate memory space                          Shared memory space
Creation        Heavy-weight operation                         Light-weight operation
Communication   IPC (Inter-Process Communication) mechanisms   Direct memory sharing
Switching       Expensive context switch                       Cheap context switch
Protection      Isolated from other processes                  No protection between threads
Failure         One process crash doesn't affect others        One thread crash affects all threads
Process Control Block (PCB)
The PCB is a data structure containing all the information the OS needs to manage a process.
Typical PCB Entries:
Process Identification:
Process ID (PID)
Parent Process ID (PPID)
User ID and Group ID
Process State Information:
Current state (Ready, Running, Waiting)
Priority level
Program counter value
CPU register values
Process Control Information:
Memory management information
List of open files
List of I/O devices allocated
Accounting information
Memory Management:
Base and limit registers
Page table pointers
Segment table pointers
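A simplified PCB can be pictured as a C struct. The layout below is only a sketch with illustrative field names; a real kernel stores far more (for example, Linux keeps this information in task_struct).

```c
#include <stdio.h>
#include <sys/types.h>

/* Simplified, illustrative PCB layout (not any real kernel's definition). */
typedef enum { PS_NEW, PS_READY, PS_RUNNING, PS_WAITING, PS_TERMINATED } pstate_t;

struct pcb {
    /* identification */
    pid_t pid;                 /* process ID */
    pid_t ppid;                /* parent process ID */
    uid_t uid;                 /* owning user */

    /* state information */
    pstate_t state;            /* current state */
    int priority;              /* scheduling priority */
    unsigned long pc;          /* saved program counter */
    unsigned long regs[16];    /* saved general-purpose registers */

    /* control information */
    void *page_table;          /* memory-management info */
    int open_files[16];        /* open file descriptors */
    unsigned long cpu_time;    /* accounting information */
};

int main(void) {
    struct pcb p = { .pid = 101, .ppid = 1, .state = PS_READY, .priority = 5 };
    printf("PCB: pid=%d state=%d priority=%d\n", (int)p.pid, p.state, p.priority);
    return 0;
}
```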
Thread Control Block (TCB)
The TCB stores information about an individual thread within a process.
Typical TCB Entries:
Thread Identification:
Thread ID (TID)
Process ID it belongs to
Thread state
Execution Context:
Program counter
Stack pointer
Register values
Thread-specific data
Scheduling Information:
Priority level
Thread state
CPU usage statistics
Stack Information:
Stack pointer
Stack size
Stack boundaries
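A matching sketch of a simplified TCB (field names are illustrative, not from any real threading library):

```c
#include <stdio.h>
#include <stddef.h>
#include <sys/types.h>

/* Simplified, illustrative TCB layout for one thread inside a process. */
struct tcb {
    int tid;                   /* thread ID */
    pid_t owner_pid;           /* ID of the process this thread belongs to */
    int state;                 /* thread state (ready / running / waiting) */
    int priority;              /* scheduling priority */

    /* per-thread execution context */
    unsigned long pc;          /* program counter */
    unsigned long sp;          /* stack pointer */
    unsigned long regs[16];    /* register values */

    /* per-thread stack */
    void  *stack_base;         /* base address of this thread's stack */
    size_t stack_size;         /* stack size in bytes */
};

int main(void) {
    struct tcb t = { .tid = 1, .owner_pid = 101, .priority = 5, .stack_size = 8192 };
    printf("TCB: tid=%d owner=%d stack=%zu bytes\n", t.tid, (int)t.owner_pid, t.stack_size);
    return 0;
}
```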
6. User-level Threads vs Kernel-level Threads
User-level Threads
Threads managed entirely by user-level thread library without kernel
knowledge.
Characteristics:
Kernel sees only single process
Thread management in user space
No kernel involvement in thread operations
Fast creation and switching
Examples:
POSIX Pthreads (user-level implementation)
Java Green Threads (older JVM versions)
GNU Portable Threads
User-level thread libraries in C++
Kernel-level Threads
Threads managed directly by operating system kernel.
Characteristics:
Kernel aware of all threads
Thread operations are system calls
Kernel schedules threads individually
Slower creation and switching
Examples:
Windows threads (CreateThread API)
Linux threads (clone() system call)
Solaris threads
Modern Java threads (Native threads)
Comparison Table
Feature User-level Threads Kernel-level Threads
Creation Speed Fast Slow
Context Switch Fast Slow
Blocking One blocking thread blocks the entire process Only the blocking thread blocks
Multiprocessor No true parallelism True parallelism
Scheduling User-level scheduler Kernel scheduler
Portability High Low
System Calls No overhead System call overhead
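The syllabus item "Thread Programming Using Pthreads" fits here. The short example below creates and joins two POSIX threads; on Linux (NPTL) each pthread is backed by a kernel-level thread, i.e. the one-to-one model described in the next section. Compile with gcc file.c -pthread.

```c
#include <pthread.h>
#include <stdio.h>

/* Minimal Pthreads example: create two worker threads and wait for them. */
void *worker(void *arg) {
    int id = *(int *)arg;
    printf("worker %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int ids[2] = {1, 2};

    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);   /* spawn the threads */
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);                       /* wait for completion */

    printf("all threads joined\n");
    return 0;
}
```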
7. Multithreading Models
Thread Models Overview
Thread models define the relationship between user-level and kernel-level
threads. Three main models exist based on mapping between user and
kernel threads.
1. Many-to-One Model
Many user-level threads mapped to single kernel thread.
Characteristics:
Thread management in user space
Entire process blocks if one thread blocks
No parallelism on multiprocessor
Fast thread operations
Examples:
Green threads in early Java
GNU Portable Threads
Some user-level thread libraries
2. One-to-One Model
Each user thread mapped to separate kernel thread.
Characteristics:
Direct mapping between user and kernel threads
True parallelism possible
Independent thread blocking
Higher overhead
Examples:
Windows threads
Linux threads (NPTL)
Solaris threads
Modern Java threads
3. Many-to-Many Model
Many user threads multiplexed onto smaller number of kernel threads.
Examples:
Solaris threads (older versions)
Digital UNIX
Some research operating systems
Two-level Model
Hybrid of Many-to-Many with some One-to-One threads.
Characteristics:
Some threads permanently bound to kernel threads
Others follow many-to-many mapping
Critical threads get dedicated kernel threads
Flexible thread management
8. Types of Processor Schedulers
Processor Scheduler Types
Operating systems use three levels of scheduling to manage processes
efficiently. Each level has different responsibilities and time scales.
1. Long-term Scheduler (Job Scheduler)
Decides which processes should be brought into ready queue from job pool.
Characteristics:
Controls degree of multiprogramming
Invoked infrequently (seconds or minutes)
Selects good mix of I/O and CPU bound processes
Not present in time-sharing systems
Functions:
Admit new processes to system
Control multiprogramming level
Balance CPU and I/O bound processes
Manage system performance
2. Short-term Scheduler (CPU Scheduler)
Selects which process should be executed next and allocates CPU.
Characteristics:
Invoked very frequently (milliseconds)
Must be fast and efficient
Implements scheduling algorithms
Present in all multiprogramming systems
Functions:
Select process from ready queue
Allocate CPU to selected process
Implement scheduling policies
Handle context switching
3. Medium-term Scheduler (Swapper)
Manages swapping of processes between main memory and secondary
storage.
Characteristics:
Handles memory management
Implements swapping mechanism
Reduces degree of multiprogramming
Improves process mix
Functions:
Swap processes out of memory
Swap processes back into memory
Manage memory allocation
Handle suspend/resume operations
9. Context Switching
Definition:
Context switching is the process by which the CPU saves the state of a
currently running process and loads the state of another process from its
PCB (Process Control Block), allowing multitasking.
Steps Involved:
Save current process state (registers, PC, etc.) into its PCB
Move current process to the Ready/Waiting queue
Select next process from the scheduler
Load next process’s state from its PCB
Resume execution of the new process
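These steps can be mirrored in a toy C sketch. A real context switch is performed by the kernel in assembly; the struct and function below are purely illustrative.

```c
#include <stdio.h>

/* Purely illustrative: real context switches save/restore registers in
   assembly inside the kernel. This only mirrors the steps listed above. */
typedef struct {
    unsigned long pc;          /* program counter of the next instruction */
    unsigned long sp;          /* stack pointer */
    unsigned long regs[16];    /* general-purpose registers */
} cpu_context_t;

typedef struct {
    int pid;
    cpu_context_t ctx;         /* context saved in the PCB */
} pcb_t;

/* Switch the (hypothetical) CPU state from process `prev` to process `next`. */
void context_switch(cpu_context_t *cpu, pcb_t *prev, pcb_t *next) {
    prev->ctx = *cpu;          /* 1. save current state into prev's PCB            */
    /* 2-3. prev returns to the ready/waiting queue; the scheduler chose next      */
    *cpu = next->ctx;          /* 4. load next's state from its PCB                */
    /* 5. execution resumes at next->ctx.pc                                        */
}

int main(void) {
    cpu_context_t cpu = { .pc = 0x1000 };
    pcb_t a = { .pid = 1 }, b = { .pid = 2, .ctx = { .pc = 0x2000 } };
    context_switch(&cpu, &a, &b);
    printf("now running pid %d at pc 0x%lx\n", b.pid, cpu.pc);
    return 0;
}
```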
What is saved during Context Switching?
Component Description
Program Counter (PC) Address of the next instruction
CPU Registers All general-purpose and special registers
Stack Pointer (SP) Points to top of stack
Memory Management Info Page tables, segment tables, etc.
I/O & File Info Open files, device states
Why is Program Counter Important?
It stores the address of the next instruction. During a switch, it is saved
and restored to ensure the process resumes from the exact same place.
What causes overhead?
Time to save/restore states
Cache invalidation
Memory reconfiguration
Frequent switches slow down execution
Hardware Support:
MMU for memory isolation
Multiple register sets (in some CPUs)
Timer interrupts to force context switches
Context Switching vs Mode Switching
Feature Context Switching Mode Switching
Between Two processes User mode ↔ Kernel mode
Saved Info Full CPU context Minimal info
Overhead High Low
Triggered By Scheduler, timer System calls, interrupts
Q10. Explain Memory Allocation Methods in Operating Systems.
There are two primary types of memory allocation: contiguous and non-
contiguous. These are used to allocate memory to processes in a system
efficiently.
1. Contiguous Memory Allocation:
In this method, each process is allocated a single contiguous block of
memory.
Fixed Partitioning: Memory is divided into fixed-size partitions. Each
process is assigned one partition.
Advantages: Simple to implement.
Disadvantages: Internal fragmentation.
Variable Partitioning: Partitions are created dynamically based on
process needs.
Advantages: Efficient use of memory.
Disadvantages: External fragmentation.
2. Non-Contiguous Memory Allocation:
Memory is allocated in chunks, and a process may be loaded into non-
contiguous memory locations.
Paging:
Logical memory (the process) is divided into fixed-size pages, and physical
memory is divided into frames of the same size.
Pages can be loaded into any free frame.
Eliminates external fragmentation.
Segmentation:
Memory is divided based on logical segments like code, data, stack,
etc.
Each segment has a base and a limit.
Allows logical division of process memory.
Comparison:
Feature Paging Segmentation
Unit size Fixed (pages) Variable (segments)
Fragmentation No external, some internal External fragmentation
Use case OS-level memory management Logical memory division
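To make the paging unit concrete, here is a small C sketch of logical-to-physical address translation; the page size and page-table contents are made-up example values.

```c
#include <stdio.h>

/* Toy paging translation: 1 KB pages and a hypothetical 4-entry page table. */
#define PAGE_SIZE 1024

int main(void) {
    /* page_table[p] = frame number holding logical page p (example values) */
    int page_table[4] = {5, 2, 7, 0};

    int logical  = 2600;                        /* example logical address    */
    int page     = logical / PAGE_SIZE;         /* page number  = 2           */
    int offset   = logical % PAGE_SIZE;         /* offset       = 552         */
    int frame    = page_table[page];            /* frame number = 7           */
    int physical = frame * PAGE_SIZE + offset;  /* 7*1024 + 552 = 7720        */

    printf("logical %d -> page %d, offset %d -> physical %d\n",
           logical, page, offset, physical);
    return 0;
}
```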