Operating System Complete Notes - Unit 1
Introduction
An Operating System (OS) is software that manages a computer's hardware and software
resources and provides common services to programs. It is the most important part of a
computer system, acting as the interface between the user and the hardware.
1. Main Functions of Operating Systems
1.1 Process Management
Definition: Process management is a core OS function that controls running programs.
Key Responsibilities:
Process creation and termination
Process scheduling (deciding which process runs, and when)
Process synchronization (coordinating processes)
Interprocess communication (IPC)
Deadlock handling
Simple Example: When you run multiple applications at once, such as a browser, a music
player, and a word processor, the OS decides which program gets the CPU and for how long.
1.2 Memory Management
The OS manages memory efficiently:
Allocation: Assigning memory to programs
Deallocation: Freeing memory when a program ends
Virtual Memory: Providing more address space than physical memory
Memory Protection: Protecting one process's memory from another
1.3 File System Management
Organizing files and directories:
File creation, deletion, reading, writing
Directory structure maintenance
File permissions and security
File backup and recovery
1.4 Device Management (I/O Management)
Controlling hardware devices:
Communication through device drivers
Scheduling I/O operations
Device allocation and deallocation
Error handling
1.5 Security and Protection
User authentication (login verification)
Access control (who can access what)
Data encryption
System integrity maintenance
1.6 User Interface
Command Line Interface (CLI) - text-based commands
Graphical User Interface (GUI) - windows, icons, mouse interaction
System calls for programs to interact with OS
2. Processes and Threads
2.1 What is a Process?
Process = Program in execution (running program)
Process Components:
1. Process Control Block (PCB): Complete information about the process
2. Program Counter: Address of the next instruction
3. CPU Registers: The process's current state
4. Memory Allocation: Code, data, stack, and heap sections
Process States:
New: The process is being created
Ready: The process is waiting for the CPU
Running: The process is using the CPU
Waiting/Blocked: The process is waiting for I/O or an event
Terminated: The process has finished
2.2 What is a Thread?
Thread = A lightweight execution unit inside a process
Key Differences: Process vs Thread
Process | Thread
Heavyweight (more resources) | Lightweight (fewer resources)
Separate memory space | Shares memory with other threads
Independent execution | Part of a process
Slower to create | Faster to create
Communication through IPC | Direct communication through shared memory
If one fails, others are unaffected | If one fails, it can affect the others
Example:
Process: A web browser (Chrome) - a complete application
Thread: The tabs inside the browser - each tab running as a thread
2.3 Multithreading Benefits
Responsiveness: The user interface stays responsive
Resource Sharing: Threads share memory and files
Economy: Creating a thread is cheaper than creating a process
Parallelism: Parallel execution on multi-core systems
3. Interprocess Communication (IPC) and Concurrency
3.1 Why IPC Needed?
Processes need to share data and coordinate with each other. Since each process has a
separate memory space, special mechanisms are needed for communication.
3.2 Types of IPC
3.2.1 Shared Memory
Concept: A common memory area that multiple processes can access
How it Works:
1. The operating system creates a shared memory segment
2. Processes attach to that segment
3. They read and write directly through the shared memory
4. Synchronization mechanisms (such as semaphores) coordinate access
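The four steps above can be sketched with Python's multiprocessing module (Python 3.8+). Both handles live in one process here purely for illustration; in real code the second handle would be opened by another process, and a semaphore would guard the reads and writes.

```python
# Sketch of the shared-memory steps above (illustrative only).
from multiprocessing import shared_memory

# 1. The operating system creates a shared memory segment
seg = shared_memory.SharedMemory(create=True, size=16)

# 2. A second process would attach to the same segment by name
other = shared_memory.SharedMemory(name=seg.name)

# 3. Direct read/write through the shared memory
seg.buf[:5] = b"hello"
data = bytes(other.buf[:5])   # the second handle sees the same bytes

# 4. (A semaphore would normally guard these accesses.)
other.close()
seg.close()
seg.unlink()   # manual cleanup, as noted in the disadvantages
print(data)
```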
Advantages:
Very fast communication (direct memory access)
Low overhead
Good for large data transfers
Disadvantages:
Complex synchronization needed
Security risks if not managed properly
Manual cleanup required
3.2.2 Message Passing
Concept: Processes send and receive messages
Methods:
Direct Communication: Processes name each other explicitly
Indirect Communication: Messages pass through mailboxes/ports
Message Passing Types:
Synchronous: The sender waits until the receiver receives the message
Asynchronous: The sender sends the message and continues
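A minimal sketch of indirect, asynchronous message passing using threads: the queue plays the role of the mailbox, put() returns immediately (asynchronous send), and get() blocks until a message arrives.

```python
# Mailbox-style message passing between two threads.
import queue
import threading

mailbox = queue.Queue()          # indirect communication via a mailbox

def sender():
    mailbox.put("report ready")  # asynchronous: returns immediately

def receiver(results):
    results.append(mailbox.get())  # blocks until a message is available

results = []
t1 = threading.Thread(target=receiver, args=(results,))
t2 = threading.Thread(target=sender)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)   # -> ['report ready']
```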
3.2.3 Pipes
Named Pipes (FIFOs): File-like interface for communication
Anonymous Pipes: Parent-child process communication
3.2.4 Signals
Software interrupts that notify processes of events
3.3 Concurrency
Definition: Multiple processes or threads executing at the same time
Challenges:
Race Conditions: Multiple processes access the same resource, and the outcome depends on timing
Data Inconsistency: Concurrent access can corrupt data
Synchronization: Processes must be coordinated
4. Synchronization and Deadlock
4.1 Process Synchronization
Need: When multiple processes use shared resources, coordination is required to avoid
problems.
4.1.1 Critical Section Problem
Critical Section: The code section in which shared resources are accessed
Requirements for Solution:
1. Mutual Exclusion: Only one process in critical section at a time
2. Progress: If no process is in the critical section, a waiting process can enter
3. Bounded Waiting: Limited waiting time for processes
4.1.2 Synchronization Tools
Semaphores
Definition: Signaling mechanism with counter
Types:
1. Binary Semaphore: Value 0 or 1 (like a mutex lock)
2. Counting Semaphore: Manages multiple instances of a resource
Operations:
wait(S): S-- (decrement), if S<0 then block
signal(S): S++ (increment), wake up waiting process
Example:
// Bathroom example - binary semaphore
semaphore bathroom = 1; // initially available
Process1:
wait(bathroom); // lock bathroom
use_bathroom(); // critical section
signal(bathroom); // unlock bathroom
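The bathroom example above can be run with Python's threading.Semaphore; acquire() corresponds to wait(S) and release() to signal(S). The max_inside counter is an added check showing that mutual exclusion actually holds.

```python
# Binary semaphore "bathroom" example, made runnable.
import threading

bathroom = threading.Semaphore(1)   # binary semaphore, initially available
inside = 0
max_inside = 0

def use_bathroom():
    global inside, max_inside
    bathroom.acquire()                     # wait(bathroom): lock
    inside += 1
    max_inside = max(max_inside, inside)   # critical section
    inside -= 1
    bathroom.release()                     # signal(bathroom): unlock

threads = [threading.Thread(target=use_bathroom) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(max_inside)   # -> 1: never more than one thread inside at once
```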
Mutex Locks
Definition: Binary lock mechanism
Simple Mutex Logic:
acquire(): Take the lock
release(): Free the lock
Monitors
High-level synchronization construct with built-in mutual exclusion
4.2 Deadlock
4.2.1 What is Deadlock?
Definition: Situation where processes wait for each other indefinitely
Real-world Example:
Two cars enter a narrow bridge from opposite directions, and each waits for the other to back
up - deadlock!
4.2.2 Deadlock Conditions (Coffman Conditions)
Deadlock occurs only when ALL four of these conditions hold simultaneously:
1. Mutual Exclusion: A resource can be used by only one process at a time
2. Hold and Wait: A process holds some resources while waiting for more
3. No Preemption: Resources cannot be forcibly taken away
4. Circular Wait: A circular chain of waiting processes exists
4.2.3 Deadlock Handling Strategies
A) Deadlock Prevention
Violate at least one of the four conditions
Methods:
Mutual Exclusion: Cannot practically be avoided for most resources
Hold and Wait:
Request all resources at once, or
Release all held resources before requesting new ones
No Preemption: Forcibly take resources back from waiting processes
Circular Wait: Impose an ordering on resources and request them in that order
B) Deadlock Avoidance
Keep the system in a safe state
Safe State: The OS can schedule all processes so that every one of them completes
Unsafe State: Deadlock becomes possible
Banker's Algorithm (most important for exams!)
C) Deadlock Detection and Recovery
Let deadlock happen, then detect it and recover
Detection: Check the wait-for graph for cycles
Recovery:
Process termination
Resource preemption
Rollback processes
4.2.4 Banker's Algorithm (Detailed)
Purpose: Deadlock avoidance through safe state maintenance
Data Structures:
1. Available[m]: Available resources of each type
2. Max[n][m]: Maximum resources each process may need
3. Allocation[n][m]: Currently allocated resources
4. Need[n][m]: Remaining resource need = Max - Allocation
Safety Algorithm Steps:
1. Initialize Work = Available, Finish[i] = false for all i
2. Find process i such that Finish[i] = false AND Need[i] ≤ Work
3. If found: Work = Work + Allocation[i], Finish[i] = true, goto step 2
4. If all Finish[i] = true, then system is in safe state
Resource Request Algorithm:
1. Check if Request ≤ Need (valid request?)
2. Check if Request ≤ Available (resources available?)
3. Tentatively allocate resources
4. Run safety algorithm
5. If safe state: grant request, else deny
Example:
Processes: P0, P1, P2
Resources: A=10, B=5, C=7
Current Allocation:
A B C
P0: 0 1 0
P1: 2 0 0
P2: 3 0 2
Maximum Need:
A B C
P0: 7 5 3
P1: 3 2 2
P2: 9 0 2
Need = Max - Allocation:
A B C
P0: 7 4 3
P1: 1 2 2
P2: 6 0 0
Available: A=5, B=4, C=5
Safe sequence: P1 → P0 → P2 (verify it yourself!)
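The safety algorithm above can be sketched in Python using the matrices from the example. The function returns the first safe sequence it finds (as process indices), or None for an unsafe state; note that several safe sequences can exist, so it may return a different valid one than the sequence listed above.

```python
# Banker's safety algorithm applied to the example above.
def safety(available, allocation, need):
    work = list(available)
    finish = [False] * len(allocation)
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i, _ in enumerate(allocation):
            # Find a process with Finish[i] = false and Need[i] <= Work
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return sequence if all(finish) else None   # safe iff all finished

available  = [5, 4, 5]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

print(safety(available, allocation, need))
# -> [1, 2, 0], i.e. P1 -> P2 -> P0, another valid safe sequence
```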
5. CPU Scheduling Algorithms
5.1 Why CPU Scheduling?
Multiprogramming: Multiple processes are in memory; CPU scheduling decides which process
gets the CPU.
CPU-I/O Burst Cycle: Processes alternate between using the CPU and waiting for I/O.
5.2 Scheduling Criteria
Metrics to optimize:
CPU Utilization: CPU busy time percentage
Throughput: Processes completed per unit time
Turnaround Time: Total time from arrival to completion
Waiting Time: Time spent in ready queue
Response Time: Time from request to first response
Formulas:
Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
5.3 Scheduling Algorithms
5.3.1 First-Come, First-Served (FCFS)
Logic: Whoever arrives first is served first
Characteristics:
Non-preemptive
Simple implementation (FIFO queue)
Poor performance (convoy effect)
Example:
Process | Arrival | Burst | Completion | Turnaround | Waiting
P1 | 0 | 5 | 5 | 5 | 0
P2 | 1 | 3 | 8 | 7 | 4
P3 | 2 | 8 | 16 | 14 | 6
Average Waiting Time = (0+4+6)/3 = 3.33
Convoy Effect: Short processes keep waiting behind one long process.
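The FCFS table above can be reproduced with a short sketch; process tuples are (name, arrival, burst), with all values taken from the example.

```python
# FCFS: processes run to completion in arrival order.
def fcfs(procs):  # procs: list of (name, arrival, burst)
    time, rows = 0, []
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        time = max(time, arrival) + burst       # completion time
        turnaround = time - arrival             # completion - arrival
        waiting = turnaround - burst            # turnaround - burst
        rows.append((name, time, turnaround, waiting))
    return rows

rows = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)])
avg_wait = sum(r[3] for r in rows) / len(rows)
print(rows)                 # completion times 5, 8, 16 as in the table
print(round(avg_wait, 2))   # -> 3.33
```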
5.3.2 Shortest Job First (SJF)
Logic: The process with the shortest burst time runs first
Types:
Non-preemptive SJF: Once started, a process runs to completion
Preemptive SJF (SRTF): If a shorter job arrives, the current job is preempted
Characteristics:
Optimal (minimum average waiting time)
Problem: Starvation (long processes may never execute)
Burst times are difficult to predict
Example (Non-preemptive):
Process | Arrival | Burst
P1 | 0 | 6
P2 | 1 | 8
P3 | 2 | 7
P4 | 3 | 3
Execution order: P1(0-6) → P4(6-9) → P3(9-16) → P2(16-24)
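A minimal non-preemptive SJF simulation of the example above; tuples are (name, arrival, burst), and at each completion the shortest ready job is picked.

```python
# Non-preemptive SJF: pick shortest burst among arrived processes.
def sjf(procs):  # procs: list of (name, arrival, burst)
    pending, time, order = list(procs), 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                        # CPU idle until next arrival
            time = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        pending.remove((name, arrival, burst))
        order.append((name, time, time + burst))   # (name, start, end)
        time += burst
    return order

print(sjf([("P1", 0, 6), ("P2", 1, 8), ("P3", 2, 7), ("P4", 3, 3)]))
# -> P1 0-6, P4 6-9, P3 9-16, P2 16-24, matching the execution order above
```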
5.3.3 Round Robin (RR)
Logic: Each process gets a fixed time quantum
Characteristics:
Preemptive
Ideal for time-sharing systems
No starvation
Performance depends on the time quantum size
Time Quantum Selection:
Too small: High context switching overhead
Too large: Degrades to FCFS
Example:
Processes: P1(5), P2(3), P3(8), Time Quantum = 3
Execution: P1(3) → P2(3) → P3(3) → P1(2) → P3(3) → P3(2)
Timeline: 0-3-6-9-11-14-16
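The same schedule can be computed with a small simulation. The example gives no arrival times, so all three processes are assumed to arrive at t = 0.

```python
# Round Robin with a ready queue; quantum = 3.
from collections import deque

def round_robin(procs, quantum):  # procs: list of (name, burst)
    queue = deque(procs)
    time, slices = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        slices.append((name, time, time + run))   # (name, start, end)
        time += run
        if remaining > run:                       # unfinished: requeue
            queue.append((name, remaining - run))
    return slices

print(round_robin([("P1", 5), ("P2", 3), ("P3", 8)], 3))
# -> P1 0-3, P2 3-6, P3 6-9, P1 9-11, P3 11-14, P3 14-16
```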
5.3.4 Priority Scheduling
Logic: The higher-priority process executes first
Types:
Preemptive: A newly arrived higher-priority process preempts the current one
Non-preemptive: The current process runs to completion first
Problem: Starvation - low-priority processes may never execute
Solution: Aging - gradually increase a process's priority over time
5.3.5 Multilevel Queue Scheduling
Separate queues for different priority levels:
System processes (highest priority)
Interactive processes
Background processes (lowest priority)
5.3.6 Multilevel Feedback Queue
Processes can move between queues based on their behavior.
5.4 Algorithm Comparison
Algorithm | Type | Avg Waiting Time | Starvation | Complexity
FCFS | Non-preemptive | High | No | Low
SJF | Both | Optimal | Yes | Medium
RR | Preemptive | Medium | No | Medium
Priority | Both | Varies | Yes | Medium
6. Memory Management and Virtual Memory
6.1 Memory Management Goals
Allocation: Assigning memory to processes
Protection: Controlling memory access
Sharing: Allowing controlled memory sharing
Organization: Maintaining an efficient memory layout
6.2 Memory Management Techniques
6.2.1 Fixed Partitioning
Memory is divided into fixed-size blocks
Problem: Internal fragmentation (unused space within partition)
6.2.2 Dynamic Partitioning
Variable-size partitions based on process requirements
Problem: External fragmentation (free space scattered)
6.2.3 Paging
Concept: Physical memory is divided into fixed-size frames, and each process is divided into
pages of the same size
Key Components:
Page: Fixed-size logical memory unit
Frame: Fixed-size physical memory unit
Page Table: Mapping between pages and frames
Address Translation:
Logical Address = (Page Number, Offset)
Physical Address = Frame Number × Page Size + Offset
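A sketch of the translation above, assuming a 1 KB page size and a made-up page table (both are illustrative values, not from the notes).

```python
# Paging address translation: page table maps page number -> frame number.
PAGE_SIZE = 1024                   # assumed page size (1 KB)
page_table = {0: 5, 1: 2, 2: 7}    # illustrative page -> frame mapping

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # page number
    offset = logical_address % PAGE_SIZE   # offset within the page
    frame = page_table[page]               # page-table lookup
    return frame * PAGE_SIZE + offset      # physical address

print(translate(2100))   # page 2, offset 52 -> frame 7 -> 7*1024 + 52 = 7220
```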
Advantages:
No external fragmentation
Easy allocation
Support for virtual memory
Disadvantages:
Internal fragmentation possible (last page)
Page table overhead
6.2.4 Segmentation
Concept: Memory is divided into variable-size logical units (segments)
Segment Types:
Code segment
Data segment
Stack segment
Heap segment
Segment Table: Stores each segment's base address and limit
Advantages:
Logical organization
Protection and sharing easier
No internal fragmentation
Disadvantages:
External fragmentation
Complex memory allocation
6.2.5 Segmented Paging
A combination of segmentation and paging - the best of both worlds!
6.3 Virtual Memory
6.3.1 Concept
Virtual Memory: A process's view of memory, which can be larger than physical memory
Benefits:
Programs can be larger than physical memory
More programs can run simultaneously
Memory protection
Memory sharing
6.3.2 Demand Paging
Pages are loaded only when needed (on demand)
Page Fault: Occurs when a process accesses a page that is not resident in memory
Page Fault Handling:
1. Check if valid memory reference
2. Find free frame in physical memory
3. Load page from secondary storage
4. Update page table
5. Restart instruction
6.3.3 Page Replacement Algorithms
When physical memory is full, which page should be removed?
FIFO (First-In-First-Out)
Remove the page that was loaded first
Problem: Belady's Anomaly (more frames can mean more page faults)
LRU (Least Recently Used)
Remove the page that was least recently used
The best practical algorithm, but expensive to implement
Optimal (MIN)
Remove the page that will not be used for the longest time in the future
The theoretical optimum, but impossible in practice (requires future knowledge)
Second Chance (Clock)
FIFO with a reference bit - a page gets a second chance if it was recently used
Example:
Reference String: 1,2,3,4,1,2,5,1,2,3,4,5
Frames = 3
FIFO: 9 page faults
LRU: 10 page faults
Optimal: 7 page faults
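These counts can be checked with a small simulation of all three policies on the reference string above.

```python
# Count page faults for FIFO, LRU, and Optimal (OPT) with a fixed frame count.
def page_faults(refs, frames, policy):
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            if policy == "LRU":             # refresh recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) == frames:
            if policy == "OPT":             # evict page used farthest ahead
                future = refs[i + 1:]
                victim = max(memory, key=lambda p:
                             future.index(p) if p in future else len(future))
            else:                           # FIFO and LRU both evict the head
                victim = memory[0]
            memory.remove(victim)
        memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print([page_faults(refs, 3, p) for p in ("FIFO", "LRU", "OPT")])
# -> [9, 10, 7]
```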
6.3.4 Thrashing
Definition: Pages are continuously swapped in and out, degrading system performance
Causes:
Too many processes in memory
Insufficient physical memory
Solution:
Working Set Model
Page Fault Frequency control
7. I/O Systems and Scheduling
7.1 I/O System Components
I/O Hardware: Devices, controllers, buses
I/O Software: Device drivers, I/O subsystem
I/O Interface: System calls, APIs
7.2 I/O Methods
7.2.1 Programmed I/O (Polling)
CPU actively checks device status
Disadvantage: CPU busy waiting (inefficient)
7.2.2 Interrupt-Driven I/O
Device generates interrupt when ready
Advantage: CPU can do other work while waiting
7.2.3 Direct Memory Access (DMA)
Device directly access memory without CPU involvement
Best for large data transfers
7.3 I/O Scheduling Algorithms
7.3.1 FCFS (First-Come, First-Served)
Simple but can cause long seek times
7.3.2 SSTF (Shortest Seek Time First)
Choose request closest to current head position
Problem: Starvation of distant requests
7.3.3 SCAN (Elevator Algorithm)
Move head in one direction, service all requests, then reverse
Advantage: Bounded seek time for all requests
7.3.4 C-SCAN (Circular SCAN)
Like SCAN but only scan in one direction (circular)
More uniform waiting time
7.3.5 LOOK and C-LOOK
Like SCAN/C-SCAN but only go as far as last request
Example:
Disk requests: 98, 183, 37, 122, 14, 124, 65, 67
Current head position: 53
Direction: Increasing
FCFS: 53→98→183→37→122→14→124→65→67 (Total: 640 tracks)
SSTF: 53→65→67→37→14→98→122→124→183 (Total: 236 tracks)
SCAN: 53→65→67→98→122→124→183→37→14 (Total: 299 tracks; this listing reverses at the last request rather than travelling to the disk edge, i.e. LOOK-style behavior)
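The totals can be verified by summing the head movement along each service order; the `look` order below is the SCAN listing above, which reverses at the last request instead of the disk edge.

```python
# Total head movement for a given service order of disk requests.
def seek_total(start, order):
    total, pos = 0, start
    for track in order:
        total += abs(track - pos)   # distance moved to reach this request
        pos = track
    return total

head = 53
fcfs = [98, 183, 37, 122, 14, 124, 65, 67]
sstf = [65, 67, 37, 14, 98, 122, 124, 183]
look = [65, 67, 98, 122, 124, 183, 37, 14]   # SCAN order without edge travel

print(seek_total(head, fcfs), seek_total(head, sstf), seek_total(head, look))
# -> 640 236 299
```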
8. File Systems
8.1 File System Concepts
8.1.1 File
Definition: Named collection of related information stored on secondary storage
File Attributes:
Name, Type, Location
Size, Creation time, Last modified
Owner, Permissions
File Operations:
Create, Delete, Open, Close
Read, Write, Seek, Rename
8.1.2 Directory Structure
A method of organizing files
Types:
Single-Level: All files in one directory (simple but naming conflicts)
Two-Level: Each user has separate directory
Tree Structure: Hierarchical (most common)
Acyclic Graph: Shared files allowed
General Graph: Links allowed (can create cycles)
8.2 File Allocation Methods
8.2.1 Contiguous Allocation
The file is stored in consecutive disk blocks
Advantages: Fast access, simple
Disadvantages: External fragmentation; file size must be declared in advance
8.2.2 Linked Allocation
File blocks are chained together like a linked list
Advantages: No fragmentation, dynamic size
Disadvantages: Random access slow, pointer overhead
8.2.3 Indexed Allocation
Index block points to actual data blocks
Advantages: Random access, no fragmentation
Disadvantages: Index block overhead
8.3 File System Implementation
Boot Block: System boot information
Superblock: File system metadata
Free Block List: Available blocks tracking
I-nodes/FCB: File metadata storage
9. DOS, UNIX, and Windows Comparison
9.1 DOS (Disk Operating System)
Characteristics:
Single-user, Single-tasking: Only one program at a time
Command Line Interface: Text-based commands
16-bit architecture: Limited memory (640KB base memory)
No memory protection: Programs can interfere with each other
Simple file system: FAT (File Allocation Table)
Features:
File System: FAT12, FAT16, FAT32
Memory Management: No virtual memory
Security: No user accounts or permissions
Networking: Limited network support
Advantages:
Simple and lightweight
Fast boot time
Compatible with old hardware
Low memory requirements
Disadvantages:
No multitasking
Limited memory management
No security features
No network support
9.2 UNIX
Characteristics:
Multi-user, Multi-tasking: Multiple users and programs run simultaneously
Hierarchical File System: A single root directory (/)
Case-sensitive: Commands and filenames
Shell Interface: Command line with powerful scripting
Modular Design: Small programs that do one thing well
Features:
File System: Hierarchical, everything is a file
Memory Management: Virtual memory, paging
Process Management: Fork, exec, pipes
Security: File permissions, user/group accounts
Networking: Built-in network support
File System Structure:
/ (root)
├── bin (system commands)
├── usr (user programs)
├── home (user directories)
├── etc (configuration files)
├── var (variable data)
└── tmp (temporary files)
Advantages:
Stable and robust
Powerful command line
Excellent for servers
Good security model
Portable across hardware
Disadvantages:
Steep learning curve
Command-line oriented
Less user-friendly for beginners
9.3 Windows
Characteristics:
Multi-user, Multi-tasking: GUI-based operating system
Drive Letters: C:, D:, E: for different drives
Registry: Central configuration database
Plug and Play: Automatic hardware detection
Backward Compatibility: Support for old programs
Features:
File System: NTFS, FAT32, exFAT
Memory Management: Virtual memory with paging
GUI: Windows, menus, icons
Security: User accounts, NTFS permissions, UAC
Networking: Built-in network support
File System Structure:
C:\ (drive letter)
├── Program Files (installed programs)
├── Users (user profiles)
├── Windows (system files)
└── ProgramData (shared application data)
Advantages:
User-friendly GUI
Wide software compatibility
Good hardware support
Integrated development tools
Disadvantages:
Resource intensive
Security vulnerabilities
Expensive licensing
Less stable than UNIX
9.4 Comparison Table
Feature | DOS | UNIX | Windows
User Support | Single | Multi | Multi
Multitasking | No | Yes | Yes
GUI | No | Optional | Yes
Memory Management | Basic | Advanced | Advanced
File System | FAT | Hierarchical | NTFS/FAT
Security | None | Strong | Good
Case Sensitivity | No | Yes | No
Cost | Free/Cheap | Varies | Expensive
Learning Curve | Easy | Hard | Medium
Stability | Poor | Excellent | Good
Hardware Support | Limited | Good | Excellent
Key Differences Summary
File Naming:
DOS: 8.3 format (filename.ext), limited characters
UNIX: 255 characters, case-sensitive, no restrictions
Windows: 255 characters, case-insensitive, some reserved names
Path Separators:
DOS/Windows: Backslash (\)
UNIX: Forward slash (/)
Commands Comparison:
Task | DOS | UNIX | Windows
List files | DIR | ls | DIR
Change directory | CD | cd | CD
Copy file | COPY | cp | COPY
Delete file | DEL | rm | DEL
Create directory | MKDIR | mkdir | MKDIR
Clear screen | CLS | clear | CLS
Important Points for Exams
Memory Management:
1. Paging vs Segmentation differences
2. Page replacement algorithms comparison
3. Virtual memory benefits
4. Thrashing causes and solutions
Process Management:
1. Process vs Thread differences
2. Process state transitions
3. IPC methods comparison
4. Context switching overhead
Scheduling:
1. Algorithm performance comparison
2. Time quantum effect in Round Robin
3. Convoy effect in FCFS
4. Starvation problems and solutions
Deadlock:
1. Four necessary conditions
2. Banker's algorithm step-by-step
3. Safe vs unsafe state
4. Prevention vs Avoidance vs Detection
File Systems:
1. Allocation methods pros/cons
2. Directory structures comparison
3. File system implementation details
Practice Problems
CPU Scheduling Example:
Processes: P1(0,10), P2(1,6), P3(3,4), P4(4,3)
Format: Process(Arrival, Burst)
Calculate average waiting time for:
1. FCFS
2. SJF (Non-preemptive)
3. Round Robin (quantum=4)
Banker's Algorithm Example:
System: 5 processes, 3 resources
Available: [3,3,2]
Allocation matrix and Max matrix given
Check if system is in safe state
Page Replacement Example:
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2
Frame size: 4
Compare FIFO, LRU, and Optimal algorithms
Tips for Non-CS Students
Understanding Complex Concepts:
1. Deadlock: Traffic jam analogy - cars waiting for each other
2. Virtual Memory: Library analogy - books on shelves vs books you can borrow
3. Paging: Dictionary analogy - index pointing to actual content
4. Semaphore: Bathroom key analogy - token-based access control
5. Process vs Thread: Company vs employees analogy
Memory Tricks:
DEADLOCK conditions: "My Help Never Comes"
Mutual Exclusion
Hold and Wait
No Preemption
Circular Wait
Page Replacement: "FIFO Less Optimal"
FIFO - simple but has anomaly
LRU - practical best choice
Optimal - theoretical minimum
Common Exam Mistakes:
1. Confusing process and program
2. Wrong Banker's algorithm calculations
3. Forgetting arrival time in scheduling
4. Mixing up paging and segmentation
5. Incorrect deadlock condition identification
Conclusion
Operating Systems is one of the most fundamental subjects in computer science. These notes
cover all the major topics that matter for exams. With regular practice and worked examples,
the concepts will become clear.
Key Study Strategy:
1. Understand concepts first (don't memorize)
2. Practice numerical problems
3. Draw diagrams for better understanding
4. Compare different approaches
5. Relate concepts to real-world examples
Remember: OS is everywhere - mobile phones, computers, servers. Understanding these
concepts will help you in practical scenarios too!
Good luck with your studies! 📚✨