Operating System
These notes provide a comprehensive overview and detailed explanations of the key
concepts within each unit. They serve as a foundation and a set of detailed notes for your
studies; for the most in-depth treatment, refer to your prescribed textbook and additional
academic resources.
2. Computer-System-Organization
4. Operating-System Structure
● Monolithic Structure: The entire OS runs as a single program in kernel mode. All
components are tightly coupled.
○ Advantages: Fast execution, high performance due to no overhead of inter-
module communication.
○ Disadvantages: Difficult to develop, debug, and maintain; a bug in one
component can crash the entire system. (e.g., early UNIX, MS-DOS).
● Layered Approach: OS is divided into layers, each built on top of lower layers.
○ Advantages: Modularity, easier debugging (if a bug is found, it's likely in the
current layer or below), easier modification.
○ Disadvantages: Defining layers can be complex; performance overhead due to
inter-layer communication.
● Microkernel: Moves most OS functions (memory management, file systems, device
drivers) from the kernel to user-space processes. The kernel itself is very small and
handles only essential tasks like inter-process communication (IPC) and basic
scheduling.
○ Advantages: Extensibility, modularity, reliability (less code in kernel mode),
security.
○ Disadvantages: Performance overhead due to increased IPC. (e.g., Mach,
QNX).
● Modules (Loadable Kernel Modules): Modern OS (like Linux) use a modular approach
where core services are in the kernel, but others can be dynamically loaded/unloaded as
modules. This combines the benefits of monolithic (performance) and
layered/microkernel (modularity).
● Hybrid Systems: Most modern OS (Linux, Windows) use a hybrid approach, combining
the best features of different structures. For instance, Linux is primarily monolithic but
uses loadable modules, and Windows has a structured kernel that combines elements of
microkernel and layered approaches.
5. Operating System Operations
● Dual-Mode Operation:
○ User Mode: Used for executing user programs; limited access to hardware.
○ Kernel Mode (Supervisor/System Mode): Used for executing OS code; full
access to hardware and privileged instructions.
○ Purpose: Protects the OS and system resources from faulty or malicious user
programs. A mode bit in hardware indicates the current mode.
○ System Calls: The interface between user programs and the OS. User programs
request OS services via system calls, which trigger a switch from user mode to
kernel mode.
● Timer: A hardware timer generates interrupts at regular intervals.
○ Purpose: Prevents a single program from monopolizing the CPU by ensuring the
OS regains control. Used for time-sharing and scheduling.
6. Process Management
7. Memory Management
● Purpose: To optimize CPU utilization and to provide a good user experience by keeping
multiple programs in memory.
● Key Responsibilities of OS:
○ Keeping track of which parts of memory are being used and by whom.
○ Deciding which processes to load into memory when space becomes available.
○ Allocating and deallocating memory space as needed.
8. Storage Management
● File-System Management:
○ Files: Logical storage units; mapping them onto physical storage.
○ Directories: Organize files for easy navigation.
○ Key Responsibilities: Creating/deleting files and directories, mapping files onto
secondary storage, backing up files.
● Mass-Storage Management:
○ Hard Disks: Primary medium for long-term data storage.
○ Key Responsibilities: Free-space management, storage allocation, disk
scheduling, RAID management.
Kernel Data Structures
● Lists, Stacks, Queues: Fundamental data structures used extensively within the kernel
for managing processes, memory blocks, I/O requests, etc.
● Trees (e.g., Binary Search Trees, Red-Black Trees): Used for efficient searching and
organization of data, such as file system directories or process hierarchies.
● Hash Maps (Hash Tables): Provide fast lookups for specific data based on a key, often
used for caching or mapping unique identifiers to objects.
● Bitmaps: Arrays of bits used to represent the availability of resources (e.g., free memory
pages, free disk blocks).
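As an illustration of the bitmap idea, here is a minimal sketch (hypothetical helper names, not from the notes) that tracks free frames in the bits of a Python integer — bit i clear means frame i is free:

```python
def first_free(bitmap: int, nbits: int) -> int:
    """Return the index of the first clear (free) bit, or -1 if none."""
    for i in range(nbits):
        if not (bitmap >> i) & 1:
            return i
    return -1

def allocate(bitmap: int, nbits: int):
    """Allocate the first free frame: returns (frame_index, new_bitmap)."""
    i = first_free(bitmap, nbits)
    if i < 0:
        raise MemoryError("no free frames")
    return i, bitmap | (1 << i)

def release(bitmap: int, i: int) -> int:
    """Mark frame i free again by clearing its bit."""
    return bitmap & ~(1 << i)
```

For example, with `bitmap = 0b0111` (frames 0-2 in use) the next allocation returns frame 3.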
Open-Source Operating Systems
● Definition: An OS whose source code is freely available for anyone to use, modify, and
distribute.
● Examples: Linux, FreeBSD, Android (Linux-based).
● Advantages: Transparency, community development, cost-effectiveness, flexibility,
security through peer review.
● Disadvantages: May lack professional support (though commercial support is
available), fragmented development, steep learning curve for some.
Processes
1. Process Concept
● Definition: A program in execution. It's more than just the program code; it includes the
program counter, registers, and stack.
● Process State: A process can be in one of several states:
○ New: The process is being created.
○ Running: Instructions are being executed.
○ Waiting: The process is waiting for some event to occur (e.g., I/O completion,
signal).
○ Ready: The process is waiting to be assigned to a processor.
○ Terminated: The process has finished execution.
● Process Control Block (PCB): A data structure maintained by the OS for each
process, containing:
○ Process state
○ Program counter
○ CPU registers
○ CPU-scheduling information
○ Memory-management information
○ Accounting information
○ I/O status information
● Context Switch: The act of saving the state of the current process and loading the saved
state of another process. It is pure overhead: the system does no useful work while switching.
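A toy model of a PCB and a context switch (simplified, hypothetical fields — a real PCB holds far more state, and the save/restore happens in kernel-mode assembly):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Simplified Process Control Block: a tiny subset of the real fields."""
    pid: int
    state: str = "new"              # new / ready / running / waiting / terminated
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(current: PCB, nxt: PCB, cpu: dict) -> dict:
    """Save CPU state into the current PCB, then load the next PCB's state."""
    current.registers = dict(cpu)   # save the outgoing process's context
    current.state = "ready"
    nxt.state = "running"
    return dict(nxt.registers)      # restore the incoming process's context

p1 = PCB(pid=1, state="running", registers={"pc": 10})
p2 = PCB(pid=2, state="ready", registers={"pc": 99})
cpu = {"pc": 11}                    # p1 has advanced since it was loaded
cpu = context_switch(p1, p2, cpu)   # cpu now holds p2's saved context
```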
2. Process Scheduling
● Purpose: To maximize CPU utilization and provide fair access to the CPU for multiple
processes.
● Schedulers:
○ Long-Term Scheduler (Job Scheduler): Selects processes from the job pool
and loads them into memory for execution (creates new processes). Controls the
degree of multiprogramming.
○ Short-Term Scheduler (CPU Scheduler): Selects a process from the ready
queue and allocates the CPU to it. Runs very frequently.
○ Medium-Term Scheduler: Swaps processes out of memory (and later swaps
them back in) to reduce the degree of multiprogramming or improve the mix of
processes. Used in time-sharing systems.
● Dispatcher: The module that gives control of the CPU to the process selected by the
short-term scheduler. It performs context switching.
● Scheduling Criteria:
○ CPU Utilization: Keep the CPU as busy as possible.
○ Throughput: Number of processes completed per unit time.
○ Turnaround Time: Total time from submission to completion.
○ Waiting Time: Total time a process spends in the ready queue.
○ Response Time: Time from submission of a request until the first response is
produced.
○ Fairness: Each process gets a fair share of the CPU.
● Scheduling Algorithms:
○ First-Come, First-Served (FCFS): Processes are executed in the order they
arrive. Non-preemptive. Simple but can lead to long waiting times (convoy effect).
○ Shortest-Job-First (SJF): Associates each process with the length of its next
CPU burst. The CPU is assigned to the process with the smallest next CPU
burst. Can be preemptive (Shortest-Remaining-Time-First, SRTF) or non-
preemptive. Optimal in terms of average waiting time.
○ Priority Scheduling: A priority number is associated with each process, and the
CPU is allocated to the process with the highest priority. Can suffer from
starvation (low-priority processes never run); aging can be used to solve this.
Can be preemptive or non-preemptive.
○ Round Robin (RR): Each process gets a small unit of CPU time (time quantum).
If the process doesn't complete within the quantum, it's preempted and put back
at the end of the ready queue. Preemptive. Good for time-sharing systems.
○ Multilevel Queue Scheduling: Ready queue is partitioned into separate queues
(e.g., foreground/interactive, background/batch). Each queue has its own
scheduling algorithm.
○ Multilevel Feedback Queue Scheduling: Allows processes to move between
different queues. Prevents starvation and can favor short jobs and interactive
processes.
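To make the criteria concrete, here is a small sketch comparing average waiting time under FCFS and (non-preemptive) SJF for one burst set, with all processes assumed to arrive at time 0 (function and variable names are illustrative):

```python
def avg_waiting_time(bursts):
    """Average waiting time when bursts run in the given order (arrival = 0)."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed      # this process waited for everything before it
        elapsed += b
    return wait / len(bursts)

bursts = [24, 3, 3]                       # classic teaching example
fcfs = avg_waiting_time(bursts)           # arrival order: waits 0, 24, 27
sjf = avg_waiting_time(sorted(bursts))    # shortest first: waits 0, 3, 6
```

The long first burst illustrates the convoy effect: under FCFS the average wait is 17, while SJF cuts it to 3.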
3. Operations on Processes
● Process Creation:
○ A parent process creates child processes.
○ The child process can be a duplicate of the parent or have a new program loaded
into it.
○ fork() system call (UNIX/Linux): Creates a new process that is an exact copy
of the parent.
○ exec() system call (UNIX/Linux): Replaces the current process's memory space
with a new program.
● Process Termination:
○ Normal Exit: Process completes its execution (e.g., exit() system call).
○ Abnormal Termination:
■ Killed by Parent: Parent process terminates child process (e.g., kill()
system call).
■ Resource Exceeded: Process tries to use more resources than
allocated.
■ Invalid Instruction/Memory Access: Attempts to execute an invalid
instruction or access protected memory.
■ Arithmetic Error: Division by zero.
○ Zombie Process: A process that has terminated but whose entry in the process
table still exists because its parent has not yet called wait() to retrieve its exit
status.
○ Orphan Process: A process whose parent has terminated without waiting for its
child. Adopted by the init process (process ID 1).
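The fork()/wait() interplay that prevents zombies can be sketched with Python's POSIX bindings (assumes a POSIX system — os.fork is unavailable on Windows; the exit status 7 is arbitrary):

```python
import os

def fork_demo() -> int:
    """Fork a child and reap it with waitpid() so it never lingers as a zombie."""
    pid = os.fork()                  # child is an (initially) exact copy of the parent
    if pid == 0:
        os._exit(7)                  # child: terminate immediately with status 7
    _, status = os.waitpid(pid, 0)   # parent: collect the exit status
    return os.WEXITSTATUS(status)    # child's process-table entry is now freed
```

If the parent skipped the waitpid() call and kept running, the terminated child would remain a zombie until reaped.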
4. Interprocess Communication
● Pipes:
○ Ordinary Pipes: Unidirectional, fixed size, parent-child relationship required.
○ Named Pipes (FIFOs): Bidirectional, no parent-child relationship needed, can be
used by unrelated processes.
● Message Queues: A list of messages stored within the kernel, identified by a message
queue identifier. Processes can send/receive messages from the queue.
● Shared Memory Segments: A region of memory created by one process that other
processes can attach to.
● Sockets: Used for network communication between processes, even on different
machines. Support client-server models.
● Remote Procedure Calls (RPCs): Allows a process to call a procedure on a remote
machine as if it were a local procedure.
● Signals: Software interrupts used to notify a process of an event (e.g., SIGTERM for
termination, SIGKILL for forceful termination).
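An ordinary pipe can be sketched with the POSIX pipe() call via Python's os module (both ends kept in one process for brevity; normally the write end is inherited by a forked child):

```python
import os

def pipe_demo(message: bytes) -> bytes:
    """Send a message through an ordinary (unidirectional) pipe and read it back."""
    r, w = os.pipe()            # r = read end, w = write end
    os.write(w, message)
    os.close(w)                 # close the write end so the reader sees EOF
    data = os.read(r, 1024)
    os.close(r)
    return data
```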
5. Communication in Client-Server Systems
● Sockets: Provide an endpoint for communication. A pair of (IP address, port number)
defines a socket. Communication involves opening a socket, binding to an address,
listening for connections (server), connecting to a server (client), sending/receiving data,
and closing the socket.
● Remote Procedure Calls (RPCs): Client invokes a procedure on a server, and the
server executes the procedure and returns the result. The client-side stub packs
parameters into a message, and the server-side stub unpacks them and calls the actual
procedure.
● Pipes and Shared Memory: Can also be used in client-server setups, particularly if the
client and server are on the same machine.
Deadlocks
1. System Model
● Resources: Can be physical (e.g., CPU cycles, memory, I/O devices) or logical (e.g.,
files, semaphores). Each resource has a type and multiple instances.
● Resource Usage Cycle: A process uses a resource in the following sequence:
1. Request: The process requests the resource.
2. Use: The process uses the resource.
3. Release: The process releases the resource.
2. Deadlock Characterization
● Deadlock can arise if four necessary conditions hold simultaneously:
1. Mutual Exclusion: At least one resource must be held in a non-sharable mode
(only one process can use it at a time).
2. Hold and Wait: A process must be holding at least one resource and waiting to
acquire additional resources currently held by other processes.
3. No Preemption: Resources cannot be forcibly taken away from a process that is
holding them. They must be released voluntarily by the process.
4. Circular Wait: A set of processes {P0,P1,…,Pn} must exist such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., and
Pn is waiting for a resource held by P0.
3. Methods for Handling Deadlocks
● Deadlock Prevention: Design a system where at least one of the four necessary
conditions cannot hold.
● Deadlock Avoidance: Give the OS enough information to avoid entering an unsafe
state.
● Deadlock Detection and Recovery: Allow deadlocks to occur, detect them, and then
recover.
● Ignore the Problem: Assume deadlocks never occur (e.g., UNIX/Linux typically use this
approach for simplicity, assuming deadlocks are rare or handled by application design).
4. Deadlock Prevention
5. Deadlock Avoidance
● Requires the OS to know in advance the maximum number of resources each process
will need.
● Safe State: A state is safe if there exists a sequence of processes P1,P2,…,Pn such
that for each Pi, the resources that Pi can still request can be satisfied by the currently
available resources plus the resources held by all Pj where j<i. If the system is in a safe
state, there is no deadlock. If it is in an unsafe state, there is a possibility of deadlock.
● Banker's Algorithm:
○ A sophisticated algorithm for deadlock avoidance.
○ Requires knowing the maximum possible requests for each process.
○ When a process requests resources, the algorithm checks if granting the request
would leave the system in a safe state. If yes, the request is granted; otherwise,
it's denied or delayed.
○ Data Structures:
■ Available: Vector of length m indicating the number of available
resources of each type.
■ Max: n x m matrix defining the maximum demand of each process.
■ Allocation: n x m matrix defining the number of resources of each
type currently allocated to each process.
■ Need: n x m matrix indicating the remaining resources needed by each
process (Need = Max - Allocation).
○ Safety Algorithm: Determines if the system is in a safe state.
○ Resource-Request Algorithm: Determines if a request can be granted safely.
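The safety algorithm can be sketched as below; the matrices are the classic five-process, three-resource-type teaching example (illustrative data, not from these notes):

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return a safe sequence of processes, or None."""
    n, m = len(allocation), len(available)
    work = list(available)          # resources free right now
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pi can run to completion, then releases everything it holds.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None             # remaining processes could deadlock
    return sequence

alloc = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maxm  = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
need  = [[maxm[i][j] - alloc[i][j] for j in range(3)] for i in range(5)]
safe  = is_safe([3, 3, 2], alloc, need)   # a safe sequence exists here
```

The resource-request algorithm works the same way: tentatively grant the request, run this safety check, and roll back if the resulting state is unsafe.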
6. Deadlock Detection
Main Memory
1. Background
● Purpose: To store programs and data for quick access by the CPU.
● Basic Hardware: CPU can directly access main memory and registers. Registers are
faster, but limited. Main memory is slower but larger.
● Memory Management Unit (MMU): A hardware device that maps virtual addresses
(logical addresses generated by the CPU) to physical addresses (actual addresses in
main memory).
● Base and Limit Registers: Simple form of memory protection. Base register holds the
smallest valid physical address, limit register specifies the range size. Every memory
access must be within this range.
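Base/limit relocation and protection can be sketched as follows (hypothetical function; a real MMU performs this check in hardware on every memory access):

```python
def translate(logical: int, base: int, limit: int) -> int:
    """Relocate a logical address using base/limit registers.

    Any access outside [0, limit) traps to the OS (modeled here as an
    exception) instead of touching another process's memory.
    """
    if not 0 <= logical < limit:
        raise PermissionError(f"trap: logical address {logical} out of range")
    return base + logical
```

For a process loaded at base 3000 with limit 500, logical address 100 maps to physical address 3100, while logical address 600 traps.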
2. Swapping
● Concept: A process can be temporarily swapped out of main memory to a backing store
(disk) and then brought back in later.
● Purpose: Allows more processes to run than can fit in memory at once (medium-term
scheduling).
● Backing Store: Fast disk large enough to hold copies of all memory images for all
users.
● Roll Out, Roll In: Swapping used for priority-based scheduling; lower-priority process is
swapped out to allow a higher-priority process to be brought in.
● Challenges:
○ Context Switch Time: Swapping adds significant overhead to context switch
time.
○ I/O Time: The amount of time to swap is proportional to the amount of memory
swapped.
4. Segmentation
5. Paging
6. Structure of the Page Table
● Hierarchical Paging (Multilevel Paging): Breaks the page table into smaller,
hierarchical tables to save memory.
○ Two-Level Paging: An outer page table points to inner page tables, which then
point to frames. Common for 32-bit systems.
○ Problem: Still requires multiple memory accesses.
● Hashed Page Tables: Used for address spaces larger than 32 bits. A hash function
maps the virtual page number to an entry in a hash table.
● Inverted Page Table: Instead of one page table per process, there is one page table for
the entire system. Each entry stores (process-id, page-number) for the frame it holds.
○ Advantages: Reduces memory needed for page tables.
○ Disadvantages: Increases the time to search the page table (though often
combined with TLB).
Virtual Memory
1. Background
2. Demand Paging
● Concept: Pages are loaded into memory only when they are needed (demanded).
● Page Fault: An event that occurs when a program tries to access a page that is not
currently in memory.
○ Process:
1. Trap to OS.
2. Check page table to determine if the reference is valid but the page is not
in memory.
3. If invalid, terminate process.
4. If valid but not in memory, find a free frame.
5. Schedule a disk operation to bring the required page from backing store
into the free frame.
6. Update the page table.
7. Restart the instruction that caused the page fault.
● Lazy Swapper: A swapper that never swaps a page into memory unless that page is
actually needed.
● Performance: Measured by Effective Access Time: EAT = (1 - p) * ma + p *
page_fault_time, where p is the page-fault rate and ma is the memory-access time.
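A quick sketch of the EAT formula with illustrative numbers (200 ns memory access, 8 ms fault service time — a fault rate of just 1 in 1,000 already dominates the average):

```python
def effective_access_time(p: float, ma_ns: float, fault_ns: float) -> float:
    """EAT = (1 - p) * ma + p * page_fault_time, all times in nanoseconds."""
    return (1 - p) * ma_ns + p * fault_ns

# 1 fault per 1,000 accesses: EAT = 0.999 * 200 + 0.001 * 8,000,000 = 8199.8 ns
eat = effective_access_time(0.001, 200, 8_000_000)
```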
3. Page Replacement
● Problem: What if there are no free frames when a page fault occurs? An existing page
must be swapped out (victim page) to make space.
● Goal: Minimize the page-fault rate.
● Modify (Dirty) Bit: A bit associated with each page-table entry that indicates if the page
has been modified since it was loaded. If not modified, it doesn't need to be written back
to disk, saving I/O time.
● Page Replacement Algorithms:
○ FIFO (First-In, First-Out): Replaces the page that has been in memory the
longest. Simple but can suffer from Belady's Anomaly (more frames lead to more
page faults).
○ Optimal Page Replacement (OPT/MIN): Replaces the page that will not be
used for the longest period of time. Impossible to implement in practice as it
requires future knowledge, but serves as a benchmark.
○ LRU (Least-Recently-Used): Replaces the page that has not been used for the
longest period of time. Assumes past behavior predicts future.
■ Implementation: Requires hardware support (e.g., counters or stack of
page numbers).
■ Problem: Can be expensive to implement precisely.
○ LRU Approximation Algorithms:
■ Additional-Reference-Bits Algorithm: Uses a shift register for each
page. Periodically shifts the reference bit into the high-order bit. The page
with the smallest value is the LRU page.
■ Second-Chance (Clock) Algorithm: Uses a reference bit. If a page has
its reference bit set, give it a "second chance" (clear the bit and move on).
If not set, replace it.
○ Counting-Based Algorithms: Keep a counter of the number of references made
to each page.
■ Least Frequently Used (LFU): Replaces the page with the smallest
count.
■ Most Frequently Used (MFU): Replaces the page with the largest count
(less common; based on the argument that the page with the smallest
count was probably just brought in and has yet to be used).
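FIFO replacement, and the Belady's Anomaly mentioned above, can be sketched on the classic reference string (function name is illustrative):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                       # hit: nothing to do
        faults += 1
        if len(frames) == nframes:
            frames.discard(order.popleft())  # evict the oldest resident page
        frames.add(page)
        order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
three = fifo_faults(refs, 3)   # 9 faults
four  = fifo_faults(refs, 4)   # 10 faults: more frames, MORE faults
```

This is Belady's Anomaly in action: going from 3 to 4 frames raises the fault count from 9 to 10. Stack algorithms such as LRU and OPT cannot exhibit this behavior.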
4. Allocation of Frames
5. Thrashing
● Definition: A phenomenon where a system spends more time paging (swapping pages
in and out) than executing application logic. Occurs when processes do not have enough
frames to hold their active sets of pages.
● Cause: High page-fault rate, leading to very low CPU utilization. The OS might respond
by trying to increase the degree of multiprogramming, which exacerbates the problem.
● Working Set Model:
○ Definition: The set of pages actively being used by a process in a given time
window.
○ Concept: A process performs optimally when its entire working set is in memory.
○ Thrashing Avoidance: The OS should try to allocate enough frames for the
working set of each active process. If not enough frames are available for all
working sets, some processes must be suspended (swapped out) to free up
frames.
● Page-Fault Frequency (PFF) Strategy: Directly monitors the page-fault rate. If PFF is
too high, allocate more frames. If PFF is too low, perhaps deallocate frames.
6. Memory-Mapped Files
Mass-Storage Structure
2. Disk Structure
● Logical Blocks: Disks are typically viewed by the OS as a large 1-dimensional array of
logical blocks, which are the smallest units of transfer.
● Mapping: The 1-D logical block is mapped to (cylinder, track, sector) by the disk
controller.
3. Disk Attachment
● Host-Attached Storage: Connected directly to a host computer (e.g., via SATA, SAS,
Fibre Channel).
● Network-Attached Storage (NAS): Storage devices connected to a network that
provides file-level data access to heterogeneous clients (e.g., via NFS, CIFS).
● Storage Area Network (SAN): A dedicated high-speed network that connects storage
devices to servers. Provides block-level access to storage, making it appear as local
disks to the servers.
4. Disk Scheduling
● Purpose: To manage the queue of disk I/O requests to minimize disk access time.
● Access Time: Consists of:
○ Seek Time: Time to move the disk arm to the desired cylinder.
○ Rotational Latency: Time for the desired sector to rotate under the read/write
head.
○ Transfer Time: Time to transfer data.
● Algorithms:
○ FCFS (First-Come, First-Served): Simple but inefficient.
○ SSTF (Shortest-Seek-Time-First): Services the request closest to the current
head position. Can lead to starvation.
○ SCAN (Elevator Algorithm): The disk arm starts at one end of the disk and
moves toward the other end, servicing requests as it goes. When it reaches the
other end, it reverses direction.
○ C-SCAN (Circular SCAN): Similar to SCAN, but when the arm reaches the end,
it immediately returns to the beginning of the disk without servicing requests on
the return trip. Provides more uniform wait times.
○ LOOK and C-LOOK: Variants of SCAN and C-SCAN where the arm only goes
as far as the furthest request in the current direction, then reverses. More
efficient than full SCAN/C-SCAN.
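FCFS and SSTF head movement can be compared on the classic request queue (cylinders 98, 183, 37, 122, 14, 124, 65, 67 with the head starting at 53; an illustrative sketch):

```python
def fcfs_movement(head, requests):
    """Total cylinders traversed servicing requests in arrival order."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf_movement(head, requests):
    """Shortest-Seek-Time-First: always service the closest pending request."""
    pending, total = list(requests), 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
fcfs_total = fcfs_movement(53, queue)   # 640 cylinders
sstf_total = sstf_movement(53, queue)   # 236 cylinders
```

SSTF cuts total head movement from 640 to 236 cylinders here, but a steady stream of nearby requests could starve the far-away ones.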
5. Disk Management
● Partitioning: Dividing a disk into one or more logical partitions (volumes). Each partition
can be formatted with a different file system.
● Formatting:
○ Low-Level Formatting (Physical Formatting): Divides the disk into sectors and
creates basic data structures (e.g., error-correcting codes, headers). Performed
by the manufacturer.
○ Logical Formatting (File System Creation): The OS creates the file system
data structures (e.g., file allocation tables, inodes, free-space management
structures).
● Boot Block: A special block at the beginning of a disk (or partition) that contains the
bootstrap program (boot loader) to start the operating system.
● Bad Blocks: Sectors that are permanently damaged. Modern disks handle bad blocks
automatically using sector sparing/forwarding (remapping bad sectors to spare good
ones).
6. Swap-Space Management
● Swap Space: Disk space used as an extension of main memory for virtual memory
operations (e.g., demand paging).
● Location: Can be a dedicated raw partition or a normal file within the file system.
● Optimizations:
○ Dedicated Swap Partition: Faster, as no file system overhead.
○ Multiple Swap Spaces: Spreading swap space across multiple disks can
improve performance.
7. RAID Structure
● Concept: Uses multiple physical disk drives to create a single logical unit for data
redundancy and/or performance improvement.
● Levels:
○ RAID 0 (Striping): Data is broken into blocks and written across multiple disks.
Improves performance (parallel I/O) but provides no redundancy.
○ RAID 1 (Mirroring): Data is duplicated on two or more disks. Provides high
reliability but is expensive (50% storage overhead).
○ RAID 4: Block-level striping with a dedicated parity disk. Good for reads, but
writes are slow due to parity disk bottleneck.
○ RAID 5: Block-level striping with distributed parity. Parity blocks are spread
across all disks, avoiding the bottleneck of RAID 4. Good balance of performance
and redundancy. Most common RAID level.
○ RAID 6: Block-level striping with two independent distributed parity blocks. Can
tolerate two simultaneous disk failures. Higher overhead than RAID 5.
○ RAID 1+0 (RAID 10): A striped array whose segments are mirrored. Combines
performance of striping with redundancy of mirroring. Expensive but high
performance and fault tolerance.
● Hardware vs. Software RAID: Hardware RAID is typically faster and handled by a
dedicated controller; software RAID uses the host CPU and OS.
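The parity idea behind RAID 4/5 can be sketched with byte-wise XOR on toy blocks (illustrative only; real arrays compute parity per stripe inside the controller or driver):

```python
def parity(blocks):
    """Parity block = byte-wise XOR of all blocks in the stripe."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

stripe = [b"disk0", b"disk1", b"disk2"]   # toy data blocks, one per disk
p = parity(stripe)                        # stored on the parity disk

# Disk 1 fails: XOR the surviving blocks with the parity to rebuild it,
# since d0 ^ d2 ^ (d0 ^ d1 ^ d2) = d1.
rebuilt = parity([stripe[0], stripe[2], p])
```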
8. Stable-Storage Implementation
● Goal: To ensure that data written to storage will survive any hardware or software
failure. Used for critical data like transaction logs.
● Mechanism: Typically achieved through a combination of techniques:
○ Duplication/Replication: Write data to multiple nonvolatile storage devices.
○ Careful Writes: Ensure that a write operation is completed to both copies before
signaling success.
○ Failure Recovery: Mechanisms to detect inconsistencies and restore
consistency after a failure.
File-System Interface
2. Access Methods
● Sequential Access: Read/write proceeds in order. Most common (e.g., text editors,
compilers).
● Direct (Relative) Access: Records are fixed-length, allowing programs to read/write
records randomly by record number. Useful for databases.
● Indexed Access: Build an index for the file. The index contains pointers to the various
blocks. Supports direct, sequential, and random access.
3. Directory and Disk Structure
● Directory: A collection of nodes containing information about all files on the system.
● Directory Operations: Search for a file, create/delete file, list directory, rename file,
traverse file system.
● Logical Directory Structures:
○ Single-Level Directory: All files are in one directory. Simple but problems with
naming and organization.
○ Two-Level Directory: Each user has their own directory. Solves naming
conflicts but no sharing.
○ Tree-Structured Directories: Most common. Users have their own directories,
and directories can contain subdirectories and files.
■ Current Directory (Working Directory): The directory where a user is
currently operating.
■ Absolute Pathname: Full path from the root directory.
■ Relative Pathname: Path relative to the current directory.
○ Acyclic-Graph Directories: Allows files and subdirectories to be shared. Avoids
cycles.
○ General Graph Directory: Allows cycles. Requires garbage collection for
deleted files.
● Disk Structure:
○ Partition (Volume): A logical division of a disk. Each volume contains a file
system.
○ File System Mounting: Attaching a file system to a designated mount point
(directory) in another file system's tree.
4. File-System Mounting
● Mount Point: The directory where the root of a mounted file system is attached.
● Process: The OS verifies the file system, reads its super-block (containing metadata),
and integrates it into the overall directory tree.
● Unmounting: Detaches a mounted file system, ensuring all dirty data is flushed to disk.
5. File Sharing
● Multiple Users:
○ User IDs (UID): Identify users.
○ Group IDs (GID): Identify groups of users.
● Sharing Types:
○ On the Same System: Through links (hard links and symbolic/soft links).
■ Hard Link: A directory entry that refers to the same underlying inode as
another file. Both links are equally valid paths to the file.
■ Symbolic Link (Soft Link): A special file that contains the path to
another file. If the original file is deleted, the symbolic link becomes
broken.
○ Remote File Systems:
■ NFS (Network File System): Allows clients to access files over a
network as if they were local.
■ CIFS (Common Internet File System): Used primarily by Windows for
network file sharing.
■ Distributed File Systems (DFS): Transparently manage files across
multiple machines.
6. Protection
● Goal: To control who can access what files and in what ways.
● Access Control List (ACL): Lists specific users and their permitted access rights for a
file/directory.
● Mode of Access: Read, Write, Execute (rwx).
● Access Control by User/Group/Other:
○ UNIX Permissions: owner, group, others, each with read, write, and execute
bits. Represented as a 9-bit string or an octal number (e.g., rwxr-xr-x, octal
755: read/write/execute for owner, read/execute for group and others; a
leading d in listings such as drwxr-xr-x marks a directory).
● Password: Can be used to protect individual files.
● Encryption: Encrypting file contents provides strong security against unauthorized
access.
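The 9-bit permission string maps directly to an octal mode; a small sketch (hypothetical helper, not a standard library function):

```python
def mode_to_octal(perms: str) -> int:
    """Convert a 9-character rwx string (e.g. 'rwxr-xr-x') to its octal mode."""
    assert len(perms) == 9, "expected exactly rwxrwxrwx-style bits"
    mode = 0
    for ch in perms:
        mode = (mode << 1) | (ch != "-")   # each non-dash is a set bit
    return mode

mode = mode_to_octal("rwxr-xr-x")   # 0o755
```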
The Linux System
1. History
● Origin: Started by Linus Torvalds in 1991 as a hobby project, inspired by MINIX (a small
UNIX-like OS).
● Kernel: The core of the Linux operating system.
● GNU Project: Linux combined with GNU utilities (shell, compilers, libraries) formed the
complete GNU/Linux operating system.
● Key Features: Open source, Unix-like, highly customizable, runs on a vast array of
hardware.
2. Design Principles
3. Kernel Modules
● Concept: Pieces of code that can be loaded into and unloaded from the kernel while it is
running, without recompiling or rebooting the entire kernel.
● Examples: Device drivers, file system modules, network protocols.
● Benefits: Modularity, reduced kernel footprint, easier development and debugging,
dynamic system configuration.
● Commands: lsmod (list loaded modules), insmod (insert module), rmmod (remove
module), modprobe (intelligent module loader).
4. Process Management
● Processes and Threads: Linux uses a concept of "tasks" that are unified for both
processes and threads. A thread is essentially a process that shares its address space
with another process.
● fork() and exec(): Standard Unix-like process creation (fork creates a copy, exec
loads a new program).
● Process States: Running, Interruptible Sleep, Uninterruptible Sleep, Stopped, Zombie.
● PID (Process ID): Unique identifier for each process.
5. Scheduling
6. Memory Management
7. File Systems
● Virtual File System (VFS): An abstraction layer that allows Linux to support various
underlying concrete file systems (ext2, ext3, ext4, XFS, Btrfs, NFS, FAT, NTFS, etc.).
● Inode: A data structure that stores metadata about a file (permissions, owner, size,
timestamps, pointers to data blocks).
● Directory Structure: Hierarchical, single-rooted tree.
● Mounting: Attaching file systems to specific directories.
8. Input and Output
● Device Files: All devices are represented as files in the /dev directory (character
devices for sequential access, block devices for random access).
● Device Drivers: Kernel modules that manage specific hardware devices.
● I/O Scheduling: Disk I/O schedulers optimize disk access (e.g., deadline, CFQ -
Completely Fair Queuing, noop).
● Buffered I/O: Data is buffered in kernel memory to improve performance.
9. Interprocess Communication
● Standard UNIX IPC: Pipes (ordinary and named), Message Queues, Shared Memory,
Semaphores.
● Sockets: For network-based communication.
● Signals: For notifying processes of events.
Windows 7
1. History
2. Design Principles
3. System Components
4. Terminal Services and Fast User Switching
● Terminal Services (Remote Desktop Services): Allows multiple users to remotely log
in and run applications on a single Windows server.
● Fast User Switching: Allows multiple users to be logged on simultaneously, and users
can switch between their active sessions without logging off. This works by saving the
state of the current user's session and loading the state of the new user's session.
5. File System
● NTFS (New Technology File System): The primary file system for Windows NT-based
operating systems.
● Key Features:
○ Logging (Journaling): Records metadata changes in a log before applying them
to the file system, ensuring data integrity in case of crashes.
○ Security: Extensive use of Access Control Lists (ACLs) for granular permissions
on files and directories.
○ Compression: Supports on-the-fly file compression.
○ Encryption (EFS - Encrypting File System): File-level encryption.
○ Disk Quotas: Limit disk space usage per user.
○ Hard Links and Junction Points: Similar to Unix links.
○ Sparse Files: Files that contain large regions of zeros that do not consume disk
space.
○ Volume Shadow Copy Service (VSS): Allows creation of consistent snapshots
(shadow copies) of volumes, even while they are in use, for backup and
recovery.
6. Networking
7. Programmer Interface
● Win32 API: The primary application programming interface (API) for Windows
applications. Provides access to all system services (process creation, memory
management, I/O, GUI, networking).
● Native API: A lower-level API directly to the Executive, mostly used by OS components
and device drivers.
● Managed APIs: .NET Framework provides a managed environment and APIs for
application development.
● PowerShell: A command-line shell and scripting language for task automation.