Operating Systems
1. A system call is a programmatic way in which a computer program requests a service from the
kernel of the operating system it is executed on. In other words, it is the interface through which
programs interact with the operating system.
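As a minimal illustration (assuming a POSIX system such as Linux), the short program below uses the write() wrapper, which traps into the kernel's write system call:

    #include <unistd.h>   /* POSIX system call wrappers */

    int main(void) {
        /* write() requests a kernel service: fd 1 is standard output */
        write(1, "Hello from a system call\n", 25);
        return 0;
    }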
3. Layered Structure: In this structure, the OS is broken into a number of layers (levels), which
keeps the design modular and gives much more control over the system. The bottom layer (layer
0) is the hardware, and the topmost layer (layer N) is the user interface.
Unit - 2
8. Process Relationship: In an operating system, processes can have relationships with each
other, mainly through parent-child structures. Here’s a simple breakdown:
c. A parent can have multiple child processes, but each child has only one parent (see the fork()
sketch below).
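A minimal sketch of the parent-child relationship on a POSIX system: fork() creates a child, and the child can name exactly one parent via getppid().

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>      /* fork(), getpid(), getppid() */

    int main(void) {
        pid_t pid = fork();             /* create one child process */
        if (pid == 0) {
            /* child: it has exactly one parent */
            printf("child %d, parent %d\n", getpid(), getppid());
        } else if (pid > 0) {
            /* parent: it could fork() again to create more children */
            printf("parent %d created child %d\n", getpid(), pid);
        }
        return 0;
    }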
9. Process States: A process moves through several states during its lifetime.
a. New State: The process is about to be created but does not yet exist. It is still a program in
secondary memory that will be picked up by the OS to create the process.
Ready State: New -> Ready to run. After the creation of a process, the process enters the ready
state i.e. the process is loaded into the main memory. The process here is ready to run and is
waiting to get the CPU time for its execution. Processes that are ready for execution by the CPU
are maintained in a queue called a ready queue for ready processes.
Run State: The process is chosen from the ready queue by the OS for execution and the
instructions within the process are executed by any one of the available processors.
Blocked or Wait State: Whenever the process requests I/O, needs input from the user, or needs
access to a critical region (the lock for which is already acquired), it enters the blocked or wait
state. The process continues to wait in main memory and does not require the CPU. Once the I/O
operation is completed, the process goes back to the ready state.
Terminated or Completed State: The process is killed and its PCB is deleted. The resources
allocated to the process are released or deallocated. The OS logically contains a PCB for every
current process in the system.
10. Structure of the Process Control Block: A Process Control Block (PCB) is a data structure
used by the operating system to manage information about a process. The process control block
keeps track of many important pieces of information needed to manage processes efficiently.
a. Pointer: A stack pointer that must be saved when the process is switched from one state to
another, so that the process can resume from its current position.
b. Process state: Stores the current state of the process (new, ready, running, waiting, or
terminated).
c. Process number: Every process is assigned a unique ID known as the process ID, or PID,
which is stored in this field.
d. Program counter: Program Counter stores the counter, which contains the address of the next
instruction that is to be executed for the process.
e. Registers: When a process is running and its time slice expires, the current values of the
process-specific registers are stored in this field of the PCB and the process is swapped out.
When the process is scheduled to run again, the register values are read from the PCB and
written back to the CPU registers. This is the main purpose of the register field in the PCB.
f. Memory limits: This field contains information about the memory-management structures used
by the operating system for this process, such as page tables or segment tables.
g. List of Open files: This information includes the list of files opened for a process.
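A hypothetical C sketch of the fields described above (a teaching simplification; a real kernel structure such as Linux's task_struct is far larger):

    /* simplified Process Control Block: field names are illustrative */
    struct pcb {
        struct pcb *next;        /* pointer linking PCBs in scheduling queues */
        int state;               /* new, ready, running, waiting, terminated */
        int pid;                 /* unique process identifier */
        unsigned long pc;        /* saved program counter */
        unsigned long regs[16];  /* saved general-purpose registers */
        unsigned long mem_base;  /* memory-management info: base/limit pair */
        unsigned long mem_limit; /* (or a pointer to a page table instead) */
        int open_files[16];      /* descriptors of files opened by the process */
    };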
11. Context switching is a mechanism in an operating system (OS) that allows multiple processes
to share a single central processing unit (CPU): the state of the running process is saved
(typically in its PCB) and the state of the next process is restored.
12. In an operating system (OS), a thread is a sequence of instructions that a computer can
manage and execute independently. It is the smallest unit of execution that the OS schedules.
A thread can run any part of a process's code, including parts that are already being executed by
another thread.
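A minimal sketch using POSIX threads (compile with -lpthread on Linux): two threads of one process independently execute the same function.

    #include <stdio.h>
    #include <pthread.h>

    /* each thread executes this function independently */
    void *worker(void *arg) {
        printf("thread %d running\n", *(int *)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int id1 = 1, id2 = 2;
        pthread_create(&t1, NULL, worker, &id1);   /* start two threads */
        pthread_create(&t2, NULL, worker, &id2);
        pthread_join(t1, NULL);                    /* wait for both to finish */
        pthread_join(t2, NULL);
        return 0;
    }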
13. Thread States in Operating Systems: When a thread moves through the system, it is always
in one of the five states:
(1) Ready
(2) Running
(3) Waiting
(4) Delayed
(5) Blocked
14. Types of Thread in Operating System: Threads are of two types, described below.
a. User Level Thread: A user-level thread is a thread that is not created using system calls. The
kernel plays no part in the management of user-level threads, so they can be implemented easily,
entirely in user space.
b. Kernel Level Thread: A kernel-level thread is a thread that the operating system recognizes
and manages directly. The kernel keeps its own thread table to track every thread in the system,
and the operating system kernel handles their management.
1. Long Term or Job Scheduler: It brings the new process to the ‘Ready State’. It controls the
Degree of Multi-programming, i.e., the number of processes present in a ready state at any point
in time. It is important that the long-term scheduler make a careful selection of both I/O and
CPU-bound processes.
2. Short-Term or CPU Scheduler: It is responsible for selecting one process from the ready state
and scheduling it onto the running state. Note: the short-term scheduler only selects the process
to schedule; it does not load the process into the running state (that is the dispatcher's job).
3. Medium-Term Scheduler: It is responsible for suspending and resuming the process. It mainly
does swapping (moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix or because a change in memory requirements has
overcommitted available memory, requiring memory to be freed up.
17. Throughput: A measure of the work done by the CPU is the number of processes being
executed and completed per unit of time. This is called throughput. The throughput may vary
depending on the length or duration of the processes.
18. CPU utilization: The main objective of any CPU scheduling algorithm is to keep the CPU as
busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real
system it varies from 40 to 90 percent depending on the load on the system.
20. Turnaround Time: For a particular process, an important criterion is how long it takes to
execute that process. The time elapsed from the time of submission of a process to the time of
completion is known as the turnaround time.
21. Waiting Time: A scheduling algorithm does not affect the time required to complete the
process once it starts execution. It only affects the waiting time of a process i.e. time spent by a
process waiting in the ready queue.
22. Response Time: In an interactive system, turnaround time is not the best criterion. A process
may produce some output fairly early and continue computing new results while previous results
are being output to the user. Thus another criterion is the time taken from submission of a
request until the first response is produced.
25. First Come First Served CPU Scheduling: First Come First Served (FCFS) is the simplest
scheduling algorithm. It is non-preemptive, i.e. a process cannot be interrupted once it starts
executing. FCFS is implemented with the help of a FIFO queue: processes are put into the ready
queue in the order of their arrival time.
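A minimal sketch of the FCFS bookkeeping, assuming three hypothetical processes that all arrive at time 0 (the burst times are made up):

    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};        /* hypothetical CPU burst times */
        int n = 3, wait = 0, turnaround = 0, clock = 0;

        for (int i = 0; i < n; i++) {    /* service in order of arrival */
            wait += clock;               /* time this process sat in the queue */
            clock += burst[i];           /* it then runs to completion */
            turnaround += clock;         /* completion time (arrival was 0) */
        }
        printf("avg waiting = %.2f, avg turnaround = %.2f\n",
               (double)wait / n, (double)turnaround / n);
        return 0;
    }

With these numbers the averages are 17 and 27 time units; note how the long first burst inflates everyone's waiting time (the convoy effect).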
26. Shortest Job First CPU Scheduling: The Shortest Job First (SJF) scheduling algorithm is
based on the burst time of the process. Processes are put into the ready queue based on their
burst times, and the process with the least burst time is processed first.
27. Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a fixed
time slot. It is the preemptive version of the First Come First Served CPU scheduling algorithm,
and it is aimed chiefly at time-sharing systems.
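A small sketch of the round-robin cycle with made-up burst times and a quantum of 4 time units:

    #include <stdio.h>

    int main(void) {
        int remaining[] = {10, 5, 8};    /* hypothetical remaining burst times */
        int n = 3, quantum = 4, done = 0, clock = 0;

        while (done < n) {               /* keep cycling until all finish */
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                clock += slice;          /* run for at most one quantum */
                remaining[i] -= slice;   /* then preempt and move on */
                if (remaining[i] == 0) {
                    done++;
                    printf("P%d finishes at time %d\n", i, clock);
                }
            }
        }
        return 0;
    }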
29. Scheduling in Real Time Systems: Real-time systems are systems that carry real-time tasks.
These tasks need to be performed immediately with a certain degree of urgency. In particular,
these tasks are related to control of certain events (or) reacting to them. Real-time tasks can be
classified as hard real-time tasks and soft real-time tasks.
30. Rate-monotonic scheduling: Rate-monotonic scheduling is a priority algorithm that belongs
to the static-priority scheduling category of real-time operating systems. It is preemptive in
nature. The priority is decided according to the cycle time (period) of the processes involved: the
process with the shortest period gets the highest priority.
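The classic schedulability test for rate-monotonic scheduling (the Liu and Layland bound) says n periodic tasks are schedulable if total utilization sum(Ci/Ti) does not exceed n(2^(1/n) - 1). A small check with hypothetical task parameters (compile with -lm):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double c[] = {1, 2, 3};          /* hypothetical computation times */
        double t[] = {4, 8, 12};         /* corresponding periods */
        int n = 3;
        double u = 0.0;

        for (int i = 0; i < n; i++)
            u += c[i] / t[i];            /* total CPU utilization */

        double bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* ~0.78 for n = 3 */
        printf("U = %.3f, bound = %.3f -> %s\n", u, bound,
               u <= bound ? "schedulable" : "test inconclusive");
        return 0;
    }

The test is sufficient but not necessary: a task set above the bound may still be schedulable, which is why failing it prints "test inconclusive".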
31. Earliest Deadline First (EDF) CPU scheduling algorithm: Earliest Deadline First (EDF) is an
optimal dynamic priority scheduling algorithm used in real-time systems.
It can be used for both static and dynamic real-time scheduling.
EDF assigns priorities to tasks according to their absolute deadlines: the task whose deadline is
closest gets the highest priority, and priorities are assigned and changed dynamically. EDF is
very efficient compared to other scheduling algorithms in real-time systems.
Unit - 3
32. A critical section in an operating system (OS) is a code segment that accesses shared
resources and must be executed by only one process or thread at a time.
34. Semaphores are a tool used in computer science to help manage how different processes (or
programs) share resources, like memory or data, without causing conflicts. A semaphore is a
special kind of synchronization data that can be used only through specific synchronization
primitives. Semaphores are used to implement critical sections, which are regions of code that
must be executed by only one process at a time.
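A minimal sketch using POSIX semaphores (assuming Linux; compile with -lpthread): a binary semaphore protects a shared counter so that both threads' increments survive.

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    sem_t mutex;                /* binary semaphore guarding the critical section */
    int shared = 0;

    void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);   /* P operation: enter the critical section */
            shared++;           /* critical section */
            sem_post(&mutex);   /* V operation: leave the critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);          /* initial value 1: one thread may enter */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared); /* 200000 every run, thanks to the semaphore */
        sem_destroy(&mutex);
        return 0;
    }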
35. What is Message Passing?
In the message-passing process model, processes communicate with each other by exchanging
messages. A communication link between the processes is required for this purpose, and it must
provide at least two operations: send(message) and receive(message). Message sizes may be
fixed or variable.
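One simple realization on POSIX systems is a pipe between a parent and a child process: the parent performs send(message) with write() and the child performs receive(message) with read().

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        pipe(fd);                        /* fd[0] = receive end, fd[1] = send end */

        if (fork() == 0) {               /* child: receive(message) */
            char buf[32];
            ssize_t len = read(fd[0], buf, sizeof buf - 1);
            if (len > 0) {
                buf[len] = '\0';
                printf("child received: %s\n", buf);
            }
        } else {                         /* parent: send(message) */
            const char *msg = "hello";
            write(fd[1], msg, strlen(msg));
        }
        return 0;
    }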
36. Mutual exclusion in an operating system (OS) is a synchronization technique that prevents
multiple threads from accessing the same shared resource at the same time.
37. Strict alternation is a process synchronization approach in an operating system (OS) that
allows two processes to execute their critical sections in turn.
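A teaching sketch of strict alternation, with two threads standing in for the two processes. The busy-wait loop is the approach's main drawback, and real code would need atomic operations or memory barriers rather than plain volatile:

    #include <stdio.h>
    #include <pthread.h>

    volatile int turn = 0;       /* 0: P0 may enter; 1: P1 may enter */
    int shared = 0;

    void *proc(void *arg) {
        int id = *(int *)arg;
        for (int i = 0; i < 5; i++) {
            while (turn != id)
                ;                /* busy-wait until it is our turn */
            shared++;            /* critical section */
            printf("P%d ran, shared = %d\n", id, shared);
            turn = 1 - id;       /* hand the turn to the other process */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, proc, &id0);
        pthread_create(&t1, NULL, proc, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        return 0;
    }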
38. A race condition in an operating system (OS) is a software bug that occurs when multiple
processes or threads access or modify shared data at the same time, resulting in unexpected or
incorrect behavior.
39. Readers-Writers Problem: A classic synchronization problem in which shared data is accessed
by two kinds of processes:
Readers: Multiple readers can access the shared data simultaneously without causing any issues
because they are only reading and not modifying the data.
Writers: Only one writer can access the shared data at a time to ensure data integrity; since
writers modify the data, concurrent modifications could lead to data corruption or inconsistencies.
40. Peterson's Solution: This is a software mechanism implemented in user mode. It is a busy-
waiting solution that can be implemented for only two processes. It uses two variables: a turn
variable and an interested variable.
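The textbook form of Peterson's solution, sketched with two POSIX threads (note that on modern out-of-order hardware this plain-volatile version is not reliable; production code needs atomic operations or memory fences):

    #include <stdio.h>
    #include <pthread.h>

    volatile int turn;               /* whose turn it is to yield */
    volatile int interested[2];      /* interested[i]: process i wants in */
    int shared = 0;

    void enter_region(int self) {
        int other = 1 - self;
        interested[self] = 1;        /* announce interest */
        turn = self;                 /* politely offer to wait */
        while (interested[other] && turn == self)
            ;                        /* busy-wait while the other is inside */
    }

    void leave_region(int self) {
        interested[self] = 0;        /* done with the critical section */
    }

    void *proc(void *arg) {
        int id = *(int *)arg;
        for (int i = 0; i < 100000; i++) {
            enter_region(id);
            shared++;                /* critical section */
            leave_region(id);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, proc, &id0);
        pthread_create(&t1, NULL, proc, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("shared = %d\n", shared);
        return 0;
    }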
41. Dining Philosophers Problem: For example, consider P0, P1, P2, P3, and P4 as the
philosophers (processes) and C0, C1, C2, C3, and C4 as the 5 chopsticks (resources) between
them. Now if P0 wants to eat, both chopsticks C0 and C1 must be free, which leaves P1 and P4
without a resource, so those processes cannot execute. This situation of limited resources
(C0, C1, ...) contended for by multiple processes (P0, P1, ...) is known as the Dining
Philosophers Problem.
Unit - 4
42. A deadlock is a situation where each process waits for a resource that is assigned to some
other process. In this situation, none of the processes gets executed, since the resource each
needs is held by another process that is itself waiting for yet another resource to be released.
43. To prevent deadlock, an OS can ensure that at least one of the four necessary conditions
(Mutual Exclusion, Hold and Wait, No Preemption, Circular Wait) is never allowed to hold. Here’s
how each condition can be tackled.
44. In deadlock avoidance, the request for any resource will be granted if the resulting state of
the system doesn't cause deadlock in the system. The state of the system will continuously be
checked for safe and unsafe states.
In order to avoid deadlocks, each process must tell the OS the maximum number of resources it
may request to complete its execution.
a. Mutual Exclusion: Mutual Exclusion condition requires that at least one resource be held in a
non-shareable mode, which means that only one process can use the resource at any given time.
Both Resource 1 and Resource 2 are non-shareable in our scenario, and only one process can
have exclusive access to each resource at any given time. As an example:
Process 1 obtains Resource 1.
Process 2 acquires Resource 2.
b. Hold and Wait: The hold and wait condition specifies that a process must be holding at least
one resource while waiting for other processes to release resources that are currently held by
other processes. In our example,
Process 1 has Resource 1 and is awaiting Resource 2.
Process 2 currently has Resource 2 and is awaiting Resource 1.
Both processes hold one resource while waiting for the other, satisfying the hold and wait
condition.
c. No Preemption: Preemption is the act of taking a resource from a process before it has finished
its task. According to the no-preemption condition, resources cannot be taken forcibly from a
process; a process can only release resources voluntarily after completing its task.
d. Circular Wait: Circular wait is a condition in which a set of processes are waiting for resources
in such a way that there is a circular chain, with each process in the chain holding a resource that
the next process needs. This is one of the necessary conditions for a deadlock to occur in a
system.
46. The Banker's algorithm is used to avoid deadlock and allocate resources safely to each
process in the computer system. Before granting a request, it checks whether the resulting state
is safe (the 'S-state' test), examining every process's possible remaining demand, and allows the
allocation only if a safe sequence of completions exists. This helps the operating system share
resources among all processes successfully. The algorithm is so named because it mirrors how a
banker decides whether a loan can be sanctioned without leaving the bank unable to satisfy its
other customers.
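A compact sketch of the safety check at the heart of the Banker's algorithm, using a small hypothetical system (3 processes, 2 resource types; all the matrices are made up):

    #include <stdio.h>

    #define P 3   /* processes */
    #define R 2   /* resource types */

    int main(void) {
        int alloc[P][R] = {{1, 0}, {0, 1}, {1, 1}};  /* currently allocated */
        int max[P][R]   = {{2, 1}, {1, 2}, {1, 1}};  /* maximum demand */
        int avail[R]    = {1, 1};                    /* free instances */
        int finished[P] = {0}, seq[P], count = 0;

        while (count < P) {
            int progress = 0;
            for (int i = 0; i < P; i++) {
                if (finished[i]) continue;
                int ok = 1;                 /* can i's remaining need be met now? */
                for (int j = 0; j < R; j++)
                    if (max[i][j] - alloc[i][j] > avail[j]) ok = 0;
                if (ok) {                   /* pretend i runs and releases all */
                    for (int j = 0; j < R; j++)
                        avail[j] += alloc[i][j];
                    finished[i] = 1;
                    seq[count++] = i;
                    progress = 1;
                }
            }
            if (!progress) { printf("unsafe state\n"); return 1; }
        }
        printf("safe sequence:");
        for (int i = 0; i < P; i++) printf(" P%d", seq[i]);
        printf("\n");                       /* here: P0 P1 P2 */
        return 0;
    }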
47. Deadlock Detection And Recovery: Deadlock Detection and Recovery is the mechanism of
detecting and resolving deadlocks in an operating system. In operating systems, deadlock
recovery is important to keep everything running smoothly. A deadlock occurs when two or more
processes are blocked, waiting for each other to release the resources they need.
48. A deadlock can lead to a system-wide stall in which no process can make progress. Detection
methods help identify when this happens, and recovery techniques are used to resolve these
issues and restore system functionality. This ensures that computers and devices can continue
working without interruptions caused by deadlock situations.
Unit - 5
This is done by partitioning the memory into fixed-sized partitions and assigning every partition
to a single process. However, it limits the degree of multiprogramming to the number of fixed
partitions made in memory.
53. What is Fixed Partitioning?
Fixed Partitioning is a contiguous memory-management technique in which the main memory is
divided into fixed-sized partitions, which can be of equal or unequal size. Whenever we have to
allocate memory to a process, a free partition that is big enough to hold the process is found and
the memory is allocated to the process.
55. Internal fragmentation: Occurs when a process uses less space than the memory block
assigned to it. This happens when memory is allocated in fixed-sized blocks and the process is
smaller than its block; internal fragmentation wastes the leftover space within the block.
56. Paging: In Operating Systems, Paging is a storage mechanism used to retrieve processes
from the secondary storage into the main memory in the form of pages.
The main idea behind paging is to divide each process into pages, while the main memory is
divided into frames of the same size.
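Splitting a logical address into its page number and offset is simple arithmetic. A sketch assuming 4 KB pages and a made-up address:

    #include <stdio.h>

    int main(void) {
        unsigned int page_size = 4096;               /* assume 4 KB pages */
        unsigned int logical   = 0x1234;             /* hypothetical logical address */

        unsigned int page   = logical / page_size;   /* which page */
        unsigned int offset = logical % page_size;   /* position inside the page */

        /* since 4096 = 2^12, this equals logical >> 12 and logical & 0xFFF */
        printf("page %u, offset 0x%X\n", page, offset);  /* page 1, offset 0x234 */
        return 0;
    }

The page table then maps the page number to a frame number; the offset is carried over unchanged.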
57. External fragmentation: Occurs when a process is removed from the main memory, or when
there are repeated allocations and deallocations. This can happen when memory is allocated in
variable-sized blocks, and there are small, non-contiguous memory pieces that cannot be
assigned to any process. External fragmentation can lead to allocation failures even when there
is enough total memory.
Memory Management Unit (MMU): Also called a Memory Protection Unit (MPU) in simpler
designs, this is usually hardwired into the CPU/MCU to support paged memory functionality.
CR0 control register: In CPUs that use the x86 instruction set architecture, this register enables
memory paging.
Base and limit registers: These are examples of address maps that require hardware support.
b. Internal fragmentation: Some space within pages may go unused, leading to wasted memory
c. Page table overhead: Page tables consume extra memory, especially in systems with many
processes.
d. Swapping overheads: Moving pages to and from disk (swapping) is slow and impacts
performance.
e. Complicated management: Managing page tables and swapping adds complexity to the OS.
f. Limited page size options: Choosing page size is challenging; smaller pages increase
overhead, larger ones cause fragmentation.
Page Fault Handler: Manages loading pages from disk to RAM when needed.
Replacement Algorithms: Decide which pages to replace when RAM is full (e.g., LRU, FIFO).
64. Page Replacement Algorithms: Page replacement algorithms are techniques used in
operating systems to manage memory efficiently when physical memory is full. When a new
page needs to be loaded into physical memory and there is no free frame, these algorithms
determine which existing page to replace.
66. The Not Recently Used (NRU) page replacement algorithm categorizes pages based on
their recent usage and reference status. Pages are grouped into four classes based on two bits: a
reference bit and a modified bit. When a page needs to be replaced, the algorithm selects a page
from the lowest, non-empty class, prioritizing pages that are not recently used and not modified.
This approach aims to replace pages that are least likely to be used again soon.
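The four classes follow directly from the two bits, class = 2*R + M: class 0 is not referenced and not modified, class 1 not referenced but modified, class 2 referenced but not modified, class 3 referenced and modified. A tiny sketch (the helper name is made up):

    #include <stdio.h>

    /* NRU class from the reference bit r and modified bit m */
    int nru_class(int r, int m) {
        return 2 * r + m;        /* 0..3; lower classes are evicted first */
    }

    int main(void) {
        printf("R=0,M=0 -> class %d (best victim)\n",  nru_class(0, 0));
        printf("R=1,M=1 -> class %d (worst victim)\n", nru_class(1, 1));
        return 0;
    }

Class 1 (not referenced, yet modified) can occur because the periodic clock interrupt clears reference bits while modified bits are kept.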
67. The Second Chance page replacement algorithm is a modified version of the FIFO algorithm
that gives each page a "second chance" before replacement. It uses a circular queue and a
reference bit for each page. When a page needs to be replaced, the algorithm checks the
reference bit of each page in sequence. If the bit is 1, it is reset to 0, and the page is given another
chance; if the bit is 0, the page is replaced.
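A short sketch of the victim search, with hypothetical frame contents and reference bits:

    #include <stdio.h>

    #define FRAMES 4

    int ref_bit[FRAMES] = {1, 0, 1, 0};  /* hypothetical reference bits */
    int page[FRAMES]    = {7, 3, 5, 2};  /* pages currently in the frames */
    int hand = 0;                        /* position in the circular queue */

    /* advance around the circle: a set bit buys the page a second chance */
    int choose_victim(void) {
        for (;;) {
            if (ref_bit[hand] == 0)
                return hand;             /* bit 0: replace this page */
            ref_bit[hand] = 0;           /* bit 1: clear it and move on */
            hand = (hand + 1) % FRAMES;
        }
    }

    int main(void) {
        int victim = choose_victim();
        printf("replace page %d in frame %d\n", page[victim], victim);
        return 0;
    }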
Unit - 6
68. I/O Hardware in Operating System: I/O Hardware is a set of specialized hardware devices
that help the operating system access disk drives, printers, and other peripherals. These devices
are located inside the motherboard and connected to the processor using a bus. They often have
specialized controllers that allow them to quickly respond to requests from software running on
top of them or even respond directly to commands from an application program.
Hardware Access: Device drivers provide an interface for the OS to access and communicate with
hardware devices.
Compatibility: They work as translators, allowing the OS to use many hardware types without
needing unique code for each.
Error Handling: Drivers detect and report errors from devices to the OS, helping maintain smooth
operations.
Resource Efficiency: They manage system resources (like memory or I/O buffers) to improve data
speed and reduce delays.
71. The device controller knows how to communicate with the operating system as well as how
to communicate with I/O devices. So device controller is an interface between the computer
system (operating system) and I/O devices. The device controller communicates with the system
using the system bus.
Uniform Access: Provides a standard way for applications to interact with any device, regardless
of its type.
Device Independence: Abstracts away hardware details so that applications can use files,
networks, or devices without special handling.
Consistent Error Reporting: Ensures all devices report errors in a standard way, simplifying
troubleshooting.
Resource Management: Allocates resources (such as IDs or buffers) for all devices consistently.
Security and Access Control: Provides security measures to control who can use certain devices,
ensuring privacy and access management.
73. Direct Memory Access (DMA) is a computer system feature that allows data to be
transferred directly between a computer's main memory and attached devices without the central
processing unit (CPU) getting involved.
b. NTFS (New Technology File System): A modern file system used by Windows. It supports
features such as file and folder permissions, compression, and encryption.
c. ext (Extended File System): A file system commonly used on Linux and Unix-based operating
systems.
e. APFS (Apple File System): A new file system introduced by Apple for their Macs and iOS
devices.
75. File Access Methods in Operating System: File access methods in an operating system are
the techniques and processes used to read from and write to files stored on a computer’s storage
devices. There are several ways to access the information in a file, and some systems provide
only one access method for files.
There are three ways to access a file in a computer system:
Sequential Access
Direct Access
Indexed Sequential Method
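The first two methods map directly onto the standard C file API, as a small sketch (the file name is hypothetical):

    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("data.bin", "rb");   /* hypothetical binary file */
        if (!f) return 1;
        char byte;

        /* sequential access: read bytes in order from the current position */
        fread(&byte, 1, 1, f);
        fread(&byte, 1, 1, f);

        /* direct (random) access: jump straight to offset 100 and read there */
        fseek(f, 100L, SEEK_SET);
        fread(&byte, 1, 1, f);

        fclose(f);
        return 0;
    }

The indexed sequential method layers an index on top of a sequential file: the index is searched first, then the file is read directly at the indicated position.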
76. File System Hierarchy
The file system hierarchy is the organization of files and directories in a logical and hierarchical
structure. It provides a way to organize files and directories based on their purpose and location.
Here are the main components of the file system hierarchy −
a. Root Directory − The root directory is the top-level directory in the file system hierarchy.
b. Subdirectories − Subdirectories are directories that are located within other directories.
c. File Paths − File paths are the routes that are used to locate files within the file system
hierarchy.
d. File System Mounting − File system mounting is the process of making a file system available
for use.
77. File Allocation Methods: File allocation methods determine how files are stored and
organized on a storage device.
a. Contiguous Allocation − Contiguous allocation is a method of storing files in which each file is
allocated a contiguous block of storage space on the storage device. This method allows for quick
and efficient access to files, but it can lead to fragmentation if files are frequently added and
deleted.
b. Linked Allocation − Linked allocation is a method of storing files in which each file is divided
into blocks that are scattered throughout the storage device. Each block contains a pointer to the
next block in the file. This method can help prevent fragmentation, but it can also lead to slower
access times due to the need to follow the links between blocks.
c. Indexed Allocation − Indexed allocation is a method of storing files in which a separate index
is maintained that contains a list of all the blocks that make up each file. This method allows for
quick access to files and helps prevent fragmentation, but it requires additional overhead to
maintain the index.
78. Free Space Management in Operating System: Free space management is a critical aspect
of operating systems as it involves managing the available storage space on the hard disk or
other secondary storage devices. The operating system uses various techniques to manage free
space and optimize the use of storage devices. Here are some of the commonly used free space
management techniques:
a. Bitmap or Bit Vector: A bit vector is the most frequently used method to implement the free-
space list; a bit vector is also known as a bit map. It is a series or collection of bits in which each
bit represents a disk block. Each bit is either 1 or 0: if a block's bit is 1, the block is free, and if it
is 0, the block is not free. (A first-free-block search over a bitmap is sketched after this list.)
b. Linked List: A linked list is another approach for free space management in an operating
system. In it, all the free blocks inside a disk are linked together in a linked list. These free blocks
on the disk are linked together by a pointer. These pointers of the free block contain the address
of the next free block and the last pointer of the list points to null which indicates the end of the
linked list.
c. Grouping: The grouping technique is also called the "modification of a linked list technique". In
this method, the first free block stores the addresses of n free blocks; the last of those n blocks
stores the addresses of the next n free blocks, and so on.
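A minimal sketch of the bitmap technique from item a above, using the 1 = free convention and one byte per block for readability (real implementations pack the bits into machine words):

    #include <stdio.h>

    int main(void) {
        /* 1 = free, 0 = allocated */
        unsigned char bitmap[] = {0, 0, 1, 0, 1, 1, 0, 1};
        int blocks = 8;

        for (int i = 0; i < blocks; i++) {   /* scan for the first free block */
            if (bitmap[i] == 1) {
                printf("first free block: %d\n", i);   /* prints 2 here */
                bitmap[i] = 0;               /* mark it allocated */
                break;
            }
        }
        return 0;
    }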
79. The directory implementation algorithms are classified according to the data structure they
use. There are mainly two algorithms in use these days.
a. Linear List: In this algorithm, all the files in a directory are maintained as a singly linked list.
When a new file is created, the entire list is checked to see whether the new file name matches
an existing file name. If it does not, the file can be created at the beginning or at the end of the
list. Searching for a unique name is therefore a big concern, because traversing the whole list
takes time.
b. Hash Table: In a hash table, each file in a directory is stored as a key-value pair. The key is
created by applying a hash function to the file's name, and it points to the location of the file's
entry in the directory, making the file easy to find and access.
80. Disk Management: Disk management is one of the critical operations carried out by the
operating system. It deals with organizing the data stored on the secondary storage devices
which includes the hard disk drives and the solid-state drives.
Disk Scheduling Algorithms: There are several disk scheduling algorithms. We will discuss each
one of them in detail.
a. FCFS Scheduling Algorithm: It is the simplest disk scheduling algorithm. It services the I/O
requests in the order in which they arrive (a seek-distance sketch follows this list).
b. SSTF (Shortest Seek Time First): In SSTF (Shortest Seek Time First), requests having the
shortest seek time are executed first. So, the seek time of every request is calculated in advance
in the queue and then they are scheduled according to their calculated seek time.
c. SCAN: In the SCAN algorithm the disk arm moves in a particular direction and services the
requests coming in its path and after reaching the end of the disk, it reverses its direction and
again services the request arriving in its path
d. C-SCAN: In the SCAN algorithm, the disk arm rescans the path it has already covered after
reversing its direction, so too many requests may be waiting at the other end while zero or few
requests are pending in the area just scanned. In C-SCAN, the arm instead jumps back to the
other end of the disk without servicing requests on the return trip, and then services requests
moving in the same direction as before.
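A small sketch of the FCFS head-movement arithmetic with a made-up request queue (the cylinder numbers are hypothetical):

    #include <stdio.h>

    int main(void) {
        int queue[] = {98, 183, 37, 122};   /* requests in arrival order */
        int n = 4, head = 53, total = 0;    /* head starts at cylinder 53 */

        for (int i = 0; i < n; i++) {       /* FCFS: service in arrival order */
            int move = queue[i] > head ? queue[i] - head : head - queue[i];
            total += move;                  /* seek distance for this request */
            head = queue[i];                /* arm is now at this cylinder */
        }
        printf("total head movement: %d cylinders\n", total);  /* 361 here */
        return 0;
    }

SSTF would instead pick, at each step, the pending request with the smallest move value, reducing the total at the risk of starving far-away requests.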
82. Disk formatting is the process of configuring a data-storage device such as a hard drive,
floppy disk, or flash drive when it is going to be used for the very first time. Disk formatting is
also usually required when a new operating system is going to be used.