
Operating Systems

1. A system call is the programmatic way in which a computer program requests a service from the
kernel of the operating system it runs on. System calls are the primary way for programs to interact
with the operating system.
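
A minimal sketch (not part of the original notes): the Python snippet below reaches the kernel through two thin wrappers in the os module, each of which corresponds to a real system call.

```python
import os

# os.write wraps the write() system call: the program asks the kernel
# to write bytes to file descriptor 1 (standard output).
os.write(1, b"hello from a system call\n")

# os.getpid wraps the getpid() system call, asking the kernel
# for the identifier of the current process.
print("PID:", os.getpid())
```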

2. What is a System Structure for an Operating System?


A system structure for an operating system is like the blueprint of how an OS is organized and
how its different parts interact with each other.

3. Layered Structure: In a layered structure, the OS is broken into a number of layers (levels),
which allows it to retain much more control over the system. The bottom layer (layer 0) is the
hardware, and the topmost layer (layer N) is the user interface.

4. What is Monolithic Architecture: In a monolithic architecture, the operating system kernel is


designed to provide all operating system services, including memory management, process
scheduling, device drivers, and file systems, in a single, large binary. This means that all code
runs in kernel space, with no separation between kernel and user-level processes.
5. What is a MicroKernel?
A microkernel is a type of operating system kernel that provides only basic services, such as
memory management and process scheduling. Other services, like device drivers and file
systems, are managed by user-level processes.
6. What is a virtual machine?
In its simplest form, a virtual machine, or VM, is a digitized version of a physical computer. Virtual
machines can run programs and operating systems, store data, connect to networks, and do other
computing functions.

Unit - 2

7. A process in an operating system is a program that's currently running, managed by the OS to


perform tasks independently.

8. Process Relationship: In an operating system, processes can have relationships with each
other, mainly through parent-child structures. Here’s a simple breakdown:

Parent and Child Processes

a. A parent process is a process that creates another process.

b. The new process it creates is called the child process.

c. A parent can have multiple child processes, but each child has only one parent.
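
A hedged, POSIX-only sketch of this relationship (os.fork is assumed to be available, so this will not run on Windows):

```python
import os

# fork() creates a child process that is a copy of the calling parent.
pid = os.fork()
if pid == 0:
    # In the child: getppid() returns the PID of its single parent.
    print(f"child {os.getpid()}, parent {os.getppid()}")
    os._exit(0)
else:
    # In the parent: pid is the child's PID. A parent may fork many
    # children, but each child has exactly one parent.
    os.waitpid(pid, 0)
    print(f"parent {os.getpid()} created child {pid}")
```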

9. Process States in an Operating System: The states of a process are as follows:

a. New State: The process is about to be created but not yet created; it is the program present in
secondary memory that will be picked up by the OS to create the process.

b. Ready State: After creation, the process enters the ready state, i.e., it is loaded into main
memory. The process is now ready to run and is waiting to get CPU time for its execution.
Processes that are ready for execution by the CPU are maintained in a queue called the ready
queue.

c. Run State: The process is chosen from the ready queue by the OS for execution, and the
instructions within the process are executed by one of the available processors.

d. Blocked or Wait State: Whenever the process requests I/O, needs input from the user, or needs
access to a critical region (the lock for which is already acquired), it enters the blocked or wait
state. The process continues to wait in main memory and does not require the CPU. Once the I/O
operation is completed, the process goes back to the ready state.

e. Terminated or Completed State: The process is killed and its PCB is deleted; the resources
allocated to the process are released or deallocated. (The operating system logically maintains a
PCB for all of the current processes in the system.)

10. Structure of the Process Control Block: A Process Control Block (PCB) is a data structure
used by the operating system to manage information about a process. The PCB keeps track of
many important pieces of information needed to manage processes efficiently.

a. Pointer: It is a stack pointer that is required to be saved when the process is switched from one
state to another to retain the current position of the process.

b. Process state: It stores the respective state of the process.

c. Process number: Every process is assigned a unique ID known as the process ID, or PID; this
field stores that identifier.

d. Program counter: Program Counter stores the counter, which contains the address of the next
instruction that is to be executed for the process.

e. Registers: When a process is running and its time slice expires, the current values of the
process-specific registers are stored in the PCB and the process is swapped out. When the
process is scheduled to run again, the register values are read from the PCB and written back to
the CPU registers. This is the main purpose of the registers field in the PCB.

f. Memory limits: This field contains information about the memory-management system used by
the operating system, such as page tables and segment tables.

g. List of Open files: This information includes the list of files opened for a process.

11. Context switching is a mechanism in an operating system (OS) that allows multiple processes
to share a single central processing unit (CPU): the state of the running process is saved in its
PCB, and the state of the next process is restored.

12. In an operating system (OS), a thread is a sequence of instructions that a computer can
manage and execute independently. It's the smallest unit of processor time that the OS allocates.
A thread can run any part of a process's code, including parts that are already being executed by
another thread.
13. Thread States in Operating Systems: When a thread moves through the system, it is always
in one of the five states:

(1) Ready

(2) Running

(3) Waiting

(4) Delayed

(5) Blocked

14. Types of Thread in Operating System: Threads are of two types. These are described below.

a. User Level Thread: User Level Thread is a type of thread that is not created using system
calls. The kernel has no work in the management of user-level threads. User-level threads can be
easily implemented by the user.
b. Kernel Level Threads: A kernel-level thread is a type of thread that the operating system
recognizes and manages directly. The kernel has its own thread table where it keeps track of all
threads in the system, and the operating system kernel handles thread management and scheduling.

15. Advantages of User-Level Threads:


a. Implementation of the User-Level Thread is easier than Kernel Level Thread.
b. Context Switch Time is less in User Level Thread.
c. User-Level Thread is more efficient than Kernel-Level Thread.
d. Because of the presence of only Program Counter, Register Set, and Stack Space, it has a
simple representation.

16. What is Process Scheduling?


Process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process based on a particular strategy.
Process scheduling is an essential part of a Multiprogramming operating system.

Types of Process Schedulers: There are three types of process schedulers:

1. Long Term or Job Scheduler: It brings the new process to the ‘Ready State’. It controls the
Degree of Multi-programming, i.e., the number of processes present in a ready state at any point
in time. It is important that the long-term scheduler make a careful selection of both I/O and
CPU-bound processes.

2. Short-Term or CPU Scheduler: It is responsible for selecting one process from the ready state
and scheduling it onto the running state. Note: the short-term scheduler only selects the process
to schedule; it does not itself load the process for running.

3. Medium-Term Scheduler: It is responsible for suspending and resuming the process. It mainly
does swapping (moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix or because a change in memory requirements has
overcommitted available memory, requiring memory to be freed up.
17. Throughput: A measure of the work done by the CPU is the number of processes being
executed and completed per unit of time. This is called throughput. The throughput may vary
depending on the length or duration of the processes.

18. CPU utilization: The main objective of any CPU scheduling algorithm is to keep the CPU as
busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real
system it typically varies from 40 to 90 percent depending on the load on the system.

19. What is Multithreading?


Multithreading is a feature in operating systems that allows a program to do several tasks at the
same time. Think of it like having multiple hands working together to complete different parts of
a job faster. Each “hand” is called a thread, and they help make programs run more efficiently.
Multithreading makes your computer work better by using its resources more effectively.
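
As an illustrative sketch (not from the original notes), the snippet below uses Python's threading module to split one job across two "hands":

```python
import threading

def worker(name, items):
    # Each thread ("hand") processes its own share of the work.
    print(f"{name} finished, partial sum = {sum(items)}")

# Split one job across two threads that run concurrently.
t1 = threading.Thread(target=worker, args=("thread-1", range(0, 50)))
t2 = threading.Thread(target=worker, args=("thread-2", range(50, 100)))
t1.start(); t2.start()
t1.join(); t2.join()
```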

20. Turnaround Time: For a particular process, an important criterion is how long it takes to
execute that process. The time elapsed from the time of submission of a process to the time of
completion is known as the turnaround time.

21. Waiting Time: A scheduling algorithm does not affect the time required to complete the
process once it starts execution. It only affects the waiting time of a process i.e. time spent by a
process waiting in the ready queue.

22. Response Time: In an interactive system, turnaround time is not the best criterion. A process
may produce some output fairly early and continue computing new results while previous results
are being output to the user. Thus another criterion is the time taken from the submission of a
request until the first response is produced.

23. What is Preemptive Scheduling?


Preemptive scheduling is used when a process switches from the running state to the ready state
or from the waiting state to the ready state. The resources (mainly CPU cycles) are allocated to
the process for a limited amount of time and then taken away, and the process is again placed
back in the ready queue if that process still has CPU burst time remaining.

24. What is Non-Preemptive Scheduling?


Non-preemptive Scheduling is used when a process terminates, or a process switches from
running to the waiting state. In this scheduling, once the resources (CPU cycles) are allocated to a
process, the process holds the CPU till it gets terminated or reaches a waiting state.

25. First Come First Served CPU Scheduling: First Come First Served (FCFS) is the simplest type
of algorithm. It is a non-preemptive algorithm, i.e., a process cannot be interrupted once it starts
executing. FCFS is implemented with the help of a FIFO queue: processes are put into the ready
queue in the order of their arrival time.
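
A small sketch (burst times are invented, and all processes are assumed to arrive at time 0) that computes completion, turnaround, and waiting times under FCFS, using turnaround = completion - arrival and waiting = turnaround - burst:

```python
# FCFS: processes run to completion in arrival order.
processes = [("P1", 24), ("P2", 3), ("P3", 3)]  # (name, burst time)

clock = 0
for name, burst in processes:
    completion = clock + burst
    turnaround = completion - 0   # arrival time is 0 in this example
    waiting = turnaround - burst
    print(f"{name}: completion={completion} turnaround={turnaround} waiting={waiting}")
    clock = completion            # non-preemptive: next process starts here
```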

26. Shortest Job First CPU Scheduling: The Shortest Job First (SJF) scheduling algorithm is based
upon the burst time of the process. The processes are put into the ready queue based on their
burst times, and the process with the least burst time is processed first.
27. Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a fixed
time slot. It is the preemptive version of the First Come First Served CPU scheduling algorithm
and is designed for time-sharing systems.
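
A minimal Round Robin sketch (the quantum and burst times are invented), showing how a preempted process rejoins the back of the queue:

```python
from collections import deque

quantum = 4
queue = deque([("P1", 24), ("P2", 3), ("P3", 3)])  # (name, remaining burst)

clock = 0
while queue:
    name, remaining = queue.popleft()
    run = min(quantum, remaining)   # run for at most one time slot
    clock += run
    remaining -= run
    if remaining > 0:
        queue.append((name, remaining))  # preempted: back of the queue
    else:
        print(f"{name} completes at time {clock}")
```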

28. What is Multiple-Processor Scheduling?


In systems containing more than one processor, multiple-processor scheduling addresses task
allocations to multiple CPUs. This will involve higher throughputs since several tasks can be
processed concurrently in separate processors. It would also involve the determination of which
CPU handles a particular task and balancing loads between available processors.

29. Scheduling in Real Time Systems: Real-time systems are systems that carry real-time tasks.
These tasks need to be performed immediately with a certain degree of urgency. In particular,
these tasks are related to control of certain events (or) reacting to them. Real-time tasks can be
classified as hard real-time tasks and soft real-time tasks.

30. Rate-monotonic scheduling: Rate-monotonic scheduling is a priority algorithm that belongs
to the static-priority scheduling category of real-time operating systems. It is preemptive in
nature. The priority is decided according to the cycle time (period) of the processes involved:
the process with the shortest period gets the highest priority.
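
A well-known sufficient schedulability test for rate-monotonic scheduling (not stated in these notes; it is the Liu-Layland bound) says that n periodic tasks are guaranteed schedulable if the total utilization U = sum(Ci/Ti) <= n(2^(1/n) - 1). A sketch with invented task parameters:

```python
# Each task is (execution time Ci, period Ti); the numbers are illustrative.
tasks = [(1, 4), (2, 6), (3, 12)]

n = len(tasks)
utilization = sum(c / t for c, t in tasks)
bound = n * (2 ** (1 / n) - 1)
print(f"U = {utilization:.3f}, bound = {bound:.3f}")
if utilization <= bound:
    print("Guaranteed schedulable under rate-monotonic scheduling.")
else:
    print("Test inconclusive: a more exact analysis is needed.")
```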

31. Earliest Deadline First (EDF) CPU scheduling algorithm: Earliest Deadline First (EDF) is an
optimal dynamic priority scheduling algorithm used in real-time systems.
It can be used for both static and dynamic real-time scheduling.

EDF assigns priorities to tasks for scheduling according to their absolute deadlines: the task
whose deadline is closest gets the highest priority. The priorities are assigned and changed in a
dynamic fashion. EDF is very efficient compared to other scheduling algorithms in real-time
systems.

Unit - 3

32. A critical section in an operating system (OS) is a code segment that multiple programs
access and must be executed by only one process or thread at a time.

33. Producer Consumer Problem using Semaphores: The Producer-Consumer problem is a


classic synchronization issue in operating systems. It involves two types of processes: producers,
which generate data, and consumers, which process that data. Both share a common buffer. The
challenge is to ensure that the producer doesn’t add data to a full buffer and the consumer
doesn’t remove data from an empty buffer while avoiding conflicts when accessing the buffer.

34. Semaphores are a tool used in computer science to help manage how different processes (or
programs) share resources, like memory or data, without causing conflicts. A semaphore is a
special kind of synchronization variable that can be manipulated only through specific
synchronization primitives (commonly wait/P and signal/V). Semaphores are used to implement
critical sections, which are regions of code that must be executed by only one process at a time.
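
A minimal sketch of the Producer-Consumer problem from item 33, solved with the counting semaphores described here (buffer size and item count are invented):

```python
import threading
from collections import deque

BUFFER_SIZE = 5
buffer = deque()
mutex = threading.Lock()                   # mutual exclusion on the buffer
empty = threading.Semaphore(BUFFER_SIZE)   # counts free slots
full = threading.Semaphore(0)              # counts filled slots

def producer():
    for item in range(10):
        empty.acquire()        # block if the buffer is full
        with mutex:
            buffer.append(item)
        full.release()         # one more item available

def consumer():
    for _ in range(10):
        full.acquire()         # block if the buffer is empty
        with mutex:
            item = buffer.popleft()
        empty.release()        # one more free slot
        print("consumed", item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```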
35. What is Message Passing?
In the message-passing process model, processes communicate with each other by exchanging
messages. A communication link between the processes is required for this purpose, and it must
provide at least two operations: send(message) and receive(message). Message sizes may be
fixed or variable.

36. Mutual exclusion in an operating system (OS) is a synchronization technique that prevents
multiple threads from accessing the same shared resource at the same time.

37. Strict alternation is a process synchronization approach in an operating system (OS) that
allows two processes to execute their critical sections in turn.

38. A race condition in an operating system (OS) is a software bug that occurs when multiple
processes or threads access or modify shared data at the same time, resulting in unexpected or
incorrect behavior.

39. What is The Readers-Writers Problem?


The Readers-Writers Problem is a classic synchronization issue in operating systems that
involves managing access to shared data by multiple threads or processes. The problem
addresses the scenario where:

Readers: Multiple readers can access the shared data simultaneously without causing any issues
because they are only reading and not modifying the data.

Writers: Only one writer can access the shared data at a time to ensure data integrity, as writers
modify the data, and concurrent modifications could lead to data corruption or inconsistencies.

40. Peterson's Solution: This is a software mechanism implemented in user mode. It is a
busy-waiting solution that can be implemented for only two processes. It uses two variables: a
turn variable and an interested (flag) variable.
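
A teaching sketch of Peterson's solution for two threads. Note the hedge: real compilers and CPUs may reorder these memory operations, so production code should use proper locks; the sketch relies on CPython executing them in program order.

```python
import threading

flag = [False, False]   # flag[i]: thread i wants to enter its critical section
turn = 0                # whose turn it is to wait
counter = 0

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(10_000):
        flag[i] = True
        turn = other                          # politely let the other go first
        while flag[other] and turn == other:
            pass                              # busy wait
        counter += 1                          # critical section
        flag[i] = False                       # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print("counter =", counter)  # expected 20000
```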

41. Dining Philosophers Problem is a typical example of limitations in process synchronization in


systems with multiple processes and limited resources. According to the Dining Philosopher
Problem, assume there are K philosophers seated around a circular table, each with one
chopstick between them. This means that a philosopher can eat only if he/she can pick up both
chopsticks next to him/her. An adjacent philosopher may pick up one of the chopsticks, but not
both.

For example, let’s consider P0, P1, P2, P3, and P4 as the philosophers or processes and C0, C1,
C2, C3, and C4 as the 5 chopsticks or resources between each philosopher. Now if P0 wants to
eat, both resources/chopsticks C0 and C1 must be free, which would leave P1 and P4 without a
resource and unable to proceed. This illustrates that there are limited resources (C0, C1, ...) for
multiple processes (P0, P1, ...), and the problem is known as the Dining Philosophers Problem.
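
A sketch of the setup, together with one standard remedy that is not described above: always acquiring the lower-numbered chopstick first, which breaks the circular wait.

```python
import threading

K = 5  # philosophers P0..P4, chopsticks C0..C4
chopsticks = [threading.Lock() for _ in range(K)]

def philosopher(i):
    left, right = i, (i + 1) % K
    # Deadlock-avoidance trick: acquire the lower-numbered chopstick first.
    first, second = min(left, right), max(left, right)
    for _ in range(3):
        with chopsticks[first]:
            with chopsticks[second]:
                print(f"P{i} eats with C{left} and C{right}")

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(K)]
for t in threads: t.start()
for t in threads: t.join()
```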
Unit - 4

42. A deadlock is a situation where each of the processes waits for a resource that is assigned to
some other process. In this situation, none of the processes gets executed, since the resource it
needs is held by another process that is itself waiting for some other resource to be released.

43. To prevent deadlock, an OS can ensure that at least one of the four conditions - Mutual
Exclusion, Hold and Wait, No Preemption, and Circular Wait - is never allowed to hold.

44. In deadlock avoidance, the request for any resource will be granted if the resulting state of
the system doesn't cause deadlock in the system. The state of the system will continuously be
checked for safe and unsafe states.

In order to avoid deadlocks, each process must tell the OS the maximum number of resources it
may request to complete its execution.

45. Necessary Conditions for the Occurrence of a Deadlock


Let’s explain all four conditions related to deadlock in the context of the scenario with two
processes and two resources:

a. Mutual Exclusion: Mutual Exclusion condition requires that at least one resource be held in a
non-shareable mode, which means that only one process can use the resource at any given time.
Both Resource 1 and Resource 2 are non-shareable in our scenario, and only one process can
have exclusive access to each resource at any given time. As an example:
Process 1 obtains Resource 1.
Process 2 acquires Resource 2.

b. Hold and Wait: The hold and wait condition specifies that a process must be holding at least
one resource while waiting for other processes to release resources that are currently held by
other processes. In our example,
Process 1 has Resource 1 and is awaiting Resource 2.
Process 2 currently has Resource 2 and is awaiting Resource 1.
Both processes hold one resource while waiting for the other, satisfying the hold and wait
condition.

c. No Preemption: Preemption is the act of taking a resource from a process before it has finished
its task. According to the no-preemption condition, resources cannot be taken forcibly from a
process; a process can only release resources voluntarily after completing its task.

d. Circular Wait: Circular wait is a condition in which a set of processes are waiting for resources
in such a way that there is a circular chain, with each process in the chain holding a resource that
the next process needs. This is one of the necessary conditions for a deadlock to occur in a
system.
46. The Banker's Algorithm is used to avoid deadlock and to allocate resources safely to each
process in the computer system. Before granting a request, it examines whether the resulting
state is safe (the 'S-state' check), i.e., whether there is some order in which every process can
still finish. It thereby helps the operating system share resources among all processes
successfully. The algorithm is so named because it models a banker who sanctions a loan only if
the bank can still safely satisfy all of its other customers.
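
A sketch of the Banker's safety check, using classic textbook example values (five processes, three resource types):

```python
available = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]

n, m = len(max_need), len(available)
need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]

work, finished, sequence = available[:], [False] * n, []
progress = True
while progress:
    progress = False
    for i in range(n):
        if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
            # Pretend Pi runs to completion and releases its allocation.
            for j in range(m):
                work[j] += allocation[i][j]
            finished[i], progress = True, True
            sequence.append(f"P{i}")

if all(finished):
    print("Safe state; sequence:", " -> ".join(sequence))
else:
    print("Unsafe state: granting requests could lead to deadlock.")
```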

47. Deadlock Detection And Recovery: Deadlock Detection and Recovery is the mechanism of
detecting and resolving deadlocks in an operating system. In operating systems, deadlock
recovery is important to keep everything running smoothly. A deadlock occurs when two or more
processes are blocked, waiting for each other to release the resources they need.

48. Detection methods help identify when this happens, and recovery techniques are used to
resolve these issues and restore system functionality. Left unresolved, a deadlock can lead to a
system-wide stall where no process can make progress; detection and recovery ensure that
computers and devices can continue working without such interruptions.

Unit - 5

49. What do you mean by memory management?


Memory is an important part of the computer, used to store data. Its management is critical to
the computer system because the amount of main memory available is very limited, and at any
time many processes are competing for it.

50. What is a Logical Address?


A logical address, also known as a virtual address, is an address generated by the CPU during
program execution. It is the address seen by the process and is relative to the program’s address
space. The process accesses memory using logical addresses, which are translated into physical
addresses by the operating system with hardware support.

51. What is a Physical Address?


A physical address is the actual address in main memory where data is stored. It is a location in
physical memory, as opposed to a virtual address. The Memory Management Unit (MMU)
translates logical addresses into physical addresses. A user program cannot access a physical
address directly; it must use the corresponding logical address.

52. What is Contiguous Memory Allocation?


It is the type of memory allocation method. When a process requests the memory, a single
contiguous section of memory blocks is allotted depending on its requirements.

One common implementation partitions the memory into fixed-sized partitions and assigns each
partition to a single process. However, this limits the degree of multiprogramming to the number
of fixed partitions made in memory.
53. What is Fixed Partitioning?
Fixed Partitioning is a contiguous memory management technique in which the main memory is
divided into fixed sized partitions which can be of equal or unequal size. Whenever we have to
allocate a process memory then a free partition that is big enough to hold the process is found.
Then the memory is allocated to the process.

54. What is Variable Partitioning?


Variable Partitioning is a contiguous memory management technique in which the main memory
is not divided into partitions and the process is allocated a chunk of free memory that is big
enough for it to fit. The space which is left is considered as the free space which can be further
used by other processes. It also provides the concept of compaction.

55. Internal fragmentation: Occurs when a process uses less space than the assigned memory
block. This can happen when memory is allocated in fixed-sized blocks and the process is smaller
than the block it receives. Internal fragmentation wastes space within a memory block.

56. Paging: In Operating Systems, Paging is a storage mechanism used to retrieve processes
from the secondary storage into the main memory in the form of pages.

The main idea behind paging is to divide each process into pages; the main memory is likewise
divided into frames.
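
A hedged sketch (the page size and page-table contents are invented) of how a paged system splits a logical address into a page number and an offset and maps it to a physical address:

```python
PAGE_SIZE = 4096                   # bytes per page/frame (a common choice)
page_table = {0: 5, 1: 2, 2: 7}    # page number -> frame number (made up)

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # which page the address is in
    offset = logical_address % PAGE_SIZE   # position inside that page
    frame = page_table[page]               # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset      # physical address

print(translate(8200))  # page 2, offset 8 -> frame 7 -> 28680
```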

57. External fragmentation: Occurs when a process is removed from the main memory, or when
there are repeated allocations and deallocations. This can happen when memory is allocated in
variable-sized blocks, and there are small, non-contiguous memory pieces that cannot be
assigned to any process. External fragmentation can lead to allocation failures even when there
is enough total memory.

58. Some hardware components that support paging:

Memory Management Unit (MMU): Or Memory Protection Unit (MPU), this is usually hardwired
into the CPU/MCU to support paged memory functionality.

CR0 control register: In CPUs that use the x86 instruction set architecture, this register enables
memory paging.

Base and limit registers: These are examples of address maps that require hardware support.

59. Disadvantages of paging:


a. Increased memory access time: Additional steps are required to access memory, slowing
down the process.

b. Internal fragmentation: Some space within pages may go unused, leading to wasted memory.

c. Page table overhead: Page tables consume extra memory, especially in systems with many
processes.

d. Swapping overheads: Moving pages to and from disk (swapping) is slow and impacts
performance.

e. Complicated management: Managing page tables and swapping adds complexity to the OS.
f. Limited page size options: Choosing page size is challenging; smaller pages increase
overhead, larger ones cause fragmentation.

60. Virtual memory Hardware and control structures:

Page Fault Handler: Manages loading pages from disk to RAM when needed.

Replacement Algorithms: Decide which pages to replace when RAM is full (e.g., LRU, FIFO).

Access Permissions: Control read/write access to pages, ensuring process isolation.

61. What is Virtual Memory?


Virtual memory is a memory management technique used by operating systems to give the
appearance of a large, continuous block of memory to applications, even if the physical memory
(RAM) is limited. It allows the system to compensate for physical memory shortages, enabling
larger applications to run on systems with less RAM.

62. What is a Page Fault in an Operating System?


A page fault is a kind of error or exception. It happens when a program tries to access a piece of
memory that does not exist in physical memory (main memory). The fault tells the operating
system to locate the data through its virtual-memory management and bring it from secondary
memory, such as a hard disk, into primary memory.

63. What is Demand Paging?


Demand paging is a technique used in virtual memory systems where pages enter main memory
only when requested or needed by the CPU. In demand paging, the operating system loads only
the necessary pages of a program into memory at runtime, instead of loading the entire program
into memory at the start. A page fault occurs when the program needs to access a page that is
not currently in memory.

64. Page Replacement Algorithms: Page replacement algorithms are techniques used in
operating systems to manage memory efficiently when physical memory is full. When a new
page needs to be loaded into physical memory and there is no free space, these algorithms
determine which existing page to replace.

Common Page Replacement Techniques:

a. First In First Out (FIFO)


b. Optimal Page replacement
c. Least Recently Used
d. Most Recently Used (MRU)

65. Least Recently Used (LRU) Replacement Algorithm:


This algorithm replaces the page that has not been referenced for the longest period of time. It
relies on the observation that pages used heavily in the recent past are likely to be used again in
the near future, so the page whose last use lies furthest in the past is the best candidate for
replacement. Like the other schemes, it operates on a fixed number of frames together with
Demand Paging.
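
A minimal LRU simulation (the reference string is invented) that counts page faults; an OrderedDict keeps pages ordered from least to most recently used:

```python
from collections import OrderedDict

def lru_page_faults(reference_string, num_frames):
    frames = OrderedDict()   # least recently used page sits at the front
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # page used: now most recent
        else:
            faults += 1                     # page fault
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = None
    return faults

print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3))  # 8 faults
```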
66. First In First Out (FIFO) Page Replacement Algorithm:
This is the most basic page replacement algorithm. The operating system keeps the pages in
memory in a queue, in the order in which they were brought in. Frames are first filled one by one
as pages arrive; once all frames are full, the actual replacement problem starts, and the page that
has been in memory the longest (the front of the queue) is replaced. This concept works together
with Demand Paging.
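
The same experiment for FIFO (again with an invented reference string); a deque keeps the oldest page at the front:

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    frames = deque()   # oldest page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1                # page fault
            if len(frames) == num_frames:
                frames.popleft()       # evict the oldest page
            frames.append(page)
    return faults

print(fifo_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3))  # 9 faults
```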

67. The Not Recently Used (NRU) page replacement algorithm categorizes pages based on
their recent usage and reference status. Pages are grouped into four classes based on two bits: a
reference bit and a modified bit. When a page needs to be replaced, the algorithm selects a page
from the lowest, non-empty class, prioritizing pages that are not recently used and not modified.
This approach aims to replace pages that are least likely to be used again soon.

68. The Second Chance page replacement algorithm is a modified version of the FIFO algorithm
that gives each page a "second chance" before replacement. It uses a circular queue and a
reference bit for each page. When a page needs to be replaced, the algorithm checks the
reference bit of each page in sequence. If the bit is 1, it is reset to 0, and the page is given another
chance; if the bit is 0, the page is replaced.
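
A compact sketch of the same idea in code; each frame holds a (page, reference bit) pair and the "hand" sweeps them circularly:

```python
def second_chance_faults(reference_string, num_frames):
    frames = []   # (page, reference_bit) pairs in circular order
    hand = 0      # next candidate for eviction
    faults = 0
    for page in reference_string:
        for i, (p, _) in enumerate(frames):
            if p == page:
                frames[i] = (p, 1)   # hit: set the reference bit
                break
        else:
            faults += 1              # page fault
            if len(frames) < num_frames:
                frames.append((page, 1))
            else:
                # Pages with bit 1 get a second chance: clear the bit, move on.
                while frames[hand][1] == 1:
                    frames[hand] = (frames[hand][0], 0)
                    hand = (hand + 1) % num_frames
                frames[hand] = (page, 1)          # replace the victim
                hand = (hand + 1) % num_frames
    return faults

print(second_chance_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3))
```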

Unit - 6

69. I/O Hardware in Operating System: I/O hardware is the set of specialized hardware devices
that help the operating system access disk drives, printers, and other peripherals. These devices
are connected to the processor through a bus, and they often have specialized controllers that
allow them to respond quickly to requests from software running on top of them, or even directly
to commands from an application program.

70. Goals of Interrupt Handlers


a. Quick Response: Interrupt handlers are small programs that respond instantly to device
signals (like a key press or data from a network).
b. Low Delay (Minimal Latency): They keep interruptions short and resume normal processes
quickly, so the system stays fast.
c. Stability: They manage events without causing issues in other parts of the system, preserving
stability.
d. Clear Signal: Once done, they clear the signal to avoid handling the same interrupt repeatedly.

71. Goals of Device Drivers

Hardware Access: Device drivers provide an interface for the OS to access and communicate with
hardware devices.

Compatibility: They work as translators, allowing the OS to use many hardware types without
needing unique code for each.

Error Handling: Drivers detect and report errors from devices to the OS, helping maintain smooth
operations.

Resource Efficiency: They manage system resources (like memory or I/O buffers) to improve data
speed and reduce delays.
72. The device controller knows how to communicate with the operating system as well as how
to communicate with I/O devices. A device controller is therefore an interface between the
computer system (operating system) and the I/O devices, and it communicates with the system
using the system bus.

73. Goals of Device-Independent I/O Software:

Uniform Access: Provides a standard way for applications to interact with any device, regardless
of its type.

Device Independence: Abstracts away hardware details so that applications can use files,
networks, or devices without special handling.

Consistent Error Reporting: Ensures all devices report errors in a standard way, simplifying
troubleshooting.

Resource Management: Allocates resources (such as IDs or buffers) for all devices consistently.

Security and Access Control: Provides security measures to control who can use certain devices,
ensuring privacy and access management.

74. Direct Memory Access (DMA) is a computer system feature that allows data to be
transferred directly between a computer's main memory and attached devices without the central
processing unit (CPU) getting involved.

75. What is a File System?


A file system is a method an operating system uses to store, organize, and manage files and
directories on a storage device. Some common types of file systems include:
a. FAT (File Allocation Table): An older file system used by older versions of Windows and other
operating systems.

b. NTFS (New Technology File System): A modern file system used by Windows. It supports
features such as file and folder permissions, compression, and encryption.

c. ext (Extended File System): A file system commonly used on Linux and Unix-based operating
systems.

d. HFS (Hierarchical File System): A file system used by macOS.

e. APFS (Apple File System): A new file system introduced by Apple for their Macs and iOS
devices.

76. File Access Methods in Operating System: File access methods in an operating system are
the techniques and processes used to read from and write to files stored on a computer’s storage
devices. There are several ways to access the information in a file, and some systems provide
only one access method. There are three ways to access a file in a computer system:

a. Sequential Access

b. Direct Access

c. Indexed Sequential Method
77. File System Hierarchy
The file system hierarchy is the organization of files and directories in a logical and hierarchical
structure. It provides a way to organize files and directories based on their purpose and location.
Here are the main components of the file system hierarchy −

a. Root Directory − The root directory is the top-level directory in the file system hierarchy.

b. Subdirectories − Subdirectories are directories that are located within other directories.

c. File Paths − File paths are the routes that are used to locate files within the file system
hierarchy.

d. File System Mounting − File system mounting is the process of making a file system available
for use.

78. File Allocation Methods: File allocation methods determine how files are stored and
organized on a storage device.

There are three main file allocation methods –

a. Contiguous Allocation − Contiguous allocation is a method of storing files in which each file is
allocated a contiguous block of storage space on the storage device. This method allows for quick
and efficient access to files, but it can lead to fragmentation if files are frequently added and
deleted.

b. Linked Allocation − Linked allocation is a method of storing files in which each file is divided
into blocks that are scattered throughout the storage device. Each block contains a pointer to the
next block in the file. This method can help prevent fragmentation, but it can also lead to slower
access times due to the need to follow the links between blocks.

c. Indexed Allocation − Indexed allocation is a method of storing files in which a separate index
is maintained that contains a list of all the blocks that make up each file. This method allows for
quick access to files and helps prevent fragmentation, but it requires additional overhead to
maintain the index.

79. Free Space Management in Operating System: Free space management is a critical aspect
of operating systems as it involves managing the available storage space on the hard disk or
other secondary storage devices. The operating system uses various techniques to manage free
space and optimize the use of storage devices. Here are some of the commonly used free space
management techniques:

a. Bitmap or Bit Vector: A bit vector, also known as a bitmap, is the most frequently used method
to implement the free-space list. It is a series or collection of bits in which each bit represents a
disk block. Each bit takes the value 1 or 0: if the bit is 1, the block is free, and if the bit is 0, the
block is not free. (A small sketch follows this list.)

b. Linked List: A linked list is another approach for free space management in an operating
system. In it, all the free blocks inside a disk are linked together in a linked list. These free blocks
on the disk are linked together by a pointer. These pointers of the free block contain the address
of the next free block and the last pointer of the list points to null which indicates the end of the
linked list.

c. Grouping: The grouping technique is also called the "modification of a linked list technique". In
this method, first, the free block of memory contains the addresses of the n-free blocks. And the
last free block of these n free blocks contains the addresses of the next n free block of memory
and this keeps going on.
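
The sketch promised under the bitmap technique (disk size and bit values are invented; bit 1 marks a free block, matching the convention above):

```python
# Free-space bitmap: bit i == 1 means disk block i is free.
bitmap = [1, 0, 0, 1, 1, 0, 1, 1]   # a made-up 8-block disk

def allocate_block():
    # Scan for the first free block, mark it used, return its number.
    for i, bit in enumerate(bitmap):
        if bit == 1:
            bitmap[i] = 0
            return i
    return None   # no free block left

def free_block(i):
    bitmap[i] = 1   # mark block i as free again

b = allocate_block()
print("allocated block", b)   # block 0 in this example
free_block(b)
```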

80. The directory implementation algorithms are classified according to the data structure they
use. There are mainly two algorithms in common use these days.

a. Linear List: In this algorithm, all the files in a directory are maintained as a singly linked list.
When a new file is created, the entire list is checked to see whether the new file name matches an
existing file name. If it doesn't exist, the file can be created at the beginning or at the end.
Searching for a unique name is therefore a big concern, because traversing the whole list takes
time.

b. Hash Table: In a hash table, each file in a directory is given a key-value pair. The key is created
by using a hash function on the file's name, which gives it a unique ID. This key then points to the
location of the file in the directory, making it easy to find and access.

81. Disk Management: Disk management is one of the critical operations carried out by the
operating system. It deals with organizing the data stored on the secondary storage devices,
which include hard disk drives and solid-state drives. It sits alongside the operating system's
other core areas of responsibility:

A. Process Management

B. Memory Management

C. File and Disk Management

D. I/O System Management

82. What are Disk Scheduling Algorithms?


Disk scheduling algorithms are crucial in managing how data is read from and written to a
computer’s hard disk. These algorithms help determine the order in which disk read and write
requests are processed.

Disk Scheduling Algorithms: There are several disk scheduling algorithms. We will discuss each
of them in detail below (a sketch comparing total head movement follows the list).

a. FCFS Scheduling Algorithm: It is the simplest disk scheduling algorithm. It services the I/O
requests in the order in which they arrive.

b. SSTF (Shortest Seek Time First): In SSTF (Shortest Seek Time First), requests having the
shortest seek time are executed first. So, the seek time of every request is calculated in advance
in the queue and then they are scheduled according to their calculated seek time.

c. SCAN: In the SCAN algorithm, the disk arm moves in a particular direction and services the
requests coming in its path; after reaching the end of the disk, it reverses its direction and again
services the requests arriving in its path.
d. C-SCAN: In the SCAN algorithm, the disk arm re-scans the path it has already scanned after
reversing its direction, so too many requests may be waiting at the other end while zero or few
requests are pending in the just-scanned area. C-SCAN avoids this: on reaching the end, the arm
returns to the other end without servicing requests and scans in the same direction again.
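
The sketch referenced above, comparing total head movement for FCFS and SSTF on a commonly used example queue (cylinder numbers are illustrative):

```python
requests = [98, 183, 37, 122, 14, 124, 65, 67]
head = 53

def fcfs_movement(reqs, pos):
    total = 0
    for r in reqs:                 # service strictly in arrival order
        total += abs(r - pos)
        pos = r
    return total

def sstf_movement(reqs, pos):
    pending, total = list(reqs), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))  # shortest seek
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

print("FCFS total head movement:", fcfs_movement(requests, head))  # 640
print("SSTF total head movement:", sstf_movement(requests, head))  # 236
```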

83. Disk formatting is the process of configuring a data-storage device such as a hard drive,
floppy disk, or flash drive when it is going to be used for the very first time, i.e., for initial usage.
Disk formatting is usually also required when a new operating system is going to be used.

84. What are Boot Blocks?


The process of starting or restarting a computer system or any other computing device is called
booting. Booting a computer system requires a set of data and instructions, managed by the
operating system; this bootstrap code is typically stored in a fixed area of the disk known as the
boot blocks.

85. What is a Bad Block in an Operating System?


A bad block is an area of storage media that is no longer reliable for storing and retrieving data
because it has been completely damaged or corrupted. Bad blocks are also referred to as bad
sectors.
