Operating Systems Unit 2 & 3 Question Bank Answers

UNIT II - PROCESS MANAGEMENT

Short Answer Questions (2 Marks)


1. What is a process in operating systems? A process is a program in execution that includes the
program code, current activity (program counter), stack, data section, and heap. It represents an active
entity that requires system resources like CPU time, memory, files, and I/O devices to accomplish its task.

2. Define process scheduling. Process scheduling is the method by which the operating system decides
which process gets access to the CPU at any given time. It manages the allocation of CPU time among
multiple processes to ensure efficient system utilization and fair resource distribution.

3. List the criteria commonly used for CPU scheduling.

CPU Utilization
Throughput
Turnaround Time
Waiting Time
Response Time

4. What is a context switch in process management? A context switch is the mechanism of saving the
state of a currently running process and loading the saved state of another process. It involves storing the
current process's register values, program counter, and other state information in its PCB, then loading
the new process's state.

5. State two classical synchronization problems in operating systems.

Producer-Consumer Problem
Dining Philosophers Problem

6. What is the banker's algorithm used for? The Banker's Algorithm is used for deadlock avoidance. It
determines whether granting a resource request will lead to a safe state, ensuring the system can
complete all processes without entering a deadlock.

7. Define multicore programming with an example. Multicore programming involves writing applications that can execute multiple threads simultaneously across multiple CPU cores. Example: a web server handling multiple client requests concurrently, with each request processed by a separate thread on a different core.

8. List any two challenges of multicore programming.

Synchronization between threads
Load balancing across cores

9. Differentiate between a thread and a process.

Process                                Thread
Independent memory space               Shared memory space within process
Higher creation overhead               Lower creation overhead
Heavy context switching                Light context switching
Inter-process communication required   Direct communication possible

10. Explain why mutual exclusion is important in process synchronization. Mutual exclusion ensures
that only one process can access a critical section at any given time, preventing race conditions. It
maintains data consistency and prevents corruption when multiple processes attempt to modify shared
resources simultaneously.

11. Identify the key components stored in a Process Control Block (PCB).

Process ID (PID) and parent process ID
Process state
Program counter and CPU registers
Scheduling, memory-management, accounting, and I/O status information

12. State the difference between user-level threads and kernel-level threads. User-level threads are managed by a thread library in user space, while kernel-level threads are managed directly by the operating system kernel.

13. What role does a semaphore play in process synchronization? Semaphores control access to shared resources through atomic wait() and signal() operations, preventing race conditions and ensuring mutual exclusion.

14. Describe the conditions necessary for a deadlock to occur.

Mutual exclusion
Hold and wait
No preemption
Circular wait

All four conditions must hold simultaneously for a deadlock to occur.

15. Why might preemptive scheduling be preferred over non-preemptive scheduling? Preemptive
scheduling provides better response time and prevents long-running processes from monopolizing the
CPU.

16. Explain two benefits of multithreading in a multicore environment.

Parallel execution improves performance
Better resource utilization across cores

Medium to Long Answer Questions

17. Find the Turnaround Time for FCFS scheduling algorithm

Given Process Details:

P1: Burst Time = 8, Arrival Time = 0
P2: Burst Time = 1, Arrival Time = 1
P3: Burst Time = 3, Arrival Time = 2
P4: Burst Time = 2, Arrival Time = 3

Solution Using FCFS (First Come First Serve):

In FCFS scheduling, processes are executed in the order they arrive. Let's construct the Gantt chart and
calculate turnaround times:

Gantt Chart:

|   P1   | P2 |  P3  | P4 |
0        8    9      12   14

Detailed Execution Timeline:

P1 arrives at time 0, starts immediately, runs from 0-8

P2 arrives at time 1, waits until P1 completes, runs from 8-9

P3 arrives at time 2, waits until P2 completes, runs from 9-12

P4 arrives at time 3, waits until P3 completes, runs from 12-14

Calculations:

Completion Times:

P1 completes at: 8

P2 completes at: 9

P3 completes at: 12
P4 completes at: 14

Turnaround Time = Completion Time - Arrival Time

P1: TAT = 8 - 0 = 8

P2: TAT = 9 - 1 = 8

P3: TAT = 12 - 2 = 10
P4: TAT = 14 - 3 = 11

Average Turnaround Time = (8 + 8 + 10 + 11) / 4 = 9.25

Additional Metrics:

Waiting Time = Turnaround Time - Burst Time

P1: WT = 8 - 8 = 0
P2: WT = 8 - 1 = 7

P3: WT = 10 - 3 = 7

P4: WT = 11 - 2 = 9

Average Waiting Time = (0 + 7 + 7 + 9) / 4 = 5.75

Response Time Analysis: Since FCFS is non-preemptive, response time equals waiting time for each
process.

Performance Analysis: FCFS scheduling suffers from the "convoy effect" where short processes wait
behind long processes. In this example, P2, P3, and P4 experience significant waiting times due to P1's
longer burst time. This demonstrates why FCFS is not optimal for interactive systems requiring quick
response times.

Advantages of FCFS:

Simple to understand and implement
No starvation - every process eventually gets CPU
Fair in the sense of first-come-first-serve

Disadvantages of FCFS:

Poor average waiting time performance

Convoy effect reduces system throughput

Not suitable for time-sharing systems
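
As a quick cross-check of the arithmetic above, the following minimal C sketch (not part of the original answer) simulates FCFS for these four processes and prints each completion, turnaround, and waiting time:

#include <stdio.h>

int main(void) {
    /* Processes listed in arrival order, which is also FCFS execution order */
    int burst[]   = {8, 1, 3, 2};
    int arrival[] = {0, 1, 2, 3};
    int n = 4, clock = 0;
    double tat_sum = 0, wt_sum = 0;

    for (int i = 0; i < n; i++) {
        if (clock < arrival[i])
            clock = arrival[i];          /* CPU idles until the process arrives */
        clock += burst[i];               /* non-preemptive: run to completion */
        int tat = clock - arrival[i];    /* turnaround = completion - arrival */
        int wt  = tat - burst[i];        /* waiting = turnaround - burst */
        tat_sum += tat;
        wt_sum  += wt;
        printf("P%d: completion=%2d TAT=%2d WT=%d\n", i + 1, clock, tat, wt);
    }
    printf("Average TAT = %.2f, Average WT = %.2f\n", tat_sum / n, wt_sum / n);
    return 0;
}

Running it reproduces the values derived above: average turnaround 9.25 and average waiting 5.75.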

21. Define the lifecycle of a process in an operating system. Illustrate each state with a labeled
state diagram and explain transitions.

Process Lifecycle Overview:

A process undergoes various states during its lifetime, from creation to termination. Understanding these
states is crucial for effective process management and scheduling. The operating system maintains this
state information in the Process Control Block (PCB) and uses it to make scheduling decisions.
Process States in Detail:

1. New State: The process is being created. During this state, the operating system allocates memory,
creates PCB, initializes process attributes, and loads the program into memory. The process remains in
this state until all initialization is complete and resources are allocated.

2. Ready State: The process has all necessary resources and is waiting to be assigned CPU time.
Processes in this state are maintained in a ready queue, managed by the scheduler. They have everything
needed to run except CPU access.

3. Running State: The process is currently being executed by the CPU. Only one process per CPU core
can be in running state at any given time. The process executes its instructions until completion, I/O
request, or preemption occurs.

4. Waiting/Blocked State: The process cannot continue execution and is waiting for some event to
occur, such as I/O completion, resource availability, or signal from another process. Multiple waiting
queues exist for different types of events.

5. Terminated State: The process has finished execution and is being cleaned up. The operating system
deallocates resources, removes PCB, and frees memory. The process may remain briefly in this state for
parent process to read exit status.

State Transition Diagram:

[New] --admit--> [Ready] --dispatch--> [Running] --exit--> [Terminated]
                    ^  <---preempt---      |
                    |                      | event wait
                    |                      v
                    +------ [Waiting] <----+
                       event completion

Detailed State Transitions:

New → Ready (Admission):

Occurs when process creation is complete

OS has allocated necessary resources

Process is added to ready queue

Long-term scheduler makes this decision

Ready → Running (Dispatch):

Short-term scheduler selects process from ready queue

CPU is allocated to the process

Process context is loaded into CPU registers

Execution begins from where it was last interrupted


Running → Ready (Preemption):

Time quantum expires in time-sharing systems

Higher priority process becomes ready


Scheduler decides to switch processes

Current context is saved in PCB

Running → Waiting (Event Wait):

Process requests I/O operation

Process waits for resource to become available

Process requests synchronization primitive

System call requires waiting for completion

Waiting → Ready (Event Completion):

I/O operation completes

Requested resource becomes available


Signal or event occurs

Process moves to ready queue for rescheduling

Running → Terminated (Exit):

Process completes normal execution

Process encounters fatal error


Process is killed by another process

Parent process terminates child process

Additional States in Modern Systems:

Suspended Ready: Process is ready but swapped out to secondary storage due to memory constraints.
Can be swapped back when memory becomes available.

Suspended Waiting: Process is waiting for event and also swapped out. Must be swapped back before
event completion can be processed.

Process State Information Storage:

The current state of each process is maintained in its Process Control Block (PCB), which contains:

Process ID and parent process ID
Process state (new, ready, running, waiting, terminated)
Program counter and CPU registers
CPU scheduling information (priority, queue pointers)
Memory management information
Accounting information (CPU time used, time limits)
I/O status information (allocated devices, open files)
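
Purely as an illustration (the field names and sizes below are assumptions, not taken from any particular kernel), this information might be declared in C roughly as follows:

#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int pid;                   /* process ID */
    int ppid;                  /* parent process ID */
    enum proc_state state;     /* current lifecycle state */
    uint64_t program_counter;  /* saved PC, restored on dispatch */
    uint64_t registers[16];    /* saved general-purpose registers */
    int priority;              /* CPU scheduling information */
    struct pcb *next;          /* queue pointer (ready or device queue) */
    void *page_table;          /* memory-management information */
    uint64_t cpu_time_used;    /* accounting information */
    int open_files[16];        /* I/O status: open file descriptors */
};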

Scheduling Queues:

Ready Queue: Contains all processes in ready state

Device Queues: Contains processes waiting for specific I/O devices

Job Queue: Contains all processes in the system

Performance Implications:

Understanding process states helps in:

Efficient CPU utilization through proper scheduling


Resource allocation and management

Deadlock detection and prevention


System performance monitoring and tuning

The state transitions are controlled by the operating system's scheduler components, ensuring efficient
resource utilization while maintaining system stability and responsiveness.

22. What is a semaphore in operating systems? Explain its purpose in process synchronization with
examples.

Semaphore Definition and Concept:

A semaphore is a synchronization primitive used in operating systems to control access to shared resources and coordinate between concurrent processes or threads. Introduced by Edsger Dijkstra, semaphores provide a robust mechanism for implementing mutual exclusion and synchronization in multi-process environments.

A semaphore is essentially an integer variable that can only be accessed through two atomic operations:

wait() or P() operation: Decrements the semaphore value
signal() or V() operation: Increments the semaphore value

Types of Semaphores:

1. Binary Semaphore (Mutex):

Can have only values 0 or 1


Used primarily for mutual exclusion
Acts as a lock mechanism

Also called mutex (mutual exclusion)

2. Counting Semaphore:

Can have any non-negative integer value


Used to control access to resources with multiple instances

Value represents number of available resources

Semaphore Operations:

Wait Operation (P or Down):

wait(S) {
    while (S <= 0)
        ;      // busy wait until the semaphore is positive
    S--;
}

Signal Operation (V or Up):

signal(S) {
    S++;
}

Purpose in Process Synchronization:

1. Mutual Exclusion: Ensures that only one process can access a critical section at a time, preventing race
conditions and maintaining data consistency.

2. Process Synchronization: Coordinates execution order between processes, ensuring certain operations complete before others begin.

3. Resource Management: Controls access to limited resources, preventing over-allocation and ensuring fair distribution.

Example 1: Producer-Consumer Problem

Problem Description: A producer process generates data and places it in a shared buffer, while a
consumer process removes data from the buffer. The buffer has limited capacity, and we must ensure:

Producer doesn't add data to full buffer

Consumer doesn't remove data from empty buffer


Only one process accesses buffer at a time

Solution Using Semaphores:

// Semaphore declarations
semaphore empty = n;    // count of empty buffer slots
semaphore full = 0;     // count of full buffer slots
semaphore mutex = 1;    // binary semaphore for mutual exclusion

// Producer Process
while (true) {
    produce_item();
    wait(empty);        // wait for an empty slot
    wait(mutex);        // enter critical section
    add_item_to_buffer();
    signal(mutex);      // exit critical section
    signal(full);       // signal that a full slot is available
}

// Consumer Process
while (true) {
    wait(full);         // wait for a full slot
    wait(mutex);        // enter critical section
    remove_item_from_buffer();
    signal(mutex);      // exit critical section
    signal(empty);      // signal that an empty slot is available
    consume_item();
}

Explanation:

empty semaphore tracks available buffer slots

full semaphore tracks occupied buffer slots

mutex ensures exclusive buffer access

Producer waits for empty space, consumer waits for data

Mutual exclusion prevents simultaneous buffer access
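
For comparison, here is a runnable version of the same pattern using POSIX threads and semaphores. This is a minimal sketch, not part of the original answer; the buffer size and item count are arbitrary choices, and a pthread mutex plays the role of the binary mutex semaphore:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                                  /* buffer capacity (arbitrary) */
#define ITEMS 32                             /* items to produce (arbitrary) */

static int buffer[N];
static int in = 0, out = 0;
static sem_t empty_slots;                    /* counts empty slots, starts at N */
static sem_t full_slots;                     /* counts full slots, starts at 0 */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < ITEMS; item++) {
        sem_wait(&empty_slots);              /* wait for an empty slot */
        pthread_mutex_lock(&mutex);          /* enter critical section */
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);        /* exit critical section */
        sem_post(&full_slots);               /* signal a full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);               /* wait for a full slot */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty_slots);              /* signal an empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Here sem_wait() and sem_post() correspond to the wait() and signal() operations described above; compile with cc -pthread.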


Example 2: Reader-Writer Problem

Problem Description: Multiple readers can read shared data simultaneously, but writers need exclusive
access. No reader should access data while a writer is writing.

Solution Using Semaphores:

// Semaphore declarations
semaphore resource = 1;           // controls access to the shared resource
semaphore read_count_access = 1;  // protects read_count
int read_count = 0;               // number of active readers

// Reader Process
while (true) {
    wait(read_count_access);
    read_count++;
    if (read_count == 1)
        wait(resource);           // first reader locks the resource
    signal(read_count_access);

    read_shared_data();           // reading occurs here

    wait(read_count_access);
    read_count--;
    if (read_count == 0)
        signal(resource);         // last reader unlocks the resource
    signal(read_count_access);
}

// Writer Process
while (true) {
    wait(resource);               // writer needs exclusive access
    write_shared_data();
    signal(resource);             // release exclusive access
}

Advantages of Semaphores:

1. Simplicity: Easy to understand and implement basic synchronization
2. Flexibility: Can handle both binary and counting scenarios
3. No Busy Waiting: Processes block when waiting, saving CPU cycles
4. Atomic Operations: wait() and signal() are guaranteed atomic

Disadvantages of Semaphores:

1. Programming Complexity: Easy to make errors leading to deadlock
2. No Built-in Priority: Can lead to priority inversion problems
3. Debugging Difficulty: Hard to trace synchronization bugs
4. Potential for Deadlock: Incorrect usage can cause system deadlock

Implementation in Operating Systems:

Modern operating systems implement semaphores with blocking queues instead of busy waiting:

typedef struct {
    int value;
    struct process *list;   // queue of processes waiting on this semaphore
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add_to_queue(S->list, current_process);
        block();            // suspend the calling process
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        struct process *p = remove_from_queue(S->list);
        wakeup(p);          // move a waiting process to the ready queue
    }
}

Real-World Applications:

Database transaction control


Thread synchronization in multi-threaded applications
Resource pool management

Print queue management


Network connection limiting

Semaphores remain fundamental synchronization primitives, though modern systems often use higher-level constructs like monitors and locks for easier programming and better abstraction.

30. Round Robin Scheduling Analysis

Problem Statement: Using a Round Robin scheduling algorithm with a time quantum of 5 ms and 100
processes, each requiring 25 ms of CPU time, calculate the total number of context switches and analyze
the impact of time quantum on system performance.

Given Parameters:

Number of processes (n) = 100


Time quantum (q) = 5 ms

CPU time required per process = 25 ms


Total system workload = 100 × 25 = 2500 ms

Round Robin Algorithm Analysis:

Basic Calculation: Each process needs 25 ms ÷ 5 ms = 5 time slices to complete execution.

Context Switch Calculation:

Method 1 - Direct Calculation:

Total time slices needed = 100 processes × 5 slices = 500 time slices
A context switch occurs at the end of every slice, except that each process's final slice ends in termination rather than preemption
Context switches = Total time slices - Number of processes = 500 - 100 = 400

Method 2 - Round-by-Round Trace:

Round 1: P1, P2, ..., P100 (99 hand-offs within the round, plus one from P100 back to P1)
Rounds 2-4: same pattern, 100 hand-offs each
Round 5: P1, P2, ..., P100 (99 hand-offs; each process terminates after its final slice, and nothing follows P100)

Counting every hand-off between consecutive slices gives 4 × 100 + 99 = 499. Of these, 99 follow a terminating process (P1 through P99 completing in round 5) and are process completions rather than preemptions. Excluding them, as in Method 1, gives 499 - 99 = 400.

Context switches = 400


Performance Metrics Analysis:

Response Time Calculation: For any process Pi:

First response occurs after (i-1) × 5 ms


Best case (P1): 0 ms

Worst case (P100): 495 ms

Average response time = (0 + 495) ÷ 2 = 247.5 ms

Turnaround Time Calculation: For process Pi in position i:

Completion time = 5 × (5 × 100 - 100 + i) = 5 × (400 + i) ms


Turnaround time = Completion time (since arrival time = 0)

Average turnaround time = 5 × (400 + 50.5) = 2252.5 ms

Waiting Time Analysis:

Waiting time = Turnaround time - Burst time

Average waiting time = 2252.5 - 25 = 2227.5 ms

Impact of Time Quantum on System Performance:

Time Quantum = 1 ms:

Context switches = (2500 ÷ 1) - 100 = 2400

Better response time but excessive overhead


Response time: 0 to 99 ms (average: 49.5 ms)

Time Quantum = 10 ms:

Each process needs 25 ÷ 10 = 3 slices (rounded up)

Context switches ≈ 200

Response time: 0 to 990 ms (average: 495 ms)

Time Quantum = 25 ms:

Each process completes in one slice (the schedule degenerates to FCFS)
Hand-offs between processes = 99; none of these follow a preemption, since every process runs to completion in its single slice
Response time: 0 to 2475 ms (average: 1237.5 ms)

Performance Trade-offs:

Small Time Quantum (q → 0):

Advantages:
Better response time
More interactive system behavior

Fair CPU distribution

Disadvantages:
High context switching overhead

Increased system overhead


Lower CPU utilization efficiency

Large Time Quantum (q → ∞):

Advantages:
Lower context switching overhead

Higher CPU utilization efficiency


Reduced system overhead

Disadvantages:
Poor response time

Approaches FCFS behavior

Less interactive system

Optimal Time Quantum Selection:

Rule of Thumb: Time quantum should be:

Large enough to amortize context switch overhead

Small enough to provide reasonable response time

Typically 10-100 ms in practice

Context Switch Overhead Analysis:

If context switch time = 0.1 ms:

With q = 5 ms: Overhead = (400 × 0.1) ÷ 2500 = 1.6%

With q = 1 ms: Overhead = (2400 × 0.1) ÷ 2500 = 9.6%


With q = 25 ms: Overhead = (99 × 0.1) ÷ 2500 = 0.4%
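
These figures are easy to reproduce programmatically. The C sketch below (not part of the original answer) uses the same counting convention as the calculation above - total slices minus the one terminating slice per process - so for q = 25 ms it reports 0 preemptive switches, whereas the text counts the 99 hand-offs between consecutive processes:

#include <stdio.h>

int main(void) {
    const int n = 100;                       /* number of processes */
    const int burst = 25;                    /* CPU time per process, ms */
    const double switch_cost = 0.1;          /* cost of one context switch, ms */
    const int quanta[] = {1, 5, 10, 25};

    for (int i = 0; i < 4; i++) {
        int q = quanta[i];
        int slices = n * ((burst + q - 1) / q);   /* ceil(burst/q) slices each */
        int switches = slices - n;                /* final slices end in termination */
        double overhead = 100.0 * switches * switch_cost / (n * burst);
        printf("q=%2d ms: slices=%4d switches=%4d overhead=%.1f%%\n",
               q, slices, switches, overhead);
    }
    return 0;
}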

System Performance Recommendations:

For Interactive Systems: Use smaller time quantum (5-20 ms) to ensure good response time despite
higher overhead.

For Batch Systems: Use larger time quantum (50-100 ms) to minimize overhead and maximize throughput.

For Mixed Workloads: Use adaptive time quantum that adjusts based on process behavior and system load.

Advanced Considerations:

Multi-level Queue Scheduling: Different process types can use different time quantums based on their
characteristics.

Process Behavior Analysis: I/O bound processes may not use full time quantum, reducing actual context
switches.

Priority Adjustments: Processes that don't use full quantum can receive priority boosts for better
interactivity.

This analysis demonstrates that time quantum selection significantly impacts system performance,
requiring careful balance between responsiveness and efficiency.

UNIT III - MEMORY MANAGEMENT

Short Answer Questions (2 Marks)


1. State two key benefits of using segmentation combined with paging.

Eliminates external fragmentation (from paging)

Provides logical organization of memory (from segmentation)

2. What is copy-on-write in the context of memory management? Copy-on-write is a memory optimization technique where multiple processes share the same memory pages until one process attempts to modify a page. Only then is a separate copy created for the modifying process.

3. Mention two common memory allocation strategies used in operating systems.

First Fit: Allocate first available block that fits

Best Fit: Allocate smallest block that fits

4. List the major concerns addressed by memory allocation techniques.

Memory utilization efficiency

Fragmentation minimization

5. Give two reasons why demand paging is useful in virtual memory systems.

Reduces memory requirements by loading only needed pages

Allows execution of programs larger than physical memory


6. Define hierarchical page tables and mention one of their advantages. Hierarchical page tables are
multi-level page tables that break the virtual address into multiple parts. Advantage: Reduces memory
overhead for sparse address spaces.

7. What does the term 'thrashing' refer to in memory systems? Thrashing occurs when a system
spends more time swapping pages between memory and disk than executing processes, leading to
severe performance degradation due to excessive page faults.

8. Name the main components of a page table.

Valid/Invalid bit

Frame number

9. Explain the difference between internal fragmentation and external fragmentation.

Internal Fragmentation: Wasted space within allocated memory blocks (unused portions of
allocated pages/segments)

External Fragmentation: Free memory exists but is scattered in small, non-contiguous blocks

10. How does paging differ from segmentation in memory management? Paging divides memory
into fixed-size blocks, while segmentation divides memory into variable-size logical units based on
program structure.

Medium to Long Answer Questions


23. Memory Allocation Problem - First Fit, Best Fit, Worst Fit Analysis

Problem Statement: Free memory holes of sizes 15K, 10K, 5K, 25K, 30K, 40K are available. The processes
of size 12K, 2K, 25K, 20K need to be allocated. Demonstrate how processes are placed using First Fit, Best
Fit, and Worst Fit algorithms. Calculate internal and external fragmentation for each method.

Initial Memory State:

Available holes: [15K, 10K, 5K, 25K, 30K, 40K]
Processes to allocate: [P1=12K, P2=2K, P3=25K, P4=20K]

First Fit Algorithm:

Principle: Allocate the first available hole that is large enough to accommodate the process.

Step-by-step Allocation:

Process P1 (12K):

Scan holes: 15K ≥ 12K ✓


Allocate in 15K hole
Remaining: 15K - 12K = 3K
Updated holes: [3K, 10K, 5K, 25K, 30K, 40K]

Process P2 (2K):

Scan holes: 3K ≥ 2K ✓
Allocate in 3K hole

Remaining: 3K - 2K = 1K
Updated holes: [1K, 10K, 5K, 25K, 30K, 40K]

Process P3 (25K):

Scan holes: 1K < 25K, 10K < 25K, 5K < 25K, 25K ≥ 25K ✓
Allocate in 25K hole

Remaining: 25K - 25K = 0K


Updated holes: [1K, 10K, 5K, 30K, 40K]

Process P4 (20K):

Scan holes: 1K < 20K, 10K < 20K, 5K < 20K, 30K ≥ 20K ✓

Allocate in 30K hole

Remaining: 30K - 20K = 10K


Final holes: [1K, 10K, 5K, 10K, 40K]

First Fit Results:

P1 allocated in the 15K hole, leaving a 3K hole (reused by P2)
P2 allocated in that 3K hole, leaving a 1K hole
P3 allocated in the 25K hole, an exact fit (0K left over)
P4 allocated in the 30K hole, leaving a 10K hole

Because each leftover is returned to the free list as a new hole, this kind of variable partitioning produces no internal fragmentation; the waste appears as external fragmentation. Free memory remaining = 1K + 10K + 5K + 10K + 40K = 66K, of which 26K is scattered across holes of 10K or less.

Best Fit Algorithm:

Principle: Allocate the smallest available hole that is large enough to accommodate the process.

Step-by-step Allocation:

Process P1 (12K):

Available holes: [15K, 10K, 5K, 25K, 30K, 40K]


Holes ≥ 12K: [15K, 25K, 30K, 40K]

Best fit: 15K (smallest among suitable)


Remaining: 15K - 12K = 3K
Updated holes: [3K, 10K, 5K, 25K, 30K, 40K]

Process P2 (2K):

Available holes: [3K, 10K, 5K, 25K, 30K, 40K]

Holes ≥ 2K: [3K, 10K, 5K, 25K, 30K, 40K]

Best fit: 3K (smallest among suitable)

Remaining: 3K - 2K = 1K
Updated holes: [1K, 10K, 5K, 25K, 30K, 40K]

Process P3 (25K):

Available holes: [1K, 10K, 5K, 25K, 30K, 40K]


Holes ≥ 25K: [25K, 30K, 40K]

Best fit: 25K (smallest among suitable)


Remaining: 25K - 25K = 0K

Updated holes: [1K, 10K, 5K, 30K, 40K]

Process P4 (20K):

Available holes: [1K, 10K, 5K, 30K, 40K]

Holes ≥ 20K: [30K, 40K]

Best fit: 30K (smallest among suitable)

Remaining: 30K - 20K = 10K


Final holes: [1K, 10K, 5K, 10K, 40K]

Best Fit Results:

P1 allocated in the 15K hole, leaving 3K
P2 allocated in the 3K hole, leaving 1K
P3 allocated in the 25K hole, an exact fit
P4 allocated in the 30K hole, leaving 10K

For this request sequence, Best Fit happens to make the same placements as First Fit. Internal fragmentation = 0 (each leftover remains a usable hole); free memory remaining = 1K + 10K + 5K + 10K + 40K = 66K.

Worst Fit Algorithm:

Principle: Allocate the largest available hole to accommodate the process.

Step-by-step Allocation:

Process P1 (12K):

Available holes: [15K, 10K, 5K, 25K, 30K, 40K]


Largest hole: 40K

Allocate in 40K hole


Remaining: 40K - 12K = 28K

Updated holes: [15K, 10K, 5K, 25K, 30K, 28K]

Process P2 (2K):

Available holes: [15K, 10K, 5K, 25K, 30K, 28K]

Largest hole: 30K


Allocate in 30K hole
Remaining: 30K - 2K = 28K

Updated holes: [15K, 10K, 5K, 25K, 28K, 28K]

Process P3 (25K):

Available holes: [15K, 10K, 5K, 25K, 28K, 28K]


Largest hole: 28K (either one)
Allocate in first 28K hole

Remaining: 28K - 25K = 3K


Updated holes: [15K, 10K, 5K, 25K, 3K, 28K]

Process P4 (20K):

Available holes: [15K, 10K, 5K, 25K, 3K, 28K]

Largest hole: 28K


Allocate in 28K hole
Remaining: 28K - 20K = 8K

Final holes: [15K, 10K, 5K, 25K, 3K, 8K]

Worst Fit Results:

P1 allocated in the 40K hole, leaving 28K
P2 allocated in the 30K hole, leaving 28K
P3 allocated in one 28K hole, leaving 3K
P4 allocated in the other 28K hole, leaving 8K

Internal fragmentation = 0 (each leftover remains a usable hole); free memory remaining = 15K + 10K + 5K + 25K + 3K + 8K = 66K. Note that Worst Fit has broken up both large holes: the largest remaining hole is only 25K.

Comparative Analysis:

Algorithm    Free Memory    Largest Remaining Hole    Final Holes
First Fit    66K            40K                       [1K, 10K, 5K, 10K, 40K]
Best Fit     66K            40K                       [1K, 10K, 5K, 10K, 40K]
Worst Fit    66K            25K                       [15K, 10K, 5K, 25K, 3K, 8K]

All three algorithms leave the same total free memory (125K of holes - 59K allocated = 66K), but they distribute it very differently. First Fit and Best Fit preserve the 40K hole intact, so a later request of up to 40K can still be satisfied; Worst Fit splits both large holes, and any request larger than 25K would now fail. This is why Worst Fit generally performs poorly in practice despite the intuition that large leftovers remain more usable.
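
To make the three placement policies concrete, here is a minimal C sketch (illustrative only; it splits holes in place and performs no coalescing) that selects a hole for each request under First, Best, or Worst Fit:

#include <stdio.h>

enum strategy { FIRST_FIT, BEST_FIT, WORST_FIT };

/* Return the index of the chosen hole, or -1 if no hole fits. */
int pick_hole(const int holes[], int n, int request, enum strategy s) {
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (holes[i] < request)
            continue;                            /* too small */
        if (s == FIRST_FIT)
            return i;                            /* first large-enough hole wins */
        if (chosen == -1 ||
            (s == BEST_FIT  && holes[i] < holes[chosen]) ||
            (s == WORST_FIT && holes[i] > holes[chosen]))
            chosen = i;                          /* track smallest/largest fit */
    }
    return chosen;
}

int main(void) {
    int holes[] = {15, 10, 5, 25, 30, 40};       /* sizes in K, as in the problem */
    int requests[] = {12, 2, 25, 20};            /* P1..P4 */
    for (int r = 0; r < 4; r++) {
        int i = pick_hole(holes, 6, requests[r], BEST_FIT);
        if (i < 0) {
            printf("P%d (%dK) cannot be placed\n", r + 1, requests[r]);
            continue;
        }
        printf("P%d (%dK) -> %dK hole, leftover %dK\n",
               r + 1, requests[r], holes[i], holes[i] - requests[r]);
        holes[i] -= requests[r];                 /* leftover stays on the free list */
    }
    return 0;
}

Changing BEST_FIT to FIRST_FIT or WORST_FIT reproduces the other two allocation traces shown above.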