OS Unit 2 Question Bank – Processes, Threads, CPU Scheduling, Synchronization and Deadlocks

PART – A

1. Define Process?[R]
A Process can be thought of as a program in execution. A process will need certain resources such as CPU time,
memory, files & I/O devices to accomplish its task.
Draw & briefly explain the process states?[U] or Name and draw five different process states with proper
definition. (NOV/DEC 2017)

New – The process is being created.
Running – Instructions are being executed.
Waiting – The process is waiting for some event to occur.
Ready – The process is waiting to be assigned a processor.
Terminated – The process has finished execution.
2. What is process control block? List out the data fields associated with PCB. (APR/MAY 2015) [R]
Each process is represented in the operating system by a process control block (PCB), also called a task control block. It contains the following fields:

Process state

Process number

Program counter

CPU registers

Memory limits

List of open files

CPU scheduling information

Memory management information

Accounting information

I/O status information


3. What is meant by context switching?[R]
Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch.
4. Define co- operating process and independent process.[R]
Independent process:
o A process is independent if it cannot affect or be affected by the other processes executing in the
system.
o A process that does not share data with any other process is independent. Cooperating
process:
o A process is co-operating if it can affect or be affected by other processes executing in the
system.
o Any process that shares data with any other process is cooperating.
5. What are the benefits of multithreaded programming? [R]
The benefits of multithreaded programming can be broken down into four major categories:
• Responsiveness
• Resource sharing
• Economy
• Scalability (utilization of multiprocessor architectures)


6. What is a thread?[R]
A thread, otherwise called a lightweight process (LWP), is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set and a stack. It shares with other threads belonging to the same process its code section, data section, and operating system resources such as open files and signals.

7. Under what circumstances do CPU scheduling decisions take place? [An]
(1) When a process switches from the running state to the waiting state.
(2) When a process switches from the running state to the ready state.
(3) When a process switches from the waiting state to the ready state.
(4) When a process terminates.
8. What are the various scheduling criteria for CPU scheduling?[R]
The various scheduling criteria are

 CPU utilization

 Throughput

 Turnaround time

 Waiting time

 Response time
9. Write down the definition of the TestAndSet() instruction. [R]
boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = true;
    return rv;
}
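A minimal usage sketch (not part of the original answer): the TestAndSet() defined above can guard a critical section with a shared flag, assumed here to be named lock and initialized to false.

typedef int boolean;          /* matches the pseudocode type used above */
#define false 0

boolean lock = false;         /* shared; false = unlocked */

void enter_critical_section(void)
{
    while (TestAndSet(&lock))
        ;                     /* spin until the lock was previously false */
}

void leave_critical_section(void)
{
    lock = false;             /* exit section: release the lock */
}

A process that finds lock already true keeps looping, which is exactly the busy waiting described in the next question.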
10. Define busy waiting and spinlock. [R]
Busy waiting:
When a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code. This is called busy waiting.
Spinlock:
Busy waiting wastes CPU cycles that some other process might be able to use productively. This type of semaphore is also called a spinlock because the process "spins" while waiting for the lock.
11. What is meant by monitors? [R]
A monitor is a high-level synchronization construct. A monitor type is an abstract data type (ADT) that presents a set of programmer-defined operations that are provided with mutual exclusion within the monitor.
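C has no built-in monitor construct, so the following is only an illustrative sketch of the idea using the Pthreads API: one mutex plays the role of the implicit monitor lock (at most one thread active inside at a time), and a condition variable provides wait/signal. The names balance, deposit and withdraw are hypothetical.

#include <pthread.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  funds_avail  = PTHREAD_COND_INITIALIZER;
static int balance = 0;                  /* shared state protected by the "monitor" */

void deposit(int amount)
{
    pthread_mutex_lock(&monitor_lock);   /* enter the monitor */
    balance += amount;
    pthread_cond_signal(&funds_avail);   /* wake one waiting withdrawer */
    pthread_mutex_unlock(&monitor_lock); /* leave the monitor */
}

void withdraw(int amount)
{
    pthread_mutex_lock(&monitor_lock);
    while (balance < amount)             /* condition-variable wait, as in a monitor */
        pthread_cond_wait(&funds_avail, &monitor_lock);
    balance -= amount;
    pthread_mutex_unlock(&monitor_lock);
}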
12. What are the characterizations of deadlock?[R]
1. Mutual exclusion: only one process at a time can use a resource.
2. Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by
other processes.
3. No preemption: a resource can be released only voluntarily by the process holding it, after that process
has completed its task.
4. Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
Deadlock can arise only if all four conditions hold simultaneously.
13. Differentiate a Thread from a Process. (NOV/DEC 2012) [An]
Threads

 Will by default share memory

 Will share file descriptors

 Will share file system context

 Will share signal handling


Processes

 Will by default not share memory

 Most file descriptors not shared

 Don't share file system context

 Don't share signal handling

14. What are the difference b/w user level threads and kernel level threads? (MAY/JUNE 2012) (MAY/ JUNE
2016) (NOV/DEC 2015)[An]
User threads
User threads are supported above the kernel and are implemented by a thread library at the user level. Thread creation and scheduling are done in user space, without kernel intervention. Therefore they are fast to create and manage; however, a blocking system call will cause the entire process to block.
Kernel threads
Kernel threads are supported directly by the operating system. Thread creation, scheduling and management are done by the operating system. Therefore they are slower to create and manage compared to user threads. If a thread performs a blocking system call, the kernel can schedule another thread in the application for execution.
15. What is the use of fork and exec system calls?[R]
Fork is a system call by which a new process is created. Exec is also a system call, which is used after a fork by one of the two processes to replace the process's memory space with a new program.
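A minimal sketch of the fork/exec pattern described above; the program executed here (/bin/ls) is only an example.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                    /* create a new (child) process */

    if (pid < 0) {                         /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {                 /* child: replace its memory image */
        execlp("/bin/ls", "ls", (char *)NULL);
        perror("exec");                    /* reached only if exec fails */
        exit(1);
    } else {                               /* parent: wait for the child */
        wait(NULL);
        printf("child complete\n");
    }
    return 0;
}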
16. Define thread cancellation & target thread.[R]
The thread cancellation is the task of terminating a thread before it has completed. A thread that is to be
cancelled is often referred to as the target thread. For example, if multiple threads are concurrently searching
through a database and one thread returns the result, the remaining threads might be cancelled.
17. What are the different ways in which a thread can be cancelled?[An]
Cancellation of a target thread may occur in two different scenarios:
• Asynchronous cancellation: One thread immediately terminates the target thread is called asynchronous
cancellation.
• Deferred cancellation: The target thread can periodically check if it should terminate, allowing the
target thread an opportunity to terminate itself in an orderly fashion.
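A hedged Pthreads sketch of deferred cancellation: the worker periodically calls pthread_testcancel(), a cancellation point, so it terminates itself only at well-defined places. The name search_worker is illustrative; compile with -pthread.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *search_worker(void *arg)
{
    int oldtype;
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &oldtype);  /* the default type */
    for (;;) {
        /* ... examine the next chunk of the database ... */
        pthread_testcancel();      /* explicit cancellation point */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, search_worker, NULL);
    sleep(1);                      /* let the worker run briefly */
    pthread_cancel(tid);           /* request cancellation of the target thread */
    pthread_join(tid, NULL);       /* worker exits at its next cancellation point */
    printf("target thread cancelled\n");
    return 0;
}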
18. Define PThreads[R]
PThreads refers to the POSIX standard defining an API for thread creation and synchronization. This is a
specification for thread behavior, not an implementation.
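A minimal Pthreads example in the usual textbook style: one thread is created to compute a small sum and the main thread joins it. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

void *runner(void *param)              /* the thread body */
{
    int n = *(int *)param;
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum += i;
    printf("sum = %d\n", sum);
    pthread_exit(NULL);
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    int n = 10;

    pthread_attr_init(&attr);                 /* default thread attributes */
    pthread_create(&tid, &attr, runner, &n);  /* create the thread */
    pthread_join(tid, NULL);                  /* wait for it to finish */
    return 0;
}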
19. What is critical section problem?[R]
Consider a system consisting of n processes. Each process has a segment of code called a critical section, in which the process may be changing common variables, updating a table, or writing a file. When one process is executing in its critical section, no other process is allowed to execute in its critical section.
21. What are the requirements that a solution to the critical section problem must satisfy?[R]
The three requirements are:
• Mutual exclusion
• Progress
• Bounded waiting
22. Define mutual exclusion. (MAY/JUNE 2013)[R]
Mutual exclusion refers to the requirement of ensuring that no two processes or threads are in their critical sections at the same time.
i.e. If process Pi is executing in its critical section, then no other processes can be executing in their critical
sections.
23. Define entry section and exit section.[R]
The critical section problem is to design a protocol that the processes can use to cooperate. Each process must
request permission to enter its critical section.
Entry Section: The section of the code implementing this request is the entry section.
Exit Section: The section of the code following the critical section is an exit section.
The general structure:
do {
     entry section
          critical section
     exit section
          remainder section
} while (1);
24. Give two hardware instructions and their definitions which can be used for implementing
mutual exclusion.[An]
TestAndSet
boolean TestAndSet (boolean &target)
{
    boolean rv = target;
    target = true;
    return rv;
}
Swap
void Swap (boolean &a, boolean &b)
{
boolean temp = a;
a = b;
b = temp;
}
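A sketch (not from the original) of how the Swap() instruction above can provide mutual exclusion: lock is a shared variable assumed to be initialized to false, and key is local to each process.

typedef int boolean;
#define true  1
#define false 0

boolean lock = false;             /* shared */

void process(void)
{
    boolean key;
    while (true) {
        key = true;
        while (key == true)
            Swap(lock, key);      /* reference parameters, as declared above:
                                     spin until lock was observed false */

        /* ... critical section ... */

        lock = false;             /* release the lock */

        /* ... remainder section ... */
    }
}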
25. What is semaphore? Mention its importance in operating system. (APRIL/MAY 2010, NOV/DEC
2012)[R]
A semaphore S is a synchronization tool: an integer value that, apart from initialization, is accessed only through two standard atomic operations, wait and signal. Semaphores can be used to deal with the n-process critical-section problem. They can also be used to solve various synchronization problems.
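The classical busy-waiting definitions of wait() and signal() are sketched below (textbook names; a real program would avoid clashing with the library functions of the same name). Both bodies must execute atomically, which is exactly the point raised in the next question.

typedef struct {
    int value;
} semaphore;

void wait(semaphore *S)            /* also written P(S) */
{
    while (S->value <= 0)
        ;                          /* busy wait until the value is positive */
    S->value--;
}

void signal(semaphore *S)          /* also written V(S) */
{
    S->value++;
}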
26. How may mutual exclusion be violated if the wait and signal operations are not executed atomically? (MAY/JUNE 2014) [An]
A wait operation atomically decrements the value associated with a semaphore. If two wait operations are executed on a semaphore when its value is 1, and the two operations are not performed atomically, then it is possible that both operations proceed to decrement the semaphore value, thereby violating mutual exclusion.
27. Define CPU scheduling.[R]
CPU scheduling is the process of switching the CPU among various processes. CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive.
28. What is preemptive and non-preemptive scheduling? [An] (NOV/DEC 2008
,APRIL/MAY2010, MAY /JUNE 2012)
Under non preemptive scheduling once the CPU has been allocated to a process, the process keeps the
CPU until it releases the CPU either by terminating or switching to the waiting state.
Preemptive scheduling can preempt a process which is utilizing the CPU in between its execution and
give the CPU to another process.
29. What is a Dispatcher?[R]
The dispatcher is the module that gives control of the CPU to the process selected by the short-term
scheduler. This function involves:

 Switching context.

 Switching to user mode.

 Jumping to the proper location in the user program to restart that program.
30. Define the term 'dispatch latency'. (APR/MAY 2015) [R]
The time taken by the dispatcher to stop one process and start another running is known as dispatch
latency.
31. Define throughput?[R]
Throughput in CPU scheduling is the number of processes that are completed per unit time. For long
processes, this rate may be one process per hour; for short transactions, throughput might be 10 processes per
second.
32. What is turnaround time? (NOV/DEC 2013)[R]
Turnaround time is the interval from the time of submission to the time of completion of a process.
It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the
CPU, and doing I/O.
33. Define race condition.[R]
When several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, this is called a race condition. To avoid race conditions, only one process at a time should manipulate the shared variable.
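An illustrative Pthreads program showing a race condition: two threads increment a shared counter without synchronization, so the printed total is usually less than the expected 2000000. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                /* shared data, accessed without a lock */

void *increment(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;                      /* load, add, store: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}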
34. Write the four situations under which CPU scheduling decisions take place (MAY/JUNE 2014)
[R]
CPU scheduling decisions take place under one of four conditions:

 When a process switches from the running state to the waiting state, such as for an I/O request or
invocation of the wait ( ) system call.

 When a process switches from the running state to the ready state, for example in response to an
interrupt.

 When a process switches from the waiting state to the ready state, say at completion of I/O or
a return from wait ( ).

 When a process terminates.


35. Define deadlock. (APRIL/MAY 2010)[R]
A process requests resources; if the resources are not available at that time, the process enters a wait
state. Waiting processes may never again change state, because the resources they have requested are held by
other waiting processes. This situation is called a deadlock.
36. What is the sequence in which resources may be utilized?[R]
Under normal mode of operation, a process may utilize a resource in the following sequence:

 Request: If the request cannot be granted immediately, then the requesting process must wait
until it can acquire the resource.

 Use: The process can operate on the resource.

 Release: The process releases the resource.


37. What are conditions under which a deadlock situation may arise? (MAY/JUNE 2009 , MAY/JUNE
2012, MAY/JUNE 2013) (NOV/DEC 2013) [R]
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
a. Mutual exclusion
b. Hold and wait
c. No pre-emption
d. Circular wait
38. What is a resource-allocation graph? [R]
Resource allocation graph is directed graph which is used to describe deadlocks. This graph consists of a
set of vertices V and a set of edges E. The set of vertices V is partitioned into two different types of nodes; P the
set consisting of all active processes in the system and R the set consisting of all resource types in the system.
39. Define request edge and assignment edge. [R]
A directed edge from process Pi to resource type Rj (denoted by Pi → Rj) is called as request edge; it
signifies that process Pi requested an instance of resource type Rj and is currently waiting for that resource. A
directed edge from resource type Rj to process Pi (denoted by Rj → Pi) is called an assignment edge; it
signifies that an instance of resource type has been allocated to a process Pi.
40. What are the methods for handling deadlocks? (APRIL/MAY 2011)[R]
The deadlock problem can be dealt with in one of the three ways:
1. Use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlock state.
2. Allow the system to enter the deadlock state, detect it and then recover.
3. Ignore the problem all together, and pretend that deadlocks never occur in the system.
41. How does real-time scheduling differ from normal scheduling? (NOV/DEC 2012) [R]
In normal scheduling, we have two types of processes: user processes and kernel processes. Kernel processes have time constraints; however, user processes do not have time constraints.
In an RTOS, all processes are kernel processes and hence time constraints must be strictly followed. All processes/tasks (the terms can be used interchangeably) are based on priority, and time constraints are important for the system to run correctly.
42. What do you meant by short-term scheduler (NOV/DEC 2010) [R]
The selection process is carried out by the short-term scheduler or CPU scheduler. The scheduler selects a process from the processes in memory that are ready to execute and allocates the CPU to it.
43. What is the concept behind strong semaphore and spinlock? (NOV/DEC 2015) [R]
A spinlock is one possible implementation of a lock, namely one that is implemented by busy waiting
("spinning"). A semaphore is a generalization of a lock (or, the other way around, a lock is a special case of a
semaphore). Usually, but not necessarily, spinlocks are only valid within one process whereas semaphores can be
used to synchronize between different processes, too.
A semaphore has a counter and will allow itself being acquired by one or several threads, depending on
what value you post to it, and (in some implementations) depending on what its maximum allowable value is.
43. What is the meaning of the term busy waiting? (May/Jun 2016) [R]
Busy waiting means that a process is waiting for a condition to be satisfied in a tight loop without relinquishing the processor. Alternatively, a process could wait by relinquishing the processor and blocking on a condition, to be awakened at some appropriate time in the future.
44. Distinguish between CPU-bounded and I/O bounded processes (NOV/DEC 2016) [An]
CPU Bound means the rate at which process progresses is limited by the speed of the CPU. A task that
performs calculations on a small set of numbers, for example multiplying small matrices, is likely to be CPU
bound. I/O Bound means the rate at which a process progresses is limited by the speed of the I/O subsystem. A
task that processes data from disk, for example, counting the number of lines in a file is likely to be I/O bound.
45. What resources are required to create threads (NOV/DEC 2016) [R]
When a thread is created, it does not require many new resources; the thread shares the resources, such as memory, of the process to which it belongs. The benefit of this sharing is that it allows an application to have several different threads of activity all within the same address space.
46. ”Priority inversion is a condition that occurs in real time systems where a low priority process is
starved because higher priority processes have gained hold of the CPU”-Comment on this statement.
(APR/MAY 2017) [An]
Priority inversion is a problematic scenario in scheduling in which a high priority task is indirectly
preempted by a lower priority task effectively "inverting" the relative priorities of the two tasks. This violates the
priority model that high priority tasks can only be prevented from running by higher priority tasks and briefly by
low priority tasks which will quickly complete their use of a resource shared by the high and low priority
tasks.

47. Differentiate single threaded and multi-threaded processes. (APR/MAY 2017) [An]
Multithreaded Programming vs. Single Threaded Programming
1. Multithreaded: multiple threads run at the same time. Single threaded: a single thread runs at a time.
2. Multithreaded: does not use an event loop with polling. Single threaded: uses an event loop with polling.
3. Multithreaded: CPU time is never wasted. Single threaded: CPU time is wasted.
4. Multithreaded: idle time is minimal. Single threaded: idle time is more.
5. Multithreaded: results in more efficient programs. Single threaded: results in less efficient programs.
6. Multithreaded: when one thread is paused for some reason, other threads run as normal. Single threaded: when the single thread is paused, the system waits until it is resumed.

48. Elucidate mutex locks with its procedure. (NOV/DEC 2017)


A mutex is a program object that allows multiple program threads to share the same resource, such as file access, but not simultaneously. When a program starts, a mutex is created with a unique name. After this stage, any thread that needs the resource must lock the mutex, excluding other threads while it is using the resource. The mutex is unlocked when the data is no longer needed or the routine is finished. In the mutex-lock approach, in the entry section of code a LOCK is acquired over the critical resources modified and used inside the critical section, and in the exit section that LOCK is released. Because the resource is locked while a process executes its critical section, no other process can access it.
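A minimal Pthreads sketch of the pattern described above: the lock is acquired in the entry section and released in the exit section. The mutex and resource names are illustrative.

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_resource = 0;

void update_resource(void)
{
    pthread_mutex_lock(&lock);       /* entry section: acquire the LOCK */
    shared_resource++;               /* critical section */
    pthread_mutex_unlock(&lock);     /* exit section: release the LOCK */
}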
49. What are the benefits of synchronous and asynchronous communication? (APR/MAY 2018)
Benefits of synchronous communication:
• Synchronous communication enables flexibility and offers higher availability.
• There is less pressure on the system to act on the information or respond immediately in some way.
• One system being down does not impact the other system. For example, thousands of emails can be sent without requiring an immediate reply.
Benefits of asynchronous communication:
• Asynchronous message passing allows more parallelism.
• Since a process does not block, it can do some computation while the message is in transit.
• In the case of receive, this means a process can express its interest in receiving messages on multiple ports simultaneously.
50. Give a programming example in which multithreading does not provide better performance than
single-threaded solutions. (APR/MAY 2018)
Multithreading does not perform well for inherently sequential programs. For example, a program to calculate an individual tax return. Another example where multithreading does not help is a shell program such as the Korn shell.
51. Give the queuing diagram representation of process scheduling. (APR/MAY 2019)

52. List out the benefits and challenge of thread handling. (APR/MAY 2019)
Benefits

 Responsiveness.

 Resource sharing

 Economy

 Scalability.
Challenges

 Dividing activities

 Balance

 Data splitting

 Data dependency

 Testing and debugging


PART-B&C
1) Explain the FCFS, preemptive and non-preemptive versions of Shortest-Job First and Round Robin
(time slice = 2) scheduling algorithms with Gantt charts for the four Processes given. Compare their average
turnaround and waiting time. [E] (NOV/DEC 2012)
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

2) Discuss how scheduling algorithms are selected for a system. What are the criteria considered?
Explain the different evaluation Methods.[An] (MAY/JUNE 2014)
3) Write in detail about several CPU scheduling algorithms. [An] (APRIL/MAY2011)
4) What is critical section? Specify the requirements for a solution to critical section problem.
[An] (NOV/DEC 2012)
5) How monitors help in process synchronization. [An] (NOV/DEC 2009)
6) Write in detail about deadlock avoidance. [U] (NOV/DEC 2009)
7) Write in detail about deadlock recovery. [U] (APRIL/MAY2011)
8) Explain the Banker algorithm for deadlock avoidance in detail with an example. [Ap]
(APRIL/MAY2010, NOV/DEC 2012) (NOV/DEC 2013)
9) Consider the following set of processes, with the length of the CPU – burst time given in
Milliseconds:
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            3
P4        1            4
P5        5            2

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
1. Draw 4 Gantt charts illustrating the execution of these processes using FCFS, SJF, Priority and RR (Time Slice = 1) scheduling.
2. What is the turnaround time of each process for each of the scheduling algorithms?
3. Calculate the waiting time for each of the processes. [E] (MAY/JUNE 2012) (NOV/DEC 2015)
10) Consider the following questions based on the banker's algorithm: [E] (MAY/JUNE 2012)

Process   Allocation    Max          Available
          A B C D       A B C D      A B C D
P0        0 0 1 2       0 0 1 2      1 5 2 0
P1        1 0 0 0       1 7 5 0
P2        1 3 5 4       2 3 5 6
P3        0 6 3 2       0 6 5 2
P4        0 0 1 4       0 6 5 6

(1) Define safety algorithm.


(2) What is the content of the matrix Need?
(3) Is the system in a safe state?
(4) If a request from process P1 arrives for (0, 4, 2, 0), can the request be granted immediately?
11) (i) What is meant by critical section problem? Propose a solution based on bakery algorithm.
(ii) Consider the following snapshot of a system:
P0 – P4 are 5 processes present and A, B, C, D are the resources. The maximum need of a Process and the
allocated resources details are given in the table.
Answer the following based on banker's algorithm.
(1) What is the content of the NEED matrix?
(2) Is the system in a safe state?
(3) If a request from process P0 arrives for (0, 2, 0), can the request be granted immediately? [E]

Process   Allocation   Max       Available
          A B C        A B C     A B C
P0        0 1 0        7 5 3     3 3 2
P1        2 0 0        3 2 2
P2        3 0 2        9 0 2
P3        2 1 1        2 2 2
P4        0 0 2        4 3 3

12) Discuss the threading issues which are considered with multithreaded programs.
[An] (MAY/JUNE 2014) (APRIL/MAY 2011, MAY/JUNE 2012)
Consider the following snapshot of a system:
P0-P4 are the 5 processes present and A, B, C, D are the resources. The maximum need of each process and the allocated resource details are given in the table.

Process   Allocation    Max          Available
          A B C D       A B C D      A B C D
P0        0 0 1 2       0 0 1 2      1 5 2 0
P1        1 0 0 0       1 7 5 0
P2        1 3 5 4       2 3 5 6
P3        0 6 3 2       0 6 5 2
P4        1 0 1 4       0 6 5 6

Answer the following based on banker's algorithm


i) What is the content of NEED matrix?
ii) Is the system in a safe state?
iii) Which processes may cause deadlock if the system is not safe.
iv) If a request from process p1 arrives for (0, 4, 3, 1) can the request be granted immediately?
Justify. [E] (MAY/JUNE 2014)
13) Discuss in detail the critical section problem and also write the algorithm for the Readers-Writers Problem with semaphores [An] (NOV/DEC 2013)
14) Explain the FCFS, preemptive and non-preemptive versions of Shortest-Job First and Round Robin (time
slice = 2) scheduling algorithms with Gantt charts for the four Processes given. Compare their average
turnaround and waiting time. [Ap]
(APR/MAY 2015)

Process   Arrival Time   Burst Time
P1        0              10
P2        1              6
P3        2              12
P4        3              15

Discuss how deadlocks could be detected in detail. [An] (APR/MAY 2015)


15) Show how wait () and signal () semaphore operations could be implemented in multiprocessor
environments using the test and set instruction. The solution should exhibit minimal busy waiting. Develop
pseudo code for implementing the operations. [An] (APR/MAY 2015)
16) Discuss about the issues to be considered in the multithreaded program. [An]
(APR/MAY 2015)
17) (i) Explain thread and SMP management.
(ii) Illustrate Semaphores with neat example.
(iii) The operating system contains 3 resources; the numbers of instances of each resource type are 7, 7 and 10. The current resource allocation state is as shown below:

Process   Current Allocation   Maximum Need
          R1 R2 R3             R1 R2 R3
P1        2  2  3              3  6  8
P2        2  0  3              4  3  3
P3        1  2  4              3  4  4

18) Is the current allocation in a safe state? [E] (NOV/DEC 2015) [An] (MAY/JUNE 2016)
20) (i) Is it possible to have concurrency but not parallelism? Explain.
(ii) Consider a system consisting of four resources of the same type that are shared by three processes, each of
which needs at most two resources. Show that the system is deadlock free.
(i) Describe the actions taken by a kernel to context-switch between processes.
(ii) Provide two programming examples in which multithreading does not provide better performance
than a single-threaded solution. [An] (MAY/JUNE 2016)
19) (i) Give an example of a situation in which ordinary pipes are more suitable than named pipes and an
example of a situation in which named pipes are more suitable than ordinary pipes. (8) (NOV/DEC 2016) [An]
(ii) Describe the differences among short-term, medium-term, and long term scheduling [U](8)
(NOV/DEC 2016)
20) (i) Explain why interrupts are not appropriate for implementing synchronization primitives in
multiprocessor systems[An] (8) (NOV/DEC 2016)
(ii) What are the different thread libraries used? Explain any one with an example [An] (8)
(NOV/DEC 2016)
21) Consider the following set of processes, with the length of the CPU-burst time given in ms:

Process   Burst Time   Arrival Time
P1        8            0.00
P2        4            1.001
P3        9            2.001
P4        5            3.001
P5        3            4.001
Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, Priority and RR (Quantum = 2)
scheduling. Also calculate the waiting time and turnaround time for each scheduling algorithm [E]. (13) (APR/MAY
2017)
22) What is a race condition? Explain how a critical section avoids this condition. What are the
properties which a data item should possess to implement a critical section? Describe a solution to the
Dining philosopher problem so that no races arise. [An] (13) (APR/MAY 2017) (APR/MAY 2019).
23) i) What is a process ? Discuss components of process and various states of a process with the help of
a process state transition diagram. (8) [U](NOV/DEC 2017)
ii)Write the difference between user thread and kernel thread. (5)[An] (NOV/DEC 2017)
24) i) What is the average turnaround time for the following processes using
a) FCFS (3)
b) SJF non-preemptive. (3)
c) Preemptive SJF.(3) [U] (NOV/DEC 2017)
ii) With example elucidate livelock. (4) [R](NOV/DEC 2017)
25) Describe the difference among short-term, medium-term and long term scheduling with suitable
example. [An] (APR/MAY 2018)
26) Explain the differences in the degree to which the following scheduling algorithms discriminate in favor
of short processes: [An] (APR/MAY 2018)
i) RR
ii) Multilevel feedback queues.
27) What do you mean by the term synchronization? What is a semaphore? Explain how a semaphore can be used as a synchronization tool. Consider a coke machine that has 10 slots. The producer is the delivery person and the consumer is the student using the machine. It uses the following three semaphores: (15) [An] (APR/MAY 2017)
semaphore mutex;
semaphore fullBuffer;     /* Number of filled slots */
semaphore emptyBuffer;    /* Number of empty slots */
(i) Write pseudo code for delivery_person() and student()
(ii) What will be the initial values of the semaphores?
(iii) Write a solution that guarantees the mutual exclusion and has no deadlocks
28) What is deadlock? What are the necessary conditions for deadlock to occur? Explain the deadlock
prevention method of handling deadlock. (15)[An] (APR/MAY 2017) Consider the following information
about resources in a system.
(i) There are two classes of allocatable resource labeled R1 and R2
(ii) There are two instances of each resource
(iii) There are four processes labeled p1 through p4
(iv) There are some resource instances already allocated to processes as follows:
 One instance of R1 held by p2, another held by p3

 One instance of R2 held by p1, another held by p4


(v) Some processes have requested additional resources, as follows:-
 p1 wants one instance of R1 .

 p3 wants one instance of R2


(1) Draw the resource allocation graph for this system
(2) What is the state (runnable, waiting) of each process? For each process that is waiting indicate what it is
waiting for
(3) Is this system deadlocked? If so, state which processes are involved. If not, give an execution sequence
that eventually ends, showing resource acquisition and release at each step.
29) Consider the following system snapshot using data structures in the Banker's algorithm, with resources A, B, C and D and processes P0 to P4: [E] (NOV/DEC 2017)

Using Banker's algorithm, answer the following questions:

a) How many resources of type A, B, C and D are there? (2)
b) What are the contents of the need matrix? (3)
c) Is the system in a safe state? Why? (3)
d) If a request from process P4 arrives for additional resources of (1,2,0,0), can the Banker's algorithm grant the request immediately? Show the new system state and other criteria. (7)
30) i) Consider the atomic fetch-and-set x, y instruction, which unconditionally sets the memory location x to 1 and fetches the old value of x into y without allowing any intervening access to the memory location x. Consider the following implementation of the P and V functions on a binary semaphore. (15) [An] (NOV/DEC 2017)
void P (binary_semaphore *s) {
    unsigned y;
    unsigned *x = &(s->value);
    do {
        fetch-and-set x, y;
    } while (y);
}
void V (binary_semaphore *s) {
    s->value = 0;
}
Write whether the implementation may or may not work if context switching is disabled in P.
(ii) Consider a situation where we have a file shared between many people. If one of the people tries editing the
file, no other person should be reading or writing at the same time, otherwise changes will not be visible to
him/her. However if some person is reading the file, then others may read it at the same time. [An](NOV/DEC
2017)
a) What kind of situation is this?
b) Consider the following problem parameters to solve this situation.
Problem parameters:
1) One set of data is shared among a number of processes.
2) Once a writer is ready, it performs its write. Only one writer may write at a time.
3) If a process is writing, no other process can read it.
4) If at least one reader is reading, no other process can write.
5) Readers may not write and only read.
31) Consider a system consisting of m resources of the same type being shared by n processes. Resources can be requested and released by processes only one at a time. Show that the system is deadlock free if the following two conditions hold: (15) [An] (APR/MAY 2018)
i) The maximum need of each process is between 1 and m resources.
ii) The sum of all maximum needs is less than m + n.
32) Consider the following set of processes, with the length of the CPU burst given in milliseconds: [E]
(APR/MAY 2018)
The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
(1) Draw Gantt charts that illustrate the execution of these processes using the scheduling
algorithms FCFS (smaller priority number implies higher priority) and RR (quantum = 1). (10)
(2) What is the waiting time of each process for each of the scheduling algorithms? (5)
33) Write the algorithm using the test-and-set() instruction that satisfies all the critical-section requirements. (5)
(APR/MAY 2019)
34) Consider the following snapshot of a system:
P0-P4 are the 5 processes present and A, B, C, D are the resources. The maximum need of each process and the allocated resource details are given in the table.

Process   Allocation    Max          Available
          A B C D       A B C D      A B C D
P0        2 0 0 1       4 2 1 2      3 3 2 1
P1        3 1 2 1       5 2 5 2
P2        2 1 0 3       2 3 1 6
P3        1 3 1 2       1 4 2 4
P4        1 4 3 2       3 6 6 5

Answer the following based on banker's algorithm


1. Illustrate that the system is in safe state by demonstrating an order in which the process may
complete?
2. If a request from a process p1 arrives for (1,1,0,0) can the request be granted immediately.
3. If the request from p4 arrives for (0,0,2,0) can the request be granted immediately?
(13) [E] (APR/MAY 2019)
35) (i) Consider the following set of processes with the length of CPU-burst time given in milliseconds.

Process   Burst Time   Priority   Arrival Time
P1        10           3          0
P2        1            1          1
P3        2            3          2
P4        1            4          1
P5        5            2          2

Draw the Gantt chart for the execution of these processes using FCFS, SJF, SRTS, pre-emptive and non-pre-emptive priority, and round robin with a time slice of 2 ms. Find the average waiting and turnaround time using each of the methods. (10)
(ii)Explain Multi level queue and multi-level feedback queue scheduling with suitable example. (5)
(APR/MAY 2019)
36) (i) Consider two processes, p1 and p2, where p1 = 50, t1 = 25, p2 = 75 and t2 = 30. Can these two processes be
scheduled using rate-monotonic scheduling and earliest deadline first scheduling. Illustrate your answer using
Gantt charts. (10)
(ii) Explain in detail about paging in 32 bit and 64 bit architectures. (5) (APR/MAY 2019)
37) (i) Explain banker algorithm for deadlock avoidance with suitable example. (7)
(ii) A system has four processes and five resources. The current allocation and maximum need are as
follows (NOV/DEC 2021)

Consider the value of x as 1, 2, 3. What is the smallest value of x for which the above system becomes a safe state?
38) (i) What is a critical section? Discuss in detail the readers-writers problem. (7) (NOV/DEC 2021)
(ii) Define Deadlock. State the condition for deadlock. Explain the steps involved in deadlock recovery. (6)
