OS Chapter One and Two Sample Questions - 2
Multitasking is a logical extension of multiprogramming. The basic difference between multitasking and multithreading is that multitasking allows the CPU to perform multiple tasks (programs, processes, tasks, threads) simultaneously, whereas multithreading is thread-based multitasking: it allows multiple threads of the same process to execute simultaneously.
Definition of Multitasking
Multitasking is when a single CPU performs several tasks (programs, processes, tasks, threads) at the same time. To perform multitasking, the CPU switches among these tasks very frequently so that the user can interact with each program simultaneously.
In a multitasking operating system, several users can share the system simultaneously. As we saw, the CPU rapidly switches among the tasks, so only a little time is needed to switch from one user to the next. This gives each user the impression that the entire computer system is dedicated to him. When several users share a multitasking operating system, CPU scheduling and multiprogramming make it possible for each user to receive at least a small portion of the CPU's time and to have at least one program in memory for execution.
Definition of Multithreading
Before studying multithreading, let us ask: what is a thread? A thread is a basic execution unit which has its own program counter, set of registers, and stack, but it shares the code, data, and files of the process to which it belongs. A process can have multiple threads active simultaneously, and the CPU switches among these threads so frequently that the user gets the impression that all threads are running simultaneously; this is called multithreading.
Multithreading increases the responsiveness of a system: if one thread of an application is not responding, another can still respond, so the user does not have to sit idle. Multithreading allows resource sharing, as threads belonging to the same process can share the code and data of the process, and it allows a process to have multiple threads active at the same time in the same address space.
Creating a new process is costlier, as the system has to allocate separate memory and resources to each process, but creating threads is cheap, since threads of the same process do not require separately allocated memory and resources.
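A small sketch of this resource sharing (illustrative Java, with made-up names): two threads of the same process update one shared variable, without any separate memory being allocated for them.
// Two threads of one process increment a shared counter.
public class SharedCounter
{
    private static int counter = 0; // process data shared by all threads

    public static void main(String[] args) throws InterruptedException
    {
        Runnable work = () -> {
            for (int i = 0; i < 1000; i++)
            {
                synchronized (SharedCounter.class) // avoid a data race
                {
                    counter++;
                }
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println("counter = " + counter); // prints 2000
    }
}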
Definition of Multiprocessing
Multiprocessing means adding more CPUs/processors to the system, which increases its computing speed.
Multiprocessing
A multiprocessing system is one which has two or more processors. CPUs are added to the system to increase its computing speed. Each CPU has its own set of registers and main memory. Because the CPUs are separate, it may happen that one CPU has nothing to process and sits idle while another is overloaded with processes. In such cases, the processes and the resources are shared dynamically among the processors.
If the processor has an integrated memory controller, then adding a processor also increases the amount of addressable memory in the system. Multiprocessing can change the memory access model from uniform memory access to non-uniform memory access. With uniform memory access, accessing any RAM takes the same time from any processor. With non-uniform memory access, some parts of memory take longer to access than others.
Process and thread are essentially correlated. A process is an execution of a program, whereas a thread is an execution of a program driven within the environment of a process. Another major point which differentiates process and thread is that processes are isolated from each other, whereas threads share memory and resources with each other.
Definition of Process
A process is the execution of a program and performs the relevant actions specified in the program; it is an execution unit where a program runs. The operating system creates, schedules and terminates processes for the use of the CPU. The other processes created by the main process are known as child processes.
Process operations are controlled with the help of the PCB (Process Control Block), which can be considered the brain of the process; it contains all the crucial information regarding a process, such as the process id, priority, state, PSW (program status word) and the contents of the CPU registers.
The PCB is a kernel data structure used by three kinds of functions: scheduling, dispatching and context save.
Scheduling – It is a method of selecting the sequence of processes; in simple words, it chooses the process which has to be executed first on the CPU.
Dispatching – It sets up an environment for the process to be executed.
Context save – This function saves the information regarding a process when it gets blocked or preempted, so that it can later be resumed.
There are certain states involved in a process lifecycle, such as ready, running, blocked and terminated. Process states are used for keeping track of process activity at an instant.
From the programmer's point of view, processes are the medium for achieving concurrent execution of a program. The chief process of a concurrent program creates a child process. The main process and the child process need to interact with each other to achieve a common goal. Interleaving the operations of processes enhances computation speed when an I/O operation in one process overlaps with a computational activity in another process.
Definition of Thread
A thread is a program execution that uses process resources for accomplishing its task. All threads within a single program are logically contained within a process. The kernel allocates a stack and a thread control block (TCB) to each thread. The operating system saves only the stack pointer and CPU state at the time of switching between the threads of the same process.
Threads are implemented in three different ways: kernel-level threads, user-level threads, and hybrid threads. Threads can have three states: running, ready and blocked. A thread's context includes only the computational state, not the resource allocation and communication state, which reduces the switching overhead. Multithreading enhances concurrency (parallelism), hence speed also increases.
Multithreading also comes with demerits: multiple threads by themselves do not create complexity, but the interaction between them does.
A thread must have a priority property when multiple threads are active. The priority of a thread determines how much execution time it gets relative to other active threads in the same process. What is a thread in Java?
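As a brief, illustrative answer (class and variable names are made up): a Java thread is created by implementing Runnable or extending Thread, and setPriority() adjusts its share of execution time relative to other threads.
// Creating two Java threads with different priorities.
public class ThreadDemo
{
    public static void main(String[] args)
    {
        Runnable task = () ->
            System.out.println(Thread.currentThread().getName() + " running");

        Thread low  = new Thread(task, "low-priority");
        Thread high = new Thread(task, "high-priority");

        low.setPriority(Thread.MIN_PRIORITY);   // priority 1
        high.setPriority(Thread.MAX_PRIORITY);  // priority 10

        low.start();
        high.start();
    }
}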
3. Difference between Non-Preemptive and Preemptive Scheduling schemes in OS
It is the responsibility of the CPU scheduler to allot a process to the CPU whenever the CPU is in the idle state. The CPU scheduler selects a process from the ready queue and allocates the CPU to it. Scheduling which takes place when a process switches from running state to ready state, or from waiting state to ready state, is called Preemptive Scheduling.
Let us take an example of preemptive scheduling. We have four processes P0, P1, P2, P3, of which P2 arrives at time 0. The CPU is allocated to process P2, as there is no other process in the queue. While P2 is executing, P3 arrives at time 1; the remaining time for process P2 (5 milliseconds) is larger than the time required by P3 (4 milliseconds), so the CPU is allocated to process P3.
While P3 is executing, process P1 arrives at time 2. Now the remaining time for P3 (3 milliseconds) is less than the time required by processes P1 (4 milliseconds) and P2 (5 milliseconds), so P3 is allowed to continue. While P3 continues, process P0 arrives at time 3; the remaining time for P3 (2 milliseconds) is equal to the time required by P0 (2 milliseconds), so P3 continues. After P3 terminates, the CPU is allocated to P0, as it has less burst time than the others. After P0 terminates, the CPU is allocated to P1 and then to P2.
On the other hand, scheduling which takes place only when a process terminates or switches from running to waiting state is called Non-Preemptive Scheduling. The basic difference between preemptive and non-preemptive scheduling lies in the name itself: in preemptive scheduling a running process can be preempted and rescheduled; in non-preemptive scheduling a running process cannot be preempted.
In non-preemptive scheduling, if a process with a long CPU burst time is executing, then the other processes have to wait for a long time, which increases the average waiting time of the processes in the ready queue. However, non-preemptive scheduling does not have the overhead of switching processes between the ready queue and the CPU, but it makes scheduling rigid, as the process in execution is not preempted even for a process with higher priority.
Let us solve the above scheduling example in non-preemptive fashion. Initially process P2 arrives at time 0, so the CPU is allocated to P2; it takes 6 milliseconds to execute. In the meantime all the other processes, i.e. P0, P1 and P3, arrive in the ready queue, but all wait till process P2 completes its CPU burst time. The process that arrived after P2, i.e. P3, is then allocated the CPU until it finishes its burst time. Similarly, P1 executes next, and the CPU is finally given to process P0. Can you calculate the waiting time, average waiting time, turnaround time and average turnaround time for each scheduling algorithm using a C++ program?
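A minimal sketch of such a program, written here in Java to match the other code examples in this handout (the arrival and burst times are taken from the walkthrough above: P2 arrives at 0 with burst 6, P3 at 1 with burst 4, P1 at 2 with burst 4, P0 at 3 with burst 2, executed non-preemptively in the order P2, P3, P1, P0):
// Computes waiting and turnaround times for a given execution order.
public class SchedulingTimes
{
    public static void main(String[] args)
    {
        String[] name    = {"P2", "P3", "P1", "P0"}; // execution order
        int[]    arrival = { 0,    1,    2,    3  };
        int[]    burst   = { 6,    4,    4,    2  };

        int clock = 0;
        double totalWait = 0, totalTat = 0;
        for (int i = 0; i < name.length; i++)
        {
            if (clock < arrival[i]) clock = arrival[i]; // CPU idle gap
            int completion = clock + burst[i];
            int tat  = completion - arrival[i];  // turnaround time
            int wait = tat - burst[i];           // waiting time
            System.out.println(name[i] + ": waiting=" + wait + " turnaround=" + tat);
            totalWait += wait;
            totalTat  += tat;
            clock = completion;
        }
        System.out.println("Avg waiting=" + totalWait / name.length
                         + "  Avg turnaround=" + totalTat / name.length);
    }
}
For this schedule the program prints an average waiting time of 6.0 ms and an average turnaround time of 10.0 ms; the preemptive schedule can be timed the same way once its start and completion times are known.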
Sometimes the number of processes submitted to the system is more than can be executed immediately. In such cases, the processes are spooled to mass storage, where they reside until they can be executed later. The Long-Term Scheduler then selects processes from this spool, which is also called the Job Pool, and loads them into the Ready Queue for further execution.
It is also called the Job Scheduler. The Long-Term Scheduler picks up processes from the Job Pool much less frequently than the Short-Term Scheduler picks processes from the Ready Queue.
The Long-Term Scheduler controls the Degree of Multiprogramming, which is stable if the average rate of creation of new processes is equal to the average rate of departure of processes leaving the system. The Long-Term Scheduler may therefore execute only when a process leaves the system. Long-Term Schedulers are absent or minimally present on some systems, such as time-sharing systems like Microsoft Windows and UNIX.
Definition of Short-Term Scheduler
The Short-Term Scheduler is also called the CPU Scheduler. Its purpose is to select a process from the Ready Queue that is ready for execution and allocate the CPU to it.
Processes are of two types:
Independent process.
Co-operating process.
An independent process is not affected by the execution of other processes, while a co-operating process can be affected by other executing processes. Though one might think that processes running independently execute very efficiently, in practice there are many situations when the co-operative nature can be utilised to increase computational speed, convenience and modularity. Inter-process communication (IPC) is a mechanism which allows processes to communicate with each other and synchronize their actions. The communication between these processes can be seen as a method of co-operation between them. Processes can communicate with each other in two ways: shared memory and message passing.
Shared memory: consider two processes, a Producer and a Consumer. The Producer produces some item and the Consumer consumes that item. The two processes share a common space or memory location, known as a buffer, where the item produced by the Producer is stored and from where the Consumer consumes it when needed. There are two versions of this problem: the first is known as the unbounded buffer problem, in which the Producer can keep producing items and there is no limit on the size of the buffer; the second is known as the bounded buffer problem, in which the Producer can produce up to a certain number of items and after that starts waiting for the Consumer to consume them. We will discuss the bounded buffer problem. First, the Producer and the Consumer share some common memory; then the Producer starts producing items. If the total number of produced items equals the size of the buffer, the Producer waits until items are consumed by the Consumer. Similarly, the Consumer first checks for the availability of an item; if no item is available, the Consumer waits for the Producer to produce one, and if items are available, the Consumer consumes them.
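A minimal Java sketch of this bounded-buffer behaviour (the class name and the capacity of 5 are made up for illustration), using the wait()/notify() constructs mentioned later in the Monitors section:
// Bounded buffer shared by a producer thread and a consumer thread.
import java.util.LinkedList;
import java.util.Queue;

public class BoundedBuffer
{
    private final Queue<Integer> buffer = new LinkedList<>();
    private final int capacity = 5;

    public synchronized void produce(int item) throws InterruptedException
    {
        while (buffer.size() == capacity)
            wait();                    // buffer full: the producer waits
        buffer.add(item);
        notifyAll();                   // wake a waiting consumer
    }

    public synchronized int consume() throws InterruptedException
    {
        while (buffer.isEmpty())
            wait();                    // buffer empty: the consumer waits
        int item = buffer.remove();
        notifyAll();                   // wake a waiting producer
        return item;
    }

    public static void main(String[] args) throws InterruptedException
    {
        BoundedBuffer bb = new BoundedBuffer();
        // producer thread: produces 10 items
        new Thread(() -> {
            try { for (int i = 0; i < 10; i++) bb.produce(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();
        // the main thread acts as the consumer
        for (int i = 0; i < 10; i++)
            System.out.println("consumed " + bb.consume());
    }
}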
Message passing: in this method, processes communicate with each other without using any kind of shared memory. If two processes P1 and P2 want to communicate with each other, they proceed as follows:
Establish a communication link (if a link already exists, there is no need to establish it again).
Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
The message size can be fixed or variable. If it is fixed, it is easy for the OS designer but complicated for the programmer; if it is variable, it is easy for the programmer but complicated for the OS designer. A standard message has two parts: a header and a body.
The header is used for storing the message type, destination id, source id, message length and control information. The control information contains details like what to do if the process runs out of buffer space, a sequence number, and a priority. Generally, messages are sent in FIFO style.
A link has some capacity that determines the number of messages that can reside in it temporarily; every link has a queue associated with it, which can be of zero capacity, bounded capacity, or unbounded capacity. With zero capacity, the sender waits until the receiver informs it that the message has been received. In the non-zero capacity cases, a process does not know whether a message has been received or not after the send operation; for this, the sender must communicate with the receiver explicitly. Implementation of the link depends on the situation: it can be either a direct communication link or an indirect communication link.
Direct communication links are implemented when the processes use a specific process identifier for the communication, but it is hard to identify the sender ahead of time. For example: a print server.
Indirect communication is done via a shared mailbox (port), which consists of a queue of messages. The sender keeps a message in the mailbox and the receiver picks it up.
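As a rough illustration (not from the original text), Java's BlockingQueue can model such a mailbox: put() behaves like send() on a bounded-capacity link, and take() behaves like a blocking receive().
// Indirect communication through a shared, bounded mailbox.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MailboxDemo
{
    public static void main(String[] args)
    {
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(10);

        Thread sender = new Thread(() -> {
            try { mailbox.put("hello"); }           // send(message): blocks if full
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread receiver = new Thread(() -> {
            try { System.out.println("received: " + mailbox.take()); } // receive(message): blocks if empty
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        sender.start();
        receiver.start();
    }
}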
A process that is blocked is one that is waiting for some event, such as a resource becoming available or the completion of an I/O operation. IPC is possible between processes on the same computer as well as between processes running on different computers, i.e. in a networked/distributed system. In both cases, a process may or may not be blocked while sending a message or attempting to receive a message, so message passing may be blocking or non-blocking. Blocking is considered synchronous: a blocking send means the sender is blocked until the message is received by the receiver, and a blocking receive means the receiver is blocked until a message is available. Non-blocking is considered asynchronous: a non-blocking send has the sender send the message and continue, and a non-blocking receive has the receiver receive either a valid message or null. After careful analysis, we can conclude that for a sender it is more natural to be non-blocking after message passing, as there may be a need to send the message to several processes; but the sender expects acknowledgement from the receiver in case the send fails.
Similarly, it is more natural for a receiver to be blocking after issuing the receive, as the information from the received message may be needed for further execution; but at the same time, if the sends keep failing, the receiver would have to wait indefinitely. That is why we also consider the other possibilities of message passing. There are basically three preferred combinations:
– blocking send and blocking receive
– non-blocking send and non-blocking receive
– non-blocking send and blocking receive (the most commonly used)
-RMI
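Step 1, defining the remote interface, is missing from this excerpt; a minimal interface consistent with the SearchQuery class implemented below would be (reconstructed, so treat it as an assumption):
// Step 1 (assumed): the remote interface that SearchQuery implements.
// A remote interface must extend java.rmi.Remote, and every remote
// method must declare RemoteException.
import java.rmi.*;

public interface Search extends Remote
{
    // Declares the method the client can invoke remotely
    String query(String search) throws RemoteException;
}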
Step 2: Implementing the remote interface
The next step is to implement the remote interface. To implement the remote interface, the class should extend the UnicastRemoteObject class of the java.rmi.server package. Also, a default constructor needs to be created that throws java.rmi.RemoteException from its parent constructor.
// Java program to implement the Search interface
import java.rmi.*;
import java.rmi.server.*;

public class SearchQuery extends UnicastRemoteObject
    implements Search
{
    // Default constructor to throw RemoteException
    // from its parent constructor
    SearchQuery() throws RemoteException
    {
        super();
    }

    // Implementation of the query interface
    public String query(String search)
        throws RemoteException
    {
        String result;
        if (search.equals("Reflection in Java"))
            result = "Found";
        else
            result = "Not Found";
        return result;
    }
}
Step 3: Creating Stub and Skeleton objects from the implementation class using rmic
The rmic tool is used to invoke the RMI compiler that creates the Stub and Skeleton objects. Its prototype is rmic classname. For the above program, the following command needs to be executed at the command prompt: rmic SearchQuery
Step 4: Start the rmiregistry
Start the registry service by issuing the following command at the command prompt: start rmiregistry
Step 5: Create and execute the server application program
The next step is to create the server application program and execute it in a separate command prompt.
The server program uses the createRegistry method of the LocateRegistry class to create an rmiregistry within the server JVM, with the port number passed as argument.
The rebind method of the Naming class is used to bind the remote object to a name.
// program for server application
import java.rmi.*;
import java.rmi.registry.*;

public class SearchServer
{
    public static void main(String args[])
    {
        try
        {
            // Create an object of the interface
            // implementation class
            Search obj = new SearchQuery();

            // Create the rmiregistry within the server JVM with
            // port number 1900
            LocateRegistry.createRegistry(1900);

            // Binds the remote object by the name
            // geeksforgeeks
            Naming.rebind("rmi://localhost:1900" +
                          "/geeksforgeeks", obj);
        }
        catch(Exception ae)
        {
            System.out.println(ae);
        }
    }
}
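The excerpt stops before the client program. A minimal client consistent with the server above would look up the name bound by rebind and call query on the returned stub (the class name ClientRequest is an assumption):
// Assumed client sketch: looks up the remote object bound by
// SearchServer and invokes query() on it.
import java.rmi.*;

public class ClientRequest
{
    public static void main(String args[])
    {
        try
        {
            // Look up the remote object by the name used in rebind()
            Search access = (Search)Naming.lookup("rmi://localhost:1900" +
                                                  "/geeksforgeeks");
            String answer = access.query("Reflection in Java");
            System.out.println("Article on Reflection in Java: " + answer);
        }
        catch(Exception ae)
        {
            System.out.println(ae);
        }
    }
}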
-Socket programming
First argument – IP address of Server. ( 127.0.0.1 is the IP address of localhost,
where code will run on single stand-alone machine).
Second argument – TCP Port. (Just a number representing which application to run
on a server. For example, HTTP runs on port 80. Port number can be from 0 to 65535)
Communication
To communicate over a socket connection, streams are used for both input and output of data.
Closing the connection
The socket connection is closed explicitly once the messages to the server have been sent.
In the program below, the Client keeps reading input from the user and sending it to the server until “Over” is typed.
Java Implementation
// A Java program for a Client
import java.net.*;
import java.io.*;

public class Client
{
    // initialize socket and input output streams
    private Socket socket = null;
    private DataInputStream input = null;
    private DataOutputStream out = null;

    // constructor to put ip address and port
    public Client(String address, int port)
    {
        // establish a connection
        try
        {
            socket = new Socket(address, port);
            System.out.println("Connected");

            // takes input from terminal
            input = new DataInputStream(System.in);

            // sends output to the socket
            out = new DataOutputStream(socket.getOutputStream());
        }
        catch(UnknownHostException u)
        {
            System.out.println(u);
            return; // connection failed; do not continue
        }
        catch(IOException i)
        {
            System.out.println(i);
            return; // connection failed; do not continue
        }

        // string to read message from input
        String line = "";

        // keep reading until "Over" is input
        while (!line.equals("Over"))
        {
            try
            {
                line = input.readLine();
                out.writeUTF(line);
            }
            catch(IOException i)
            {
                System.out.println(i);
            }
        }

        // close the connection
        try
        {
            input.close();
            out.close();
            socket.close();
        }
        catch(IOException i)
        {
            System.out.println(i);
        }
    }

    public static void main(String args[])
    {
        Client client = new Client("127.0.0.1", 5000);
    }
}
Server Programming
Establish a Socket Connection
To write a server application, two sockets are needed:
A ServerSocket, which waits for client requests (when a client makes a new Socket()).
A plain old Socket to use for communication with the client.
Communication
The getOutputStream() method is used to send output through the socket.
socket = server.accept()
The accept() method blocks (just waits) until a client connects to the server. Then we take input from the socket using the getInputStream() method. Our server keeps receiving messages until the client sends “Over”.
After we are done, we close the connection by closing the socket and the input stream.
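The Server program itself does not appear in this excerpt; a minimal version consistent with the description above and with the Client class (port 5000, reading with readUTF() until “Over” arrives) would be:
// Assumed server sketch: accepts one client, prints each received
// line until "Over", then closes the connection.
import java.net.*;
import java.io.*;

public class Server
{
    private Socket socket = null;
    private ServerSocket server = null;
    private DataInputStream in = null;

    public Server(int port)
    {
        try
        {
            server = new ServerSocket(port);
            System.out.println("Server started");
            System.out.println("Waiting for a client ...");

            // accept() blocks until a client connects
            socket = server.accept();
            System.out.println("Client accepted");

            // take input from the client socket
            in = new DataInputStream(
                     new BufferedInputStream(socket.getInputStream()));

            String line = "";
            // keep receiving until "Over" is read
            while (!line.equals("Over"))
            {
                line = in.readUTF();
                System.out.println(line);
            }
            System.out.println("Closing connection");

            in.close();
            socket.close();
            server.close();
        }
        catch(IOException i)
        {
            System.out.println(i);
        }
    }

    public static void main(String args[])
    {
        Server server = new Server(5000);
    }
}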
To run the Client and Server applications on your machine, compile both of them. Then first run the Server application, and then run the Client application.
To run on Terminal or Command Prompt
Open two windows, one for the Server and another for the Client.
1. First run the Server application as:
$ java Server
Server started
Waiting for a client …
2. Then run the Client application on another terminal as:
$ java Client
It will show Connected, and the server accepts the client and shows:
Client accepted
3. Then you can start typing messages in the Client window. Here is a sample input to the Client:
Hello
Over
The Server window will then show the received messages and close the connection:
Hello
Over
Closing connection
On the basis of synchronization, processes are categorized as one of the following two types:
Independent Process: Execution of one process does not affect the execution of other processes.
Cooperative Process: Execution of one process affects the execution of other processes.
The process synchronization problem arises in the case of cooperative processes because resources are shared among them.
Critical Section:
In concurrent programming, if one thread tries to change the value of shared data at the same time as another thread tries to read it (i.e. a data race across threads), the result is unpredictable.
Access to such shared variables (shared memory, shared files, shared ports, etc.) needs to be synchronized. A few programming languages have built-in support for synchronization.
It is critical to understand the importance of race conditions when writing kernel-mode code (a device driver, kernel thread, etc.), since the programmer can directly access and modify kernel data structures.
Mutual Exclusion: To avoid race conditions, the execution of critical sections must be mutually exclusive (e.g., at most one process can be in its critical section at any time). The critical-section problem is to design a protocol which processes can use to cooperate and ensure mutual exclusion.
A critical section is a code segment that can be accessed by only one process at a time. The critical section contains shared variables which need to be synchronized to maintain the consistency of data variables.
In the entry section, the process requests entry into the critical section.
Any solution to the critical section problem must satisfy three requirements:
Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed to execute in the critical section.
Progress: If no process is executing in its critical section, then the selection of the next process to enter cannot be postponed indefinitely by processes that are not requesting entry.
Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Peterson’s Solution
Peterson’s Solution is a classical software-based solution to the critical section problem.
Mutual exclusion is assured, as only one process can access the critical section at any time.
Progress is also assured, as a process outside the critical section does not block other processes from entering it.
Bounded waiting is preserved, as every process gets a fair chance.
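The solution can be sketched in Java (an illustrative version, not from the original text; the shared variables are volatile so the Java memory model orders their accesses — a teaching sketch, not a production lock):
// Peterson's solution for two threads, with ids 0 and 1.
class Peterson
{
    private volatile boolean want0 = false, want1 = false;
    private volatile int turn = 0;

    public void lock(int id)      // entry section
    {
        if (id == 0) { want0 = true; turn = 1; while (want1 && turn == 1) { /* busy wait */ } }
        else         { want1 = true; turn = 0; while (want0 && turn == 0) { /* busy wait */ } }
    }

    public void unlock(int id)    // exit section
    {
        if (id == 0) want0 = false;
        else         want1 = false;
    }
}
A thread with id i brackets its critical section with lock(i) and unlock(i); the turn variable is what provides bounded waiting, since each process yields the turn to the other before spinning.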
2. Readers-Writers Problem
If one person tries editing the file, no other person should be reading or writing it at the same time; otherwise the changes will not be visible to them.
However, if some person is reading the file, then others may read it at the same time.
Problem parameters:
Here priority means no reader should have to wait if the shared file is currently open for reading.
Writer process:
do {
    // writer requests entry to the critical section
    wait(wrt);

    // performs the write

    // leaves the critical section
    signal(wrt);
} while(true);
Reader process:
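The reader code does not appear in this excerpt; the classical readers-preference version, which pairs with the writer above and uses a mutex semaphore guarding a readcnt counter (names taken from the standard solution, so treat them as assumptions), is:
do {
    // reader requests entry to the critical section
    wait(mutex);
    readcnt++;                // one more reader inside
    if (readcnt == 1)
        wait(wrt);            // first reader locks out writers
    signal(mutex);

    // performs the read

    wait(mutex);
    readcnt--;
    if (readcnt == 0)
        signal(wrt);          // last reader lets writers back in
    signal(mutex);
} while(true);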
1. Monitors
A monitor is one of the ways to achieve process synchronization. Monitors are supported by programming languages to achieve mutual exclusion between processes; for example, Java synchronized methods. Java also provides the wait() and notify() constructs.
Syntax of Monitor
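The syntax block itself is missing from this excerpt; the usual textbook pseudocode form of a monitor is sketched below (all names are placeholders):
monitor MonitorName
{
    // declarations of shared variables
    condition x, y;              // condition variables

    procedure P1(...) { ... }
    procedure P2(...) { ... }

    initialization_code(...) { ... }
}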
Condition Variables
Two different operations are performed on the condition variables of a monitor:
wait
signal
Wait operation
x.wait(): A process performing a wait operation on a condition variable is suspended. Suspended processes are placed in the blocked queue of that condition variable.
Signal operation
x.signal(): When a process performs a signal operation on a condition variable, one of the blocked processes is given its chance.
2. Semaphores
A semaphore is an integer variable which can be accessed only through two operations, wait() and signal(). There are two types of semaphores: binary semaphores and counting semaphores.
Binary Semaphores: They can only be either 0 or 1. They are also known as mutex locks, as they can provide mutual exclusion. All the processes share the same mutex semaphore, which is initialized to 1. A process has to wait until the lock becomes 1; it then sets the semaphore to 0 and enters its critical section. When it completes its critical section, it resets the value of the mutex semaphore to 1, and some other process can enter its critical section.
Counting Semaphores: They can have any value and are not restricted to a certain domain. They can be used to control access to a resource that has a limit on the number of simultaneous accesses. The semaphore can be initialized to the number of instances of the resource. Whenever a process wants to use the resource, it checks whether the number of remaining instances is more than zero, i.e., whether an instance is available. If so, the process can enter its critical section, thereby decreasing the value of the counting semaphore by 1. After the process is done using the instance of the resource, it leaves the critical section, thereby adding 1 to the number of available instances of the resource.
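As a concrete illustration, java.util.concurrent.Semaphore behaves like the counting semaphore just described; the sketch below (the resource count of 3 and all names are made up for the example) guards 3 instances of a resource shared among 5 threads.
// Counting semaphore guarding 3 instances of a resource.
import java.util.concurrent.Semaphore;

public class PoolDemo
{
    // initialized to the number of resource instances
    private static final Semaphore slots = new Semaphore(3);

    public static void main(String[] args)
    {
        for (int i = 0; i < 5; i++)
        {
            final int id = i;
            new Thread(() -> {
                try
                {
                    slots.acquire();          // wait(): take an instance
                    System.out.println("Thread " + id + " using resource");
                    Thread.sleep(100);        // simulate work
                }
                catch (InterruptedException e)
                {
                    Thread.currentThread().interrupt();
                }
                finally
                {
                    slots.release();          // signal(): return the instance
                }
            }).start();
        }
    }
}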
Deadlock and starvation are both conditions in which processes requesting a resource are delayed for a long time, although they differ from each other in many aspects. Deadlock is a condition where no process proceeds with execution, and each waits for resources that have been acquired by other processes.
Definition of Deadlock
Deadlock is a situation where several processes in a system compete for a finite number of resources. Each process holds a resource and waits to acquire a resource that is held by some other process, and all the processes wait for resources in a circular fashion. For example, suppose process P1 has acquired resource R2, which is requested by process P2, while P1 is requesting resource R1, which is held by P2; then processes P1 and P2 form a deadlock.
There are four conditions which must hold simultaneously to produce deadlock: mutual exclusion, hold and wait, no preemption, and circular wait.
Mutual exclusion: Only one process at a time can use a resource; if another process requests the same resource, it has to wait until the process using the resource releases it.
Hold and Wait: A process must be holding a resource and waiting to acquire another resource that is held by some other process.
No Preemption: A process holding resources cannot be preempted. The process holding a resource must release it voluntarily when it has completed its task.
Circular wait: The processes must wait for resources in a circular fashion. Suppose we have three processes {P0, P1, P2}: P0 waits for the resource held by P1, P1 waits to acquire the resource held by P2, and P2 waits to acquire the resource held by P0.
Although there are some applications that can detect programs that may become deadlocked, the operating system is never responsible for preventing deadlocks. It is the responsibility of programmers to design deadlock-free programs, which can be done by avoiding the above conditions that are necessary for deadlock to occur. After you have read this, identify how deadlock occurs, or how it can be avoided, with respect to each condition:
-Mutual Exclusion.
-Hold and Wait.
-No preemption.
-Circular wait.
Deadlock Prevention: We can prevent deadlock by eliminating any of the above four conditions.
Eliminate Hold and Wait
1. Allocate all required resources to the process before the start of its execution; this can lead to low resource utilization.
2. The process makes new requests for resources only after releasing its current set of resources. This solution may lead to starvation.
Eliminate No Preemption
Preempt resources from a process when those resources are required by another, higher-priority process.
Deadlock Avoidance
Banker’s Algorithm
The Banker’s Algorithm is a resource allocation and deadlock avoidance algorithm which tests every request made by processes for resources. It checks for a safe state: if the system remains in a safe state after granting the request, the request is allowed; if granting it would leave no safe state, the request is denied. A request can be granted only if:
1. The request made by the process is less than or equal to the declared maximum need of that process.
2. The request made by the process is less than or equal to the freely available resources in the system.
Example: if a process's remaining need is (1, 2, 2) and the available vector is (3, 3, 2), then a request of (1, 0, 2) passes both checks, since (1, 0, 2) <= (1, 2, 2) and (1, 0, 2) <= (3, 3, 2); the system then provisionally grants it and runs the safety check described later.
2) Deadlock detection and recovery: Let deadlock occur, then apply preemption to handle it once it has occurred.
Deadlock Detection
1. If resources have a single instance:
In this case, a cycle in the resource-allocation graph confirms deadlock. For example, if resource R1 and resource R2 have single instances and there is a cycle R1 -> P1 -> R2 -> P2 -> R1, deadlock is confirmed.
2. If there are multiple instances of resources:
Detection of a cycle is a necessary but not a sufficient condition for deadlock; the system may or may not be in deadlock, depending on the situation.
3. Deadlock Recovery
Traditional operating systems such as Windows do not deal with deadlock recovery, as it is a time- and space-consuming process; real-time operating systems do use deadlock recovery.
Recovery methods
1. Process termination: abort one or all of the processes involved in the deadlock so that the resources they hold are freed.
2. Resource preemption: resources are preempted from the processes involved in deadlock, and the preempted resources are allocated to other processes, so that there is a possibility of recovering the system from deadlock. In this case the system may go into starvation.
4) Ignore the problem altogether: If deadlock is very rare, then let it happen and reboot the system. This is the approach that both Windows and UNIX take.
Definition of Starvation
Starvation occurs when a process requests a resource and that resource is continuously in use by other processes, so the requesting process is starved. In starvation, a process ready to execute waits for the resource to be allocated, but it has to wait indefinitely as other processes continuously hold the requested resources.
Aging can resolve the problem of starvation. Aging gradually increases the priority of a process that has been waiting long for resources, and thereby prevents a low-priority process from waiting indefinitely for a resource.
The banker’s algorithm is a resource allocation and deadlock avoidance algorithm that tests for safety by simulating the allocation of the predetermined maximum possible amounts of all resources, then making a safe-state check before deciding whether the allocation should be allowed to continue.
Let ‘n’ be the number of processes in the system and ‘m’ be the number of resource types.
Available:
It is a 1-D array of size ‘m’ indicating the number of available resources of each type.
Available[j] = k means there are ‘k’ instances of resource type Rj available.
Max:
It is a 2-D array of size ‘n*m’ that defines the maximum demand of each process in the system.
Max[i, j] = k means process Pi may request at most ‘k’ instances of resource type Rj.
Allocation:
It is a 2-D array of size ‘n*m’ that defines the number of resources of each type currently allocated to each process.
Allocation[i, j] = k means process Pi is currently allocated ‘k’ instances of resource type Rj.
Need:
It is a 2-D array of size ‘n*m’ that indicates the remaining resource need of each process.
Need[i, j] = k means process Pi may need ‘k’ more instances of resource type Rj to complete its task.
Need[i, j] = Max[i, j] – Allocation[i, j]
Allocation_i specifies the resources currently allocated to process Pi, and Need_i specifies the additional resources that process Pi may still request to complete its task.
Safety Algorithm
The algorithm for finding out whether or not a system is in a safe state can be described as follows:
1. Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.
   Initialize: Work = Available
               Finish[i] = false for i = 1, 2, ..., n
2. Find an i such that both
   a) Finish[i] = false
   b) Need_i <= Work
   If no such i exists, go to step (4).
3. Work = Work + Allocation_i
   Finish[i] = true
   Go to step (2).
4. If Finish[i] = true for all i, then the system is in a safe state.
A safe sequence is a sequence in which the processes can be safely executed.
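A minimal Java sketch of this safety algorithm (the matrices are the classic illustrative textbook example, not data from this handout):
// Runs the safety algorithm and prints a safe sequence if one exists.
public class SafetyCheck
{
    public static void main(String[] args)
    {
        int n = 5, m = 3; // number of processes, number of resource types
        int[] available = {3, 3, 2};
        int[][] max = {
            {7, 5, 3}, {3, 2, 2}, {9, 0, 2}, {2, 2, 2}, {4, 3, 3}
        };
        int[][] allocation = {
            {0, 1, 0}, {2, 0, 0}, {3, 0, 2}, {2, 1, 1}, {0, 0, 2}
        };

        // Need[i][j] = Max[i][j] - Allocation[i][j]
        int[][] need = new int[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                need[i][j] = max[i][j] - allocation[i][j];

        int[] work = available.clone();    // step 1: Work = Available
        boolean[] finish = new boolean[n]; //         Finish[i] = false
        int[] safeSeq = new int[n];
        int count = 0;

        // steps 2-3: repeatedly find an unfinished process with Need <= Work
        while (count < n)
        {
            boolean found = false;
            for (int i = 0; i < n; i++)
            {
                if (!finish[i] && lessOrEqual(need[i], work))
                {
                    // pretend Pi runs to completion and releases its resources
                    for (int j = 0; j < m; j++)
                        work[j] += allocation[i][j];
                    finish[i] = true;
                    safeSeq[count++] = i;
                    found = true;
                }
            }
            if (!found) // step 4: some process can never finish
            {
                System.out.println("System is NOT in a safe state");
                return;
            }
        }

        System.out.print("System is in a safe state. Safe sequence: ");
        for (int i : safeSeq)
            System.out.print("P" + i + " ");
        System.out.println();
    }

    private static boolean lessOrEqual(int[] a, int[] b)
    {
        for (int j = 0; j < a.length; j++)
            if (a[j] > b[j]) return false;
        return true;
    }
}
For these matrices the program reports a safe state with the safe sequence P1 P3 P4 P0 P2.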