Process

A process is a currently executing program. It has attributes like a process ID, state, and information for CPU scheduling and I/O devices. During execution, a process transitions between states like ready, running, waiting, and terminated. The operating system uses process tables and process control blocks (PCBs) to manage process attributes and state changes. PCBs contain information needed to suspend and resume processes as they move between ready, running, waiting, and other states.

Uploaded by Muskan Bachwani

Process

In an operating system, a Process is something that is currently under execution; in other words, an active program can be called a Process. For example, when you want to search for something on the web, you start a browser, and that browser is a process. Another example of a process is starting your music player to listen to some cool music of your choice.

A Process has various attributes associated with it. Some of the attributes of a Process are:

 Process Id: Every process is given an id, called the Process Id, to uniquely identify it among the other processes.
 Process state: Every process is in some state at a particular instant of time, denoted by the process state. It can be ready, waiting, running, etc.
 CPU scheduling information: Each process is executed using some process scheduling algorithm like FCFS, Round-Robin, SJF, etc.
 I/O information: Each process needs some I/O devices for its execution, so information about the devices allocated and the devices needed is crucial.
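
The attributes above can be sketched as a small data structure. This is a minimal Python illustration only; the field names are made up for this example and do not correspond to any real kernel's process structure.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    pid: int                       # Process Id: uniquely identifies the process
    state: str = "new"             # Process state: new, ready, running, waiting, ...
    scheduling_algo: str = "FCFS"  # CPU scheduling information (illustrative)
    io_devices: list = field(default_factory=list)  # I/O information: allocated devices

# Create a process and move it to the ready state.
p = Process(pid=1)
p.state = "ready"
p.io_devices.append("disk0")
```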

States of a Process
During its execution, a process passes through a number of states. So, in this section of the blog, we will learn the various states of a process during its lifecycle.

 New State: This is the state when the process has just been created. It is the first state of a process.
 Ready State: After its creation, when the process is ready for execution, it goes into the ready state. In the ready state, the process is ready to be executed by the CPU but is waiting for its turn to come. There can be more than one process in the ready state.
 Ready Suspended State: There can be more than one process in the ready state, but due to memory constraints, if the memory is full then some processes from the ready state are placed in the ready suspended state.
 Running State: Amongst the processes present in the ready state, the CPU chooses one by using some CPU scheduling algorithm. That process is now being executed by the CPU and is in the running state.
 Waiting or Blocked State: During its execution, a process might require some I/O operation, like writing to a file, or a higher-priority process might arrive. In these situations, the running process has to go into the waiting or blocked state and another process comes in for execution. So, in the waiting state, the process is waiting for something.
 Waiting Suspended State: When the waiting queue of the system becomes full, some of the processes are sent to the waiting suspended state.
 Terminated State: After its complete execution, the process comes into the terminated state and the information related to it is deleted.
The following image shows the flow of a process from the new state to the terminated state.
In the above image, you can see that when a process is created, it goes into the new state. After the new state, it goes into the ready state. If the ready queue is full, then the process is shifted to the ready suspended state. From the ready state, the CPU chooses a process, which is then executed by the CPU and is in the running state. During its execution, a process may need to perform some I/O operation, so it has to go into the waiting state, and if the waiting queue is full then it is sent to the waiting suspended state. From the waiting state, the process can go back to the ready state after performing its I/O operations. From the waiting suspended state, the process can go to the waiting or ready suspended state. At last, after its complete execution, the process goes to the terminated state and its information is deleted.

This is the whole life cycle of a process.
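
The lifecycle above can be captured as a transition table: each state maps to the set of states it may legally move to. This is a minimal Python sketch using the blog's own state names; the `move` helper is invented for illustration.

```python
# Allowed transitions, following the lifecycle described above.
# (running -> ready covers preemption, discussed later in this document.)
TRANSITIONS = {
    "new":               {"ready"},
    "ready":             {"running", "ready suspended"},
    "ready suspended":   {"ready"},
    "running":           {"waiting", "ready", "terminated"},
    "waiting":           {"ready", "waiting suspended"},
    "waiting suspended": {"waiting", "ready suspended"},
    "terminated":        set(),
}

def move(state, new_state):
    """Return the new state, refusing transitions the lifecycle forbids."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# Walk one process through a typical lifecycle.
s = "new"
for nxt in ("ready", "running", "waiting", "ready", "running", "terminated"):
    s = move(s, nxt)
```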

Process Table and Process Control Block (PCB)
While creating a process, the operating system performs several operations. To identify processes, it assigns a process identification number (PID) to each process. As the operating system supports multiprogramming, it needs to keep track of all the processes. For this task, the process control block (PCB) is used to track each process's execution status. Each block of memory contains information about the process state, program counter, stack pointer, status of opened files, scheduling algorithms, etc. All this information is required and must be saved when the process is switched from one state to another. When the process makes a transition from one state to another, the operating system must update the information in the process's PCB.
A process control block (PCB) contains information about the process, i.e. registers, quantum, priority, etc. The process table is an array of PCBs; logically, it contains a PCB for each of the current processes in the system.
 Pointer – The stack pointer, which must be saved when the process is switched from one state to another in order to retain the current position of the process.
 Process state – It stores the current state of the process.
 Process number – Every process is assigned a unique id, known as the process ID or PID, which is stored in this field.
 Program counter – It stores the address of the next instruction to be executed for the process.
 Register – The CPU registers, which include the accumulator, base and index registers, and general-purpose registers.
 Memory limits – This field contains information about the memory management system used by the operating system. This may include the page tables, segment tables, etc.
 Open files list – This field includes the list of files opened by the process.
 Miscellaneous accounting and status data – This field includes information about the amount of CPU used, time constraints, job or process number, etc.
The process control block stores the register contents, also known as the execution context of the processor, saved when the process was blocked from running. Saving this execution context enables the operating system to restore a process's execution context when the process returns to the running state. When the process makes a transition from one state to another, the operating system updates its information in the process's PCB. The operating system maintains pointers to each process's PCB in a process table so that it can access the PCB quickly.
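
A process table indexed by PID can be sketched as follows. This is a toy Python illustration with invented field names, not a real OS's PCB layout.

```python
def make_pcb(pid):
    """Build a toy PCB with the fields discussed above."""
    return {"pid": pid, "state": "new", "program_counter": 0,
            "registers": {}, "open_files": []}

process_table = {}  # PID -> PCB, so the OS can find any PCB quickly

def create_process(pid):
    process_table[pid] = make_pcb(pid)
    return process_table[pid]

def set_state(pid, state):
    # On every state transition, the OS updates the process's PCB.
    process_table[pid]["state"] = state

create_process(7)
set_state(7, "ready")
```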
Definition
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Process Scheduling Queues


The OS maintains all PCBs in Process Scheduling Queues. The OS
maintains a separate queue for each of the process states and PCBs of
all processes in the same execution state are placed in the same queue.
When the state of a process is changed, its PCB is unlinked from its
current queue and moved to its new state queue.
The Operating System maintains the following important process
scheduling queues −
 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in
main memory, ready and waiting to execute. A new process is
always put in this queue.
 Device queues − The processes which are blocked due to
unavailability of an I/O device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS scheduler determines how to move processes between the ready and run queues; the run queue can have only one entry per processor core on the system, and in the above diagram it has been merged with the CPU.
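
The queue mechanics described above can be sketched in a few lines: changing a process's state unlinks its PCB from one queue and appends it to the queue for its new state. A minimal Python illustration, with invented helper names:

```python
from collections import deque

# One queue per category, as described above.
queues = {"job": deque(), "ready": deque(), "device": deque()}

def admit(pid):
    queues["job"].append(pid)    # the job queue holds every process
    queues["ready"].append(pid)  # a new process is always put in the ready queue

def block_on_io(pid):
    queues["ready"].remove(pid)  # unlink the PCB from its current queue...
    queues["device"].append(pid) # ...and move it to the device queue

admit(1)
admit(2)
block_on_io(1)   # process 1 blocks waiting for an I/O device
```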

Two-State Process Model


The two-state process model refers to the running and not-running states, which are described below −
Running
When a new process is created, it enters the system in the running state.
Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process. The queue is implemented using a linked list. The dispatcher works as follows: when a process is interrupted, it is transferred to the waiting queue. If the process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.
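
The dispatcher behavior just described can be sketched as a loop over a single not-running queue. A toy Python illustration; the function names are invented for this example.

```python
from collections import deque

not_running = deque(["P1", "P2", "P3"])  # queue entries: pointers to processes
running = None

def dispatch():
    """Dispatcher: pick the next process from the queue, if any."""
    global running
    running = not_running.popleft() if not_running else None
    return running

def interrupt():
    """An interrupted process returns to the queue; then dispatch again."""
    global running
    if running is not None:
        not_running.append(running)
    dispatch()

dispatch()   # P1 starts running
interrupt()  # P1 is re-queued; the dispatcher selects P2
```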

Schedulers
Schedulers are special system software which handle process
scheduling in various ways. Their main task is to select the jobs to be
submitted into the system and to decide which process to run.
Schedulers are of three types −
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler


It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution; the processes are loaded into memory for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal. Time-sharing operating systems have no long-term scheduler. The long-term scheduler comes into play when a process changes state from new to ready.

Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the transition of a process from the ready state to the running state: the CPU scheduler selects one process from among those that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler


Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This procedure is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

Comparison among Scheduler


S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process swapping scheduler.
2 | Speed is lesser than the short-term scheduler. | Speed is the fastest among the three. | Speed is in between the short- and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce a process into memory so that its execution can be continued.

Context Switch
A context switch is the mechanism to store and restore the state or context of a CPU in the process control block so that a process's execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block. After this, the state for the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process can start executing.
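
The save-then-restore step can be sketched with dictionaries standing in for the CPU and the PCBs. A toy Python illustration only; real context switches save many more registers and happen in kernel assembly.

```python
# Toy CPU state and two toy PCBs with saved contexts.
cpu = {"pc": 100, "registers": {"acc": 5}}
pcbs = {
    "A": {"pc": 100, "registers": {"acc": 5}},
    "B": {"pc": 300, "registers": {"acc": 9}},
}

def context_switch(old, new):
    # 1. Save the state of the currently running process into its PCB.
    pcbs[old]["pc"] = cpu["pc"]
    pcbs[old]["registers"] = dict(cpu["registers"])
    # 2. Load the next process's saved state and set the PC, registers, etc.
    cpu["pc"] = pcbs[new]["pc"]
    cpu["registers"] = dict(pcbs[new]["registers"])

context_switch("A", "B")  # B resumes exactly where it left off
```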
What is CPU Scheduling?
CPU Scheduling is the process of determining which process will own the CPU for execution while another process is on hold. The main task of CPU scheduling is to make sure that whenever the CPU is idle, the OS selects one of the processes available in the ready queue for execution. The selection is carried out by the CPU scheduler, which selects one of the processes in memory that are ready for execution.

Types of CPU Scheduling


There are two kinds of scheduling methods:

Preemptive Scheduling
In Preemptive Scheduling, tasks are usually assigned priorities. Sometimes it is necessary to run a higher-priority task before a lower-priority one, even if the lower-priority task is still running. In that case, the lower-priority task is put on hold for some time and resumes when the higher-priority task finishes its execution.

Non-Preemptive Scheduling
In this type of scheduling method, once the CPU has been allocated to a specific process, that process keeps the CPU busy until it releases it, either by switching context or by terminating. It is the only method that can be used on various hardware platforms, because it doesn't need special hardware (for example, a timer) like preemptive scheduling does.

When is scheduling Preemptive or Non-Preemptive?
To determine whether scheduling is preemptive or non-preemptive, consider these four conditions:
1. A process switches from the running state to the waiting state.
2. A process switches from the running state to the ready state.
3. A process switches from the waiting state to the ready state.
4. A process finishes its execution and terminates.
If scheduling takes place only under conditions 1 and 4, it is called non-preemptive; all other scheduling is preemptive.

Important CPU scheduling Terminologies


 Burst Time/Execution Time: The time required by the process to complete its execution. It is also called running time.
 Arrival Time: The time at which a process enters the ready state.
 Finish Time: The time at which a process completes and exits the system.
 Multiprogramming: The number of programs that can be present in memory at the same time.
 Jobs: A type of program that runs without any user interaction.
 User: A kind of program that involves user interaction.
 Process: The term used to refer to both jobs and users.
 CPU/IO burst cycle: Characterizes process execution, which alternates between CPU and I/O activity. CPU bursts are usually shorter than I/O bursts.

CPU Scheduling Criteria


A CPU scheduling algorithm tries to maximize and minimize the following:
Maximize:
CPU utilization: The operating system needs to make sure that the CPU remains as busy as possible. Utilization can range from 0 to 100 percent; for a real-time system, it may range from 40 percent on a lightly loaded system to 90 percent on a heavily loaded one.
Throughput: The number of processes that finish their execution per unit time is known as throughput. So, when the CPU is busy executing processes, work is being done, and the work completed per unit time is called throughput.

Minimize:
Waiting time: The amount of time a specific process spends waiting in the ready queue.
Response time: The amount of time from when a request is submitted until the first response is produced.
Turnaround Time: The amount of time taken to execute a specific process. It is the total of the time spent waiting to get into memory, waiting in the ready queue, and executing on the CPU; in other words, the period between the time of process submission and the completion time.

Interval Timer
Timer interruption is a method that is closely related to preemption. When a certain process gets the CPU allocation, a timer may be set to a specified interval. Both timer interruption and preemption force a process to return the CPU before its CPU burst is complete.
Most multiprogrammed operating systems use some form of timer to prevent a process from tying up the system forever.

What is Dispatcher?
It is the module that gives control of the CPU to the process selected by the CPU scheduler. The dispatcher should be fast, since it runs on every context switch. Dispatch latency is the amount of time needed by the dispatcher to stop one process and start another.
Functions performed by Dispatcher:
 Context Switching
 Switching to user mode
 Moving to the correct location in the newly loaded program.

Types of CPU scheduling Algorithm


There are mainly six types of process scheduling algorithms:
1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling
6. Multilevel Queue Scheduling
First Come First Serve
First Come First Serve is the full form of FCFS. It is the simplest CPU scheduling algorithm. In this type of algorithm, the process that requests the CPU first gets the CPU allocation first. This scheduling method can be managed with a FIFO queue.
As a process enters the ready queue, its PCB (Process Control Block) is linked to the tail of the queue. So, when the CPU becomes free, it is assigned to the process at the head of the queue.

Characteristics of FCFS method:


 It is a non-preemptive scheduling algorithm.
 Jobs are always executed on a first-come, first-served basis.
 It is easy to implement and use.
 However, this method is poor in performance, and the general wait time is quite high.

Shortest Remaining Time


The full form of SRT is Shortest Remaining Time. It is also known as preemptive SJF scheduling. In this method, the CPU is allocated to the process that is closest to its completion, but it can be preempted by a newly ready process with a shorter time to completion.
Characteristics of the SRT scheduling method:
 This method is mostly applied in batch environments where short jobs need to be given preference.
 It is not an ideal method to implement in a shared system where the required CPU time is unknown.
 Each process is associated with the length of its next CPU burst, and the operating system uses these lengths to schedule the process with the shortest remaining time first.

Priority Based Scheduling


Priority scheduling is a method of scheduling processes based on priority. In this method, the scheduler selects tasks to work on according to their priority.
Priority scheduling also helps the OS with priority assignments: processes with higher priority are carried out first, whereas jobs with equal priorities are carried out on a round-robin or FCFS basis. Priority can be decided based on memory requirements, time requirements, etc.

Round-Robin Scheduling
Round robin is the oldest and simplest scheduling algorithm. Its name comes from the round-robin principle, where each person gets an equal share of something in turn. It is mostly used for scheduling in multitasking systems. This algorithm provides starvation-free execution of processes.

Characteristics of Round-Robin Scheduling


 Round robin is a hybrid, clock-driven model.
 The time slice assigned to each task should be kept small; however, it may vary for different processes.
 It responds to events within a specific time limit, giving a real-time feel.

Shortest Job First


SJF (the full form is Shortest Job First) is a scheduling algorithm in which the process with the shortest execution time is selected for execution next. This scheduling method can be preemptive or non-preemptive. It significantly reduces the average waiting time for other processes awaiting execution.

Characteristics of SJF Scheduling


 Each job is associated with the unit of time it needs to complete.
 In this method, when the CPU is available, the process or job with the shortest completion time is executed first.
 It is usually implemented with a non-preemptive policy.
 This algorithm method is useful for batch-type processing, where waiting for jobs to complete is not critical.
 It improves job throughput by running shorter jobs first, which mostly have a shorter turnaround time.

Multiple-Level Queues Scheduling


This algorithm separates the ready queue into several separate queues. In this method, processes are assigned to a queue based on a specific property of the process, like the process priority, the size of its memory, etc.
However, this is not an independent scheduling algorithm, as it needs to use other types of algorithms in order to schedule the jobs.

Characteristics of Multiple-Level Queues Scheduling:
 Multiple queues are maintained for processes with common characteristics.
 Every queue may have its own scheduling algorithm.
 Priorities are assigned to each queue.

The Purpose of a Scheduling algorithm


Here are the reasons for using a scheduling algorithm:
 The CPU uses scheduling to improve its efficiency.
 It helps you to allocate resources among competing processes.
 The maximum utilization of the CPU can be obtained with multiprogramming.
 The processes to be executed are kept in the ready queue.

Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which the process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time (W.T.): Time difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time
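
The two formulas above can be applied directly. A short Python sketch with illustrative numbers (the example values are made up):

```python
def turnaround_time(completion, arrival):
    # Turn Around Time = Completion Time - Arrival Time
    return completion - arrival

def waiting_time(turnaround, burst):
    # Waiting Time = Turn Around Time - Burst Time
    return turnaround - burst

# Example: a process arrives at t=3, needs 6 units of CPU, completes at t=16.
tat = turnaround_time(16, 3)  # 13 units from submission to completion
wt = waiting_time(tat, 6)     # of which 7 units were spent waiting
```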
A Process Scheduler schedules different processes to be assigned to the
CPU based on particular scheduling algorithms. There are six popular
process scheduling algorithms which we are going to discuss in this
chapter −
 First-Come, First-Served (FCFS) Scheduling
 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin (RR) Scheduling
 Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-
preemptive algorithms are designed so that once a process enters the
running state, it cannot be preempted until it completes its allotted time,
whereas the preemptive scheduling is based on priority where a
scheduler may preempt a low priority running process anytime when a
high priority process enters into a ready state.

First Come First Serve (FCFS)


 Jobs are executed on a first-come, first-served basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on a FIFO queue.
 Poor in performance, as the average wait time is high.

Wait time of each process is as follows −

Process    Wait Time : Service Time - Arrival Time

P0         0 - 0 = 0

P1         5 - 1 = 4

P2         8 - 2 = 6

P3         16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
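
These FCFS wait times can be reproduced with a short simulation. A Python sketch, assuming the same processes as in the SJN table below (arrival, burst): P0 (0, 5), P1 (1, 3), P2 (2, 8), P3 (3, 6).

```python
def fcfs_wait_times(procs):
    """procs: list of (arrival, burst), already in arrival order."""
    t, waits = 0, []
    for arrival, burst in procs:
        t = max(t, arrival)        # CPU may sit idle until the process arrives
        waits.append(t - arrival)  # wait = service time - arrival time
        t += burst                 # runs to completion: non-preemptive
    return waits

waits = fcfs_wait_times([(0, 5), (1, 3), (2, 8), (3, 6)])
avg = sum(waits) / len(waits)
```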

Shortest Job Next (SJN)


 This is also known as shortest job first, or SJF
 This is a non-preemptive scheduling algorithm.
 Best approach to minimize waiting time.
 Easy to implement in Batch systems where required CPU time is
known in advance.
 Impossible to implement in interactive systems where required CPU
time is not known.
 The processer should know in advance how much time process will
take.
Given: Table of processes, and their Arrival time, Execution time

Process Arrival Time Execution Time Service Time

P0 0 5 0

P1 1 3 5

P2 2 8 14

P3 3 6 8
Waiting time of each process is as follows −

Process Waiting Time

P0 0-0=0

P1 5-1=4

P2 14 - 2 = 12

P3 8-3=5

Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25
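
The SJN table above can be checked with a small non-preemptive simulation: whenever the CPU is free, it picks the shortest job among those that have already arrived. A Python sketch of that rule:

```python
def sjn_wait_times(procs):
    """procs: {name: (arrival, burst)}. Non-preemptive shortest-job-next."""
    t, done, waits = 0, set(), {}
    while len(done) < len(procs):
        ready = [(b, n) for n, (a, b) in procs.items()
                 if a <= t and n not in done]
        if not ready:  # CPU idle: jump to the next arrival
            t = min(a for n, (a, b) in procs.items() if n not in done)
            continue
        burst, name = min(ready)          # shortest burst among ready jobs
        waits[name] = t - procs[name][0]  # wait = service time - arrival time
        t += burst                        # run the job to completion
        done.add(name)
    return waits

waits = sjn_wait_times({"P0": (0, 5), "P1": (1, 3), "P2": (2, 8), "P3": (3, 6)})
avg = sum(waits.values()) / 4
```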

Priority Based Scheduling


 Priority scheduling is a non-preemptive algorithm and one of the
most common scheduling algorithms in batch systems.
 Each process is assigned a priority. Process with highest priority is
to be executed first and so on.
 Processes with same priority are executed on first come first served
basis.
 Priority can be decided based on memory requirements, time
requirements or any other resource requirement.
Given: Table of processes and their arrival time, execution time, and priority. Here we consider 1 to be the lowest priority.
Process    Arrival Time    Execution Time    Priority    Service Time

P0         0               5                 1           0

P1         1               3                 2           11

P2         2               8                 1           14

P3         3               6                 3           5

Waiting time of each process is as follows −

Process    Waiting Time

P0         0 - 0 = 0

P1         11 - 1 = 10

P2         14 - 2 = 12

P3         5 - 3 = 2

Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6
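
The priority example can be verified with a non-preemptive simulation: when the CPU is free, pick the highest-priority ready process (ties fall back to arrival order). A Python sketch:

```python
def priority_wait_times(procs):
    """procs: {name: (arrival, burst, priority)}, higher number = higher priority."""
    t, done, waits = 0, set(), {}
    while len(done) < len(procs):
        ready = [(-pri, a, n) for n, (a, b, pri) in procs.items()
                 if a <= t and n not in done]
        if not ready:  # CPU idle: jump to the next arrival
            t = min(a for n, (a, b, p) in procs.items() if n not in done)
            continue
        _, _, name = min(ready)           # highest priority, then earliest arrival
        waits[name] = t - procs[name][0]  # wait = service time - arrival time
        t += procs[name][1]               # run to completion: non-preemptive
        done.add(name)
    return waits

waits = priority_wait_times({"P0": (0, 5, 1), "P1": (1, 3, 2),
                             "P2": (2, 8, 1), "P3": (3, 6, 3)})
avg = sum(waits.values()) / 4
```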

Shortest Remaining Time


 Shortest remaining time (SRT) is the preemptive version of the SJN
algorithm.
 The processor is allocated to the job closest to completion but it can
be preempted by a newer ready job with shorter time to completion.
 Impossible to implement in interactive systems where required CPU
time is not known.
 It is often used in batch environments where short jobs need to be given preference.

Round Robin Scheduling


 Round Robin is a preemptive process scheduling algorithm.
 Each process is provided a fixed time to execute, called a quantum.
 Once a process has executed for the given time period, it is preempted and another process executes for its time period.
 Context switching is used to save the states of preempted processes.

Wait time of each process is as follows (here the quantum is 3) −

Process    Wait Time : Service Time - Arrival Time

P0         (0 - 0) + (12 - 3) = 9

P1         (3 - 1) = 2

P2         (6 - 2) + (14 - 9) + (20 - 17) = 12

P3         (9 - 3) + (17 - 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5
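
The Round Robin figures can be reproduced by simulating the ready queue with a quantum of 3, using the same processes as the earlier examples (P0 0/5, P1 1/3, P2 2/8, P3 3/6). A Python sketch; newly arrived processes are queued ahead of the preempted one, matching the table's arithmetic.

```python
from collections import deque

def rr_wait_times(procs, quantum):
    """procs: list of (name, arrival, burst). Returns wait time per process."""
    t, q = 0, deque()
    remaining = {n: b for n, a, b in procs}
    arrivals = sorted(procs, key=lambda p: p[1])
    finish = {}
    while arrivals or q:
        while arrivals and arrivals[0][1] <= t:  # admit arrived processes
            q.append(arrivals.pop(0)[0])
        if not q:                                # CPU idle until next arrival
            t = arrivals[0][1]
            continue
        name = q.popleft()
        run = min(quantum, remaining[name])      # run for one quantum at most
        t += run
        remaining[name] -= run
        while arrivals and arrivals[0][1] <= t:  # newcomers queue up first
            q.append(arrivals.pop(0)[0])
        if remaining[name]:
            q.append(name)                       # preempted: back of the queue
        else:
            finish[name] = t
    # wait = turnaround - burst = (finish - arrival) - burst
    return {n: finish[n] - a - b for n, a, b in procs}

waits = rr_wait_times([("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)], 3)
avg = sum(waits.values()) / 4
```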

Multiple-Level Queues Scheduling


Multiple-level queues are not an independent scheduling algorithm. They
make use of other existing algorithms to group and schedule jobs with
common characteristics.
 Multiple queues are maintained for processes with common
characteristics.
 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all
I/O-bound jobs in another queue. The Process Scheduler then alternately
selects jobs from each queue and assigns them to the CPU based on the
algorithm assigned to the queue.
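
The CPU-bound/I/O-bound example can be sketched as two queues with the scheduler alternating between them. A toy Python illustration; the alternation policy and helper names are invented for this example (real multilevel queue schedulers typically use per-queue algorithms and fixed priorities).

```python
from collections import deque

# One queue per process class, as in the example above.
queues = {"cpu_bound": deque(), "io_bound": deque()}

def assign(name, kind):
    """Place a process in a queue based on a property of the process."""
    queues[kind].append(name)

def pick_next(turn):
    """Alternately favor each queue; fall back to the other if one is empty."""
    order = (["cpu_bound", "io_bound"] if turn % 2 == 0
             else ["io_bound", "cpu_bound"])
    for q in order:
        if queues[q]:
            return queues[q].popleft()
    return None

assign("A", "cpu_bound")
assign("B", "io_bound")
assign("C", "cpu_bound")
schedule = [pick_next(i) for i in range(3)]  # alternates between the queues
```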
