
OSY_3rd_Unit_Notes

This document provides an overview of process management in operating systems, defining key concepts such as processes, programs, and Process Control Blocks (PCBs). It explains the process life cycle, including various states (new, ready, running, waiting, terminated) and the role of scheduling in managing process execution. Additionally, it discusses different types of scheduling (preemptive and non-preemptive), scheduling queues, and the types of schedulers, highlighting their advantages and disadvantages.

Uploaded by Gajanan Markad
Copyright © All Rights Reserved

Unit 3: Process Management

Q.1] Define Process, Program and Process Control Block (PCB)


1] Process:
A process is a program in execution, i.e., an instance of a program being executed. The execution of a process must progress in a sequential fashion.
▪ An operating system executes a variety of programs that run as processes.
▪ The instructions of a single process execute sequentially; there is no parallel execution of instructions within one process.
▪ A process can be defined as an entity which represents the basic unit of work to be implemented in the system.

Fig. Process in Memory


A process in memory is divided into four sections:
1. Text
2. Data
3. Heap
4. Stack
1. Text:
The Text section is made up of the compiled program code, read in from
non-volatile storage when the program is launched.
2. Data:
The Data section is made up of the global and static variables, allocated and initialized prior to executing main.
3. Heap:
The Heap is used for the dynamic memory allocation, and is managed
via calls to new, delete, malloc, free, etc.
4. Stack:
The Stack is used for local variables. Space on the stack is reserved for
local variables when they are declared.
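The four sections can be loosely illustrated in Python. Python does not expose memory segments directly, so the mapping below is a conceptual analogy only (the names `counter`, `grow`, and `code_bytes` are invented for illustration):

```python
# A loose analogy of the four sections of a process image, sketched in
# Python. The mapping to Text/Data/Heap/Stack is conceptual only.

counter = 0          # module-level variable: plays the role of the Data section


def grow(n):
    # 'n' and 'items' live in this function's frame (the Stack section);
    # the list object they reference is allocated dynamically (the Heap section).
    items = []
    for i in range(n):
        items.append(i)
    return items


# The compiled bytecode of 'grow' is the closest analogue of the Text
# section: read-only instructions produced from the program's source code.
code_bytes = grow.__code__.co_code

if __name__ == "__main__":
    print(len(grow(4)))              # a heap-allocated list with 4 elements
    print(isinstance(code_bytes, bytes))
```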
2] Program:
A program is a piece of code which may be a single line or millions
of lines. A computer program is usually written by a computer programmer in
a programming language.
▪ A computer program is a collection of instructions that performs a specific task when executed by a computer. When we compare a program with a process, we can conclude that a process is a dynamic instance of a computer program.
▪ A computer program that performs a well-defined task is also known as an algorithm. A collection of computer programs, libraries and related data is referred to as software.
3] Process Control Block (PCB):
A Process Control Block is a data structure maintained by the Operating System for every process; it holds the information needed to identify and manage that process.

Q.2] Explain Process States Diagram (Process Life Cycle)


Process States:
A process state is the condition of a process at a particular instant of time. The process state defines the current position of the process and helps to get details of the process at that instant.
1. New
2. Ready
3. Running
4. Waiting
5. Terminated
1. New:
When a process enters the system, it is in the new state. In this state the process is created and resides in the job pool.
2. Ready:
When the process is loaded into the main memory, it is ready for
execution. In this state the process is waiting for processor allocation.
3. Running:
When the CPU is available, the system selects one process from main memory and executes its instructions. A process in execution is in the running state. On a single-processor system, only one process can be in the running state at a time; on a multiprocessor system, several processes can be running simultaneously.
4. Waiting State:
When a process is in execution, it may request for I/O
resources. If the resource is not available, process goes into the waiting
state. When the resource is available, the process goes back to ready state.
5. Terminated State:
When the process completes its execution, it goes into
the terminated state. In this state the memory occupied by the process is
released.
Features of The Process State:
I. A process can move from the running state to the waiting state if it needs
to wait for a resource to become available.
II. A process can move from the waiting state to the ready state when the
resource it was waiting for becomes available.
III. A process can move from the ready state to the running state when it is
selected by the operating system for execution.
IV. The scheduling algorithm used by the operating system determines which
process is selected to execute from the ready state.
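The transitions listed above can be sketched as a small transition table. This is a minimal illustration of the five-state model, not an OS implementation (the table and function names are invented):

```python
# A minimal sketch of the five-state process life cycle described above.
# The transition table encodes which state changes are legal.

LEGAL_TRANSITIONS = {
    ("new", "ready"),          # process admitted into main memory
    ("ready", "running"),      # scheduler dispatches the process
    ("running", "ready"),      # preempted (e.g., time slice expired)
    ("running", "waiting"),    # process requests an unavailable resource
    ("waiting", "ready"),      # requested resource became available
    ("running", "terminated"), # process finished execution
}


def move(state, new_state):
    """Return the new state if the transition is legal, else raise."""
    if (state, new_state) not in LEGAL_TRANSITIONS:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state


if __name__ == "__main__":
    s = "new"
    for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
        s = move(s, nxt)
    print(s)  # terminated
```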
Q.3] Explain Process Control Block (PCB) with diagram.

Fig. PCB
1. Process State:
The state of the process is stored in the PCB which helps to
manage the processes and schedule them. There are different states for a
process which are “running,” “waiting,” “ready,” or “terminated.”
2. Process number:
Every process is assigned a unique identifier known as the process ID (PID), which is stored in this field.
3. Program Counter:
When a context switch occurs, the address of the next instruction to be executed is stored in the program counter field, which helps in resuming the execution of the process from where it left off.
4. Registers:
When a running process's time slice expires, the current values of the process-specific CPU registers are stored in the PCB and the process is swapped out. When the process is scheduled to run again, the register values are read from the PCB and written back to the CPU registers. This is the main purpose of the register fields in the PCB.
5. Memory limits:
This field contains the information about memory
management system used by the operating system. This may include page
tables, segment tables, etc.
6. Accounting Information:
This information includes the amount of CPU
used, time limits, account holders, job or process number and so on. It also
includes information about listed I/O devices allocated to the process such
as list of open files.
7. List of Open files:
This information includes the list of files opened for a
process.
8. I/O Status Information:
This information includes the list of I/O devices
allocated to this process, a list of open files and so on. The PCB simply
serves as the repository for any information that may vary from process
to process.
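The fields above can be gathered into a single record. A hypothetical sketch follows (field names are illustrative; a real kernel PCB is a C structure with many more fields):

```python
# A hypothetical PCB record mirroring the fields listed above.
from dataclasses import dataclass, field


@dataclass
class PCB:
    pid: int                        # process number (unique ID)
    state: str = "new"              # process state
    program_counter: int = 0        # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    memory_limits: tuple = (0, 0)   # e.g., base and limit values
    open_files: list = field(default_factory=list)   # list of open files
    cpu_time_used: float = 0.0      # accounting information


if __name__ == "__main__":
    p = PCB(pid=42)
    p.state = "ready"
    p.open_files.append("/tmp/log.txt")
    print(p.pid, p.state)  # 42 ready
```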

Advantages of Using Process Control Block:

1. Efficient Process Management: The Process Control Block allows


the operating system to manage multiple processes that ensures
maximum resource utilization.

2. Quick Context Switching: In order to keep the system responsive, the


PCB allows rapid context switching by storing the state of a process.

3. Resource Tracking and Allocation: The PCB tracks all the resources
allocated to a process to make sure that the resources are used
effectively and the errors are minimized.
4. Simplified Process Control: The PCB provides a centralized structure
for storing process information, simplifying the control and
management of processes within the operating system.

Disadvantages of using Process Control Block:


1. Increased Overhead: When there are a large number of processes, maintaining their PCBs creates major overhead, consuming memory and processing power.

2. Synchronization Issues: For ensuring that the PCBs are updated


correctly and consistently with multi-threads, complicated
synchronization mechanisms are required.

3. Difficulty in Modifications: Once a PCB structure is created making


changes or updates can be complex and may require significant
redesign efforts.
Q. 4] Explain Process Scheduling:
Process Scheduling:
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
▪ Process scheduling is an essential part of multiprogramming operating systems.
▪ Such operating systems allow more than one process to be loaded into the
executable memory at a time and the loaded process shares the CPU using
time multiplexing.
Scheduling Objective:
1. Maximize throughput.
2. Maximize number of users receiving acceptable response times.
3. Be predictable
4. Balance resource use.
5. Avoid indefinite postponement.
6. Enforce Priorities.
Types of Scheduling:
1. Preemptive Scheduling
2. Non-Preemptive Scheduling
1] Preemptive Scheduling:
▪ Preemptive Scheduling is a scheduling method where the tasks are mostly
assigned with their priorities. Sometimes it is important to run a task with a
higher priority before another lower priority task, even if the lower priority
task is still running.
▪ At that time, the lower priority task holds for some time and resumes when the
higher priority task finishes its execution.
▪ Algorithms based on preemptive scheduling are Round Robin (RR) , Shortest
Remaining Time First (SRTF) , Priority (preemptive version) , etc.
Advantages of Preemptive Scheduling:
1. It is a more robust method because a process cannot monopolize the processor.
2. The scheduler can react to events by interrupting the currently running task.
3. It improves the average response time.
4. It is especially beneficial in a multiprogramming environment.
Disadvantages of Preemptive Scheduling:
1. It consumes extra computational resources (time spent making scheduling decisions).
2. It takes more time suspending the executing process, switching the context,
and dispatching the new incoming process.
3. If several high-priority processes arrive at the same time, the low-priority
process would have to wait longer.
2] Non-Preemptive Scheduling:
▪ Non-Preemptive Scheduling is one in which once the resources (CPU
Cycle) have been allocated to a process, the process holds it until it
completes its burst time or switches to the 'wait' state.
▪ In non-preemptive scheduling, a process cannot be interrupted until it
terminates itself or its time is over. If a process that has a long burst time
is running the CPU, then the process that has less CPU burst time would
starve.
▪ Non-preemptive scheduling is not flexible in nature, but it is inexpensive.
▪ Algorithms based on non-preemptive scheduling are: Shortest Job First
(SJF basically non preemptive) and Priority (nonpreemptive version) , etc.
Advantages of Non-Preemptive Scheduling:
▪ It has a minimal scheduling burden.
▪ It is a very easy procedure.
▪ Less computational resources are used.
▪ It has a high throughput rate.
Disadvantages of Non- Preemptive Scheduling:
1. It has a poor response time for the process.
2. A machine can freeze up due to bugs.

Difference Between Preemptive Scheduling and Non-Preemptive Scheduling


1. Preemptive scheduling affects the design of the operating system kernel; non-preemptive scheduling does not.
2. Preemptive scheduling has the overhead of scheduling the processes; non-preemptive scheduling does not.
3. Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
4. Preemptive scheduling has an associated cost; non-preemptive scheduling does not.
5. CPU utilization is very high with preemptive scheduling and very low with non-preemptive scheduling.
6. Waiting time is less with preemptive scheduling and higher with non-preemptive scheduling.
7. Response time is less with preemptive scheduling and higher with non-preemptive scheduling.
8. Examples of preemptive scheduling: Round Robin and Shortest Remaining Time First. Examples of non-preemptive scheduling: First Come First Serve and Shortest Job First.
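The waiting-time comparison can be made concrete with a small simulation. This sketch assumes all processes arrive at time 0; the burst times and quantum in the demo are illustrative values, not drawn from these notes:

```python
# Average waiting time under FCFS (non-preemptive) versus Round Robin
# (preemptive), assuming every process arrives at time 0.
from collections import deque


def fcfs_waiting(bursts):
    """Average waiting time under First Come First Serve."""
    t, total = 0, 0
    for b in bursts:
        total += t   # each process waits for everything before it
        t += b
    return total / len(bursts)


def rr_waiting(bursts, quantum):
    """Average waiting time under Round Robin with the given time quantum."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    t = 0
    finish = [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)      # preempted: back to the ready queue
        else:
            finish[i] = t
    waits = [finish[i] - bursts[i] for i in range(len(bursts))]
    return sum(waits) / len(bursts)


if __name__ == "__main__":
    bursts = [24, 3, 3]
    print(fcfs_waiting(bursts))             # 17.0
    print(round(rr_waiting(bursts, 4), 2))  # 5.67
```

The short jobs finish much earlier under Round Robin, which is why the table lists lower waiting time for preemptive scheduling.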

Q.5] Explain Process Scheduling Queues:


The OS maintains all PCBs in process scheduling queues. The OS maintains a separate queue for each of the process states, and the PCBs of all processes in the same execution state are placed in the same queue. When the state of a process changes, its PCB is unlinked from its current queue and moved to its new state's queue.
1. Job queue
2. Ready queue
3. Device queues
1] Job queue:
The job queue contains all the processes (jobs) that are waiting to be admitted to the system. When a job is created, it enters the job queue and waits until it is ready for processing. This queue helps the operating system track all the jobs yet to be handled.
Characteristics
1. Contains all submitted jobs.
2. Processes are stored here in a wait state until they are ready to go to the
execution stage.
3. This is the first and most basic state that acts as a default storage of new
jobs added to a scheduling system.
2] Ready queue:
The ready queue contains all the processes that reside in main memory, ready and waiting to execute. When a process is admitted, it joins the ready queue to wait for the CPU to become free. The operating system assigns a process from this queue to the CPU based on the scheduling algorithm it implements.
Characteristics
1. It is commonly maintained as a FIFO list of the processes waiting for the CPU.
2. Processes are chosen from this queue for execution.
3. Selection is governed by scheduling algorithms such as FCFS, SJF, or Priority Scheduling.
3] Device queue:
Processes that are blocked due to the unavailability of an I/O device constitute this queue; it contains all the processes waiting for I/O. A process residing in a device queue is in the blocked (waiting) state.
Characteristics
1. It holds the processes that are waiting for I/O devices or other resources.
2. A process is transferred here when it requests a resource that is not currently obtainable.
3. The process is returned to the ready queue once the requested resource becomes available.
Fig. Process Scheduling Queues
Use of Scheduling Queue:
Scheduling queues are a fundamental component of
process scheduling in operating systems. They help manage the state and
execution of processes effectively.
Benefits of Scheduling Queue:
1. Efficiency:
Scheduling queues help manage multiple processes efficiently,
allowing for multitasking and better CPU utilization.
2. Prioritization:
They allow the operating system to prioritize processes based
on various factors (e.g., priority levels, time requirements), ensuring that
critical processes get CPU time.
3. Fairness:
Scheduling queues contribute to a fair distribution of CPU time
among processes, helping prevent starvation and ensuring that all processes
make progress.
4. Handling Different States:
They effectively handle processes in various
states (ready, running, waiting), making transitions smooth and organized.
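The movement of PCBs between the queues described above can be sketched with Python deques (the function names are invented; a real kernel links PCB structures into its queues directly):

```python
# A minimal sketch of the scheduling queues, with PCBs (plain dicts here)
# moving between queues as their state changes.
from collections import deque

job_queue = deque()      # all submitted jobs
ready_queue = deque()    # processes in memory, waiting for the CPU
device_queue = deque()   # processes blocked on I/O


def admit(pcb):
    """Long-term scheduler: move a job into main memory (ready queue)."""
    pcb["state"] = "ready"
    ready_queue.append(pcb)


def request_io(pcb):
    """A running process requests an unavailable device: block it."""
    pcb["state"] = "waiting"
    device_queue.append(pcb)


def io_complete():
    """Device interrupt: unblock the oldest waiting process."""
    pcb = device_queue.popleft()
    pcb["state"] = "ready"
    ready_queue.append(pcb)
    return pcb


if __name__ == "__main__":
    p1 = {"pid": 1, "state": "new"}
    admit(p1)                # job admitted -> ready queue
    ready_queue.popleft()    # dispatched to the CPU
    request_io(p1)           # blocks on I/O -> device queue
    io_complete()            # I/O done -> back to ready queue
    print(p1["state"])       # ready
```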
Q. 7] Explain Schedulers with its Types
Scheduler:
Schedulers are special system software which handle process
scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide
which process to run.
Schedulers are of three types-
1. Long-term scheduler
2. Short-term scheduler
3. Medium term Scheduler
1] Long term scheduler:
▪ Long term scheduler is also known as job scheduler. It chooses the
processes from the pool (secondary memory) and keeps them in the ready
queue maintained in the primary memory.
▪ The long-term scheduler mainly controls the degree of multiprogramming. Its purpose is to choose a good mix of I/O-bound and CPU-bound processes from among the jobs present in the pool.
▪ If the job scheduler chooses mostly I/O-bound processes, then all the jobs may reside in the blocked state most of the time and the CPU will remain idle, which reduces the degree of multiprogramming. Therefore, the job of the long-term scheduler is very critical and may affect the system for a long time.
2] Short term scheduler:
▪ Short term scheduler is also known as CPU scheduler. It selects one of the
Jobs from the ready queue and dispatch to the CPU for the execution.
▪ A scheduling algorithm is used to select which job is going to be dispatched
for the execution. The Job of the short-term scheduler can be very critical
in the sense that if it selects job whose CPU burst time is very high then all
the jobs after that, will have to wait in the ready queue for a very long
time.
▪ This problem is called starvation which may arise if the short-term
scheduler makes some mistakes while selecting the job.
3] Medium term scheduler:
▪ The medium-term scheduler takes care of the swapped-out processes. If a running process needs some I/O time for its completion, its state must be changed from running to waiting.
▪ The medium-term scheduler is used for this purpose. It removes the process from the running state to make room for other processes. Such processes are the swapped-out processes, and this procedure is called swapping. The medium-term scheduler is responsible for suspending and resuming processes.
▪ It reduces the degree of multiprogramming. Swapping is necessary to maintain a good mix of processes in the ready queue.
Comparison among Schedulers:
1. Name: The short-term scheduler is also known as the "CPU Scheduler"; the medium-term scheduler as the "Swapping Scheduler"; the long-term scheduler as the "Admission Scheduler".
2. Control: The short-term scheduler offers less control; the medium-term scheduler reduces the level of multiprogramming; the long-term scheduler offers more control.
3. Speed: The short-term scheduler is the fastest; the medium-term scheduler offers medium speed; the long-term scheduler is comparatively slower.
4. Time-sharing: The short-term scheduler is minimal in time-sharing systems; the medium-term scheduler is minimal or absent; the long-term scheduler is present in time-sharing systems.
5. Process state transition: The short-term scheduler moves a process from ready to running; the medium-term scheduler handles processes that have been swapped out; the long-term scheduler moves a process from new to ready.
6. Selection: The short-term scheduler selects a new process for the CPU quite frequently; the medium-term scheduler selects a process that does not currently need to be fully loaded in RAM and swaps it to the swap partition; the long-term scheduler selects a good mix of I/O-bound and CPU-bound processes.

Q. 8] Explain Context Switching:


Context Switching:
A context switch is the mechanism to store and restore the
state or context of a CPU in Process Control block so that a process execution
can be resumed from the same point at a later time.
▪ A context switcher enables multiple processes to share a single CPU.
▪ Context switching is an essential part of a multitasking operating system
features.
▪ When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored in its process control block.
▪ After this, the state for the process to run next is loaded from its own PCB
and used to set the PC, registers, etc. At that point the second process can
start executing.
▪ Context switching is a fundamental concept in operating systems, and is
necessary for multitasking and efficient resource management. In a
multitasking operating system, multiple processes or threads can be
running concurrently.
When the process is switched, the following information is stored for later use:
▪ Program counter
▪ Scheduling information
▪ Base and limit register value
▪ Current used register
▪ Changed state
▪ I/O state information
▪ Accounting information

Fig. Context Switching


How Context Switching Works:
1. The current process state is saved
2. A new process is selected from the ready queue
3. The new process state is loaded
4. Execution of the new process begins
Context switching triggers:
1. Interrupts
2. Multitasking
3. Kernel/User switch
1. Interrupts: When an interrupt occurs, for example when the disk signals that requested data is ready, a context switch is performed so the interrupt can be handled; only the part of the state needed by the handler is switched, keeping interrupt handling fast.
2. Multitasking: A context switching is the characteristic of multitasking that
allows the process to be switched from the CPU so that another process can be
run. When switching the process, the old state is saved to resume the process's
execution at the same point in the system.
3. Kernel/User Switch: A switch of this kind occurs when the operating system transitions between user mode and kernel mode.

Context Switching Steps:

1. Step – 1: The data in the register and program counter will be saved in the
PCB of process P1, let’s call it PCB1, and the state in PCB1 will be
changed.
2. Step – 2: Process P1 will be moved to the appropriate queue, which could
be ready, I/O, or waiting.
3. Step – 3: The next process, say P2, will be chosen from the ready queue.
4. Step – 4: The process P2’s state will be changed to running, and if P2 was
previously executed by the CPU, it will restart execution from where it was
put on hold.
5. Step – 5: If we need to execute process P1, we must complete all of the
tasks stated in steps 1 to 4.
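The five steps can be sketched by modeling the CPU registers and the PCBs as dictionaries (purely illustrative; a real context switch is performed in assembly by the kernel):

```python
# A sketch of the context-switch steps above, with the CPU state and the
# PCBs modeled as dictionaries.

cpu = {"pc": 0, "acc": 0}                 # the shared CPU state


def save_context(pcb):
    """Step 1: copy the CPU registers and program counter into the PCB."""
    pcb["saved"] = dict(cpu)


def restore_context(pcb):
    """Steps 3-4: load the next process's saved state back into the CPU."""
    cpu.update(pcb["saved"])


pcb1 = {"pid": 1, "saved": {"pc": 0, "acc": 0}}
pcb2 = {"pid": 2, "saved": {"pc": 100, "acc": 7}}

# P1 runs for a while, then its time slice expires...
cpu["pc"], cpu["acc"] = 40, 13
save_context(pcb1)       # Step 1: P1's context goes into PCB1
restore_context(pcb2)    # Steps 3-4: P2 resumes from pc=100
assert cpu["pc"] == 100
restore_context(pcb1)    # Step 5: later, P1 resumes exactly where it left off
assert cpu["pc"] == 40 and cpu["acc"] == 13
```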
Advantage of Context Switching:

1. Multitasking: Allows multiple processes to be executed concurrently,


increasing overall system efficiency.
2. Responsiveness: Enables the system to quickly respond to user inputs and
events, improving the user experience.
3. Fault tolerance: Allows the system to recover from errors or crashes in
one process without affecting other processes.
4. Resource sharing: Enables multiple processes to share system resources,
such as memory, I/O devices, and network connections.
Disadvantages of Context Switching:
1. Time Overhead: Context switching requires time to save and restore the
context of a process, which can be significant, especially for processes with
large amounts of data.
2. Memory Overhead: Each process requires its own memory space, and
frequent context switching can lead to a large amount of memory
overhead.
3. Cache Misses: When a process is switched out and then switched back in,
there is a possibility that the CPU cache may need to be refilled.
4. Synchronization Overhead: Context switching in an OS can lead to synchronization overhead, since access to shared kernel data structures must be coordinated.

Q.9] Explain IPC (Inter process communication) with types


Inter process communication:
Processes can coordinate and interact with one
another using a method called inter-process communication (IPC).
Types of IPC:
1. Shared Memory
2. Message passing
1) Shared memory:
▪ In this model, a region of memory is shared by the processes that want to communicate with each other; this region is called a shared memory segment.
▪ All the processes using the shared memory segment must attach it to their address space. The processes can then exchange information by reading and/or writing data in the shared memory segment.
▪ The format of the data and its location are determined by the communicating processes, not by the operating system.
▪ The processes are also responsible for ensuring that they are not writing to the same location simultaneously.
▪ After the shared memory segment is established, all accesses to it are treated as routine memory accesses, without assistance from the kernel.
Fig. Shared Memory
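A minimal shared-memory sketch using Python's `multiprocessing.Value`, which places an integer in an OS-backed shared segment (`worker` and `demo` are invented names; the lock guards against two processes writing simultaneously, as the text requires):

```python
# Two child processes increment the same shared integer; both see the
# same memory, so the final count reflects all increments.
from multiprocessing import Process, Value


def worker(shared, n):
    # Child process: write into the shared segment under the lock.
    for _ in range(n):
        with shared.get_lock():
            shared.value += 1


def demo(n_children=2, increments=1000):
    shared = Value("i", 0)     # 'i' = C int, placed in shared memory
    children = [Process(target=worker, args=(shared, increments))
                for _ in range(n_children)]
    for c in children:
        c.start()
    for c in children:
        c.join()
    return shared.value        # every child's writes are visible here


if __name__ == "__main__":
    print(demo())  # 2000
```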

2)Message Passing:
▪ In this model, communication takes place by exchanging messages
between cooperating processes.
▪ It allows processes to communicate and synchronize their action without
sharing the same address space.
▪ It is particularly useful in a distributed environment when
communication process may reside on a different computer connected by
a network.
▪ Communication requires sending and receiving messages through the
kernel.
▪ The processes that want to communicate with each other must have a
communication link between them. Between each pair of processes exactly
one communication link exists.

Fig. Message Passing
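A message-passing sketch using Python's `multiprocessing.Pipe` (`producer` and `demo` are invented names): the two processes share no memory, and messages travel through a kernel-managed communication link between the pair:

```python
# Parent and child communicate only by sending and receiving messages
# over a pipe; no address space is shared.
from multiprocessing import Process, Pipe


def producer(conn):
    # Child: send messages through the link, then close its end.
    for msg in ["hello", "world"]:
        conn.send(msg)
    conn.close()


def demo():
    parent_end, child_end = Pipe()          # one link for this pair
    p = Process(target=producer, args=(child_end,))
    p.start()
    received = [parent_end.recv(), parent_end.recv()]
    p.join()
    return received


if __name__ == "__main__":
    print(demo())  # ['hello', 'world']
```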


Advantages of IPC:
1. Resource Sharing: IPC allows multiple processes to share resources such
as memory and files, enabling more efficient use of system resources.
2. Concurrency: IPC enables concurrent execution of processes, which can
lead to improved performance, especially on multi-core systems where
processes can run in parallel.
3. Synchronization: IPC mechanisms help synchronize processes, ensuring
that they operate in a coordinated manner. This is crucial in preventing data
corruption and ensuring consistency.
4. Error Handling: Processes can communicate errors or status updates,
allowing for better fault tolerance and recovery strategies.
Disadvantages of IPC:
1. Complexity: Implementing IPC can add complexity to system design.
Developers must manage communication protocols, synchronization, and
potential errors, making systems harder to understand and maintain.
2. Deadlocks: Improperly managed IPC can lead to deadlocks, where two or
more processes wait indefinitely for each other to release resources,
causing the system to halt.
3. Security Risks: If not properly managed, IPC can introduce security
vulnerabilities, such as unauthorized access to shared resources or sensitive
data.
4. Resource Contention: Multiple processes accessing shared resources
simultaneously can lead to contention, resulting in performance
bottlenecks or inconsistent states.
Difference Between Message Passing and Shared Memory

1. Definition: In the shared memory model, multiple processes access a common memory space to communicate; in the message passing model, processes communicate by sending and receiving messages.
2. Communication strategy: Shared memory is the faster communication strategy; message passing is relatively slower.
3. Kernel intervention: Shared memory involves no kernel intervention (after setup); message passing involves kernel intervention.
4. Amount of data: Shared memory can be used to exchange larger amounts of data; message passing suits small amounts of data.
5. Data isolation: With shared memory, processes share a segment of memory, allowing direct access to shared variables; with message passing, each process has its own private memory space, reducing the risk of data corruption.
6. Synchronization: Shared memory requires synchronization mechanisms (e.g., semaphores, mutexes) to prevent data races and ensure data consistency; message passing often requires explicit synchronization mechanisms (e.g., message queues) to coordinate communication.
7. Fault tolerance: Shared memory is less fault-tolerant; message passing is more fault-tolerant.
8. Scalability: Shared memory is less scalable; message passing is more scalable.

Q.10] Explain Threads with benefits and its Types:


Thread:
A thread is a single sequential flow of execution of tasks of a
process so it is also known as thread of execution or thread of control.
▪ A thread shares with its peer threads some information, such as the code segment, data segment, and open files.
▪ When one thread alters a shared memory item, all other threads see the change.
▪ A thread is also called a lightweight process.
▪ Threads provide a way to improve application performance through parallelism. They represent a software approach to improving operating system performance by reducing the overhead of full processes; a thread is a lightweight equivalent of a classical process.
▪ Each thread belongs to exactly one process and no thread can exist
outside a process. Each thread represents a separate flow of control.
▪ Threads have been successfully used in implementing network servers
and web server.
Advantages of Thread:
1. Threads minimize the context switching time.
2. Use of threads provides concurrency within a process.
3. Efficient communication.
4. It is more economical to create and context switch threads.
5. Threads allow utilization of multiprocessor architectures to a greater scale
and efficiency.
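The sharing described above can be sketched with Python's `threading` module (names invented for illustration; the lock prevents lost updates to the shared counter):

```python
# Peer threads of one process update the same shared data; each thread's
# loop variable lives on its own private stack.
import threading

shared = {"count": 0}
lock = threading.Lock()


def worker(n):
    # Every thread sees and modifies the same 'shared' dict.
    for _ in range(n):
        with lock:
            shared["count"] += 1


def run(n_threads=4, increments=10_000):
    threads = [threading.Thread(target=worker, args=(increments,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return shared["count"]


if __name__ == "__main__":
    print(run())  # 40000
```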
Types of Threads:
1. User Level Threads
2. Kernel Level Threads
1] User Level Threads:
▪ User-level threads are implemented by user-level software. These threads are created and managed by a thread library, which provides an API for creating, managing, and synchronizing threads. A user-level thread is faster to switch than a kernel-level thread and is basically represented by a program counter, stack, registers, and a control block.
▪ User-level threads are typically employed in scenarios where fine control over
threading is necessary, but the overhead of kernel threads is not desired. They
are also useful in systems that lack native multithreading support, allowing
developers to implement threading in a portable way.
Examples: Java thread, POSIX threads, etc.
Advantages of ULT:
1. Thread switching does not require Kernel mode privileges.
2. User level thread can run on any operating system.
3. Scheduling can be application specific in the user level thread.
4. User level threads are fast to create and manage.
5. It is faster and efficient.
6. It does not require modifications of the operating system
Disadvantages of ULT:
1. In a typical operating system, most system calls are blocking.
2. Multithreaded application cannot take advantage of multiprocessing.
2] Kernel Level Threads:
▪ In this case, thread management is done by the Kernel. There is no thread
management code in
the application area. Kernel threads are supported directly by the operating
system. Any application can be programmed to be multithreaded. All of the
threads within an application are supported within a single process.
▪ The OS kernel is responsible for generating, scheduling, and overseeing
kernel-level threads since it controls them directly.
▪ Each kernel-level thread has its own context, including information about the
thread’s status, such as its name, group, and priority.
Examples: Windows, Solaris, Linux, Windows XP
Advantages of KLT:
1. The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
2. If one thread in a process is blocked, the Kernel can schedule another
thread of the same process.
3. Kernel routines themselves can be multithreaded.
Disadvantages of KLT:
1. All thread management and scheduling is done by the kernel, which adds overhead.
2. The implementation of kernel threads is more difficult than that of user threads.
3. Kernel-level threads are slower than user-level threads.
Difference between User-Level & Kernel-Level Thread
1. Implemented by: User threads are implemented by user-level libraries; kernel threads are implemented by the Operating System (OS).
2. Recognition: The operating system doesn't recognize user-level threads directly; kernel threads are recognized by the operating system.
3. Implementation: Implementation of user threads is easy; implementation of kernel-level threads is complicated.
4. Context switch time: Less for user-level threads; more for kernel-level threads.
5. Hardware support: No hardware support is required for user-level context switching; hardware support is needed for kernel-level threads.
6. Multithreading: Applications using only user-level threads cannot take full advantage of multiprocessing; kernels can be multithreaded.
7. Operating system: Any operating system can support user-level threads; kernel-level threads are operating-system-specific.
8. Examples: User-level — POSIX thread libraries, Mach C-Threads; kernel-level — Java threads, POSIX threads on Linux.

Benefits of Thread in Operating System:


1. Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be returned immediately.
2. Faster context switch: Context switch time between threads is lower
compared to the process context switch. Process context switching requires
more overhead from the CPU.
3. Effective utilization of multiprocessor system: If we have multiple threads
in a single process, then we can schedule multiple threads on multiple
processors. This will make process execution faster.
4. Resource sharing: Resources like code, data, and files can be shared
among all threads within a process. Note: Stacks and registers can’t be shared
among the threads. Each thread has its own stack and registers.
5. Communication: Communication between multiple threads is easier, as the threads share a common address space, while processes must use specific inter-process communication techniques to communicate with each other.
6. Enhanced throughput of the system: If a process is divided into multiple
threads, and each thread function is considered as one job, then the number
of jobs completed per unit of time is increased, thus increasing the
throughput of the system.
Need of Thread:
▪ It takes far less time to create a new thread in an existing process than to
create a new process.
▪ Threads can share the common data; they do not need to use Inter- Process
communication.
▪ Context switching is faster when working with threads.
▪ It takes less time to terminate a thread than a process.
Difference between Process and Thread

| Parameter | Thread | Process |
|---|---|---|
| 1. Weight | A thread is lightweight. | A process is heavyweight. |
| 2. Context switching | Threads require less time for context switching. | Processes require more time for context switching. |
| 3. Termination time | Threads require less time for termination. | Processes require more time for termination. |
| 4. Dependence | Threads are dependent on their process. | Individual processes are independent. |
| 5. Code and data sharing | A thread shares the data segments, files, etc. with its peer threads. | Processes have independent data and code segments. |
| 6. Memory sharing | A thread may share some memory with its peer threads. | Processes do not share memory. |
| 7. Resource consumption | Threads are lightweight, so they need fewer resources. | Processes are heavyweight, so they need more resources. |
| 8. Creation time | Threads require less time for creation. | Processes require more time for creation. |
| 9. Communication | Communication between threads is faster. | Communication between processes is slower. |
Q.11] Explain Multithreading Models with its types:
Multithreading Model:
Multithreading models in operating systems define how
threads are managed and scheduled within a process.
▪ Multithreading is a programming model that allows multiple threads to exist
within the context of a single process, enabling concurrent execution of tasks.
This model is widely used in modern applications to enhance performance,
responsiveness, and resource utilization.
Types of Multithreading Model:
1. Many to one model
2. Many to many model
3. One to one model
1] Many to One Model:
Thread management is done in user space by the thread library. When a thread
makes a blocking system call, the entire process is blocked. Only one thread
can access the kernel at a time, so multiple threads cannot run in parallel
on a multiprocessor. If the operating system does not support kernel threads,
user-level thread libraries follow the many-to-one model.
Fig. Many to One Model
Advantages of Many to One Model:
1. Lightweight: Context switching between user threads is faster since it doesn't
involve the kernel.
2. Low Overhead: User-level thread management is more efficient and has less
overhead.
3. Simplicity: Easier to implement and manage at the user level.
4. Resource Efficiency: Uses fewer system resources since there's only one
kernel thread.
Disadvantages of Many to One Model:
1. Limited Concurrency: Only one thread can run at a time, limiting parallel
execution.
2. Blocking: If one thread blocks (e.g., waiting for I/O), all threads are blocked.
3. No Multiprocessing: Cannot take full advantage of multiprocessor systems.
4. No Kernel Awareness: The kernel does not manage user threads, which may
lead to inefficient scheduling.
2] One to One Model:
There is a one-to-one relationship between each user-level thread and a
kernel-level thread. This model provides more concurrency than the
many-to-one model: it allows another thread to run when a thread makes a
blocking system call, and it supports multiple threads executing in parallel
on multiprocessors. The disadvantage of this model is that creating a user
thread requires creating the corresponding kernel thread. OS/2, Windows NT,
and Windows 2000 use the one-to-one model.
Fig. One to One Model
Advantages of One-to-One Model:
1. True Concurrency: Multiple threads can run simultaneously on multiple
processors.
2. Responsiveness: If one thread blocks, others can continue executing,
improving responsiveness.
3. Kernel Scheduling: The kernel can manage threads efficiently, taking
advantage of system resources.
4. Better Scalability: Suitable for applications requiring high levels of
concurrency.
Disadvantages of One-to-One Model:
1. Higher Overhead: More system resources are consumed due to the
management of multiple kernel threads.
2. Context Switching Cost: Switching between kernel threads can be more
expensive.
3. Complexity: More complex to implement and manage compared to user-level
threading.
4. Resource Limitation: Limits on the number of threads due to kernel resource
constraints.
3] Many to Many Model:
▪ The many-to-many model multiplexes any number of user threads onto an
equal or smaller number of kernel threads.
▪ The following diagram shows the many-to-many threading model where 6
user level threads are multiplexing with 6 kernel level threads.
▪ In this model, developers can create as many user threads as necessary and
the corresponding Kernel threads can run in parallel on a multiprocessor
machine.
▪ This model provides the best level of concurrency; when a thread
performs a blocking system call, the kernel can schedule another thread for
execution.
Fig. Many to Many Model
Advantages of Many to Many Model:
1. Flexibility: Dynamically maps user threads to kernel threads, optimizing
resource usage.
2. Enhanced Concurrency: Supports a high number of concurrent threads,
suitable for scalable applications.
3. Blocking Management: If one thread blocks, others can run, improving
application performance.
4. Efficient Scheduling: The kernel can make scheduling decisions based on
current load and resource availability.
Disadvantages of Many to Many Model:
1. Complexity: Increased complexity in managing the mapping and scheduling
of threads.
2. Overhead: Potential overhead in thread management and context switching.
3. Implementation Challenges: More challenging to implement than simpler
models.
4. Potential Bottlenecks: If not managed well, it can lead to performance
bottlenecks.
Q.12] Write syntax and use with suitable example of following commands
1. Kill
2. Sleep
3. PS
4. Wait
5. Exit
6. Cal
7. Date
1] kill:
The kill command in Unix/Linux systems is used to send signals to
processes, primarily to terminate them. The most common signal is SIGTERM,
which requests a process to terminate gracefully. The kill command can also be
used with different signals for various effects.
Syntax: kill [-signal] PID (the default signal is SIGTERM)
Example:
1) Find the PID of the process:
pgrep myapp
2) Terminate the process:
kill 1234
3) Forcefully terminate the process:
kill -9 1234
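The two steps above can be combined in a script. The sketch below uses a background `sleep` as a stand-in for a real application, and `kill -0` (which delivers no signal) only to probe whether the process still exists:

```shell
# Start a long-running background job as a stand-in for a real app.
sleep 100 &
pid=$!

# Ask it to terminate gracefully (kill's default signal is SIGTERM).
kill "$pid"

# Reap the child; its exit status reflects the signal, so ignore it here.
wait "$pid" 2>/dev/null || true

# kill -0 sends no signal; it only tests whether the PID still exists.
if kill -0 "$pid" 2>/dev/null; then
    kill -9 "$pid"   # still alive: force termination with SIGKILL
fi
echo "process $pid terminated"
```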
2] sleep:
The sleep command in Unix/Linux systems is used to pause the
execution of a script or command for a specified duration. It is commonly used
in scripts to create delays or to wait for a certain amount of time before executing
the next command.
Syntax: sleep NUMBER[SUFFIX]…
sleep OPTION
Suffix:
• s: seconds (default)
• m: minutes
• h: hours
• d: days
Example:
1) Pause for a specified number of seconds:
sleep 5
2) Pause for a specified number of minutes:
sleep 2m
3) Pause for a specified number of hours:
sleep 1h
4) Pause for a specified number of days:
sleep 1d
3] ps: It is used to display the characteristics of a process. This command when
execute without options, it lists the processes associated with a user at a
particular terminal.
Syntax: ps [options]
Example: ps
Output:
Each line in the output shows PID, the terminal with which the process is
associated, the cumulative processor time that has been consumed since the
process has been started and the process name.
Options:
1] -a: Shows the processes of all users.
Example: ps -a
2] -u: Shows the processes of a specified user.
Example: ps -u sakshi
3] -f: Displays a full listing of the attributes of a process, including UID
(user ID), PPID (parent process ID), C (amount of CPU consumed by the
process) and STIME (the time at which the process started).
Example: ps -f
4] -e: Displays all processes, including user and system processes.
Example: ps -e
The full-format output shows:
• UID: User ID of the process owner.
• PID: Process ID.
• PPID: Parent Process ID.
• C: CPU usage.
• STIME: Start time of the process.
• TTY: Terminal associated with the process.
• TIME: Total CPU time used.
• CMD: Command that started the process.
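These options combine with other tools through pipes. A small sketch (the `-p` option, which selects a single process by PID, is standard but not listed above):

```shell
# Start a background job so there is a known process to inspect.
sleep 30 &
pid=$!

# Full-format listing (-f) restricted to that one PID (-p).
ps -f -p "$pid"

# Filter the full process list by command name; the second grep
# removes the grep process itself from the results.
ps -e | grep "sleep" | grep -v grep
```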
4] wait:
The wait command in Unix and Unix-like operating systems is used to
pause the execution of a script until a specified process completes. It can also be
used to retrieve the exit status of that process.
Syntax: wait [pid]
Example:
1] Basic Usage:
sleep 5 &
pid=$!
wait $pid
echo "Background process $pid has completed."
2] Waiting for Multiple Background Processes:
sleep 3 &
sleep 5 &
wait
echo "All background processes have completed."
3] Getting Exit Status:
(exit 1) &
pid=$!
wait $pid
status=$?
if [ $status -ne 0 ]; then
echo "Process $pid failed with exit status $status."
else
echo "Process $pid completed successfully."
fi
5] exit:
The exit command in Unix and Unix-like operating systems is used to
terminate a shell script or to exit a shell session. You can specify an exit status
code to indicate success or failure.
Syntax: exit [N] (N is the optional exit status; 0 means success)
Example:
1] Basic Usage:
Example: exit
2] Using Exit Status in a Calling Script:
Example: exit 2
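The calling script reads the callee's status from the special variable `$?`. A minimal sketch, using a hypothetical child script written to /tmp for illustration:

```shell
# Create a small child script that signals failure with exit 2.
cat > /tmp/child.sh <<'EOF'
#!/bin/sh
echo "doing some work"
exit 2
EOF
chmod +x /tmp/child.sh

# Run it and capture its exit status; the `|| status=$?` form also
# works in shells running with `set -e`.
status=0
/tmp/child.sh || status=$?

# By convention, status 0 means success; any non-zero value is failure.
if [ "$status" -eq 0 ]; then
    echo "child succeeded"
else
    echo "child failed with exit status $status"   # prints: child failed with exit status 2
fi
```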
6] cal:
The cal command is used to display a calendar in the terminal. It can show
the current month, a specific month, or an entire year.
Syntax: cal [options] [month] [year]
Options:
• -y: Display the current year.
• -3: Display the previous, current, and next month.
• -j: Display Julian dates (days numbered sequentially from January 1).
Example:
1] Display the Current Month:
Example: cal
2] Display a Specific Month and Year:
Example: cal 10 2024
3] Display the Current Year:
Example: cal -y
4] Display Previous, Current, and Next Month:
Example: cal -3
7] date:
The date command in Unix and Unix-like operating systems is used to display
the current date and time.
Syntax: date [OPTION]... [+FORMAT]
Options:
• -u: Display the date and time in UTC (Coordinated Universal Time).
• -d: Display a date string.
• -R: Output in RFC-2822 format.
Formatting Examples:
• %Y: Year (e.g., 2024)
• %m: Month (01-12)
• %d: Day of the month (01-31)
• %H: Hour (00-23)
Example:
1] Display the Current Date and Time:
Example: date
2] Display a Specific Date:
Example: date -d "2024-12-25"
3] Display the Current Date in UTC:
Example: date -u
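4] Display a custom format: the +FORMAT argument combines the specifiers listed above (the -d form below assumes GNU date, as in the earlier example):

```shell
# Build custom date strings from the format specifiers above.
date +"%Y-%m-%d"         # e.g. 2024-10-05
date +"%d/%m/%Y %H:%M"   # day/month/year plus 24-hour time

# FORMAT also works together with -u and -d:
date -u +"%H"                       # current hour in UTC (00-23)
date -d "2024-12-25" +"%Y-%m-%d"    # prints: 2024-12-25
```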
Q.13] Writer the outputs of following commands
(i) Wait 2385018
(ii) Sleep 09
(iii) PS –u Asha
1. The wait command waits until the process with PID 2385018 terminates.
2. The sleep command delays execution for 9 seconds, i.e. it pauses the
terminal for 9 seconds before the next command runs.
3. The ps command with -u displays the processes of the specified user
Asha.
Q. 14] Write Unix command for following:
i) create a folder OSY
ii) create a file FIRST in OSY folder
iii) List/display all files and directories.
iv) Write command to clear the screen
1. create a folder OSY: $mkdir OSY
2. create a file FIRST in OSY folder: $cd OSY
$cat > FIRST (or $touch FIRST)
3. List/display all files and directories: $ls
4. to clear screen: $clear
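The four answers can be run as one session; this sketch uses touch for step ii) because `cat > FIRST` waits for keyboard input:

```shell
mkdir OSY        # i)   create the folder OSY
cd OSY           # move into it
touch FIRST      # ii)  create an empty file named FIRST
ls               # iii) list files and directories: prints FIRST
cd ..
clear            # iv)  clear the terminal screen
```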
Q.15] Give commands to perform following tasks:
i) To add delay in script
ii) To terminate a process
1. To add delay in script:
sleep 5
2. To terminate a process:
kill 1234
2nd Chapter Question
Q.1] Write two uses of following O.S. tools
i. User Management
ii. Security Policy
iii. Device Management
iv. Performance monitor
v. Task Scheduler
1] User Management:
1. Create and delete user accounts for access control.
2. Assign roles and permissions to manage access levels.
3. Enforce password policies for enhanced security.
4. Monitor user activity for auditing and security analysis.
2] Security Policy:
1. Establish data protection guidelines for sensitive information.
2. Define network access controls to prevent unauthorized connections.
3. Outline incident response procedures for security breaches.
4. Ensure compliance with legal and regulatory requirements.
3] Device management:
1. Managing all the hardware or virtual devices of computer system.
2. Allow interaction with hardware devices through device driver.
3. Used to install device and component-level drivers as well as associated
software.
4. Keeping track of all device’s data and location.
5. Allocate devices to the process as per process requirement and priority.
4] Performance monitor:
1. Monitor various activities on a computer such as CPU or memory usage.
2. Used to examine how programs running on their computer affect
computer’s performance
3. It is used to identify performance problems or bottleneck that affect
operating system or installed applications.
4. Used to observe the effect of system configuration changes.
5] Task Scheduler:
1. Assign processor to task ready for execution
2. Executing predefined actions automatically whenever a certain set of
condition is met.
3. Task Scheduler can automate regular backups of files or systems.
4. It can start applications at specific times or on startup.
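On Unix systems the same automation role is played by cron. The fragment below is a hypothetical crontab (script paths and times are made up), installed with `crontab -e`:

```
# m  h  dom mon dow  command
 30  2   *   *   *   /home/user/backup.sh   # daily backup at 02:30
  0  9   *   *   1   /home/user/report.sh   # every Monday at 09:00
```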