Chapter 3
Contents have been taken from web resources, namely:
1. Tutorials Point
2. GeeksforGeeks
3. Wikipedia
4. Lecture notes of experts as posted on the Internet
What is a Thread?
A thread is a flow of execution through the process code, with its own program counter that keeps
track of which instruction to execute next, system registers which hold its current working
variables, and a stack which contains the execution history.
A thread shares information such as the code segment, data segment, and open files with its
peer threads. When one thread alters a memory item in the data segment, all other peer threads
see the change.
A thread is also called a lightweight process. Threads provide a way to improve application
performance through parallelism. Threads represent a software approach to improving operating
system performance by reducing overhead; each thread is roughly equivalent to a classical
process.
Each thread belongs to exactly one process and no thread can exist outside a process. Each thread
represents a separate flow of control. Threads have been successfully used in implementing
network servers and web servers. They also provide a suitable foundation for parallel execution of
applications on shared memory multiprocessors. The following figure shows the working of a
single-threaded and a multithreaded process.
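The working of a multithreaded process can be sketched with Python's threading module (the function and thread names below are illustrative, not from the text): each thread is a separate flow of control through the same program.

```python
# A minimal sketch of a multithreaded process: several threads, each
# with its own flow of control, running within one process.
import threading

def greet(name):
    # This function body is each thread's independent flow of execution.
    print(f"hello from {name}")

# Create three peer threads sharing the same code and data segments.
threads = [threading.Thread(target=greet, args=(f"thread-{i}",))
           for i in range(3)]
for t in threads:
    t.start()   # begin each thread's independent execution
for t in threads:
    t.join()    # wait until every thread has finished
```

Because the threads share the process's address space, they can all call the same `greet` function without any copying, which is what makes them cheaper than separate processes.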
Difference between Process and Thread
1. Process: heavyweight or resource intensive. Thread: lightweight, taking fewer resources
than a process.
2. Process: switching needs interaction with the operating system. Thread: switching does not
need to interact with the operating system.
3. Process: in multiple processing environments, each process executes the same code but has
its own memory and file resources. Thread: all threads can share the same set of open files
and child processes.
4. Process: if one process is blocked, then no other process can execute until the first process
is unblocked. Thread: while one thread is blocked and waiting, a second thread in the same
task can run.
5. Process: multiple processes without using threads use more resources. Thread: multithreaded
processes use fewer resources.
6. Process: each process operates independently of the others. Thread: one thread can read,
write, or change another thread's data.
Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch threads.
• Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
Types of Thread
Threads are implemented in the following two ways −
• User Level Threads − User-managed threads.
• Kernel Level Threads − Operating-system-managed threads acting on the kernel, the core of
the operating system.
User Level Threads
In this case, the kernel is not aware of the existence of threads. The thread library contains
code for creating and destroying threads, for passing messages and data between threads, for
scheduling thread execution, and for saving and restoring thread contexts. The application
starts with a single thread.
Advantages
• Thread switching does not require Kernel mode privileges.
• User level thread can run on any operating system.
• Scheduling can be application specific in the user level thread.
• User level threads are fast to create and manage.
Disadvantages
• In a typical operating system, most system calls are blocking, so a blocking call made by one
thread suspends the entire process.
• A multithreaded application cannot take advantage of multiprocessing, since only one user-level
thread can run at a time.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management code in the
application area. Kernel threads are supported directly by the operating system. Any application
can be programmed to be multithreaded. All of the threads within an application are supported
within a single process.
The Kernel maintains context information for the process as a whole and for individual threads
within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread
creation, scheduling and management in Kernel space. Kernel threads are generally slower to
create and manage than the user threads.
Advantages
• Kernel can simultaneously schedule multiple threads from the same process on multiple
processors.
• If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
• Kernel routines themselves can be multithreaded.
Disadvantages
• Kernel threads are generally slower to create and manage than the user threads.
• Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread facility.
Solaris is a good example of this combined approach. In a combined system, multiple threads
within the same application can run in parallel on multiple processors, and a blocking system
call need not block the entire process. There are three types of multithreading models −
• Many to many relationship.
• Many to one relationship.
• One to one relationship.
Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller number
of kernel threads.
The following diagram shows the many-to-many threading model, where 6 user-level threads are
multiplexed with 6 kernel-level threads. In this model, developers can create as many user
threads as necessary, and the corresponding kernel threads can run in parallel on a
multiprocessor machine. This model provides the best level of concurrency: when a thread
performs a blocking system call, the kernel can schedule another thread for execution.
Many to One Model
The many-to-one model maps many user-level threads to one kernel-level thread. Thread
management is done in user space by the thread library. When a thread makes a blocking system
call, the entire process is blocked. Only one thread can access the kernel at a time, so
multiple threads are unable to run in parallel on multiprocessors.
If an operating system does not support kernel threads, the user-level thread library must use
the many-to-one model.
One to One Model
There is a one-to-one relationship between each user-level thread and a kernel-level thread.
This model provides more concurrency than the many-to-one model. It also allows another thread
to run when a thread makes a blocking system call, and it allows multiple threads to execute in
parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the one-to-one relationship
model.
Difference between User-Level & Kernel-Level Thread
1. User-level threads are faster to create and manage. Kernel-level threads are slower to
create and manage.
2. User-level threads are implemented by a thread library at the user level. Kernel-level
threads are created with the support of the operating system.
3. User-level threads are generic and can run on any operating system. Kernel-level threads are
specific to the operating system.
4. With user-level threads, multithreaded applications cannot take advantage of
multiprocessing. With kernel-level threads, kernel routines themselves can be multithreaded.
States of thread
New State
A thread is in the new state once it has been created. It doesn't take any CPU resources until it is
actually running.
Runnable State
A thread in the runnable state is ready to run and consumes CPU resources once it is scheduled.
However, it is called runnable rather than running because it may have to wait for its turn
while other threads run.
Blocked State
A thread which is not allowed to continue remains in a blocked state. Say a thread is waiting
for input/output (I/O); until it gets those resources, it remains in the blocked state. The
good news is that a blocked thread won't use CPU resources.
The thread is not stopped forever. For example, pulling over to let emergency vehicles pass
does not mean that you are forever barred from your final destination. Similarly, threads with
a higher priority (the emergency vehicles) are processed ahead of yours.
If a thread becomes blocked, another thread moves to the front of the line. How this is
accomplished is covered in the next section about scheduling and context switching.
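The blocked state described above can be illustrated with a small Python sketch (the names `waiter` and `result` and the lock are illustrative): the second thread blocks on a lock held by the main thread and makes no progress, without consuming CPU, until the lock is released.

```python
# A thread entering the blocked state: `waiter` blocks on a lock held
# by the main thread and uses no CPU while blocked; it proceeds only
# when the lock is released.
import threading
import time

lock = threading.Lock()
result = []

def waiter():
    with lock:                 # blocks here until the lock is free
        result.append("ran after unblock")

lock.acquire()                 # main thread holds the lock first
t = threading.Thread(target=waiter)
t.start()
time.sleep(0.1)                # waiter is blocked now, not spinning
lock.release()                 # unblock the waiter
t.join()
print(result)                  # prints ['ran after unblock']
```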
Terminated State
A thread is terminated when it finishes its task, whether successfully or abnormally. At this
point, no CPU resources are used.
Thread States in Operating Systems
When a thread moves through the system, it is always in one of five states (excluding the
CREATION and FINISHED states):
(1) Ready
(2) Running
(3) Waiting
(4) Delayed
(5) Blocked
1. When an application is to be processed, it creates a thread.
2. The thread is then allocated the required resources (such as network access) and enters the
READY queue.
3. When the thread scheduler (like a process scheduler) assigns the processor to the thread, it
moves to the RUNNING queue.
4. When the thread needs some other event to be triggered, which is outside its control (such
as another process completing), it transitions from RUNNING to the WAITING queue.
5. When the application needs to delay the processing of the thread, it can put the thread to
sleep for a specific amount of time. The thread then transitions from RUNNING to the
DELAYED queue.
An example of delaying a thread is snoozing an alarm. After it rings for the first time and
is not switched off by the user, it rings again after a specific amount of time. During
that time, the thread is put to sleep.
6. When a thread generates an I/O request and cannot proceed until it is done, it transitions
from RUNNING to the BLOCKED queue.
7. After its work is completed, the thread transitions from RUNNING to FINISHED.
The difference between the WAITING and BLOCKED transitions is that in WAITING the thread waits
for a signal from another thread or for another process to complete, so the waiting time is
bounded. In the BLOCKED state, there is no specified time (for example, it depends on when the
user provides input).
In order to execute all the processes successfully, the operating system maintains the
information about each thread in a Thread Control Block (TCB).
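Python's threading module offers only a coarse view of these states, but a short sketch (with illustrative names) shows a thread observed before it starts (NEW), while it is alive (ready/running/delayed), and after it terminates (FINISHED):

```python
# Observing a thread's lifecycle through is_alive(): Python does not
# expose READY/WAITING/DELAYED/BLOCKED individually, but the
# new -> alive -> terminated progression is visible.
import threading
import time

def worker():
    time.sleep(0.2)            # sleeping: roughly the DELAYED state

t = threading.Thread(target=worker)
print(t.is_alive())            # False: NEW state, no CPU used yet
t.start()
print(t.is_alive())            # True: thread is now ready/running
t.join()                       # wait for the thread to FINISH
print(t.is_alive())            # False: terminated
```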
Multithreading in Operating System
A thread is a path that is followed during a program's execution. The majority of programs
written nowadays run as a single thread. Say, for example, a program is not capable of reading
keystrokes while making drawings; these tasks cannot be executed by the program at the same
time. This problem can be solved through multitasking, so that two or more tasks can be
executed simultaneously.
Multitasking is of two types: process based and thread based. Process-based multitasking is
managed entirely by the OS, whereas multitasking through multithreading can be controlled by
the programmer to some extent.
The concept of multithreading requires a proper understanding of two terms – a process and
a thread. A process is a program being executed. A process can be further divided into
independent units known as threads.
A thread is like a small, lightweight process within a process; put another way, a process is a
collection of threads.
Applications –
Threading is used widely in almost every field. It is most widely seen on the internet
nowadays, in transaction processing of every type, such as recharges, online transfers, and
banking. Threading divides code into small, lightweight parts that place less burden on CPU and
memory, so that work can be carried out easily and goals achieved in the desired field. The
concept of threading was developed in response to fast and regular changes in technology;
following the idea that necessity is the mother of invention, the thread was devised to enhance
the capability of programming.
Benefits of Multithreading in Operating System
The benefits of multi threaded programming can be broken down into four major categories:
1. Responsiveness –
Multithreading in an interactive application may allow a program to continue running even
if a part of it is blocked or is performing a lengthy operation, thereby increasing
responsiveness to the user.
In a non-multithreaded environment, a server listens on a port for a request; when a request
arrives, it processes the request and only then resumes listening for the next one. The time
taken to process a request makes other users wait unnecessarily. A better approach is to pass
the request to a worker thread and continue listening on the port.
For example, a multithreaded web browser allows user interaction in one thread while a video is
being loaded in another thread. So instead of waiting for the whole web page to load, the user
can continue viewing part of it.
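The worker-thread approach described above can be sketched as follows; `handle_request` and the request strings are illustrative stand-ins for real socket handling, not part of any actual server API:

```python
# Worker-thread pattern: the "listening" loop hands each incoming
# request to a worker thread and resumes immediately, instead of
# making later requests wait behind a slow one.
import threading
import time

def handle_request(request, results):
    time.sleep(0.05)                  # simulate slow processing
    results.append(f"done: {request}")

results = []
workers = []
for request in ["req-1", "req-2", "req-3"]:   # stand-in listen loop
    w = threading.Thread(target=handle_request, args=(request, results))
    w.start()                         # hand off, keep "listening"
    workers.append(w)
for w in workers:
    w.join()                          # a real server would not join here
print(sorted(results))
```

A real server would keep the loop running forever on a socket and never join; the join here only makes the sketch finish deterministically.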
2. Resource Sharing –
Processes may share resources only through techniques such as –
o Message Passing
o Shared Memory
Such techniques must be explicitly organized by the programmer. Threads, however, share the
memory and the resources of the process to which they belong by default.
The benefit of sharing code and data is that it allows an application to have several threads
of activity within the same address space.
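That threads share the memory of their process by default can be demonstrated with a short sketch (the `counter` dictionary and function names are illustrative); a lock keeps the shared update safe:

```python
# Threads share their process's memory by default: both threads update
# the same `counter` object with no message passing or shared-memory
# setup, guarded by a lock so the increments don't race.
import threading

counter = {"value": 0}        # shared data, visible to every thread
lock = threading.Lock()

def add_many(n):
    for _ in range(n):
        with lock:            # serialize access to the shared memory
            counter["value"] += 1

t1 = threading.Thread(target=add_many, args=(10000,))
t2 = threading.Thread(target=add_many, args=(10000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter["value"])       # prints 20000: both saw one object
```

Two separate processes doing the same thing would each increment their own private copy; the shared total is exactly what distinguishes threads.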
3. Economy –
Allocating memory and resources for process creation is a costly job in terms of time and
space.
Since threads share the memory of the process to which they belong, it is more economical to
create and context-switch threads. In general, much more time is consumed in creating and
managing processes than threads.
In Solaris, for example, creating a process is about thirty times slower than creating a
thread, and context switching is about five times slower.
4. Scalability –
The benefits of multithreading increase greatly in the case of a multiprocessor architecture,
where threads may run in parallel on multiple processors. If there is only one thread, it is
not possible to divide the process into smaller tasks that different processors can perform.
A single-threaded process can run on only one processor, regardless of how many processors are
available.
Multithreading on a multiple-CPU machine increases parallelism.
Context Switching
Context switching involves storing the context or state of a process so that it can be reloaded
when required and execution can be resumed from the same point as before. This is a feature of
a multitasking operating system and allows a single CPU to be shared by multiple processes.
A diagram that demonstrates context switching is as follows:
In the above diagram, Process 1 is running initially. Process 1 is switched out and Process 2
is switched in because of an interrupt or a system call. Context switching involves saving the
state of Process 1 into PCB1 and loading the state of Process 2 from PCB2. After some time,
another context switch occurs: Process 2 is switched out and Process 1 is switched in again,
which involves saving the state of Process 2 into PCB2 and loading the state of Process 1 from
PCB1.