1. Thread
Explain how threads are created and how they are managed.
Explain how the threading API for the Microsoft .NET Framework works. Give a suitable example of creating a thread and also discuss how a thread's priority is set.
Explain how thread synchronization takes place. What are POSIX threads?
What are threads inside the OS and threads inside the hardware?
A thread is a single sequence stream within a process. Threads are also called lightweight processes as they possess
some of the properties of processes. Each thread belongs to exactly one process.
In an operating system that supports multithreading, a process can consist of many threads. Threads can share a single CPU by context switching between them, but they run truly in parallel only when more than one CPU (or core) is available.
All threads belonging to the same process share - code section, data section, and OS resources (e.g. open files
and signals)
But each thread has its own thread control block (TCB) with its own thread ID, program counter, register set, and stack
Any operating system process can execute threads; a single process can have multiple threads.
Threads run concurrently, which improves application performance. Each such thread has its own
CPU state and stack, but they share the address space of the process and the environment. For example,
when we work on Microsoft Word or Google Docs, we notice that while we are typing, multiple things
happen together (formatting is applied, page is changed and auto save happens).
Threads can share common data so they do not need to use inter-process communication. Like the processes,
threads also have states like ready, executing, blocked, etc.
Priority can be assigned to the threads just like the process, and the highest priority thread is scheduled first.
Each thread has its own Thread Control Block (TCB). Like the process, a context switch occurs for the thread,
and register contents are saved in the TCB. As threads share the same address space and resources,
synchronization is also required for the various activities of the thread.
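As a minimal sketch of thread creation and priority (in Java rather than .NET; the class name PriorityDemo and the thread name "worker" are illustrative, not from the text):

```java
// Sketch: creating a thread and setting its priority with Java's threading API.
public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> System.out.println(
            Thread.currentThread().getName() + " running at priority "
            + Thread.currentThread().getPriority());

        Thread worker = new Thread(task, "worker");
        // Priorities range from Thread.MIN_PRIORITY (1) to Thread.MAX_PRIORITY (10);
        // the scheduler treats them only as hints.
        worker.setPriority(Thread.MAX_PRIORITY);

        worker.start();   // moves the thread from the New state to Runnable
        worker.join();    // wait for the worker thread to terminate
    }
}
```

A higher priority makes the thread more likely to be scheduled first, as the text notes, but does not guarantee it.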
Components of Threads These are the basic components of a thread:
Stack Space: Stores local variables, function calls, and return addresses specific to the thread.
Register Set: Hold temporary data and intermediate results for the thread's execution.
Program Counter: Tracks the current instruction being executed by the thread.
Types of Thread in Operating System Threads are of two types. These are described below.
User Level Threads
Kernel Level Threads
1. User Level Thread A user-level thread is a thread that is not created using system calls; the kernel plays no part in its management. User-level threads can be easily implemented by the user. Because the kernel is unaware of them, it treats the containing process as a single-threaded entity.
Because of the presence of only Program Counter, Register Set, and Stack Space, it has a simple
representation.
The operating system is unaware of user-level threads, so kernel-level optimizations, like load balancing
across CPUs, are not utilized.
If a user-level thread makes a blocking system call, the entire process (and all its threads) is blocked, reducing
efficiency.
User-level thread scheduling is managed by the application, which can become complex and may not be as
optimized as kernel-level scheduling.
2. Kernel Level Threads A kernel-level thread is a thread that the operating system recognizes and manages directly.
The kernel has its own thread table, where it keeps track of all threads in the system, and the operating system
kernel provides the calls to create and manage them. Kernel threads have somewhat longer context-switching times.
Kernel-level threads can run on multiple processors or cores simultaneously, enabling better utilization of
multicore systems.
The kernel is aware of all threads, allowing it to manage and schedule them effectively across available
resources.
The kernel can distribute threads across CPUs, ensuring optimal load balancing and system performance.
Context switching between kernel-level threads is slower compared to user-level threads because it requires
mode switching between user and kernel space.
Managing kernel-level threads involves frequent system calls and kernel interactions, leading to increased
CPU overhead.
A large number of threads may overload the kernel scheduler, leading to potential performance degradation
in systems with many threads.
Implementation of this type of thread is a little more complex than a user-level thread.
Difference Between Process and Thread The primary difference is that threads within the same process run in a
shared memory space, while processes run in separate memory spaces. Threads are not independent of one another
like processes are, and as a result, threads share with other threads their code section, data section, and OS
resources (like open files and signals). But, like a process, a thread has its own program counter (PC), register set, and
stack space.
What is Multi-Threading? A thread is also known as a lightweight process. The idea is to achieve parallelism by
dividing a process into multiple threads. For example, in a browser, multiple tabs can be different threads. MS Word
uses multiple threads: one thread to format the text, another thread to process inputs, etc. More advantages of
multithreading are discussed below.
Multithreading is a technique used in operating systems to improve the performance and responsiveness of
computer systems. Multithreading allows multiple threads (i.e., lightweight processes) to share the same resources
of a single process, such as the CPU, memory, and I/O devices.
Multithreading can be done without OS support, as seen in user-level threading models. Historically, some Java Virtual Machine (JVM) implementations provided their own thread management ("green threads"). Such threads, also
called user-level threads, are managed independently of the underlying operating system.
The application itself manages the creation, scheduling, and execution of threads without relying on the operating
system's kernel. The application contains a threading library that handles thread creation, scheduling, and context
switching. The operating system is unaware of User-Level threads and treats the entire process as a single-threaded
entity.
Responsiveness: If the process is divided into multiple threads, then as soon as one thread completes its
execution, its output can be returned immediately.
Faster context switch: Context switch time between threads is lower compared to the process context
switch. Process context switching requires more overhead from the CPU.
Effective utilization of multiprocessor system: If we have multiple threads in a single process, then we can
schedule multiple threads on multiple processors. This will make process execution faster.
Resource sharing: Resources like code, data, and files can be shared among all threads within a process.
Note: Stacks and registers can't be shared among the threads. Each thread has its own stack and registers.
Communication: Communication between multiple threads is easier, as the threads share a common address
space, while processes must follow specific inter-process communication techniques to communicate with
each other.
Enhanced throughput of the system: If a process is divided into multiple threads, and each thread function is
considered as one job, then the number of jobs completed per unit of time is increased, thus increasing the
throughput of the system.
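The resource-sharing and synchronization points above can be sketched in Java; the SharedCounter class and its method names are illustrative. Two threads share one counter, and the synchronized keyword serializes the increments so no update is lost:

```java
// Sketch: two threads sharing data in the same address space, with
// synchronization protecting the shared counter.
public class SharedCounter {
    private int count = 0;

    public synchronized void increment() { count++; } // one thread at a time
    public synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SharedCounter c = new SharedCounter();
        Runnable work = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };

        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(c.get()); // always 20000 with synchronization
    }
}
```

Without the synchronized keyword, the two threads could interleave their read-modify-write steps and lose updates, which is exactly the kind of inconsistency synchronization prevents.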
What are Threads? Threads are small units of a computer program that can run independently. They allow a
program to perform multiple tasks at the same time, like having different parts of the program run simultaneously.
This makes programs more efficient and responsive, especially for tasks that can be divided into smaller parts.
Each thread has:
A program counter
A register set
A stack space
Threads are not independent of each other as they share the code, data, OS resources, etc.
Threads allow multiple tasks to be performed simultaneously within a process, making them a fundamental concept
in modern operating systems. Threads and processes also share several similarities:
Both have their own execution context: Each thread and process has its own execution context, which
includes its own register set, program counter, and stack. This allows each thread or process to execute
independently and make progress without interfering with other threads or processes.
Both can communicate with each other: Threads and processes can communicate with each other using
various inter-process communication (IPC) mechanisms such as shared memory, message queues, and pipes.
This allows threads and processes to share data and coordinate their activities.
Both can be preempted: Threads and processes can be preempted by the operating system, which means
that their execution can be interrupted at any time. This allows the operating system to switch to another
thread or process that needs to execute.
Both can be terminated: Threads and processes can be terminated by the operating system or by other
threads or processes. When a thread or process is terminated, all of its resources, including its execution
context, are freed up and made available to other threads or processes.
At the same time, threads and processes differ in several ways:
Resources: Processes have their own address space and resources, such as memory and file handles,
whereas threads share memory and resources with the program that created them.
Scheduling: Processes are scheduled to use the processor by the operating system, whereas threads are
scheduled to use the processor by the operating system or the program itself.
Creation: The operating system creates and manages processes, whereas the program or the operating
system creates and manages threads.
Communication: Because processes are isolated from one another and must rely on inter-process
communication mechanisms, they generally have more difficulty communicating with one another than
threads do. Threads, on the other hand, can interact with other threads within the same program directly.
Threads, in general, are lighter than processes and are better suited for concurrent execution within a single
program. Processes are commonly used to run separate programs or to isolate resources between programs.
Types of Threads
There are two main types of threads, User Level Threads and Kernel Level Threads; let's discuss each one in
detail:
User Level Threads are implemented in a user-level library; they are not created using system calls. Thread
switching does not need to call into the OS or cause an interrupt to the kernel. The kernel doesn't know about
user-level threads and manages the process as if it were single-threaded.
Advantages of ULT
Thread switching is fast, since it requires no kernel mode switch, and user-level threads have a simple representation.
They can be implemented on any operating system, even one whose kernel does not support threads.
Disadvantages of ULT
If one user-level thread makes a blocking system call, the entire process is blocked.
User-level threads of one process cannot run in parallel on multiple processors.
The kernel knows about and manages the threads. Instead of a thread table in each process, the kernel itself has a
master thread table that keeps track of all the threads in the system. In addition, the kernel also maintains the
traditional process table to keep track of processes. The OS kernel provides system calls to create and manage threads.
Advantages of KLT
Since the kernel has full knowledge of all threads in the system, the scheduler may decide to give more time to
processes having a large number of threads.
If one thread blocks, the kernel can schedule another thread of the same process, and threads can run in
parallel on multiple processors.
Disadvantages of KLT
Thread creation and context switching are slower, since they require system calls and mode switches into the kernel.
The kernel must maintain a thread table, consuming kernel resources as the number of threads grows.
Threading Issues
The fork() and exec() System Calls : The semantics of the fork() and exec() system calls change in a
multithreaded program. If one thread in a program calls fork(), does the new process duplicate all threads, or
is the new process single-threaded? Some UNIX systems have chosen to have two versions of fork(), one that
duplicates all threads and another that duplicates only the thread that invoked the fork() system call. The
exec() system call works the same way as usual: if a thread invokes exec(), the program specified in the
parameter to exec() will replace the entire process, including all threads.
Signal Handling : A signal is used in UNIX systems to notify a process that a particular event has occurred. A
signal may be received either synchronously or asynchronously, depending on the source of and the reason
for the event being signaled. All signals, whether synchronous or asynchronous, follow the same pattern:
1. A signal is generated by the occurrence of a particular event.
2. The signal is delivered to a process.
3. Once delivered, the signal must be handled.
A signal may be handled by one of two possible handlers: a default signal handler or a user-defined signal
handler. Every signal has a default signal handler that the kernel runs when handling that signal. This default
action can be overridden by a user-defined signal handler that is called to handle the signal.
Thread Cancellation : Thread cancellation involves terminating a thread before it has completed. For
example, if multiple threads are concurrently searching through a database and one thread returns the
result, the remaining threads might be canceled. Another situation might occur when a user presses a button
on a web browser that stops a web page from loading any further. Often, a web page loads using several
threads—each image is loaded in a separate thread. When a user presses the stop button on the browser, all
threads loading the page are canceled. A thread that is to be canceled is often referred to as the target
thread. Cancellation of a target thread may occur in two different scenarios:
1. Asynchronous cancellation: one thread immediately terminates the target thread.
2. Deferred cancellation: the target thread periodically checks whether it should terminate, allowing it an
opportunity to terminate itself in an orderly fashion.
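Deferred cancellation can be sketched with Java's interrupt mechanism (the class name CancelDemo and the 50 ms delay are illustrative choices, not from the text):

```java
// Sketch of deferred cancellation: the target thread polls its interrupt
// flag and exits cleanly, rather than being terminated immediately.
public class CancelDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread target = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // ... do one unit of work, then re-check the flag ...
            }
            System.out.println("target thread exiting in an orderly fashion");
        });

        target.start();
        Thread.sleep(50);      // let the target run briefly
        target.interrupt();    // request cancellation (deferred, not forced)
        target.join();         // the target notices the flag and terminates itself
    }
}
```

Java deliberately offers no safe asynchronous cancellation (Thread.stop is deprecated), so this cooperative, deferred style is the idiomatic form.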
Thread-Local Storage : Threads belonging to a process share the data of the process. Indeed, this data
sharing provides one of the benefits of multithreaded programming. However, in some circumstances, each
thread might need its own copy of certain data. We will call such data thread-local storage (or TLS.) For
example, in a transaction-processing system, we might service each transaction in a separate thread.
Furthermore, each transaction might be assigned a unique identifier. To associate each thread with its unique
identifier, we could use thread-local storage.
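Java's ThreadLocal gives a minimal sketch of the transaction-identifier example; the names TlsDemo, txId, and the thread names are illustrative:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: thread-local storage. Each "transaction" thread sees its own
// independently initialized copy of the identifier.
public class TlsDemo {
    private static final AtomicInteger nextId = new AtomicInteger(1);
    private static final ThreadLocal<Integer> txId =
        ThreadLocal.withInitial(nextId::getAndIncrement);

    public static void main(String[] args) throws InterruptedException {
        Runnable tx = () -> System.out.println(
            Thread.currentThread().getName() + " has transaction id " + txId.get());

        Thread a = new Thread(tx, "tx-a");
        Thread b = new Thread(tx, "tx-b");
        a.start(); b.start();
        a.join(); b.join();
    }
}
```

Although txId is a single static field, each thread's first call to get() runs the initializer and stores a per-thread value, so the two threads print different identifiers.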
Scheduler Activations : One scheme for communication between the user-thread library and the kernel is
known as scheduler activation. It works as follows: The kernel provides an application with a set of virtual
processors (LWPs), and the application can schedule user threads onto an available virtual processor.
Advantages of Threading
Resource Sharing: Resources like code and data are shared between threads, thus allowing a multithreaded
application to have several threads of activity within the same address space.
Increased Concurrency: Threads may be running parallelly on different processors, increasing concurrency in
a multiprocessor machine.
Lesser Cost: It costs less to create and context-switch threads than processes.
Lesser Context-Switch Time: Threads take lesser context-switch time than processes.
Disadvantages of Threading
Complexity: Threading can make programs more complicated to write and debug because threads need to
synchronize their actions to avoid conflicts.
Resource Overhead: Each thread consumes memory and processing power, so having too many threads can
slow down a program and use up system resources.
Difficulty in Optimization: It can be challenging to optimize threaded programs for different hardware
configurations, as thread performance can vary based on the number of cores and other factors.
Debugging Challenges: Identifying and fixing issues in threaded programs can be more difficult compared to
single-threaded programs, making troubleshooting complex.
Multithreading in Operating System A thread is a path that is followed during a program’s execution. The majority
of programs written nowadays run as a single thread. For example, a program is not capable of reading keystrokes
while making drawings. These tasks cannot be executed by the program at the same time. This problem can be
solved through multithreading, so that two or more tasks can be executed simultaneously.
What is Multithreading? Multithreading is a feature in operating systems that allows a program to do several tasks
at the same time. Think of it like having multiple hands working together to complete different parts of a job faster.
Each "hand" is called a thread, and they help make programs run more efficiently. Multithreading makes your
computer work better by using its resources more effectively, leading to quicker and smoother performance for
applications like web browsers, games, and many other programs you use every day.
How Does Multithreading Work? Multithreading works by allowing a computer's processor to handle multiple tasks
at the same time. Even though the processor can only do one thing at a time, it switches between different threads
from various programs so quickly that it looks like everything is happening all at once.
Processor Handling : The processor can execute only one instruction at a time, but it switches between
different threads so fast that it gives the illusion of simultaneous execution.
Thread Synchronization : Each thread is like a separate task within a program. They share resources and work
together smoothly, ensuring programs run efficiently.
Efficient Execution : Threads in a program can run independently or wait for their turn to process, making
programs faster and more responsive.
Programming Considerations : Programmers need to be careful about managing threads to avoid problems
like conflicts or situations where threads get stuck waiting for each other.
What is Multitasking? Multitasking is the ability of an operating system to run multiple programs or tasks at the
same time. It allows you to perform different activities simultaneously on your computer. For example, you can listen
to music while browsing the internet and typing a document all at once.
Multitasking is of two types: Processor-based and thread-based. Processor-based multitasking is managed by the OS,
however, multitasking through multithreading can be controlled by the programmer to some extent. The concept
of multi-threading needs a proper understanding of these two terms - a process and a thread. A process is a
program being executed. A process can be further divided into independent units known as threads. A thread is like
a small light-weight process within a process. Or we can say a collection of threads is what is known as a process.
Applications: Threading is used widely in almost every field, most visibly on the internet in transaction
processing of every kind, such as recharges, online transfers, and banking. Threading divides a program into
small, lightweight parts that place less burden on CPU and memory, so the work can be carried out easily and
the goal achieved in the desired field. The concept of threading arose from the need to keep pace with fast and
regular changes in technology; as the saying goes, "necessity is the mother of invention," and threads were
developed to enhance the capability of programming.
Multithreading vs Multitasking
Example: Multithreading - a web browser loading a page, handling user input, and downloading files
simultaneously. Multitasking - listening to music, browsing the web, and typing a document at the same time.
Resource Use: Multithreading - utilizes CPU resources more efficiently within a program. Multitasking -
manages system resources to allocate time and memory to different programs.
Purpose: Multithreading - enhances the performance and responsiveness of a single application. Multitasking -
improves overall system efficiency by allowing concurrent execution of multiple programs.
Lifecycle of a Thread
There are various stages in the lifecycle of a thread. Following are the stages a thread goes through in its whole life.
New: The lifecycle of a newly created thread starts in this state. It remains in this state until the program
starts it.
Runnable : A thread becomes runnable after it starts. It is considered to be executing the task given to it.
Waiting : While waiting for another thread to perform a task, the currently running thread goes into the
waiting state and then transitions back again after receiving a signal from the other thread.
Timed Waiting: A runnable thread enters into this state for a specific time interval and then transitions back
when the time interval expires or the event the thread was waiting for occurs.
Terminated (Dead) : A thread enters into this state after completing its task.
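The lifecycle stages above correspond to Java's Thread.State values, which can be observed through Thread.getState(); in this sketch the class name LifecycleDemo and the timings are illustrative:

```java
// Sketch: observing thread lifecycle states at three points in time.
public class LifecycleDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(200);           // timed waiting while asleep
            } catch (InterruptedException ignored) { }
        });

        System.out.println(t.getState());    // NEW: created but not yet started
        t.start();
        Thread.sleep(50);                    // give it time to reach sleep()
        System.out.println(t.getState());    // typically TIMED_WAITING here
        t.join();
        System.out.println(t.getState());    // TERMINATED after completing its task
    }
}
```

The middle state is "typically" TIMED_WAITING because scheduling is not deterministic; the thread could still be RUNNABLE if it has not yet reached sleep().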
Parallel Execution: This occurs when every thread in the same multithreaded process runs on a separate
processor at the same time.
Drawbacks of Multithreading Multithreading is complex and many times difficult to handle. It has a few drawbacks.
These are:
If you don't use locking mechanisms properly when coordinating data access, there is a chance of problems
arising such as data inconsistency and deadlock.
If many threads try to access the same data, then there is a chance that the situation of thread starvation
may arise. Resource contention issues are another problem that can trouble the user.
Display issues may occur if threads lack coordination when displaying data.
Benefits of Multithreading
Multithreading can improve the performance and efficiency of a program by utilizing the
available CPU resources more effectively. By executing multiple threads concurrently, it can take advantage of
parallelism and reduce overall execution time.
Multithreading can enhance responsiveness in applications that involve user interaction. By separating time-
consuming tasks from the main thread, the user interface can remain responsive and not freeze or become
unresponsive.
Multithreading can enable better resource utilization. For example, in a server application, multiple threads
can handle incoming client requests simultaneously, allowing the server to serve more clients concurrently.
Multithreading can facilitate better code organization and modularity by dividing complex tasks into smaller,
manageable units of execution. Each thread can handle a specific part of the task, making the code easier to
understand and maintain.
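The server scenario above, where multiple threads handle incoming client requests, can be sketched with a Java thread pool; the class name PoolDemo, the pool size of 4, and the simulated "requests" are illustrative assumptions:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: a fixed pool of worker threads serving several requests concurrently.
public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4); // 4 worker threads
        for (int i = 1; i <= 8; i++) {
            final int request = i;
            pool.submit(() -> System.out.println(
                Thread.currentThread().getName() + " handling request " + request));
        }
        pool.shutdown();                          // stop accepting new work
        pool.awaitTermination(5, TimeUnit.SECONDS); // wait for submitted work to finish
    }
}
```

Reusing a small pool of threads avoids the per-request cost of thread creation and bounds the number of threads, addressing the resource-overhead drawback mentioned earlier.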
Conclusion
In conclusion, multithreading is an important feature in operating systems that allows a program to do multiple tasks
at the same time. By dividing tasks into smaller threads, it helps make programs run faster and more efficiently. This
means better performance and a smoother experience for users. Understanding and using multithreading can
greatly improve how well software and applications perform.