Threads

 A thread or lightweight process (LWP) is a basic unit of CPU utilization and comprises the following:
• A thread ID
• A program counter
• A register set
• A stack

 A thread shares the following with its parent process and siblings:
• Code section
• Data section
• Memory address space / global memory
• I/O devices
• Other OS resources such as open files and signals
Single and Multithreaded Processes
Examples of Multithreaded Applications

 An application that creates photo thumbnails from a collection of images may use a separate thread to generate a thumbnail from each separate image.

 A web browser might have one thread display images or text while
another thread retrieves data from the network.

 Multithreaded server architecture – a listener thread accepts each client request and hands it to a new worker thread that services it (sketched below)
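The slides do not include code for this architecture; below is a minimal C/Pthreads sketch of the thread-per-request pattern. handle_request() and accept_connection() are hypothetical application routines standing in for real socket code, not part of any standard API.

/* Sketch of a thread-per-request server loop (compile with -pthread). */
#include <pthread.h>
#include <stdlib.h>

void handle_request(int client_fd);   /* hypothetical: service one client */
int  accept_connection(void);         /* hypothetical: block until a client arrives */

static void *service_thread(void *arg) {
    int fd = *(int *)arg;
    free(arg);
    handle_request(fd);               /* the worker thread does the actual work */
    return NULL;
}

void server_loop(void) {
    for (;;) {
        int *fd = malloc(sizeof *fd);
        *fd = accept_connection();    /* listener thread waits for the next request */
        pthread_t tid;
        pthread_create(&tid, NULL, service_thread, fd);
        pthread_detach(tid);          /* worker cleans itself up; listener resumes */
    }
}

Detaching each worker means no join is needed: the thread's resources are reclaimed automatically when it finishes servicing its request.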


Benefits

 Takes less time to create and terminate a thread than a process
 Context switching is faster – switching between two threads takes less
time than switching between processes
 Minimized communication overhead – threads of the same process share
memory, so they can communicate without invoking the kernel
 Responsiveness – may allow continued execution if part of
process is blocked, especially important for user interfaces
 Resource Sharing – threads share resources of process, easier
than shared memory or message passing
 Economy – cheaper than process creation, thread switching lower
overhead than context switching
 Scalability – process can take advantage of multicore
architectures
Thread Control Block (TCB)

 Like a PCB, a TCB also contains the information necessary for a thread to execute.
 Each thread has its own TCB.
 TCB information is private to each thread.
 The contents are:
• A pointer that will enable it to be chained into a linked list
• Value of its stack pointer
• A stack area that includes local variables
• Thread number, type, or name
• Age of the thread, or how long this thread has been active
• Priority
• Resources that this thread has been granted
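As an illustration only, a TCB holding the fields listed above might be declared like this in C; the struct and field names are assumptions, since every kernel and thread library defines its own layout.

/* Illustrative only: one possible TCB layout for the fields listed above. */
#include <stddef.h>

struct tcb {
    struct tcb *next;           /* pointer used to chain TCBs into a linked list */
    void       *stack_pointer;  /* saved value of the thread's stack pointer */
    void       *stack_base;     /* stack area that holds the local variables */
    size_t      stack_size;
    int         thread_id;      /* thread number */
    char        name[16];       /* optional thread type or name */
    unsigned    age_ticks;      /* how long this thread has been active */
    int         priority;       /* scheduling priority */
    void       *resources;      /* resources granted to this thread */
};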
Thread State Diagram

 A thread may undergo transitions between different states (like a process) during its lifetime.
 The thread state diagram consists of the following states:
• Ready
• Running
• Blocked
Threads

 Operations on Threads
• Thread operations associated with a change in thread state are
 Spawn
 Block
 Unblock
 Finish

 Thread Synchronization
• It is necessary to synchronize the activities of the various threads
 All threads of a process share the same address space and
other resources
 Any alteration of a resource by one thread affects the other
threads in the same process
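A minimal Pthreads sketch of such synchronization: two threads update a shared counter, and a mutex keeps the updates from interfering. The counter variable and iteration count are illustrative.

/* Sketch: two threads increment a shared counter under a mutex (compile with -pthread). */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                                  /* shared data section */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* serialize access to the shared counter */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* 200000, because the updates were synchronized */
    return 0;
}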
Types of Threads

 There are two broad categories of thread implementation:


• User-Level Thread (ULT) – Thread Libraries
• Kernel-Level Thread (KLT) – System Calls
User-Level Threads (ULTs)

 Thread management is done by the application that uses a thread library.
 The thread library manages all threads.
 The kernel is not aware of the existence of threads.
 Three primary thread libraries:
• POSIX Pthreads
• Windows threads
• Java threads
User-Level Threads (ULTs)

 Advantages of ULTs
• Thread switching does not require kernel mode privileges (no
mode switches)
• Scheduling can be application specific
• ULTs can run on any OS

 Disadvantages of ULTs
• In a typical OS many system calls are blocking in nature
 As a result, when a ULT executes a system call, not only is that
thread blocked, but all of the threads within the process are
blocked
• The kernel can only assign processes to processors.
Overcoming ULT Disadvantages

 Writing an application as multiple processes rather than multiple threads

 Jacketing
• Converts a blocking system call into a non-blocking system call
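A sketch of the jacketing idea around read(): the jacket puts the descriptor into non-blocking mode and yields to another user-level thread instead of letting the whole process block. thread_yield() is a hypothetical user-level thread-library call, not a real API.

/* Sketch of a "jacket" around read(). */
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>

void thread_yield(void);   /* hypothetical: run another ready user-level thread */

ssize_t jacketed_read(int fd, void *buf, size_t n) {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
    for (;;) {
        ssize_t r = read(fd, buf, n);
        if (r >= 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
            return r;       /* data arrived, or a real error occurred */
        thread_yield();     /* would block: let the library schedule another ULT */
    }
}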
Kernel-Level Threads (KLTs)

 Thread management is done by the kernel.


 There is no thread-management code in the application level, simply an API
to the kernel thread facility.
 Examples – virtually all general-purpose operating systems, including:
• Windows
• Linux
• Mac OS X
• iOS
• Android
Kernel-Level Threads (KLTs)

 Advantages of KLTs
• The kernel can simultaneously schedule multiple threads from the
same process on multiple processors.
• If one thread in a process is blocked, the kernel can schedule
another thread of the same process.

 Disadvantages of KLTs
• The transfer of control from one thread to another within the same
process requires a mode switch to the kernel.
Combined Approaches

 Thread creation is done in the user space.


 Bulk of scheduling and synchronization of threads is by the application.
 User-level threads (i.e. threads library) are invisible to the OS.
 Solaris is an example.
Multithreading Models

 Many-to-One

 One-to-One

 Many-to-Many
Many-to-One

 Many user-level threads mapped to single kernel thread


 One thread blocking causes all to block
 Multiple threads may not run in parallel on a multicore system because
only one may be in the kernel at a time
 Few systems currently use this model
 Examples:
• Solaris Green Threads
• GNU Portable Threads
One-to-One

 Each user-level thread maps to kernel thread


 Creating a user-level thread creates a kernel thread
 More concurrency than many-to-one
 Number of threads per process sometimes restricted due to overhead
 Examples
• Windows
• Linux
Many-to-Many Model
 Allows many user level threads to be mapped to many kernel threads
 Allows the operating system to create a sufficient number of kernel
threads
 Windows with the ThreadFiber package
 Otherwise not very common
Two-level Model
 Similar to M:M, except that it allows a user thread to be bound to
kernel thread
Multicore Programming

 Multicore or multiprocessor systems put pressure on programmers; challenges include:
• Dividing activities
• Balance
• Data splitting
• Data dependency
• Testing and debugging
 Parallelism implies a system can perform more than one task
simultaneously
 Concurrency supports more than one task making progress
• Single processor / core, scheduler providing concurrency
Concurrency vs. Parallelism
 Concurrent execution on single-core system:

 Parallelism on a multi-core system:


Multicore Programming

 Types of parallelism
• Data parallelism – distributes subsets of the same data
across multiple cores, same operation on each
• Task parallelism – distributing threads across cores, each
thread performing unique operation
Data and Task Parallelism
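A small Pthreads sketch of data parallelism: two threads apply the same operation (summing) to different halves of the same array. The array size and thread count are illustrative.

/* Sketch of data parallelism (compile with -pthread). */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
static int  data[N];
static long partial[2];                 /* one partial result per thread */

static void *sum_range(void *arg) {
    int  id = (int)(long)arg;           /* 0 sums the first half, 1 the second */
    long lo = (long)id * (N / 2), hi = lo + N / 2;
    long s  = 0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[id] = s;
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (int i = 0; i < N; i++) data[i] = 1;
    for (long id = 0; id < 2; id++)
        pthread_create(&t[id], NULL, sum_range, (void *)id);
    for (int id = 0; id < 2; id++)
        pthread_join(t[id], NULL);
    printf("sum = %ld\n", partial[0] + partial[1]);
    return 0;
}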
Thread Libraries

 A thread library provides the programmer with an API for creating and managing threads
 Two primary ways of implementing
• Library entirely in user space
• Kernel-level library supported by the OS
Pthreads

 May be provided either as user-level or kernel-level


 A POSIX standard (IEEE 1003.1c) API for thread creation and
synchronization
 Specification, not implementation
 API specifies behavior of the thread library; implementation is up to the
developers of the library
 Common in UNIX operating systems (Linux & Mac OS X)
Pthreads Example
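The example code from these slides is not reproduced here; a comparable Pthreads program, in the spirit of the Silberschatz text, creates one thread that sums the integers up to a command-line value and then joins it:

/* Compile with -pthread; run as: ./a.out 100 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum;                                 /* shared with the thread */

void *runner(void *param) {              /* the thread begins control here */
    int upper = atoi(param);
    sum = 0;
    for (int i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);
}

int main(int argc, char *argv[]) {
    pthread_t tid;                       /* the thread identifier */
    pthread_attr_t attr;                 /* set of thread attributes */

    if (argc != 2) {
        fprintf(stderr, "usage: a.out <integer value>\n");
        return -1;
    }
    pthread_attr_init(&attr);                        /* default attributes */
    pthread_create(&tid, &attr, runner, argv[1]);    /* create the thread */
    pthread_join(tid, NULL);                         /* wait for it to finish */
    printf("sum = %d\n", sum);
    return 0;
}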
Pthreads Code for Joining 10 Threads
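A sketch of the joining pattern this slide title refers to, here with 10 threads that run a trivial placeholder function:

/* Create NUM_THREADS workers, then join each one in turn (compile with -pthread). */
#include <pthread.h>

#define NUM_THREADS 10

static void *worker(void *param) {       /* trivial placeholder body */
    (void)param;
    return NULL;
}

int main(void) {
    pthread_t workers[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&workers[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(workers[i], NULL);  /* wait for each of the 10 threads */
    return 0;
}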
Threading Issues
 Semantics of fork() and exec() system calls
 Signal handling
• Synchronous and asynchronous
 Thread cancellation of target thread
• Asynchronous or deferred
 Thread-local storage
 Scheduler Activations
Semantics of fork() and exec()

 Does fork() duplicate only the calling thread or all threads?
• Some UNIXes have two versions of fork()
 exec() usually works as normal – it replaces the running process,
including all threads
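A hedged illustration of the usual POSIX behaviour described above: fork() gives the child only a copy of the calling thread, and exec() then replaces the child's entire process image. worker() is just a placeholder that keeps a second thread alive in the parent.

/* Compile with -pthread. */
#include <pthread.h>
#include <sys/types.h>
#include <unistd.h>

static void *worker(void *arg) {         /* placeholder second thread */
    (void)arg;
    for (;;) pause();                    /* just stays alive in the parent */
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    pid_t pid = fork();                  /* child starts with a single thread */
    if (pid == 0) {
        execlp("ls", "ls", (char *)NULL);  /* replaces the child, threads and all */
        _exit(1);                          /* reached only if exec fails */
    }
    /* the parent continues here with both of its threads */
    return 0;
}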
Signal Handling
 Signals are used in UNIX systems to notify a process that a particular
event has occurred.
 A signal handler is used to process signals
1. Signal is generated by particular event
2. Signal is delivered to a process
3. Signal is handled by one of two signal handlers:
1. default
2. user-defined
 Every signal has a default handler that the kernel runs when handling the signal
• A user-defined signal handler can override the default
• For a single-threaded process, the signal is delivered to the process
Signal Handling (Cont.)
 Where should a signal be delivered in a multi-threaded process?
• Deliver the signal to the thread to which the signal applies
• Deliver the signal to every thread in the process
• Deliver the signal to certain threads in the process
• Assign a specific thread to receive all signals for the process
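One common way to realize the last option with Pthreads is to block the signal in every thread and dedicate one thread that waits for it with sigwait(); the sketch below does this for SIGINT. The thread function name is illustrative.

/* Compile with -pthread; press Ctrl-C to deliver SIGINT. */
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static void *signal_thread(void *arg) {
    sigset_t *set = arg;
    int sig;
    sigwait(set, &sig);                      /* this thread receives the signal */
    printf("caught signal %d\n", sig);
    return NULL;
}

int main(void) {
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    pthread_sigmask(SIG_BLOCK, &set, NULL);  /* threads created later inherit this mask */

    pthread_t tid;
    pthread_create(&tid, NULL, signal_thread, &set);
    pthread_join(tid, NULL);                 /* wait until the signal has been handled */
    return 0;
}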
Thread Cancellation
 Terminating a thread before it has finished
 Thread to be canceled is target thread
 Two general approaches:
• Asynchronous cancellation terminates the target thread
immediately
• Deferred cancellation allows the target thread to periodically check
if it should be cancelled
 Pthread code to create and cancel a thread:
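The slide's code is not reproduced here; a comparable sketch uses deferred cancellation, where the worker notices a pending cancel at its next cancellation point, pthread_testcancel(). The worker body is illustrative.

/* Compile with -pthread. */
#include <pthread.h>

static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        /* ... perform one unit of work ... */
        pthread_testcancel();            /* deferred cancellation point */
    }
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);   /* create the target thread */
    pthread_cancel(tid);                 /* request cancellation of the target thread */
    pthread_join(tid, NULL);             /* wait until it has actually terminated */
    return 0;
}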
Thread-Local Storage

 Thread-local storage (TLS) allows each thread to have its own copy of
data
 Useful when you do not have control over the thread creation process
(i.e., when using a thread pool)
 Different from local variables
• Local variables visible only during single function invocation
• TLS visible across function invocations
 Similar to static data
• TLS is unique to each thread
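A minimal sketch using the GCC/Clang __thread storage-class qualifier (C11 spells it _Thread_local); each thread increments its own copy of the variable, and the value persists across function calls within that thread.

/* Compile with -pthread (gcc/clang). */
#include <pthread.h>
#include <stdio.h>

static __thread int tls_counter = 0;     /* one copy per thread, visible across calls */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 5; i++)
        tls_counter++;                   /* touches only this thread's copy */
    printf("thread %lu: tls_counter = %d\n",
           (unsigned long)pthread_self(), tls_counter);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);               /* both threads print 5, not 10 */
    return 0;
}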
Scheduler Activations
 Both M:M and Two-level models require
communication to maintain the appropriate
number of kernel threads allocated to the
application
 Typically use an intermediate data structure
between user and kernel threads – lightweight
process (LWP)
• Appears to be a virtual processor on which
process can schedule user thread to run
• Each LWP attached to kernel thread
• How many LWPs to create?
 Scheduler activations provide upcalls - a
communication mechanism from the kernel to
the upcall handler in the thread library
 This communication allows an application to maintain the correct number of kernel threads
