Thread

The document discusses multi-threading and its benefits. It describes how multi-threaded applications have multiple threads within a single process, with each thread having its own program counter, stack, and registers but sharing common code, data, and resources like open files. The main benefits of multi-threading are improved responsiveness, resource sharing, economy of threads over processes, and better utilization of multiprocessor architectures.

A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, a set of registers, and a thread ID.

Traditional (heavyweight) processes have a single thread of control: there is one program counter, and one sequence of instructions that can be carried out at any given time.
As shown in Figure 4.1, multi-threaded applications have multiple threads within a single process, each having its own program counter, stack, and set of registers, but sharing common code, data, and certain structures such as open files.

Benefits
There are four major categories of benefits to multi-threading:
Responsiveness - One thread may provide rapid response while other threads are blocked or slowed down doing intensive calculations.
Resource sharing - By default threads share common code, data, and other resources, which allows multiple tasks to be performed simultaneously in a single address space.
Economy - Creating and managing threads (and context switching between them) is much faster than performing the same tasks for processes.
Scalability (utilization of multiprocessor architectures) - A single-threaded process can only run on one CPU, no matter how many are available, whereas the execution of a multi-threaded application may be split amongst the available processors. (Note that single-threaded processes can still benefit from multiprocessor architectures when there are multiple processes contending for the CPU, i.e. when the load average rises above some threshold.)
Types of Thread

Threads are implemented in the following two ways −

User Level Threads − threads managed in user space, without kernel support.

Kernel Level Threads − threads managed directly by the operating system, acting on the kernel (the core of the operating system).

User Level Threads

In this case, the kernel is not aware of the existence of threads; management is done entirely by a user-level thread library. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts. The application starts with a single thread.

Advantages
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.

Disadvantages
In a typical operating system most system calls are blocking, so one blocking call can suspend the entire process and every user-level thread in it.
A multithreaded application cannot take advantage of multiprocessing, because the kernel sees (and schedules) the process as a single unit.

Kernel Level Threads

In this case, thread management is done by the kernel: there is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded, and all of the threads within an application are supported within a single process.

The kernel maintains context information for the process as a whole and for the individual threads within the process. Scheduling by the kernel is done on a per-thread basis: the kernel performs thread creation, scheduling, and management in kernel space. Kernel threads are generally slower to create and manage than user threads.

Advantages
The kernel can simultaneously schedule multiple threads from the same
process on multiple processors.
If one thread in a process is blocked, the Kernel can schedule another
thread of the same process.
Kernel routines themselves can be multithreaded.

Disadvantages
Kernel threads are generally slower to create and manage than user threads.
Transfer of control from one thread to another within the same process
requires a mode switch to the Kernel.

Multithreading Models

Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three multithreading models:

Many to many relationship.
Many to one relationship.
One to one relationship.

Many to One Relationship

Many user-level threads are mapped to a single kernel thread. Thread management is done in user space, but the entire process blocks if one thread makes a blocking system call.

One to One Relationship

Each user-level thread maps to its own kernel thread, providing more concurrency at the cost of creating a kernel thread for every user thread.

Many to Many Relationship

Many user-level threads are multiplexed onto a smaller or equal number of kernel threads.


Threading issues

Semantics of fork() and exec() system calls
Signal handling (synchronous and asynchronous)
Thread cancellation of a target thread (asynchronous or deferred)
Thread-local storage
Scheduler activations

Semantics of fork() and exec() system calls

Does fork() duplicate only the calling thread or all threads? Some UNIX systems provide two versions of fork(), one for each behaviour.
exec() usually works as normal – it replaces the entire running process, including all threads.

Thread cancellation

Thread cancellation means terminating a thread before it has finished. There are two general approaches:

Asynchronous cancellation terminates the target thread immediately.
Deferred cancellation allows the target thread to periodically check whether it should be cancelled.

Thread-Local Storage (was 4.4.5 Thread-Specific Data)

Most data is shared among threads, and this is one of the major benefits of using threads in the first place.
However, sometimes threads need thread-specific data as well.
Most major thread libraries (Pthreads, Win32, Java) provide support for thread-specific data, known as thread-local storage or TLS. Note that this is more like static data than local variables, because it does not cease to exist when the function ends.
Scheduler Activations
Many implementations of threads provide a virtual processor as an interface between the user thread and the kernel thread, particularly for the many-to-many or two-tier models.
This virtual processor is known as a "lightweight process" (LWP).
There is a one-to-one correspondence between LWPs and kernel threads.
The number of kernel threads available (and hence the number of LWPs) may change dynamically.
The application (i.e. the user-level thread library) maps user threads onto available LWPs.
Kernel threads are scheduled onto the real processor(s) by the OS.
The kernel communicates with the user-level thread library when certain events occur (such as a thread about to block) via an upcall, which is handled in the thread library by an upcall handler. The upcall also provides a new LWP for the upcall handler to run on, which it can then use to reschedule the user thread that is about to become blocked. The OS will also issue upcalls when a thread becomes unblocked, so the thread library can make the appropriate adjustments.
If the kernel thread blocks, then the LWP blocks, which blocks the user thread.
Ideally there should be at least as many LWPs available as there could be concurrently blocked kernel threads. Otherwise, if all LWPs are blocked, user threads will have to wait for one to become available.

Example of pthread creation and termination


#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void *print_message_function( void *ptr );

int main(void)
{
    pthread_t thread1, thread2;
    char *message1 = "Thread 1";
    char *message2 = "Thread 2";
    int iret1, iret2;

    /* Create independent threads, each of which will execute the function */
    iret1 = pthread_create( &thread1, NULL,
                            print_message_function, (void*) message1);
    iret2 = pthread_create( &thread2, NULL,
                            print_message_function, (void*) message2);

    /* Wait till threads are complete before main continues. Unless we
       wait we run the risk of executing an exit which will terminate
       the process and all threads before the threads have completed. */
    pthread_join( thread1, NULL);
    pthread_join( thread2, NULL);

    printf("Thread 1 returns: %d\n", iret1);
    printf("Thread 2 returns: %d\n", iret2);
    exit(0);
}

void *print_message_function( void *ptr )
{
    char *message;
    message = (char *) ptr;
    printf("%s \n", message);
    return NULL;
}

Compile: cc pthread1.c -lpthread


Run: ./a.out
Results:

Thread 1

Thread 2

Thread 1 returns: 0

Thread 2 returns: 0

(The two message lines may appear in either order, since the scheduling of the two threads is not deterministic; pthread_create returns 0 on success, which is what the last two lines report.)
