Chapter 4
Threads
1
Outline and Objectives
Objectives
• To introduce the notion of a thread — a fundamental unit of CPU utilization that
forms the basis of multithreaded computer systems
• To discuss the APIs for the Pthreads, Win32, and Java thread libraries
• To examine issues related to multithreaded programming
Outline
• Overview
• Multithreading Models
• Thread Libraries
• Threading Issues
• Operating System Examples
• Windows XP Threads
• Linux Threads
• Re-entrancy
• Thread specific data
2
Threading Concept
3
Threads and Thread Usage
• A process normally has a single thread of control (a single execution
sequence/flow).
– At least one thread always exists.
– If it blocks, no activity can take place as part of the process.
– Better: be able to run concurrent activities (tasks) as part of the
same process.
– Now a process can have multiple threads of control (multiple
concurrent tasks).
• Threads run in a pseudo-parallel manner (concurrently) and share the
text (code) and data of the process.
4
5
6
Benefits of using threads for an App
• Responsiveness
– One thread blocks, another one runs.
– One thread may always wait for the user
• Resource Sharing
– Threads can easily share resources
• Economy
– Creating a thread is fast
– Context switching among threads may be faster
• Scalability
– Multiprocessors can be utilized better
7
Threads and Thread Usage
[Figure: a single-threaded process next to a multi-threaded process; code and data are shared, and when one thread blocks another can keep running]
8
A multithreaded process’ execution flows:
[Figure: over the lifetime of the process, main() and the threads it creates (Thread0, Thread1, Thread2) each follow their own flow through the program’s instructions as time advances]
9
Multithreading Concept
[Figure: CPU and state of a single-threaded process vs. a multi-threaded process]
10
Multithreading Concept
[Figure: process P1 with a single thread P1.T1, and process P2 with threads P2.T1, P2.T2, P2.T3]
Threads are the schedulable entities: the scheduler can select any one of them and run it.
11
Multithreading Concept
/* main() runs as thread1; each thread_create() call below starts
   another thread (thread2, thread3, thread4) in the same process. */
function1(...)
{
    ...
}

function2(...)
{
    ...
}

main()
{
    ...
    thread_create(function1, ...);
    ...
    thread_create(function2, ...);
    ...
    thread_create(function1, ...);
    ...
}
12
Single and Multithreaded Processes
13
Multicore programming and multithreading challenges
• Multicore systems are putting pressure on programmers.
– Threading can utilize multicore systems better, but it brings some challenges.
• Threading challenges include:
– Dividing activities
• Come up with concurrent tasks
– Balance
• Tasks should be of similar importance and load
– Data splitting
• Data may need to be split as well
– Data dependency
• Data dependencies must be considered; dependent activities need
synchronization
– Testing and debugging
• Debugging is more difficult
14
Multithreaded Server Architecture
15
Concurrent Execution on a Single-core System
16
Parallel Execution on a Multicore System
17
Speedup: Amdahl’s Law
• What is the potential performance gain (speedup) from multiple cores (CPUs)?
• A program has both serial and parallel parts.
• Assume the program executes and finishes in 1 time unit.
– S: time to execute the serial portion (0 <= S <= 1)
– 1-S: the runtime of the portion that can be parallelized
• N: number of processing cores.
– N tasks can run in parallel, so the parallelizable portion of the
program finishes N times faster; Amdahl’s Law below gives the resulting bound on speedup.
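Combining the two parts (serial time S, parallel time (1-S)/N) gives the standard statement of Amdahl’s Law:

\[ \text{speedup}(N) \;\le\; \frac{1}{\,S + \frac{1-S}{N}\,}, \qquad \lim_{N\to\infty}\text{speedup}(N) = \frac{1}{S} \]

For example, with S = 0.25 and N = 4 cores: speedup <= 1 / (0.25 + 0.75/4) ≈ 2.29.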
18
Threading Support
19
Threading Support
• Multithreading can be supported by:
– User level libraries (without Kernel being aware of it)
• Library creates and manages threads (user level implementation)
– Kernel itself
• Kernel creates and manages threads (kernel space implementation)
• Whichever is used, threads are created, used, and
terminated via a set of functions that are part of a thread API (a thread library)
– Three primary thread libraries: POSIX Pthreads, Win32 threads, and
Java threads
20
Multithreading Models
• A user process wants to create one or more threads.
– The kernel can create one or more kernel threads for the process.
• Even if a kernel does not support threading, it creates one thread per
process (i.e., it creates a process that is a single thread of
execution).
• So a relationship must exist between user threads and kernel thread(s)
– Mapping user-level threads to kernel-level threads
• Three common ways of establishing such a relationship:
– Many-to-One model
– One-to-One model
– Many-to-Many model
21
Many-to-One Model:
Implementing Threads in User Space
• Many user-level threads
mapped to a single kernel thread
• Examples:
– Solaris Green Threads
– GNU Portable Threads
• Thread management is done in user space, by a thread library
• Kernel supports the process concept, not the threading concept
22
Many-to-One Model:
Implementing Threads in User Space
• No need for kernel support for multithreading (+)
• Thread creation is fast (+)
• Switching between threads is fast; efficient approach (+)
• Blocking system calls defeat the purpose and have to be handled (-)
• A thread has to explicitly call a function to voluntarily give the CPU
to some other thread (-)
– example: thread_yield()
• Multiple threads will run on a single processor, not utilizing
multiprocessor machines (-)
[Figure: for processes A and B, the run-time system (thread library) keeps a per-process thread table in user space; the kernel keeps only a process table with PCB A and PCB B]
23
One-to-One Model:
Implementing Threads in Kernel Space
• The kernel may implement threading: it can manage and schedule threads, and it
is aware of threads.
• Examples (nearly all modern OSs): Windows, Linux, …
– All these kernels have threading support; they can schedule processes and
their threads (not only processes)
• Each user-level thread maps to a kernel thread
24
One-to-One Model:
Implementing Threads in Kernel Space
• Provides more concurrency; when a thread blocks, another can run.
Blocking system calls are no longer a problem. (+)
• Multiple processors can be utilized as well. (+)
• The kernel can stop a long-running thread and run another thread; no
explicit request from a thread is needed. (+)
• System calls are needed to create threads, and this takes time. (-)
• Thread switching is costly. (-)
• Any thread-management function requires a system call. (-)
[Figure: for processes A and B, the kernel keeps both a process table (PCB A, PCB B) and a thread table]
25
Threading API
26
Thread Libraries
• A thread library provides the programmer with an API for creating and
managing threads
– The programmer just has to know the thread library interface (API).
– Threads may be implemented in user space or kernel space.
• The library may be entirely in user space, or it may get kernel support
for threading
27
Pthreads Library
• Pthreads: POSIX threads
• May be provided either as user-level or kernel-level
• A POSIX standard (IEEE 1003.1c) API for thread creation and
synchronization
• The API specifies the behavior of the thread library; implementation is up to
the developers of the library
• Common in UNIX-like operating systems (Solaris, Linux, Mac OS X)
28
Pthreads Example
• We will show a program that creates a new thread.
– Hence the process will have two threads:
• 1 - the initial/main thread that is created to execute the main() function
(that thread is always created, even when there is no support for
multithreading);
• 2 - the new thread.
(both threads have equal power)
• The program will just create a new thread to do a simple computation: it will sum
all integers up to a given parameter value N:
sum = 1 + 2 + … + N
• The main thread will wait until the sum is computed into a global variable.
• Then the main thread will print the result.
29
Pthreads Example
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>   /* for atoi() */
int sum; /* sum shared by the threads – a global variable */
void *runner (void *param); /* thread start function */
30
Pthreads Example
int main(int argc, char *argv[]) {
    pthread_t tid;       /* id of the created thread */
    pthread_attr_t attr; /* set of thread attributes */
    if (argc != 2) {
        fprintf (stderr, "usage: a.out <value>\n");
        return -1;
    }
    if (atoi(argv[1]) < 0) {
        fprintf (stderr, "%d must be >= 0\n", atoi(argv[1]));
        return -1;
    }
    pthread_attr_init (&attr);                      /* default attributes */
    pthread_create (&tid, &attr, runner, argv[1]);  /* start the new thread */
    pthread_join (tid, NULL);                       /* wait for it to finish */
    printf ("sum = %d\n", sum);
    return 0;
}
31
Pthreads Example
void *runner (void *param)
{
    int i;
    int upper;
    upper = atoi(param);   /* the command-line argument passed by main */
    sum = 0;
    for (i = 1; i <= upper; ++i)
        sum += i;          /* accumulate into the shared global */
    pthread_exit(0);
}
32
Pthreads Example
[Figure: execution flow of the two threads]
thread1 (main): calls pthread_create(&tid, ..., runner, ...), then waits in
pthread_join(tid, NULL).
thread2: runs runner(...), computes sum = ..., then calls pthread_exit(0).
thread1 (main): resumes after the join and executes printf(..., sum, ...).
33
Compiling and running the program
• You can put the above code into a .c file, say mysum.c
• In order to use the Pthreads functions, we need to include the pthread.h header
file in our program (as shown in the previous slides)
• We also need to link with the pthread library (the Pthreads API functions are
not implemented in the standard C library). The way to do that is the -l
option of the C compiler; after -l you provide a library name, like pthread.
• Hence we can compile+link our program as follows (libraries are best listed
after the source file):
– gcc -Wall -o mysum mysum.c -lpthread
• Then we run it, for example, as:
– ./mysum 6
• It will print out: sum = 21
34
Windows Threads
35
Java Threads
• Java threads are managed by the JVM
• Typically implemented using the threading model provided by the underlying
OS
• Java threads may be created by:
– Extending Thread class
– Implementing the Runnable interface
• Example given in the book
36
From Single-threaded to Multithreaded
• Scope of variables:
– Normally we have: global, local
– With threads, we also want: global, local, and thread-specific (thread-wide)
• thread-specific: global inside the thread (thread-wide global), but not
global for the whole process. Other threads cannot access it, but all
functions of the thread can.
• But we may not have language support to define such variables.
– Traditionally (before C11), C cannot do that: we have only global variables
and local variables, with no thread-wide (thread-local) variables. (A C11
sketch follows.)
• Therefore the thread API has special functions that can be used to create
such variables – data.
– This is called thread-specific data.
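As an aside beyond the slides: newer C standards do add language support. A minimal sketch, assuming a C11 compiler and a C library that provides <threads.h> (e.g., recent glibc); the worker logic is illustrative:

#include <stdio.h>
#include <threads.h>

_Thread_local int counter = 0;   /* each thread gets its own copy */

int worker(void *arg) {
    (void)arg;
    counter++;                   /* touches only this thread's copy */
    printf("worker sees counter = %d\n", counter);
    return 0;
}

int main(void) {
    thrd_t t1, t2;
    thrd_create(&t1, worker, NULL);
    thrd_create(&t2, worker, NULL);
    thrd_join(t1, NULL);
    thrd_join(t2, NULL);
    printf("main still sees counter = %d\n", counter);  /* prints 0 */
    return 0;
}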
37
Thread Specific Data
(Thread Local Storage: TLS)
• Allows each thread to have its own copy of data
– Each thread refers to the data with the same name.
• Example (with a pseudo thread-local-storage API):
– Each thread can do:
• create_global (bufptr);               // create a name (i.e., a key)
• buf = malloc (BUFFERSIZE);            // allocate a block of memory for a buffer
• set_global (bufptr, buf);             // associate the pointer with the name
• buf = (char *) get_global (bufptr);   // get the pointer back to access the buffer
• use the pointer to access data in the buffer
– Here bufptr is the same variable name used in each thread,
but each thread has its own copy of the data.
38
Thread Specific Data
(Thread Local Storage: TLS)
[Figure: Thread 1, Thread 2, and Thread 3 each have a bufptr that points to that thread’s own buffer (space allocated with malloc)]
Each thread uses the same variable name bufptr to get access to its own
specific data.
39
Thread Specific Data
• The POSIX Pthreads library has the following functions to create
thread-specific data that has an associated name (key):
– pthread_key_create, pthread_key_delete
– pthread_setspecific, pthread_getspecific
• The key is seen by all threads; in each thread, the key is associated with that
thread’s specific data (thread-local storage).
Example (a complete, runnable version follows):
pthread_key_t data1;
pthread_key_create (&data1, …);
// In each thread we can then do the following:
char *buf = malloc(1024);
pthread_setspecific (data1, buf);   // associate buf (a buffer pointer) with data1
buf = pthread_getspecific (data1);  // get the pointer back to access the data
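A fuller, self-contained sketch of the same idea (not from the slides; the worker logic and buffer size are illustrative):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_key_t buf_key;            /* one key, visible to all threads */

static void *worker(void *arg) {
    char *buf = malloc(64);              /* this thread's private buffer */
    snprintf(buf, 64, "hello from thread %ld", (long)arg);
    pthread_setspecific(buf_key, buf);   /* associate it with the key */

    /* Any function called by this thread can now retrieve the same buffer. */
    char *mine = pthread_getspecific(buf_key);
    printf("%s\n", mine);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_key_create(&buf_key, free);  /* destructor frees each thread's copy */
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_key_delete(buf_key);
    return 0;
}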
40
From Single-threaded to Multithreaded
• Many programs are written as a single threaded process.
• If we try to convert a single-threaded process to be a multi-threaded
process, we have to be careful about the following:
– the global variables
– the library functions we use
41
From Single-threaded to Multithreaded
// This is a single-threaded program.
int status;   // a global variable

func1(...) {
    ...
    status = ...;
    do_something_based_on(status);
}

func2(...) {
    ...
    status = ...;
    do_something_based_on(status);
}

main() {
    ...
    func1(...);
    func2(...);
}
42
From Single-threaded to Multithreaded
// The same code, now run by two threads sharing the global.
int status;   // a global variable shared by the threads

func1(...) {               // run by thread 1
    ...
    status = 40;           // e.g., thread 1 sets status to 40
    do_something_based_on(status);
    malloc(...);
}
func2(...) {               // run by thread 2
    ...
    status = 60;           // e.g., thread 2 sets status to 60
    do_something_based_on(status);
    malloc(...);
}
main() {
    ...
    thread_create(..., func1, ...);
    thread_create(..., func2, ...);
}

• We can have a problem here.
• Just after func1 of thread 1 updates status, a thread switch may occur and the
2nd thread can run and update status.
• Then thread 1 will run again, but will work with a different status value.
Wrong result! (A sketch of the usual fix follows.)
43
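A minimal sketch of the usual fix (not from the slides): protect the shared variable with a mutex so each thread’s update-and-use sequence cannot be interleaved with the other’s; do_something_based_on() is the slide’s hypothetical helper.

#include <pthread.h>

void do_something_based_on(int s);               /* defined elsewhere */

static pthread_mutex_t status_lock = PTHREAD_MUTEX_INITIALIZER;
static int status;                               /* still global, now protected */

void func1(void) {
    pthread_mutex_lock(&status_lock);
    status = 40;
    do_something_based_on(status);               /* sees the value set above */
    pthread_mutex_unlock(&status_lock);
}

void func2(void) {
    pthread_mutex_lock(&status_lock);
    status = 60;
    do_something_based_on(status);
    pthread_mutex_unlock(&status_lock);
}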
Thread-safe / Reentrant libraries
• Many library procedures may not be reentrant.
– They are not designed to tolerate a second call from the same
process before the first call has completed (not re-entrant).
• (We are talking about non-recursive procedures.)
– They may be using global or static variables, hence they may not be thread-safe.
• We have to make sure we use thread-safe (reentrant) library routines in
the multi-threaded programs we develop (see the strtok example below).
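As a concrete illustration (beyond the slides): strtok() keeps hidden static state between calls and is not reentrant; POSIX strtok_r() keeps that state in a caller-owned pointer, so it is safe to use from several threads at once.

#include <stdio.h>
#include <string.h>

int main(void) {
    char line[] = "one two three";
    char *save;                              /* caller-owned parser state */

    for (char *tok = strtok_r(line, " ", &save);
         tok != NULL;
         tok = strtok_r(NULL, " ", &save)) {
        printf("token: %s\n", tok);
    }
    return 0;
}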
44
Examples from Operating Systems
45
Operating System Examples
• Windows XP Threads
• Linux Threads
46
Linux Threads
• Linux refers to them as tasks rather than threads
• Thread creation is done through the clone() system call
• clone() allows a child task to share the address space of the parent
task (process), as in the sketch below
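A minimal sketch of calling the glibc clone() wrapper directly (not from the slides; real thread libraries such as NPTL do this internally, and the flag set and stack size here are illustrative):

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

static int child_fn(void *arg) {
    printf("child task sharing the parent's address space\n");
    return 0;
}

int main(void) {
    char *stack = malloc(STACK_SIZE);
    if (stack == NULL) { perror("malloc"); exit(1); }

    /* CLONE_VM etc. make the child share memory, filesystem info, open
       files, and signal handlers with the parent -- i.e., behave like a
       thread of the same process. The stack grows downward, so we pass
       the top of the allocated region. */
    pid_t pid = clone(child_fn, stack + STACK_SIZE,
                      CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD,
                      NULL);
    if (pid == -1) { perror("clone"); exit(1); }

    waitpid(pid, NULL, 0);    /* wait for the child task to finish */
    free(stack);
    return 0;
}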
47
clone() and fork()
[Figure: a user program calls the fork() or clone() library wrappers; both trap into the kernel, where sys_fork() and sys_clone() implement them]
48
References
• The slides here are adapted/modified from the textbook and its slides:
Operating System Concepts, Silberschatz et al., 7th & 8th editions,
Wiley.
• Modern Operating Systems, Andrew S. Tanenbaum, 3rd edition, 2009.
49
Additional Study Material
50
Many-to-Many Model & Two-level Model
Many-to-Many Model
• Allows many user-level threads to be mapped to many kernel threads
• Allows the operating system to create a sufficient number of kernel threads
• Examples:
– Solaris prior to version 9
– Windows NT/2000 with the ThreadFiber package
Two-level Model
• Similar to M:M, except that it allows a user thread to be bound to a kernel
thread
• Examples:
– IRIX
– HP-UX
– Tru64 UNIX
– Solaris 8 and earlier
51
Implicit Threading
52
Implicit Threading
• Applications can create hundreds or thousands of threads.
– Threads can be created explicitly (the application programmer worries about them).
– Or they can be created by a compiler, library, platform, etc. (easier for the
application programmer) - this is called Implicit Threading.
• Some alternatives for implicit threading:
– Thread Pools
– OpenMP (compiler supported)
– Grand Central Dispatch (in Mac OS X)
53
Thread Pools
• Create a number of threads in a pool, where they await work (see the sketch below)
• Advantages:
– Faster: an existing thread services the request instead of a newly created one
– Limits the number of threads:
• Allows the number of threads in the application to be bound to
the size of the pool
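A minimal sketch of the idea using Pthreads (not from the slides; the worker count, task count, and do_task() body are illustrative): a fixed set of threads is created once and pulls work items from a shared, mutex-protected counter.

#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4
#define NUM_TASKS   16

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_task = 0;                /* shared "queue" of task ids */

static void do_task(int id) {            /* the actual work */
    printf("task %d handled by thread %lu\n", id, (unsigned long)pthread_self());
}

static void *worker(void *arg) {
    for (;;) {
        pthread_mutex_lock(&lock);
        int id = next_task < NUM_TASKS ? next_task++ : -1;
        pthread_mutex_unlock(&lock);
        if (id < 0) break;               /* no work left: worker exits */
        do_task(id);
    }
    return NULL;
}

int main(void) {
    pthread_t workers[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(workers[i], NULL);
    return 0;
}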
54
OpenMP
• A set of compiler directives as well as an API for programs in C and
C++.
• Parallel programming in shared-memory environments.
• Parallel regions are identified by the programmer.
• Example: by default, a parallel region is executed by as many threads as
there are cores (see the sketch below).
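A minimal OpenMP sketch (not from the slides; compile with something like gcc -fopenmp):

#include <omp.h>
#include <stdio.h>

int main(void) {
    /* The block below is a parallel region: the compiler/runtime creates a
       team of threads for it, by default as many threads as there are cores. */
    #pragma omp parallel
    {
        printf("I am thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}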
55
Grand Central Dispatch (GCD)
• A combination of extensions to the C language, an API, and a run-time
library that allows app developers to identify sections of code ("blocks") to
run in parallel.
• The programmer identifies the regions (blocks) to run concurrently (see the
sketch below).
• GCD schedules such blocks for run-time execution by placing them in
a dispatch queue (serial or concurrent).
• Blocks can run in parallel on different cores.
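A minimal sketch assuming Apple’s libdispatch on macOS (not from the slides; the queue choice and block body are illustrative); blocks use the ^{ ... } C extension:

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t g = dispatch_group_create();

    /* The ^{ ... } syntax defines a block; GCD places it on a concurrent
       dispatch queue and runs it on a thread that GCD manages. */
    dispatch_group_async(g, q, ^{
        printf("running inside a GCD block\n");
    });

    dispatch_group_wait(g, DISPATCH_TIME_FOREVER);  /* wait for the block */
    return 0;
}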
56
Other Threading Issues
57
Threading Issues
• Semantics of fork() and exec() system calls
• Thread cancellation of target thread
– Asynchronous or deferred
• Signal handling
• Thread pools
• Thread-specific data
• Scheduler activations
58
Semantics of fork() and exec()
• Does fork() duplicate only the calling thread or all threads?
• How should we implement fork()?
• The logical thing to do is:
– 1) If exec() will be called after fork(), there is no need to duplicate
the other threads; they will be replaced anyway (see the sketch below).
– 2) If exec() will not be called, then it is logical to duplicate the
threads as well, so that the child has as many threads as the
parent.
• So we may provide two versions of the system call, say fork1 and fork2!
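For reference (beyond the slides): POSIX fork() duplicates only the calling thread, which matches case 1; a typical fork-then-exec sequence looks like this (the program being exec’ed is illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();            /* in a multithreaded parent, only the
                                      calling thread exists in the child */
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0) {
        /* Child: the whole image is about to be replaced, so duplicating
           the parent's other threads would have been wasted work. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");          /* only reached if exec fails */
        exit(1);
    }
    waitpid(pid, NULL, 0);         /* parent waits for the child */
    return 0;
}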
59
Thread Cancellation
• Terminating a thread before it has finished
– Needed in various situations
• Two general approaches:
– Asynchronous cancellation terminates the target thread
immediately
– Deferred cancellation allows the target thread to periodically
check whether it should be cancelled
• The cancelling thread indicates a target thread to be cancelled.
• The target thread performs checks at cancellation points and, if it is
safe, gets terminated.
60
Thread Cancellation
/* Cancelling thread: */
pthread_t tid;
pthread_create (&tid, 0, worker, NULL);
...
/* request cancellation of the target thread */
pthread_cancel (tid);

/* Target thread: */
while (1) {
    /* do some work */
    ...
    /* cancellation point: check whether there is a cancel request */
    pthread_testcancel ();
}
61
Signal Handling
• If a signal is sent to a multithreaded process, which thread will receive and
handle it?
• In a single threaded process, it is obvious.
• In a multi-threaded process, there are a number of options, like
delivering the signal to:
– thread to which signal applies
– all threads of the process
– a specific thread responsible for handling signals
62
Signal Handling
• Signals are used in UNIX systems to notify a process that a particular
event has occurred
– They are notifications
• a Signal:
1. Signal is generated by a particular event
2. Signal is delivered to a process (same or different process)
3. Signal is handled
• A signal handler is used to process signals
• Handled asynchronously
63
Signal Handling
• Options:
– Deliver the signal to the thread to which the signal applies
– Deliver the signal to every thread in the process
– Deliver the signal to certain threads in the process
– Assign a specific thread to receive all signals for the process (a sketch of
this option follows)
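A minimal Pthreads sketch of the last option (not from the slides; the use of SIGINT and the dedicated-thread structure are illustrative): block the signal in every thread and let one dedicated thread receive it synchronously with sigwait().

#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static void *signal_thread(void *arg) {
    sigset_t *set = arg;
    int sig;
    sigwait(set, &sig);                 /* sleeps until SIGINT arrives */
    printf("dedicated thread got signal %d\n", sig);
    return NULL;
}

int main(void) {
    sigset_t set;
    pthread_t tid;

    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    /* Block SIGINT in the main thread; threads created afterwards inherit
       this mask, so only the dedicated thread deals with the signal. */
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_create(&tid, NULL, signal_thread, &set);
    pthread_join(tid, NULL);
    return 0;
}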
64
A C program using signals
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>

static void sig_int_handler(int signo) {
    printf("I received SIGINT signal. bye...\n");
    fflush(stdout);
    exit(0);
}

int main() {
    signal (SIGINT, sig_int_handler);  /* install our handler for SIGINT */
    while (1)
        ;                              /* spin until a signal arrives */
}

• While a program is running, pressing the CTRL-C keys normally terminates
(kills) it: the keys cause a SIGINT signal to be sent to the program.
• By default, SIGINT is handled by the kernel, and by default the kernel
terminates the program.
• But if we specify a handler function, as here, our program can handle it.
• The kernel notifies our process with the signal when the user presses CTRL-C.
65
Delivering a signal (notifying)
[Figure: the user presses CTRL-C on the keyboard; the kernel delivers a SIGINT signal to Program X, and the program’s signal handler runs]
66
The kill program
[Figure: `kill -s SIGINT 3405` asks the kernel to deliver SIGINT to Program X (process id 3405); the signal is stored in the PCB of X, then delivered, and X’s signal handler runs]
67
Some Signals
SIGABRT   Process abort signal.
SIGALRM   Alarm clock.
SIGBUS    Access to an undefined portion of a memory object.
SIGCHLD   Child process terminated, stopped, or continued.
SIGCONT   Continue executing, if stopped.
SIGFPE    Erroneous arithmetic operation.
SIGHUP    Hangup.
SIGILL    Illegal instruction.
SIGINT    Terminal interrupt signal.
SIGKILL   Kill (cannot be caught or ignored).
SIGPIPE   Write on a pipe with no one to read it.
SIGQUIT   Terminal quit signal.
68
Scheduler Activations
• Kernel threads are good, but they are slower if we create short-lived threads
too frequently, or if threads wait for each other too frequently.
• Is there a middle way?
– Scheduler activations
• The goal is to mimic kernel threads at user level with some extra kernel
support, but without the kernel creating a separate kernel thread for each
user thread (an M:1 or M:M model).
• Avoid unnecessary transitions between user and kernel space.
69
Scheduler Activations: Upcall mechanism
[Figure: a process’s threads, its run-time system (the thread library) with its thread table, and the kernel, connected by upcalls]
1. When the process/thread is started, the run-time system (the thread library) registers an upcall handler with the kernel.
2. A thread makes a blocking system call; the kernel initiates the I/O and blocks that thread.
3. The kernel runs the upcall handler (makes an upcall, i.e., activates the user-level scheduler); the library then schedules another thread from its thread table.
4. When the kernel detects that the I/O has finished, it again informs the library via an upcall, and the upcall handler can restart the first thread.
70
Windows XP Threads
71
Windows XP Threads
• Implements the one-to-one mapping, kernel-level
• Each thread contains
– A thread id
– Register set
– Separate user and kernel stacks
– Private data storage area
• The register set, stacks, and private storage area are known as the
context of the thread
• The primary data structures of a thread include:
– ETHREAD (executive thread block)
– KTHREAD (kernel thread block)
– TEB (thread environment block)
72